Wireless networks allow computers and other devices such as cell phones, game systems, and printers to communicate without wires. Some wireless devices communicate via infrared light, similar to a television remote control, but most use radio waves, similar in principle to the frequencies you tune into on a music radio.
The most common type of wireless network is the wireless local area network (wireless LAN), often referred to as Wi-Fi (a brand name popularly, though unofficially, expanded to "wireless fidelity"). Wireless LANs are commonly found in homes and businesses, where they connect numerous devices (e.g. laptops, iPhones, printers) to a local area network without wires. A wireless network is typically installed in a home so individuals can access the Internet anywhere in the local area via a laptop, whether sitting on the couch or even out in the backyard.
In fact, we're willing to bet you may even be surfing the Internet over a wireless connection right now. That's how common wireless LANs are today; they are literally everywhere.
Head over to your favorite Starbucks and you will almost certainly find a Wi-Fi connection available; Wi-Fi is also common in schools and public libraries.
There are other wireless network setups too, such as the wireless personal area network (wireless PAN). Wireless PANs are typically used to connect devices over shorter distances. A common example is Bluetooth, the technology found in most cell phones, PDAs, some automobiles, and many other devices.
Then there is the wireless MAN (metropolitan area network), yet another type of wireless network. The best-known wireless MAN technology is Worldwide Interoperability for Microwave Access (WiMAX). Wireless MANs are used to connect several wireless local area networks (LANs) across a larger area.
As you can see, there are a number of wireless network setups out there, with wireless local area networks (WLANs) being the most common. And as you will see in the next section, wireless technologies offer a number of key benefits over standard wired technologies.
There are many benefits to using a wireless network, and because setting one up these days is a fairly simple process, taking advantage of the technology is easier than ever.
With a wireless network setup you have more convenient options for accessing the Internet, which is especially useful in homes and small businesses. In the past, if you wanted to network computers and other devices in a home or business, you had to run cables across or even through floors and walls, or hire an expensive cabling company if a professional job was necessary.
Wireless network setups solve many of the cabling issues of the past. Now you can sit in the comfort of your couch and browse your favorite website on your laptop, or soak up the sun in the backyard while surfing the Internet on your laptop, iPhone, or Personal Digital Assistant (PDA).
Cost is also a major benefit of a wireless network setup. Running cables, or worse, hiring a professional cabling company, can become very costly indeed. Wireless networking avoids these expenses in many situations.
Another thing worth mentioning is security, as you may have heard that wireless (Wi-Fi) is a security risk. While some argue that wireless still isn't as secure as a direct wired connection, improvements in wireless LAN technology have largely closed that gap: a wireless network can be made essentially as safe as a wired one when it is configured correctly, with strong encryption enabled.
As you can see, wireless network technologies benefit many individuals and organizations in more ways than one. With a brief introduction and a list of key benefits out of the way, let's briefly go over how the most common type of wireless setup works. In the next section we explain how a wireless local area network (WLAN) works in a typical home or small business.
A wireless local area network (WLAN) works via a combination of devices such as a wireless router or access point, a wireless (Wi-Fi) network card in each device, and a DSL or cable modem for Internet access.
When setting up a wireless local area network (WLAN), the wireless router or access point is the most important piece of the networking pie; it is what makes everything work.
The wireless router or access point acts as a central hub to which computers, printers, PDAs and other devices connect. Common wireless router manufacturers include Linksys, Netgear, Belkin, Cisco, D-Link, Apple, TRENDnet, Asus, 3Com, Buffalo Technology, and SMC.
Every device that connects to the wireless router does so through a wireless network interface card, or NIC. The Wi-Fi card receives and transmits signals between the device and the router. Before a Wi-Fi enabled device can use the network, however, it must first be associated with the router.
Typically this means searching for the wireless router's Service Set Identifier (SSID), which is the name given to the wireless network. On each device that you want to connect, you search for the network's SSID and then connect to it. Normally an encrypted password, sometimes called a WEP key or WPA key, is then needed to complete the connection, at least on a properly secured wireless network.
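Curious what that looks like under the hood? Here is a rough sketch in Python for a Linux laptop that uses NetworkManager's nmcli command; the network name and password are just placeholders, so swap in your own (and note this is an illustration, not the only way to connect).

import subprocess

def list_networks():
    # "nmcli device wifi list" prints the SSIDs your Wi-Fi card can currently see.
    return subprocess.run(["nmcli", "device", "wifi", "list"],
                          capture_output=True, text=True).stdout

def connect(ssid, password):
    # "nmcli device wifi connect" associates with the router whose SSID matches
    # and supplies the WPA passphrase; True means the command succeeded.
    result = subprocess.run(
        ["nmcli", "device", "wifi", "connect", ssid, "password", password],
        capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    print(list_networks())
    if connect("MyHomeNetwork", "my-wpa-passphrase"):   # placeholder SSID and key
        print("Connected!")
    else:
        print("Connection failed - double-check the SSID and password.")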
Some wireless networks may be set up without a password; these are usually referred to as unsecured wireless networks. If you set up your own router, be sure to password protect it during setup, for an unsecured wireless network is exactly that: unsecured. Your neighbors could, for example, connect to your network and at minimum steal some of the bandwidth (Internet connection speed) that you pay your Internet Service Provider (ISP) for. Or worse, Johnny Hacker may live next door and steal your identity.
If you connect to an unsecured wireless network that you aren't familiar with, also be wary, since this can be a security risk: any number of strangers could be connected to the same network, and once again Johnny Hacker may be lurking.
Another distinction you may encounter when discussing wireless networks is between infrastructure and ad hoc wireless network setups. The more common mode is an infrastructure-mode wireless network, in which computers communicate with other networked computers via the wireless router. The less common ad hoc mode allows wireless devices/clients to connect directly to each other without a router or access point.
Wireless network technologies put remarkable capability at our fingertips: the ability to connect our devices to a high-speed network without wires. In the good ole days, wires had to be run from room to room or floor to floor, setup was costly, and building a wired network took far longer than setting up a wireless one, among other drawbacks.
Today, purchasing and setting up a wireless network is a snap, and there are a ton of wireless products to choose from, in addition to plenty of resources available to help you with setup and configuration if needed.
Once it is all said and done and your router is set up and configured, you can begin sharing files with other computers, an Internet connection, printers, and much more. Like magic, the wireless router sends out invisible signals to each device on the network, making everything work seamlessly with no wires needed for any device that communicates through the router.
“Wi-Fi congestion is a very real and growing problem” according to then-FCC Chairman Julius Genachowski (Genachowski, 2013). He continued, “And Wi-Fi congestion isn't just a problem at airports or public venues. It's becoming a problem in the home, where it's increasingly common to have multiple data-hungry devices using Wi-Fi at the same time.” Such sentiments are commonplace, as we will detail below.
On the other hand, press reports suggest that Wi-Fi service at the 2013 Super Bowl, among the most challenging wireless data environments imaginable, was more than adequate with users being served at blinding speeds: more than 20 Mbps from the Internet to at least one user, and 40 Mbps on the return path (Brodkin, 2013a). Therefore, difficulties using Wi-Fi are not inherent in the current spectrum allocation or technology. The engineering studies on the topic of Wi-Fi congestion we review in Section 4 do not provide definitive answers to the question of whether congestion is widespread, or even a uniform approach to defining it.
We are therefore left with the questions: Is Wi-Fi congestion actually a real and growing problem? And what does the term "congestion” mean? This paper explores the state of the art in this area, and comes to two conclusions: that the term congestion has no unequivocal meaning, and to the extent that it can be quantified, there is no hard evidence that Wi-Fi spectrum congestion is a substantial problem.
To foreshadow the discussion in Section 4, one might say that talking about Wi-Fi congestion confuses causes and symptoms. Link congestion, i.e. when a communications channel is close to being completely utilized, can cause various difficulties, from delays in data delivery to disruption of a user's intended activity. Talk of Wi-Fi congestion seems to be a proxy for user dissatisfaction; but, as we will show, the connection between congestion of a Wi-Fi link and service degradation, let alone widespread user dissatisfaction, is tenuous at best.
A key point in this paper is that congestion is as much an economic problem as a technical one. Service degradation due to capacity constraints can be addressed by more intensive frequency reuse through the deployment of more infrastructure, investment in more spectrally efficient technology, and/or the use of price to reduce demand, as well as by the allocation of more frequencies. Policy decisions are about economics as much as law or engineering. This leads us to recommend using net economic utility rather than engineering metrics to judge service degradation; not only does this include economics in the calculus, but it more accurately reflects end user concerns.
To place the question in a broader context, consider this: Is there congestion on the freeways? Certainly there is road congestion at some places at some times, but a generic claim of traffic congestion seems nonsensical on its face. Similarly, claiming the existence of Wi-Fi congestion (whatever that might mean) without further qualification about incidence and impact does not provide a sound basis for regulatory action. Advocates for both cellular and unlicensed allocations claim actual or looming "spectrum exhaust." However, localized "spectrum shortages" may be more effectively addressed by using other bands, imposing pricing to manage scarcity, or improving technology.
This paper is agnostic about whether regulators should reallocate frequencies from their current uses to support more wireless data service, whether licensed or unlicensed. That is a decision about the public interest and consumer welfare that should take into account many factors, including the costs and benefits to society of alternative uses of spectrum, and who should bear the costs of providing more wireless data capacity. However, to the extent that regulators wish to be guided by current or future claims of "congestion," they should ensure that the claims have a solid factual basis.
The goal of this work is to explore what it means to say that there is congestion in a particular spectrum allocation. We will study claims of Wi-Fi congestion in the 2.4 GHz ISM band as a case study. That requires an analysis of the technical term "congestion.” Since the technical meaning appears to be inapplicable to policy decision making - while it might be intelligible to say there is congestion in a wireless link, or even perhaps a network, it appears meaningless to say that there is congestion in a wireless band - we then seek a reformulation of "congestion” that can be used in making policy judgments.
This section surveys the background to the "spectrum crunch” debate in the United States.
The claim that there is a shortage of spectrum now, or that it is just around the corner, is being used to justify action by regulators to re-allocate bands from one use to another. This has been a key justification for calls by regulators and cellular industry advocates for new allocations for mobile broadband on the basis of a "spectrum crisis” or "spectrum exhaustion.” The same language is being used about unlicensed spectrum. The rulemaking to extend U-NII operation in the 5 GHz band takes it for granted that there is a congestion problem: "This additional spectrum will increase speeds and alleviate Wi-Fi congestion at major hubs, such as airports, convention centers and large conference gatherings” (FCC, 2013). The same "spectrum exhaust” language used by Beard, Ford, Spiwak, and Stern (2012) to describe the cellular situation is invoked by CableLabs to describe the prospects for unlicensed (Alderfer, 2013).
However, the claim of a wireless spectrum crisis has been contested, both by parties with a competing interest, such as the National Association of Broadcasters whose television spectrum is being auctioned to cellular operators (Onyeije, 2011), and by analysts (Farrar, 2012a, 2012b; Infonetics Research, 2013). Claims of spectrum shortage are often based on a comparison of demand measured by aggregate data traffic (e.g. the annual Cisco Visual Networking Index [VNI] reports), and supply measured by allocated bandwidth. Debate then ensues about how to estimate demand (e.g. arguments about various parties' traffic forecasts) and how to estimate supply (e.g. weighting the various factors such as allocated bandwidth vs. spectral efficiency vs. increasing access point density, cf. Zander & Mahonen, 2013). One can also consider trends in spectrum prices (Wallsten, 2013). Critics suggest that technology will keep pace with growing demand (Talbot, 2012), or that claims of a crisis are self-interested (Bode, 2012; Crowe, 2012). They also point out that capacity issues are localized. For a snapshot of one moment of the debate, see Goldstein (2012).
While there is abundant folklore about Wi-Fi congestion, there are also reports indicating that good site engineering can obviate service problems even at very crowded locations (Abramson, 2011; Solomon, 2011). As explained in Vos (2009, 2012), the poor Wi-Fi service quality at conferences is due to many factors, including hotel infrastructure, poor engineering and reluctance to invest. Data traffic during the 2013 Super Bowl was 388 gigabytes, according to AT&T (Moritz, 2013). Press reports suggest that data throughput was more than adequate; about 700 access points were used to serve 30,000 simultaneous users, and one user reported consistently getting more than 20 Mbps down and 40 Mbps up (Brodkin, 2013a). The network builders for the new San Francisco 49ers stadium plan to provision Wi-Fi for 68,500 fans at once, with a terabit/s of capacity (about 15 Mbps per fan) within the stadium itself (Brodkin, 2013b).
Few if any doomsayers are willing to declare that a crisis exists today. For example, CableLabs will only say that "WiFi spectrum is likely to be exhausted in the near-term” (Alderfer, 2013), whereas Verizon’s CFO said in an earnings call in July 2013 that "we are not under any spectrum pressure” (Thomson Reuters, 2013). Since voters and decision makers are not easily swayed by hypothetical problems, the crisis needs to be made concrete. The claims of Wi-Fi congestion, such as those by former FCC Chairman Genachowski, therefore appear to arise because they are politically necessary.
Claims of spectrum congestion or exhaust are at root economic, even though they may be framed in terms of consumer benefit. It may well be cheaper to expand network capacity by obtaining spectrum allocations (particularly unlicensed ones that do not have to be obtained at auction) than by incurring hardware costs to enable more intensive spatial reuse or more spectrally efficient technologies. For example, obtaining 20 MHz of national unlicensed 600 MHz spectrum will do little to reduce such Wi-Fi congestion as it may exist in the 2.4 and 5 GHz bands, but it could be helpful to cable companies in deploying wireless voice services.
A policy challenge is that congestion, or wireless service degradation more generally, tends to be limited to some times and places. The policy questions therefore become: what is the threshold for taking action? Is congestion in one high profile location (downtown Manhattan, say) sufficient to justify a new nationwide allocation? Since a lack of congestion in Wyoming does not help someone in Manhattan, should one take the most congested locale as the benchmark for a new allocation? Or conversely, should there be congestion everywhere, all the time, before action is taken? If only some but not all locations are suffering congestion, should users and operators in those places use ad hoc mitigation, e.g. deploying a denser network of access points or using alternative bands? Choosing the congestion point that would trigger regulatory action is a political decision, and we will not venture to make a recommendation.
In order to define a terminology for congestion, we present in this section a brief introduction to wireless communications, and spectrum sharing among IEEE 802.11 (aka Wi-Fi) devices. This will allow us to define a distinction between "congestion-by-demand" and "congestion-by-interference", which will cover typical spectrum sharing scenarios. Readers familiar with the technology may wish to skip ahead to Section 3.3.
As we will see, the term congestion unfortunately does not have a universally agreed meaning or metric. As a rule of thumb, a network could be considered to be congested when the amount of data to be sent exceeds the capacity available; in the words of a recent engineering consensus, "congestion occurs when instantaneous demand exceeds capacity” (BITAG, 2013).
Figure 1. Wi-Fi network elements and interference environment. IEEE 802.11 user devices register with access points (APs) to connect to the Internet. Devices associated with one AP need to share the band among themselves and with other parties' devices (e.g., other access points in homes or hotspots). Non-Wi-Fi interference can originate from other data communication standards such as Bluetooth, or from non-communication devices such as microwave ovens. Companies and Internet service providers (ISPs) are usually connected to the Internet via edge routers that multiplex multiple connections from different sources.
Wireless devices send data by emitting electromagnetic waves which, by their shape and duration, encode information. Other wireless devices can decode this information by observing these emissions (Rappaport, 2002). However, the received wave pattern may become distorted for two main reasons, making decoding of the original information difficult. Firstly, due to fluctuations in electronic components and the environment, a noise component is always present in the received signal. At larger distances, receivers will not be able to distinguish the original wave patterns from the noise, because the distinctiveness of the transmitted waveform gradually diminishes. This limits the spatial extent of wireless communication systems, and precludes communication and coordination between devices that are too far apart.
Secondly, distortions can be caused by other transmitting wireless devices. If multiple devices transmit at the same time, their signals will overlap with each other at a receiver. For transmitters close by, this interference can lead to an immediate disruption of their respective communication links, motivating the establishment of a common agreement on how multiple wireless devices can share spectrum in a fair and non-disrupting way. Whether or not devices implement such a medium access control (MAC) protocol determines how effectively they can share spectrum. Generally, wireless communication systems share spectrum more efficiently if they use the same coordination method. Thus, it is necessary to differentiate between interference among Wi-Fi devices, and interference to Wi-Fi caused by devices that do not implement the IEEE 802.11 standards.
The IEEE 802.11 family of wireless communication standards (IEEE Computer Society, 2012) is the most popular personal wireless communication standard for the unlicensed 2.4 GHz band. The IEEE 802.11 specifications describe the means by which compatible devices encode information onto radio waves, and also define how different devices orchestrate their use of the shared radio medium using MAC protocols to avoid mutual interference.
IEEE 802.11 devices typically use one of 14 overlapping channels in the 2.4 GHz band to connect to the Internet via a network access point (Fig. 1). Channels are smaller sub-bands within a larger spectrum allocation; assigning users to different operating channels constitutes a basic level of coordination. In practice there are only 4 non-overlapping 20 MHz channels in the 2.4 GHz band that provide effective frequency separation of concurrent transmissions; they must be shared among the plethora of IEEE 802.11 devices.
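To make the channel arithmetic concrete, the following short sketch (our illustration, using the nominal 5 MHz channel spacing and the 20 MHz channel width mentioned above, not text from the standard) lists the 2.4 GHz channel center frequencies and picks a maximal set of mutually non-overlapping channels.

# 2.4 GHz Wi-Fi channels: channels 1-13 are spaced 5 MHz apart starting at
# 2412 MHz; channel 14 (allowed only in some regulatory domains) sits at 2484 MHz.
CENTERS_MHZ = {ch: 2407 + 5 * ch for ch in range(1, 14)}
CENTERS_MHZ[14] = 2484
WIDTH_MHZ = 20   # nominal OFDM channel width assumed here

def overlaps(ch_a, ch_b):
    # Two 20 MHz channels overlap if their center frequencies are
    # less than 20 MHz apart.
    return abs(CENTERS_MHZ[ch_a] - CENTERS_MHZ[ch_b]) < WIDTH_MHZ

# Greedily collect a set of mutually non-overlapping channels.
non_overlapping = []
for ch in sorted(CENTERS_MHZ):
    if all(not overlaps(ch, chosen) for chosen in non_overlapping):
        non_overlapping.append(ch)

print(non_overlapping)   # -> [1, 5, 9, 13]: the four non-overlapping channels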
IEEE 802.11 implements a decentralized MAC method called carrier sense multiple access with collision avoidance (CSMA/CA), which aims at minimizing potential disruptions caused by multiple transmitters trying to send at once in the same channel. Like most other digital communication standards, IEEE 802.11 devices divide an information stream into smaller chunks, named frames, and transmit these frames one by one. In the absence of centralized coordination for channel access, each device will first "listen to" or sense the channel to determine whether any other transmission is ongoing before sending a frame. If the channel is idle, it will start transmitting. However, if another device is active, it will defer its transmission in order not to cause disruption. To prevent all waiting devices from sending their frames at the same time once the channel becomes free, every transmitter waits for a different random back-off time before trying to send its frames. The duration of this back-off period is increased if, despite randomization of channel access between the different transmitters, collisions still occur. The only way a transmitter can detect that no collision has occurred is through an acknowledgment, a small frame sent back by the receiving device confirming that it has successfully received the frame.
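The following deliberately simplified, slotted simulation sketch (our own construction under stated assumptions, not an IEEE 802.11 reference model) illustrates the contention mechanism just described: stations draw random back-offs, ties model collisions, and colliding stations double their contention windows.

import random

CW_MIN, CW_MAX = 16, 1024   # assumed contention window limits for illustration

def simulate(num_stations, num_frames, seed=0):
    random.seed(seed)
    cw = [CW_MIN] * num_stations            # per-station contention window
    delivered = attempts = 0
    while delivered < num_frames:
        backoffs = [random.randrange(cw[i]) for i in range(num_stations)]
        winner = min(backoffs)
        contenders = [i for i, b in enumerate(backoffs) if b == winner]
        attempts += 1
        if len(contenders) == 1:            # a single transmitter: success
            delivered += 1
            cw[contenders[0]] = CW_MIN      # reset its window after success
        else:                               # a tie models a collision
            for i in contenders:
                cw[i] = min(2 * cw[i], CW_MAX)
    return delivered / attempts             # fraction of attempts that succeed

for n in (2, 8, 32, 128):
    print(n, "stations -> success ratio", round(simulate(n, 10_000), 3))

In this toy model, a growing number of contending stations means a larger share of attempts end in collisions and must be retried, which is the coordination overhead behind the congestion-by-demand notion discussed next.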
As the abbreviation CA ("collision avoidance”) suggests, the IEEE 802.11 CSMA/CA protocol aims at avoiding concurrent transmissions, but does not completely eradicate them. With increasing demand for channel access, collisions are more likely to occur, and will only be resolved through several retransmissions. This increasing overhead due to the need for coordinating increasing demand, and the resulting lower total number of frames that can be successfully transferred within a given time frame, constitute the root cause for the first type of congestion we consider relevant for our analysis: congestion-by-demand. In our terminology, congestion-by-demand occurs if, despite implementing a homogeneous coordination process on all devices (e.g. the IEEE 802.11 CSMA/CA MAC), the coordination process becomes inefficient due to the high number of simultaneously contending parties, and will eventually result in extensive collisions or low spectrum utilization.
Unlicensed rules typically allow manufacturers to implement arbitrary communication standards in the band. In the absence of mutual coordination processes, devices are likely to interfere. Since such disruptive events are beyond the control of a particular device, we define this as congestion-by-interference. However, incompatibilities in the coordination processes are only one reason for potential congestion-by-interference. Coordination processes only function within the range at which contending Wi-Fi devices can decode each other's transmissions. If coordination is hampered due to long distances between nodes, patterns of interference similar to the no-coordination case will arise.
The above definitions allow us to differentiate between root causes of congestion in the ISM bands, but leave open the question of how one measures the presence or absence of congestion in a sensible way. In the next section we will review metrics at several levels of the communication process that have been used in the technical literature to make or refute claims of Wi-Fi congestion.
This section will describe how network engineers measure utilization and performance in Wi-Fi networks, and how they define whether a network is in a state of congestion. We will review the Wi-Fi network performance studies that we believe represent the state of the art and have a bearing on the policy question of whether there is congestion in these wireless deployments. We have selected papers that represent the various categories of analysis approaches that have been pursued. The results are summarized in Table 1.
The metrics used in the literature can be coarsely grouped into three ways of thinking about utilization and potential congestion that provide a framework for analyzing congestion claims: (1) channel utilization and spectrum occupancy as metrics to determine the extent of spectrum usage for communications; (2) retry rates as a metric to determine the efficiency of the coordination between Wi-Fi devices and their ability to deal with interference; (3) throughput and mean opinion scores (MOS) as service and user experience performance metrics, respectively.
A summary of congestion metrics and our commentary is given in Table 1; see also Table 2 in De Vries, Simic, Achtzehn, Petrova, and Mahonen (2013). A good literature review can be found in MASS Consultants (2013).
There is a well-developed literature on the measurement of spectrum occupancy (see e.g. references in Patil, Prasad, & Skouby, 2011), i.e. the degree to which radio transmissions are present at a given time, place and frequency. Such observations only measure whether the channel is in use, however; they say little about unmet demand or about the quality of service experienced by users.
Table 1 Summary of congestion tests.
Table 2 Criteria for compelling congestion claims.
Raghavendra, Padhye, Mahajan, and Belding (2009) focused on systematically examining wireless medium utilization by Wi-Fi networks in a range of representative deployment scenarios: residential apartments, single-family houses, planned enterprise networks, large conference gatherings, and coffee-shop hotspots. Remarkably, given the prevailing belief that Wi-Fi service collapse is imminent due to congestion, their results show that the median channel utilization, using the same method as Jardosh et al. (2005), is under 40% in all the studied scenarios even at the peak busy times, and much lower otherwise. Raghavendra et al. concluded that the low link utilization was due to low demand.
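As a rough illustration of this style of busy-airtime metric (a sketch under our own assumptions, not the instrumentation used in the cited studies), channel utilization can be computed as the fraction of an observation window occupied by decodable frames:

def channel_utilization(frames, observation_window_s):
    # Each record is (start_time_s, airtime_s); utilization is the share of
    # the observation window during which frames were on the air.
    busy_s = sum(airtime for _start, airtime in frames)
    return busy_s / observation_window_s

# Toy capture: 1,200 frames of 0.3 ms airtime each, observed over one second.
frames = [(i * 0.0008, 0.0003) for i in range(1200)]
print(f"utilization = {channel_utilization(frames, 1.0):.0%}")   # -> 36%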
The method used in Jardosh et al. (2005) and Rodrig et al. (2005) requires substantial recording and analysis capability, may be infeasible for measuring congestion in real-world Wi-Fi networks over long periods of time (e.g. in an urban setting), and does not provide much information about the real demand in Wi-Fi networks, rendering it a weak indicator of potential congestion. These shortcomings have motivated researchers to look for other metrics that use the specifics of the IEEE 802.11 CSMA/CA coordination algorithm to identify excessive demand. The most extensive study of this kind was done by MASS Consultants (2009) for the UK regulator Ofcom, and is motivated by the assumption that high demand in wireless networks will manifest itself as high channel contention.
MASS Consultants (2009) introduce the term degradation to indicate when a network "cannot provide maximum performance to the users,” and measure it by the fraction of frames that have to be transmitted multiple times before they have been successfully acknowledged, called the retry rate. MASS Consultants (2009) argue that as such, retry rate "is a useful measure of network problems without being overly specific about what those problems are.” MASS Consultants (2013) calls this parameter "MAC stress,” and while admitting that "the correlation with user experience is not clear,” contend that it is a useful indicator of the state of the link layer.
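As an illustration (our sketch; real tools would parse the Retry bit in the 802.11 Frame Control field of a packet capture), the retry rate is simply the share of captured frames marked as retransmissions:

def retry_rate(frames):
    # Fraction of captured frames whose retry flag was set.
    retried = sum(1 for f in frames if f["retry"])
    return retried / len(frames) if frames else 0.0

# Toy capture: 900 first-attempt frames and 100 retransmitted ones.
capture = [{"retry": False}] * 900 + [{"retry": True}] * 100
print(f"retry rate = {retry_rate(capture):.1%}")   # -> 10.0%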
Retry rates are an ineffective metric for directly gauging the level of network degradation due to increased load. Retransmissions in Wi-Fi networks can occur for many reasons: not only (1) collisions with other IEEE 802.11 frames, but also (2) frames (or their acknowledgements) being lost due to low signal quality, which triggers retransmission and thereby increases the overall retry rate. Consequently, the retry rate is a necessary, but not a sufficient, indicator of congestion-by-interference; furthermore, no results have been reported thus far that conclusively demonstrate that the retry rate can be used to determine the overall demand of, or the resulting congestion experienced by, Wi-Fi users.
When communication systems are studied from the perspective of individual users, without further analyzing root causes of quality variations, commonly used technical metrics are throughput, the number of bits that are successfully transferred per unit of time, and latency, the time until a transmission starts (Tanenbaum, 2002). These measures are equally applicable in wired and wireless communications, and have established themselves as universal metrics to study the quality of a network connection.
Sicker et al. (2006) were the first to explicitly consider congestion in unlicensed spectrum from a perspective similar to ours, using throughput and latency as primary metrics to identify a potential "tragedy of the commons." Their study introduces the concept of a "hard" tragedy, which results from excessive use of the unlicensed bands causing the sum throughput of all users to decrease when the number of users is increased beyond a certain level. This tragedy resembles Jardosh et al.'s (2005) understanding of congestion, but considers throughput in relation to the number of users rather than channel utilization. A user-centric understanding of congestion also becomes apparent in Sicker et al.'s second definition of a "soft" tragedy, which maps individual users' throughputs and latencies to their utility, arguing that, despite perfect sharing of spectrum access, a wireless connection with low throughput or excessive latency will eventually be of no use. This distinction between tragedies complements quantitative technical metrics with qualitative user perceptions.
Sicker et al. (2006) report on computer simulations that show first indications of a hard tragedy when more than 16 users are contending for channel access. Using U.S. Census data on population density together with the estimated interference range of Wi-Fi signals, the analysis concludes that 90% of the U.S. population would be within interference range of fewer than 20 other people, which the authors claim is below the "thresholds for acceptable contention in either voice or web browsing in an 802.11g network" observed in their simulation studies.
Throughput and latency constitute technical metrics for quality of service (QoS), but, as indicated by Sicker et al. (2006) through their soft tragedy definition, user utility ultimately depends on how users experience the quality of their Wi-Fi connectivity (their quality of experience, or QoE). In order to map technical network degradation metrics to user utility, MASS Consultants (2009) conducted laboratory experiments with artificially degraded network links and measured user satisfaction levels. The authors were unable to derive a direct relationship between MOS and network degradation, and resorted to defining a Mean Opinion Score Lower Bound (MOSLB) metric as the "lowest value of MOS that is expected at the measured mean retry ratio." They abandoned this metric in MASS Consultants (2013). The difficulty of mapping engineering quality of service metrics to user quality of experience is widely acknowledged in the nascent literature in this field (Schatz, Hoßfeld, Janowski, & Egger, 2013), due to the many complex non-technical and often highly subjective factors underlying the perception and qualitative assessment of user satisfaction.
This section broadens the framework of analysis beyond the engineering criteria discussed in the previous section to include economic considerations, specifically the net utility of wireless communications services. This allows us to shift the analysis from congestion to degradation, and to suggest criteria that regulators can use to judge the persuasiveness of congestion claims.
In general engineering usage, a network is said to be congested when the offered load (the amount of data that all users want to send at their desired levels of loss and delay) exceeds the capacity of all available network links to meet this demand. The congestion metrics discussed in the previous section are used to assess whether load has exceeded capacity.
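Schematically, and in our own notation rather than that of the cited sources, this definition can be written as

\[
\sum_{i=1}^{N} \lambda_i(t) \;>\; C \qquad \text{for } t \in [t_0,\, t_0 + T],
\]

where \(\lambda_i(t)\) is the offered load of user \(i\) at time \(t\) (the rate at which that user would ideally send and receive data at the desired loss and delay), \(C\) is the capacity of the available links, and \([t_0, t_0+T]\) is the observation interval.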
Both parts of this definition are difficult to measure. First, since the experimenter cannot observe the desires of all (or any) users, the offered load is unknown. Second, capacity is difficult to quantify since the exact location of nodes and interactions between them that cause mutual interference are hard to observe. The congestion metrics therefore tend to focus on indicators that load has exceeded capacity, rather than observing capacity as such; see Table 1.
However, such metrics only measure the aggregate use of the channel and not individual user satisfaction. This leads to attempts to measure user experience, e.g. mean opinion score as in MASS Consultants (2009), and the concept of utility, used e.g. in Sicker et al.’s (2006) discussion of "soft” tragedies.
Nonetheless, we believe that measurement-based engineering evidence ought to be provided whenever a claim of congestion is made. Such data should be generated for the purpose of the claim; data collected in the context of R&D studies are unlikely to be adequate, particularly in terms of their spatial and temporal scope. The challenge of defining a suitable measurement framework for collecting regulator-relevant data as evidence of congestion is non-trivial and would benefit from careful study by a multi-stakeholder expert group. It is thus outside the scope of this article to give detailed guidance to the regulator on what specific data to require in particular cases; we will, instead, briefly discuss some challenges.
There is a wide variety of parameters that shed light on resource utilization in different contexts, and it is difficult if not impossible to come up with a general purpose list. In the case of centrally-controlled cellular networks there are well-defined technical metrics which one can measure readily, e.g. exact network topology including user terminals, traffic flow patterns, throughput distributions, use of so-called resource blocks in the case of LTE, etc. Providing robust and large-scale data for packet-switched, decentralized Wi-Fi-type networks is inherently more difficult than for cellular networks, especially those operated in circuit-switched mode. Complete access to the infrastructure of a decentralized network such as one finds in unlicensed bands is infeasible; a third party observer has at best incomplete and at worst no access to the network elements. Therefore, while the exact network topology is a desirable measure that is theoretically available both for LTE and Wi-Fi, in practice it can only be provided with any accuracy at reasonable cost for LTE. In the case of Wi-Fi it would be impossible to measure it with the same accuracy, and even estimating or approximating it would be prohibitively complex and expensive.
A more fundamental difficulty is that congestion is not a directly measurable parameter. It is a broad concept that is not only a function of a variety of measurable engineering parameters but also has user experience and economic dimensions, as we explain in more detail below. Even if one focuses on technical parameters, one ends up estimating a "congestion" parameter as a function of the observed technical quantities.
Although the engineering community prefers to focus on spectrum occupancy and network metrics such as those described in the previous section, end users care about the quality of their experience. They care about the net utility of the technology and service they have purchased, that is, the value of activities they can engage in minus the costs of equipment and connections. Network metrics like link utilization or throughput are therefore unlikely to reveal much about the aggregate utility of the wireless technology to society.
We use the term utility to refer to fitness for purpose. Our intended meaning is related to the usage in economics and optimization theory. A variety of goods are valuable to a user of a wireless system, including speed (measured by throughput), the lack of delay in getting a response from a remote site (measured by latency), and robustness (e.g. the absence of connection failures). The satisfaction a user obtains from a given amount of one of these goods is the utility of that good. As the throughput increases, say, its utility increases; the user is happier. The satisfaction of performing an activity using the network suggests that the network as a whole has utility, as do the activities one performs using it. In the optimization theory sense, we assume that it is possible, and desirable, to find the optimal net utility, i.e. the utility minus the cost.
The utility depends on the service or application being used. For example, for voice service a very low throughput satisfies the user, and any extra capacity in the network will not increase satisfaction; returns rapidly diminish. On the other hand, a connection that is more than adequate for a voice call would not suffice for video, and therefore the utility of voice-adequate throughput would be minimal for video. Moreover, engineering metrics are only a subset of the attributes that determine the utility of the overall network experience; cost, ease of use, user expectations, and so on, also matter.
Sum utility in this paper means the weighted sum over different utility functions, where weights reflect the relative value of different utilities. The sum utility of a network to a user would depend on the value they ascribe to component goods, such as throughput and lack of delay. For example, someone who wants to watch a streamed video at the highest possible resolution will value throughput over delay, whereas someone playing a fast-paced online game will value rapid network response over the ability to move large amounts of data. The overall utility for a particular user is encoded in an indifference curve that shows the relative value of different goods to them: a video watcher prefers throughput over low delay, and a gamer the reverse.
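One way to make these notions concrete, in our own notation (the discussion above is deliberately informal), is

\[
U_j \;=\; \sum_{k} w_{j,k}\, u_k\!\left(x_{j,k}\right), \qquad
U_j^{\mathrm{net}} \;=\; U_j - c_j,
\]

where \(x_{j,k}\) is the level of attribute \(k\) (throughput, low delay, robustness, and so on) experienced by user \(j\), \(u_k\) is a utility function with diminishing returns, the weights \(w_{j,k}\) encode user \(j\)'s preferences (the video watcher weights throughput heavily, the gamer weights low delay), and \(c_j\) is the cost of equipment and connectivity borne by user \(j\).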
It is important to note that utilities are not easily comparable or even measurable. It is a truism in economics that utility cannot be observed directly, and needs to be inferred from observing choices, such as users' willingness to pay certain fees. Perceived value can be boosted in many ways, e.g. through reduced cost or increased service quality.
Utility depends on perspective. For an operator, e.g. a licensed cellular company or an unlicensed hotspot provider, the utility may be the profits generated from the customer's use of this good. Content providers also see a utility: the ability of customers to connect to their search engines, social networks, online games, etc. Since the operators of both licensed networks and unlicensed hotspots measure their utility through profits, market share, disadvantaging competitors, etc., the majority of users could well have very different views from operators or content providers about the optimal operating point of a network or hotspot, for example as measured by link congestion. Citizens at large may have different preferences from heavy network users. In other words, the single-user utility function may be very different from the user population utility function, which can be different again from the operator utility function. The composition of utilities appears to be very complex, both in terms of measuring individual utilities and knowing how to combine them; some sort of stochastic estimation may be the best one can do.
One therefore cannot build a network, or provide spectrum allocations, so that everyone’s wishes are always fulfilled. It is difficult if not impossible to calculate the allocation of resources that would maximize the sum over the utilities of the users using a particular spectrum allocation. Any congestion claim thus needs to be qualified by information about spectrum capacity required to provide a specified user experience for a specified fraction of the overall user population.
As is evident from the engineering review in Section 4, it is difficult to establish congestion on the basis of engineering metrics, especially if the experimenter does not have access to wireless network components that can reveal diagnostics such as buffer states and real throughput levels. Even if congestion can be reliably observed, understanding the reason for it requires information about both demand and supply, i.e. the amount of data users would ideally want to send and receive (known as offered load) and the available capacity in the network (understood here as a collection of links).
While such engineering analysis is important, technical performance indicators at specific network nodes do not provide sufficient information for policy decisions that also have to take into account variations in performance from place to place, and differences in the value of various performance attributes to different end users. Congestion on a wireless link due to excess traffic is not the only cause of user dissatisfaction; it may also be caused by interference from non-Wi-Fi radio systems, and variations in signal quality due to intermittent obstructions.
An individual end user is typically concerned with the performance of the communication system as a whole, rather than that of a single link. Even if the wireless link itself is congested, the end user may still experience acceptable information transfer rates if the communication demands are sufficiently low. Conversely, if there is limiting congestion in other parts of the network connecting a user to their desired end-point, e.g. in the local backbone or the Internet at large (see Fig. 1), the user’s dissatisfaction is no less - and may be laid unfairly at the door of the wireless connection. Naturally, adding more wireless infrastructure or allocating more spectrum for wireless operators will not ameliorate the situation in this case.
We therefore propose that congestion be replaced by the degradation of net utility when making policy determinations. In order to demonstrate that such degradation is occurring, or will occur, advocates will need to provide information about the context of degradation, such as usage scenarios. Since a rigorous treatment of wireless network utility is in its infancy, we approximate its degradation using the qualitative criteria defined in Section 5.4. Engineering congestion is certainly an indicator of disutility, but the relation between the end user perspective and engineering definition is not straightforward.
The literature reviewed above suggests that congestion claims are often on shaky ground for at least two reasons: (1) there are many ways to measure congestion at the network level, none of them completely satisfactory and none of them with an easy application to decisions in the policy realm; (2) to the extent that we have been able to assess claims of congestion, we find them unpersuasive.
There are a variety of ways to make congestion metrics more relevant to policy making, including (1) the submission into the policy making record of more detailed engineering metrics; (2) the incorporation of economic considerations through using measures like the degradation of net utility; and (3) the regulator’s use of pre-advertised qualitative criteria to judge the persuasiveness of congestion claims.
Regarding the first step of improving the consideration of engineering metrics, we do not feel qualified to make detailed recommendations. This is a complex matter that covers many technologies and interests, and the best result would be a working consensus developed by all stakeholders working together. However, we believe the onus is on regulators to lead the discussion by (1) asking for specific metrics, as discussed above; (2) developing their own evidence base by commissioning research to develop criteria and/or validate claims, an approach that is used more effectively by some regulators than others; and/or (3) seeking input from experts, industry and other interested parties on how to use metrics to assess congestion, for example by opening a rule-making consultation or requesting feedback from standing or ad hoc advisory groups. An international study group sponsored by a few leading national regulators may be helpful. It may also be helpful if the assignment is split, with one group (or groups) tasked with specifying congestion criteria, and another with testing whether congestion claims are met in a specific case.
The difficulty of judging congestion is another case of the regulatory dilemma that technology policy makers cannot make good rules without understanding engineering concerns, but that technical considerations cannot be encapsulated in a single metric that regulators can use blindly; the onus is on engineers to explain the parameter space they are using, and all parties should take care to call out the connections between technology and policy (Mahonen, Simic, Petrova, & De Vries, 2012).
A more rigorous framework of congestion metrics that have a bearing on policy decisions is necessary, but not sufficient. In order to provide a more reliable basis for debate and decision making, we propose that claims of congestion be embedded in a wider context. The party claiming congestion should clearly describe the conditions under which congestion is observed or will be expected to be observed.
Specifically, we propose that anyone claiming congestion, or more generally the degradation of service in a band due to insufficient spectrum, needs to provide support for the claims in Table 2.
We will now briefly discuss each of the criteria, which are intentionally qualitative rather than quantitative. Deciding how many scenarios are persuasive, whether degradation is significant or persistent, or what "best” means, is in the final analysis a matter for policy judgment after evidence has been presented - the essence of regulatory practice.
The scenarios noted in criterion 1 are user activities, such as engaging in a particular task or using a class of services like video streaming, reading web pages, or playing online games, performed at particular kinds of venue such as a home, apartment, conference center or airport. Since public policy addresses the wider good, it is not persuasive if problems occur in only one place, or for only one kind of activity. We do not believe that one can set the required number of scenarios upfront; "more is better” for the purposes of making a persuasive case—which is why we suggest two or more—but it is a matter of judgment since the social and economic value of scenarios, let alone the contours of what counts as a scenario at a particular moment in time, differ.
The definition of a valuable task in criterion 2 is a matter of judgment, similar to the criterion of reasonableness that has grown up in jurisprudence; the legal process is accustomed to making determinations about such ambiguous terms. It will require balancing the interests and preferences of different user groups. Thus, a claim against criterion 2 should include information on what percentage of users in different scenarios are affected by degradation of net utility, i.e. are not able to complete a valuable task in spite of being willing to pay for it. Such statistics should ideally show time-evolution of the degradation situation; the claim is stronger if the number of dissatisfied users increases over time, and if the absolute number of users is large.
Criterion 3 requires that problems are not infrequent or localized to only a few places, such as special events or exceptional locales. The claim against criterion 3 becomes stronger if there is evidence that service degradation is increasingly persistent in time, and pervasive in space. While the demonstration of Wi-Fi service degradation (or "congestion”) just in downtown Manhattan or Silicon Valley apartment complexes on its own should not be persuasive, the regulator will have to judge whether showing degradation in many major cities would be sufficient to justify nationwide spectrum re-allocation, or whether it should be in most or all.
The existence of widespread problems by criteria 1-3 is not sufficient, though. As we saw in the discussion about conference center Wi-Fi in Section 2.1, user problems are often due to inadequate infrastructure and poor engineering; a claim of congestion problems should therefore not be admitted unless it can be shown that they occur in spite of the use of industry best practices. There is no free lunch for end users, either: they should not complain about congestion if they have declined to use an available premium-priced service that offers better quality of service.
To sum up with an example, the mere claim that network congestion has occurred at some major conferences would not be compelling. To be persuasive, degradation should also occur in homes and/or airports, say (criterion 1, more than one scenario); should prevent a significant number of users from, say, staying in touch with friends, regardless of venue (criterion 2); should occur not only in one or two cases, but repeatedly in most homes and conference centers (criterion 3); and should occur in spite of the use of state of the art technology and engineering best practices, and the availability of premium services (criterion 4).
However, even if a degradation of net utility is proven, it does not necessarily follow that re-allocation of spectrum is the appropriate regulatory remedy. The next step in the case for a regulatory remedy is to examine the economic and regulatory impact of different solutions, and the change in net social welfare resulting from a re-allocation. A social welfare analysis may also be at odds with certain criteria; for example, paying a high market rate for good service (criterion 4b) may not be socially optimal if it results from usable additional frequency bands being allocated to services with a low social surplus due to (say) a small number of users who can avail themselves of alternatives.
Before discussing alternatives to new allocations in the unlicensed case, let us consider the case of cellular operators claiming a spectrum crunch. They have fundamentally three different ways to solve the problem: obtaining more spectrum, investing in new technology and/or infrastructure to increase the capacity of their existing spectrum, or using price to balance demand and supply. The situation is not fundamentally different in unlicensed operations, as link congestion in this case can be also solved by using additional bands, end-users and service providers investing in technology that improves spectral efficiency and improves network performance, or hotspot service providers pricing different levels of service differently.
Hot-spot Wi-Fi networks deployed by operators in public places have the benefit of engineering best practices and can thus be expected to withstand higher loads and competition from other devices; congestion in such situations should be scrutinized very carefully before it is used to justify re-allocation. Since personal unlicensed systems are deployed ad hoc by non-experts, widespread congestion in homes would be more likely to be persuasive in justifying new allocations. However, our reading of the situation (see Section 2.1) is that the evidence for pervasive congestion in homes is even weaker than that for public venues such as conferences and airports.
This work was motivated by ubiquitous claims that "Wi-Fi is congested.” We tried to understand what this claim might mean, and to test if it was true. We discovered that there are many ways to characterize wireless congestion, no unanimity on how to characterize service degradation, and little research about the connection between congestion and degradation. We concluded that there is as yet no hard evidence that congestion is rising to the level that would justify regulatory action.
The much-hyped degradation of Wi-Fi service due to inadequate spectrum allocation is rarely observed, and very seldom well documented. Where the appropriate investment is made in infrastructure, as at the Super Bowl or well-run conference venues, lack of spectrum is not the binding constraint. Network management folklore suggests that user dissatisfaction does not correlate with measured congestion in the network; an unhappy user does not necessarily mean that the wireless network is actually congested.
However, a more pressing challenge is that congestion, as the term is commonly used, is too ambiguous a concept to inform an evidence-based public debate. Further, congestion claims are inherently tied to economic considerations that cannot be ignored. The challenge is not to prove congestion claims right or wrong; it is more important to understand the economic and regulatory impacts of different solutions. Debate should revolve around the most efficient set of solutions to solve the potential congestion problem: infrastructure, pricing, and/or spectrum.
In order to provide a framework for a reasoned regulatory decision, we emphasized the notion of net user utility in order to place the focus on the end-user experience, to highlight that utilities need to be summed over different scenarios and user preferences, and to facilitate distinctions between operators, vendors and citizens, who have different motivations and utility functions. This led us to define a suite of user-oriented congestion criteria that can be used to judge claims that congestion merits regulatory intervention.
Our analysis does not focus on the validity of claims of spectrum congestion; we are more interested in how one tests such claims. We are not suggesting that the absence of evidence of congestion amounts to evidence for the absence of congestion. However, we would question the argument that congestion occurring somewhere, sometimes is a justification for regulatory intervention. The burden of proving that congestion is a regulatory problem lies with those making that assertion, hence the criteria in Section 5.4 for testing such claims.
The challenge for the regulators (as for service providers) is to define the reasonable net utility, i.e. service quality at a given price that users will accept. This is similar to the debate on broadband Internet access: what is the minimum throughput for broadband at a reasonable price that citizens should expect to have as a universal right regardless of where they live? Just as we cannot afford to provide fiber optic capability to every corner of even developed countries, one cannot expect that there is enough spectrum and wireless infrastructure to provide ultra-fast wireless connectivity everywhere, or that even in Manhattan everyone should have access to extremely high-speed wireless capability.
Recently the number of mobile users accessing wireless and mobile Internet services has been increasing spectacularly. Overall mobile data traffic is expected to grow nearly 11-fold between 2013 and 2018 [1]. Network operators must adapt to this traffic explosion. Traditional mechanisms for expanding network capacity require high-cost, large-scale modifications; the goal of network operators, however, is to optimize the usage of available network resources with low-cost investments. Due to the spread of WLAN networks and the proliferation of multi-access (3G/4G and Wi-Fi) mobile devices, network operators are able to design cost-effective resource management strategies based on data offloading from 3G/4G to Wi-Fi. These strategies can be even more efficient if decisions are made at the level of individual user flows, based on application requirements in terms of Quality of Service (QoS) and/or Quality of Experience (QoE) [2]. However, dynamic migration of ongoing sessions between different radio access technologies requires special mobility management solutions. These mechanisms can be divided into two main groups, namely network-based and client-based approaches. Client-based approaches give users a higher degree of freedom by allowing them to control every aspect of handover decision and execution based on the context information available at the terminal side. In contrast, with network-based mobility management the overall control falls into the hands of operators: by decreasing freedom of choice at the user side, network and traffic management can be enhanced from the operator's point of view. This work considers a client-based solution.
Network efficiency can be further increased by reducing the mobile data traffic in 3G macro network segments using femtocells. Femtocells are able to expand cell coverage and extend radio resources. As integrated femtocell/Wi-Fi networks become more and more widespread, femtocell/Wi-Fi offloading schemes also come into the picture. Femtocell/Wi-Fi offloading moves data traffic from the femtocell radio interface to the Wi-Fi interface; with this scheme, network operators can alleviate the load of the network while also providing higher data rates to end users. The growth of heterogeneous and overlapping wireless access networks demands the design and implementation of algorithms that are able to exploit the available network resources. These facts motivated us to design and develop an extensive, client-based, flow-aware, cross-layer optimized mobility management scheme for Android smartphones, and to evaluate the proposed mechanism in a femtocell/Wi-Fi based testbed environment.
Mobile Internet traffic is growing dramatically due to the penetration of multi-access smartphone devices, data-hungry mobile entertainment services such as video, music and games, and new application types such as social media, M2M (machine-to-machine) communication and C-ITS (Cooperative Intelligent Transport Systems). This has driven the standardization bodies to design and develop wireless standards, namely 3G UMTS, LTE, LTE-A, WiMAX, 802.11n/ac/ad WLANs, etc. The complementary characteristics of these architectures motivate network operators to integrate them in a supplementary and overlapping manner. Data traffic offloading between 3GPP (Third Generation Partnership Project) access networks and WLAN networks has been recognized as a key mechanism for exploiting the available network resources efficiently. The authors of [ref] introduce the basic concept of 3G/Wi-Fi seamless offloading and an application-layer switching scheme. In [ref], the architecture and the protocol stack of an Ethernet-based offloading technology are presented together with a testbed environment and measurement results. Local IP Access (LIPA) and Selected IP Traffic Offloading (SIPTO) can also play important roles for network operators in realizing cost-efficient offloading techniques. These two technologies are discussed in [ref]; however, that paper presents only theoretical results and contains no real implementation or measurements. Likewise, Multi Access PDN Connectivity (MAPCON) provides a solution that allows the mobile terminal to establish multiple PDN connections to different access networks (both 3GPP and non-3GPP accesses are supported).
The articles above present recommendations for data offloading mechanisms between 3G/4G and Wi-Fi. To expand the radio resources of legacy cellular mobile networks, femtocells are emerging as a promising solution, and a considerable body of research on femtocells has been published. The vast majority of these articles discuss the architecture of femtocells and the detailed mechanism of handovers between femto and macro cells. Jaehoon Roh et al. propose a multiple-femtocell traffic offloading scheme and analyze its performance. However, our scope in this work is closer to existing femtocell/Wi-Fi offloading schemes and flow-aware decision algorithms. The IP Flow Mobility and Seamless WLAN Offload (IFOM) standard in Rel-10 was created to enable fine-grained and seamless offload strategies: IFOM allows different IP flows belonging to the same PDN connection to be registered to different network interfaces. The benefits of IFOM can be exploited efficiently only if the User Equipment (UE) is capable of communicating via 3GPP access and WLAN simultaneously. This solution can be based on Dual-Stack Mobile IPv6 as per 3GPP Rel-8 (DSMIPv6), which guarantees IP address preservation and session continuity while the UE moves. Proxy Mobile IPv6 (PMIPv6) could also be used for IFOM purposes; however, this has not yet been standardized. Further optimization can be achieved by using intelligent decision engines that are capable of assigning application flows to the appropriate interface. In Rel-8, the Access Network Discovery and Selection Function (ANDSF) assists the UE in discovering wireless access networks and provides routing policies, rules and discovery information to facilitate appropriate network selection by the UE.
All the mechanisms introduced above are designed so that handovers are managed by the network operators. In contrast, we designed and implemented a client-based mobility management scheme based on MIP6D-NG, a client-based, multi-access Mobile IPv6 implementation with various extensions and an advanced cross-layer communication API. Both the network discovery mechanism and the flow mobility algorithms are handled by our highly customized Android smartphone with MIP6D-NG integrated; the network operator has no influence on the selection and decision process. The first publicly available Flow Bindings implementation was designed for Linux distributions by the authors of [ref]; however, their implementation supported only NEMO environments, and regular mobile nodes were not able to register or update network flows. Most papers on the subject discuss the definition and management of different flows at the protocol level. In our solution, the advanced toolset of MIP6D-NG solves all the protocol-level questions of flow mobility management by relying on the MCoA and Flow Bindings RFCs, so we do not detail them in this paper. Instead, we focus on flow-aware offloading schemes based on a built-in decision engine. In [ref], a multi-criteria decision engine is presented based on network cost, signal strength, packet loss and predefined flow weights; however, that paper introduces only theoretical results and contains no evaluation on a real implementation. Francois Hoguet et al. presented a Linux-based flow mobility environment built on Android smartphones. Although their paper introduces a real implementation, it provides neither flow mobility management nor a complex decision engine. Ricardo Silva et al. examine mobility management on Android systems. They created a custom Android ROM to use the 3G and Wi-Fi interfaces simultaneously, and the IEEE 802.21 Media Independent Handover framework is applied to support IPv6-based mobility. Compared to our architecture, however, their work also lacks flow mobility and a flow-based decision mechanism.
Fig. 2 presents the architecture of the proposed highly customized Android-based system, where cross-layer information transfer plays an essential role. We introduce each part of the system in a bottom-up approach.
Our architecture requires a special kernel configuration extended with Mobile IPv6 support, MIP6D-NG patches and kernel module modifications. To apply these changes at the kernel level of the system, we had to recompile the whole kernel source.
As Fig. 2 shows, the native layer contains all of the cross-compiled native binaries and associated libraries in use, such as Lighttpd, Pingm6 and Socat. The MIP6D-NG binaries and libraries are also located in this layer.
For multi-access communication, the Mobile Node (MN) needs the ability to communicate via two network interfaces (3G and Wi-Fi, both with IPv6 support) simultaneously; however, even the newest Android OS versions (Android 4.4) do not allow their simultaneous use. This forced us to modify the application framework layer. Recompiling the entire Android OS and kernel source code is required to apply the aforementioned modifications.
Figure 2 The proposed Android architecture for Femto/Wi-Fi offloading
In the Java layer we devised and implemented a modular Android application comprising three main parts. The first is the Radio Access Network Discovery Module (RANDM), which is designed to measure parameters of the available networks from multiple layers (e.g., signal strength, delay and packet loss). The Handover Decision and Execution Module (HDEM) can be divided into two parts: the Handover Decision Module (HDM) and the Handover Execution Module (HEM). The HEM communicates with the native MIP6D-NG daemon and creates and sends the flow register and flow update messages triggered by the decision algorithm. The register message allocates and initializes a new flow entry on the selected network interface, while the update message modifies an existing flow identified by its Flow Identifier (FID). For the cross-layer information exchange, a socket-based communication scheme was designed and developed. The HDM decides whether a handover is necessary based on the decision algorithm (described in more detail later) and directs the HEM to send a flow register or update command to MIP6D-NG. The HDM is a modular, exchangeable part of the architecture, so the offloading decision scheme can be modified easily.
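The cross-layer exchange between the HEM and the native MIP6D-NG daemon can be pictured with a minimal Java sketch. The socket port and the command strings below are illustrative placeholders rather than the actual MIP6D-NG control interface; only the pattern, a Java module driving the native daemon over a local socket, reflects the scheme described above.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

/**
 * Hypothetical sketch of the HEM side of the cross-layer channel: a Java
 * component sending flow commands to the native MIP6D-NG daemon over a local
 * socket. Port number and command syntax are assumptions for illustration.
 */
public class FlowCommandClient {

    private static final String DAEMON_HOST = "127.0.0.1"; // daemon runs on the handset
    private static final int DAEMON_PORT = 7777;           // assumed control port

    /** Registers a new flow on the given interface and returns the daemon's reply. */
    public String registerFlow(int fid, String iface, String selector) throws Exception {
        return send("FLOW_REGISTER fid=" + fid + " if=" + iface + " sel=" + selector);
    }

    /** Moves an existing flow (identified by its FID) to another interface. */
    public String updateFlow(int fid, String newIface) throws Exception {
        return send("FLOW_UPDATE fid=" + fid + " if=" + newIface);
    }

    private String send(String command) throws Exception {
        try (Socket socket = new Socket(DAEMON_HOST, DAEMON_PORT);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(command);  // one-line command to the daemon
            return in.readLine();  // e.g. an acknowledgement or error string
        }
    }
}
```

In this spirit, the HDM would call registerFlow() once per application flow at start-up and updateFlow() whenever the decision engine selects a new target interface for a flow.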
Fig. 3 presents the operation of our cross-layer optimized offloading scheme incorporating the decision algorithm. The most important input parameters are the current measurement data, the static information obtained from previous measurements of the currently used networks, and the user preferences.
Figure 3 The proposed cross-layer optimized flow mobility mechanism
The first step of the algorithm is to check the available wireless access networks. The default interface is the 3G access, so the system registers the data flows to the 3G interface using cross-layer communication between the application and network layers. If at least one Wi-Fi network is available after this step, the algorithm starts the passive measurement phase for the Wi-Fi networks. If there are no available WLANs, the algorithm keeps the flows on the 3G interface and waits for new Wi-Fi access points to appear; otherwise it starts the cross-layer measurements, obtaining the signal strength from the link layer, and packet loss, RTT and jitter from the network layer. If the decision engine does not find the parameters of the currently measured network suitable for the application flow's QoS profile, the scheme starts measuring the next available network. If the measured QoS values are appropriate, the MN connects to this Wi-Fi network and moves the corresponding flows to the Wi-Fi interface based on the flows' QoS profiles. After that, the application waits for a random time to avoid the ping-pong effect, similarly to the solution applied in [ref]. (Note that when the MN moves around the border of wireless access coverage areas, a series of unnecessary handovers may occur within a very short time, creating the so-called ping-pong effect.) The cross-layer communication mechanism allows us to trigger and execute flow updates in a different layer of the stack, which further increases the efficiency of our system. Whenever MIP6D-NG executes a flow registration or update, it sends a Flow Binding Update (FBU) message to the Home Agent, which replies with a Flow Binding Acknowledgement (FBA) message according to [ref]. The third and last part of the Java-layer application is the Source of Data Flows, which serves as a simple traffic generator producing a UDP audio stream and/or a TCP file transfer.
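The control flow of this decision procedure can be summarized in a compact Java sketch. The measurement helpers and the numeric QoS thresholds are assumptions introduced for illustration; in our architecture the real values come from RANDM and from the per-flow QoS profiles.

```java
import java.util.Collections;
import java.util.List;
import java.util.Random;

/**
 * Illustrative sketch of the cross-layer offloading decision loop of Fig. 3.
 * Measurement sources and thresholds are placeholders, not the real values.
 */
public class OffloadDecisionLoop implements Runnable {

    private final Random random = new Random();

    @Override
    public void run() {
        registerAllFlows("3g");                          // flows start on the default 3G access

        while (!Thread.currentThread().isInterrupted()) {
            List<String> wifiNetworks = scanWifiNetworks();   // passive Wi-Fi discovery
            boolean moved = false;

            for (String ssid : wifiNetworks) {
                // Cross-layer measurements: link layer + network layer.
                int rssi = measureSignalStrength(ssid);
                double loss = measurePacketLoss(ssid);
                double rttMs = measureRtt(ssid);
                double jitterMs = measureJitter(ssid);

                if (meetsQosProfile(rssi, loss, rttMs, jitterMs)) {
                    connectToWifi(ssid);
                    moveEligibleFlows("wifi");           // flows whose QoS profile allows offloading
                    moved = true;
                    break;                                // first suitable network wins
                }
                // Otherwise measure the next candidate network.
            }

            // If no suitable WLAN was found, flows stay on 3G and we wait for new APs.
            // Random back-off to avoid the ping-pong effect at coverage borders.
            sleepQuietly(5000 + random.nextInt(5000));
        }
    }

    // --- placeholder hooks; the real implementations live in RANDM/HEM -------
    private void registerAllFlows(String iface) { }
    private List<String> scanWifiNetworks() { return Collections.emptyList(); }
    private int measureSignalStrength(String ssid) { return -65; }
    private double measurePacketLoss(String ssid) { return 0.0; }
    private double measureRtt(String ssid) { return 30.0; }
    private double measureJitter(String ssid) { return 5.0; }
    private boolean meetsQosProfile(int rssi, double loss, double rtt, double jitter) {
        return rssi > -75 && loss < 0.02 && rtt < 100 && jitter < 20; // assumed thresholds
    }
    private void connectToWifi(String ssid) { }
    private void moveEligibleFlows(String iface) { }
    private void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```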
Fig. 4 presents the overall architecture of our testbed environment, designed and implemented for the real-life, femtocell-based evaluation of advanced cross-layer optimized, flow-level mobility management protocols and algorithms. In this section we introduce the main parts of the testbed.
In our proposed testbed environment the UE entity is realized by an Android smartphone, namely an HTC Desire S device. This smartphone must be able to run the MIP6D-NG daemon and therefore requires a special kernel environment, so we extended the kernel with the required modifications. Porting MIP6D-NG to Android was a non-trivial task, because it required libraries and header files that either do not exist on Android OS or, if they exist, differ from their original GNU/Linux implementations. To supply the missing components, we created a cross-compiler toolchain containing ARM-compatible versions of all the necessary components; this compiler pack is based on the NDK stand-alone toolchain and extended with our own libraries and header files. MIP6D-NG requires multi-access communication via two network interfaces (3G and Wi-Fi, both with IPv6 support) simultaneously. Although recent Android devices usually possess multiple radio interfaces, the Android OS currently enforces a battery-saving policy in which only one interface can be active at a time. In fact, the built-in mechanisms for network interface management in Android phones are very simple: if a 3G interface is active and Wi-Fi becomes available, 3G is shut down, while if only a 3G network is available, the Wi-Fi interface remains in the down state. To change this behaviour, it was necessary to modify the source code of the Service module of the Android OS that manages network connections. This module contains ConnectivityService.java, in which the NetworkStateTrackerHandler class is responsible for the state management of network interfaces: a switch-case statement implements each scenario. We implemented a new case as an extension: if the 3G interface is active and Wi-Fi becomes available, 3G remains active, so real multi-access becomes usable. This meant that the Android OS itself also required modifications. Another issue to be solved was that the 3G interface does not support native IPv6 on most Android devices. To solve this problem we configured an OpenVPN connection with a bridged interface on the Android smartphone. The OpenVPN server is located on a router, which provides an appropriate IPv6 prefix for the smartphone's 3G interface through the OpenVPN tunnel.
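To give an impression of the framework-level change, the following simplified Java stand-in mimics the patched network-state handling. The event code and helper methods are approximations introduced for this sketch, not the exact AOSP identifiers or our actual patch; the point is merely that the Wi-Fi-connected case no longer tears down the mobile data interface.

```java
import android.net.ConnectivityManager;
import android.net.NetworkInfo;
import android.os.Handler;
import android.os.Message;

/**
 * Illustrative stand-in for the patched network-state handler inside
 * ConnectivityService. EVENT_NETWORK_CONNECTED and the helper methods are
 * simplified assumptions; the real patch edits the switch-case statement of
 * NetworkStateTrackerHandler in the AOSP sources.
 */
public class PatchedNetworkStateHandler extends Handler {

    static final int EVENT_NETWORK_CONNECTED = 1; // assumed event code

    @Override
    public void handleMessage(Message msg) {
        if (msg.what == EVENT_NETWORK_CONNECTED) {
            NetworkInfo info = (NetworkInfo) msg.obj;

            if (info.getType() == ConnectivityManager.TYPE_WIFI) {
                // Stock behaviour would tear down mobile data at this point.
                // Extension: keep 3G up alongside Wi-Fi so that MIP6D-NG can
                // bind different flows to the two interfaces simultaneously.
                connect(info);
                return;
            }

            // Every other network type keeps the single-active-interface logic.
            teardownLowerPriority(info);
            connect(info);
        }
    }

    // Placeholder hooks standing in for the framework-internal routines.
    private void connect(NetworkInfo info) { }
    private void teardownLowerPriority(NetworkInfo info) { }
}
```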
Figure 4 Overall testbed setup
In order to perform the required modifications inside the source code of the Android OS and the kernel, a build environment was created in which we were able to produce a custom ROM image from our MIP6D-NG-ready kernel source code and our modified Android OS code. We used the CyanogenMod Android sources and the Andromadus kernel tree distribution as the base code platform for our extensions. The result is a highly customized Android 4.1.2 and kernel 3.0.57 with the appropriate patches and settings.
To measure the different network parameters in the Java layer we use built-in APIs and external binaries. The TelephonyManager API provides the signal strength, while packet loss and delay are calculated from the output of Pingm6. To run Pingm6 (which is not a so-called system binary but part of the MIP6D-NG distribution package) from the Java layer, we had to use an external library, RootCommands. The HEM module of our application is able to direct the Android OS to connect to an available Wi-Fi network using the WifiConfiguration and WifiManager APIs.
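As an illustration of how these built-in APIs are combined, the sketch below registers a signal-strength listener and asks the OS to join a chosen WLAN. The SSID and passphrase handling are simplified assumptions, and the Pingm6/RootCommands part (packet loss and delay) is omitted here.

```java
import android.content.Context;
import android.net.wifi.WifiConfiguration;
import android.net.wifi.WifiManager;
import android.telephony.PhoneStateListener;
import android.telephony.SignalStrength;
import android.telephony.TelephonyManager;

/**
 * Minimal sketch of measurement and connection helpers built on standard
 * Android APIs, in the spirit of the RANDM and HEM modules described above.
 */
public class RadioAccessHelper {

    private volatile int lastGsmSignal = -1; // raw GSM signal strength (0..31, 99 = unknown)

    /** Starts listening for 3G signal-strength updates via TelephonyManager. */
    public void startSignalMonitoring(Context context) {
        TelephonyManager tm =
                (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
        tm.listen(new PhoneStateListener() {
            @Override
            public void onSignalStrengthsChanged(SignalStrength signalStrength) {
                lastGsmSignal = signalStrength.getGsmSignalStrength();
            }
        }, PhoneStateListener.LISTEN_SIGNAL_STRENGTHS);
    }

    public int getLastGsmSignal() {
        return lastGsmSignal;
    }

    /** Asks the OS to join the given WPA-protected WLAN (HEM functionality). */
    public void connectToWifi(Context context, String ssid, String presharedKey) {
        WifiManager wm = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);

        WifiConfiguration conf = new WifiConfiguration();
        conf.SSID = "\"" + ssid + "\"";                 // SSIDs must be quoted
        conf.preSharedKey = "\"" + presharedKey + "\"";

        int netId = wm.addNetwork(conf);                // returns -1 on failure
        if (netId != -1) {
            // 'true' disables the other configured WLANs so the device
            // associates with this one; the 3G path is preserved by the
            // patched ConnectivityService described earlier.
            wm.enableNetwork(netId, true);
            wm.reconnect();
        }
    }
}
```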
The femtocell in the architecture provides 3G access over UMTS band 1 with HSPA support. It supports HSUPA up to category 6, allowing a maximum of 5.76 Mbps in the uplink, and HSDPA up to category 10, allowing a maximum of 14 Mbps in the downlink. This type of device is typically used by network operators as a residential gateway to extend network coverage inside a building, either for an enterprise or for an end user's home. Its transmission power can be set up to 5 dBm. The femtocell within our architecture is controlled by its own network infrastructure, which is embedded in an associated PC (not depicted in Fig. 3). The controller software provides the 3G core network features for authenticating the subscribers (i.e., the mobile devices), for providing voice call services, and for accessing the IP networks.
The essential role of the Home Agent entity is to manage the flow bindings for the MN's Care-of Addresses (CoA) and Home Address (HoA) according to the MCoA standard. In this terminology, a flow is defined as a set of IP packets matching a traffic selector. A traffic selector can identify the source and destination IP addresses, the transport protocol number, the source and destination port numbers and other fields in the IP and higher-layer headers. Each flow is referenced by a unique Flow Identifier (FID). MIP6D-NG routes the incoming (outgoing) packets from the HA (MN) to the MN (HA) based on the routing and rule policies defined for the individual flows. The Home Agent is realized by a Dell Inspiron 7720 notebook running a MIP6D-NG daemon configured for Home Agent functionality. This entity also requires a special kernel configuration, i.e., a MIP6D-NG-compatible kernel.
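To make the traffic selector and FID terminology concrete, the following Java sketch models a flow binding entry as described above. The field set mirrors the textual definition; the class layout itself is an illustrative assumption and not MIP6D-NG's internal representation.

```java
/**
 * Illustrative data model for the MCoA/Flow Bindings concepts described above.
 * The actual MIP6D-NG structures differ and are not shown here.
 */
public class FlowBinding {

    /** A traffic selector: the packet fields used to classify a flow. */
    public static class TrafficSelector {
        final String srcAddress;     // source IPv6 address (or prefix)
        final String dstAddress;     // destination IPv6 address (or prefix)
        final int protocol;          // transport protocol number, e.g. 6 = TCP, 17 = UDP
        final int srcPort;           // 0 = wildcard
        final int dstPort;           // 0 = wildcard

        TrafficSelector(String src, String dst, int proto, int sport, int dport) {
            this.srcAddress = src;
            this.dstAddress = dst;
            this.protocol = proto;
            this.srcPort = sport;
            this.dstPort = dport;
        }
    }

    final int fid;                   // unique Flow Identifier
    final TrafficSelector selector;  // which packets belong to this flow
    String boundCareOfAddress;       // CoA of the interface currently carrying the flow

    public FlowBinding(int fid, TrafficSelector selector, String initialCoA) {
        this.fid = fid;
        this.selector = selector;
        this.boundCareOfAddress = initialCoA;
    }

    /** A Flow Binding Update simply re-points the flow to another CoA. */
    public void applyFlowBindingUpdate(String newCareOfAddress) {
        this.boundCareOfAddress = newCareOfAddress;
    }
}
```

In these terms, moving a flow between 3G and Wi-Fi amounts to re-pointing an existing FID to the Care-of Address of the newly selected interface, which is what the FBU/FBA exchange between the MN and the HA carries.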
In our testbed the core router is an ASUS WL500 running DD-WRT v24 (CrushedHat distribution). Two OpenVPN daemons run on this router. On the one hand, an OpenVPN server provides an appropriate IPv6 address for the 3G connection of the Android smartphone using RADVD. On the other hand, an OpenVPN client operates as an IPv6-over-IPv4 or IPv6-over-IPv6 tunnel interconnecting the testbed with our university's IPv6 network, independently of the router's actual IP access. This makes the overall architecture portable: in the worst case, only legacy IPv4 connectivity is needed for the core router. A Wanulator network emulator node is also deployed in the environment. This entity is a Linux distribution that allows us to manipulate the QoS parameters (e.g., delay, packet loss, jitter) of the link to which it is connected (i.e., the Wi-Fi connections in the depicted setup). Using Wanulator we were able to evaluate different decision algorithms under arbitrary sets of network QoS parameters.
In order to demonstrate the feasibility of our scheme and to evaluate the proposed offloading algorithm, we implemented two measurement scenarios in the testbed. In the first scenario we measured the throughput of an HTTP video stream (H.264-encoded video with 854x480 resolution, served by a Lighttpd web server) over TCP, transmitted from the mobile node towards a correspondent node, with and without cross-layer mobility support. We defined and registered two different types of data flows: the HTTP video stream over TCP and a VoIP flow over UDP. According to the QoS policies, the UDP flow is routed through the 3G interface during the entire measurement session, whereas the TCP flow is moved by our decision engine between the 3G and Wi-Fi accesses. The first and second boxes in Fig. 5 (labeled Wi-Fi and Femtocell) depict reference scenarios in which both the TCP and UDP flows are transferred via Wi-Fi or 3G, respectively, without any flow handover being initiated. In contrast, in the third box the application moves the TCP flow from 3G to Wi-Fi after 30 seconds (and from Wi-Fi to 3G in the fourth box). The measurement session took 90 seconds; the average bandwidth of the Wi-Fi network used was 300 KBps, while the femtocell was able to provide 100 KBps. The average bandwidth was calculated as the ratio of the amount of transmitted data to the elapsed time. As Fig. 5 shows, executing the vertical handover does not cause a significant reduction in throughput. When both the TCP and UDP flows were assigned to the Wi-Fi interface, the quality of the VoIP and video streams deteriorated; thus, separating the flows onto different interfaces improved the quality of both applications. In this scenario, the flow mobility management scheme between the MN and the HA operated as follows:
Figure 5 Throughput measurement results
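For completeness, the per-flow policies of the first scenario could be expressed on the terminal side roughly as follows. The class shape and numeric limits are assumptions for this sketch only; what comes from the measurement setup is the assignment itself, i.e. the UDP VoIP flow pinned to 3G and the TCP video flow left movable between 3G and Wi-Fi.

```java
import java.util.Arrays;
import java.util.List;

/**
 * Illustrative per-flow policy set for the first measurement scenario:
 * the UDP VoIP flow stays on 3G for the whole session, while the TCP video
 * flow may be moved between 3G and Wi-Fi by the decision engine.
 */
public class ScenarioPolicies {

    public static class FlowPolicy {
        final int fid;                // Flow Identifier used towards MIP6D-NG
        final String description;
        final String pinnedInterface; // non-null = never offloaded
        final double maxLossRatio;    // QoS limits consulted by the HDM (assumed values)
        final double maxRttMs;

        FlowPolicy(int fid, String description, String pinnedInterface,
                   double maxLossRatio, double maxRttMs) {
            this.fid = fid;
            this.description = description;
            this.pinnedInterface = pinnedInterface;
            this.maxLossRatio = maxLossRatio;
            this.maxRttMs = maxRttMs;
        }

        boolean isMovable() {
            return pinnedInterface == null;
        }
    }

    /** The two flows used in the throughput measurement of Fig. 5. */
    public static List<FlowPolicy> measurementFlows() {
        return Arrays.asList(
                new FlowPolicy(1, "VoIP audio over UDP", "3g", 0.01, 150.0),
                new FlowPolicy(2, "HTTP/TCP H.264 video stream", null, 0.05, 300.0));
    }
}
```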
Fig. 6 presents the components of the average vertical handover latency measured during flow mobility events. The two most significant parts are the latency between the Java and native layers, and the delay between the reception of the flow update command and the sending of the FBU message towards the HA by the MIP6D-NG daemon. In our testbed these latency values are quite significant due to the performance limitations of the HTC Desire S device: this model is an older smartphone (announced in 2010) optimized for Android 2.2, whereas our modifications resulted in a highly customized Android 4.1.2 that demands considerably more from the hardware. This relatively high latency does not cause serious practical issues, because the registered flows are not affected during these procedures: data transfer continues undisturbed until the third phase (the FBU/FBA signaling) starts. This last component is the delay between the sent FBU and the received FBA messages; its average latency is less than 1 second.
Figure 6 The total vertical flow handover latency in its three main components
Current offloading standards are defined by 3GPP so that the operator can manage whether flows are routed through the operator network or through the Internet. This allows the operator to carry the "important" traffic on its own network and to route the "best-effort" traffic (e.g., YouTube) over the Internet. Our client-based, flow-aware, cross-layer optimized offloading scheme would allow similar types of services to be offered, but as an overlay on the operator network, where either a party outside the network operator or the mobile device itself controls how the flows of the mobile node are routed over its multiple accesses. We confirmed the applicability of our solution by evaluating it in an integrated femtocell/Wi-Fi testbed environment with the help of extensive real-life measurements. As part of our future work, we plan to refine our algorithm (e.g., by decreasing the measurement period), optimize our implementation (reducing the internal Android signaling delays), combine our client-based approach with network-based mobility management techniques (e.g., by integrating Home Agent initiated handovers into the scheme), and further enhance our decision engine.