Current and next-generation orthogonal frequency division multiplexing-based wireless cellular systems strive to maximize spectral efficiency and meet the increasing demand for higher data rates despite being severely constrained by limited spectrum availability. Rate adaptation and scheduling are two key enabling techniques employed to achieve these challenging goals. In these techniques, the channel conditions drive the determination of the user the base station (BS) transmits to and its modulation and coding scheme (MCS) for downlink transmission over a group of subcarriers (SCs), which is called a subchannel. To do so, the BS requires feedback of channel state information (CSI) from the users in the uplink.
In order to ensure that the feedback overhead does not overwhelm the uplink, several reduced feedback schemes have been proposed in the literature. However, reduced feedback leads to a loss in throughput since the CSI available at the BS is insufficient to effectively carry out rate adaptation and scheduling. We propose novel BS-side estimation techniques to mitigate this loss in throughput. These techniques incorporate statistical information, such as the mean signal-to-noise ratio and correlation across SCs, along with the fed back CSI to systematically determine the MCS and user for transmission.
We consider the practically relevant best-M and threshold-based feedback schemes. In the former, each user feeds back CSI only for its M strongest SCs, while in the latter, each user feeds back, for each SC, the index of the highest rate MCS at which it can reliably receive. We develop two BS-side estimation techniques, namely, a minimum mean square error estimator-based approach and a throughput-optimal rate adaptation and scheduling policy. The proposed techniques significantly improve the cell throughput without requiring any additional feedback. The throughput-optimal policy also leads to a new benchmark for the achievable cell throughput for the respective feedback schemes. Further, for the threshold-based feedback scheme, it decouples the resolution of feedback from the number of MCSs available at the BS. This enables a system designer to reduce the feedback resolution without lowering the cell throughput by provisioning for more MCSs at the BS.
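As a concrete illustration of the best-M scheme, consider the following Python sketch. It is hypothetical: the exponential SNR model, the function names, and the schedule-the-highest-reported-SNR rule are our illustrative assumptions, not the proposed estimators. Each user reports only its M strongest SCs, and the BS schedules each SC to the reporting user with the highest fed-back SNR:

```python
import random

def best_m_report(snrs, m):
    """Feedback from one user: indices and SNRs of its M strongest SCs."""
    top = sorted(range(len(snrs)), key=lambda i: snrs[i], reverse=True)[:m]
    return {sc: snrs[sc] for sc in top}

def schedule(reports, num_sc):
    """Per SC, schedule the reporting user with the highest fed-back SNR.
    SCs that nobody reported are left unscheduled (None)."""
    alloc = [None] * num_sc
    for sc in range(num_sc):
        candidates = [(snr[sc], user) for user, snr in reports.items() if sc in snr]
        if candidates:
            alloc[sc] = max(candidates)[1]
    return alloc

random.seed(1)
num_sc, m = 8, 3
# hypothetical i.i.d. Rayleigh-fading SNRs (exponential) for 4 users
snrs = {user: [random.expovariate(1.0) for _ in range(num_sc)] for user in range(4)}
reports = {user: best_m_report(s, m) for user, s in snrs.items()}
allocation = schedule(reports, num_sc)
```

SCs that no user reported illustrate exactly the gap the BS-side estimation techniques above target: without additional statistical information, the BS cannot choose an MCS for them.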
Members: Jobin Francis, Vineeth Kumar
Opportunistic selection, which aims to select the best node from a set of available nodes in order to improve overall system performance, has gained attention in many wireless systems such as relay-aided cooperative communication systems, wireless sensor networks, and ad hoc networks. In it, each node possesses a real-valued metric based on which the best node is selected. However, a major challenge is that the nodes are geographically separated from each other, and a node cannot decide by itself whether it is the best node. Distributed selection algorithms, such as the timer-based back-off algorithm and splitting-based selection algorithms, have therefore been proposed for this.
The splitting-based selection algorithm is a time-slotted algorithm in which each node locally decides to transmit in a slot if its metric lies between two thresholds. At the end of each slot, a coordinating node called the sink feeds back to all the nodes one of the following outcomes: idle (if no node transmitted), success (if exactly one node transmitted), or collision (if two or more nodes transmitted). Based on this feedback, the nodes update their thresholds and the algorithm continues. A success feedback implies that the best node has been selected; hence, the algorithm terminates.
The splitting-based selection algorithm has attracted a great deal of research interest for two reasons: (i) it is guaranteed to select the best node, and (ii) it is fast and scalable, as it provably selects the best node in 2.467 slots, on average, even when the number of nodes tends to infinity. However, several other problems related to the splitting algorithm remain open, and we focus on developing efficient mechanisms to solve them.
Members: Reneeta Sara Isaac
Device-to-device (D2D) communication is a promising solution for next generation cellular systems, in which D2D users share the available spectrum with the cellular users under the control of the base station. The D2D users can bypass the base station and communicate with each other directly. D2D communication is practically appealing because it reduces the traffic load on the base station, improves frequency reuse, increases cell throughput, and reduces latency.
Given a set of user equipments, some of which are D2D pairs while the rest are cellular users (CUs), several new challenges such as mode selection, user scheduling, rate adaptation, subchannel allocation, and power control arise. Mode selection involves determining whether a subchannel should be allocated to only a D2D pair (Dedicated Mode); to only a cellular user (Cellular Mode); or to both together (Underlay Mode). A critical problem that arises due to the addition of D2D users in the cell is that, in underlay mode, the additional co-channel interference due to simultaneous data transmissions from the cellular user and the D2D pairs reduces the cell throughput.
A critical and common assumption is that complete instantaneous channel state information for all links is available at the base station, including that of the interference link between the cellular user and the D2D receiver. This assumption increases the feedback overhead in the system. Hence, we focus on methods that reduce the interference link feedback overhead, on optimal resource allocation by the base station when it has only partial channel state information, and on evaluating their ability to improve cell throughput.
Members: Sai Kiran B., Bala Venkata Ramulu G.
In-band full duplex enhances the spectral efficiency and reduces the latency of a wireless network by enabling simultaneous transmission and reception over the same channel at the wireless devices in the network. It is a promising technology for enhancing the performance of wireless base stations with small coverage areas. Therefore, it is one of the candidate technologies being considered for 5G cellular networks.
However, several new system design challenges arise in implementing full duplex in cellular wireless networks, as it introduces new interference scenarios. Simultaneous transmission and reception at a wireless device causes transmit power leakage into the device's receiver, which is known as self-interference. The self-interference degrades the signal-to-interference-plus-noise ratio of the received signal and can even drown out the received signal if not cancelled. It is mitigated by advanced circuit design techniques such as analog- and digital-domain signal cancellation and antenna isolation. Another new source of interference arises because the uplink and downlink users communicate simultaneously over the same channel in networks that employ full-duplex base stations. The simultaneous presence of the uplink with the downlink over the same channel causes cross-link interference to the downlink user. The severity of this interference depends on the scheduling and power control algorithms used at the base station. We are working on the design and performance analysis of cross-link interference-aware radio resource allocation algorithms for scheduling users and controlling their transmit powers and rates.
Wireless sensor networks (WSNs) are finding increasing application in environmental monitoring, military surveillance, industrial automation, infrastructure security, smart homes, and intelligent transportation. They consist of sensing nodes that probe the environment and report the sensed data to a fusion node. In practical deployments of WSNs, running cables to power the sensors is often cumbersome or impractical. Hence, the sensor nodes are equipped with an energy storage device such as a pre-charged battery.
In many applications, the most significant source of energy consumption is message transmission. As the sensor nodes transmit data to the fusion node, the energy stored in their batteries gets depleted and eventually the sensor network dies. Therefore, ensuring energy efficiency is crucial to increasing the lifetime of the network. Ordered transmissions is an approach to improve energy efficiency. In it, more informative messages are transmitted first, followed by less informative ones. The transmissions are halted upon accumulating sufficient evidence at the fusion node. This reduces the number of transmissions by a factor of at least two without degrading the system's detection performance.
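The halting idea can be sketched in Python under simple assumptions of ours, not necessarily the project's exact model: each message is a log-likelihood ratio (LLR), nodes transmit in decreasing order of |LLR|, and the fusion node stops as soon as the untransmitted LLRs, whose magnitudes cannot exceed the last one received, can no longer flip the sign of the running sum:

```python
def ordered_decision(llrs):
    """Ordered transmissions sketch: receive LLRs in decreasing magnitude and
    halt once the remaining nodes cannot change the sign of the running sum.
    Returns (decision, number_of_transmissions)."""
    order = sorted(llrs, key=abs, reverse=True)
    total, n = 0.0, len(order)
    for k, llr in enumerate(order, start=1):
        total += llr
        # each untransmitted |LLR| is at most |llr|, so the remaining nodes
        # can move the sum by at most (n - k) * |llr| in either direction
        if abs(total) >= (n - k) * abs(llr):
            return (1 if total >= 0 else 0), k
    return (1 if total >= 0 else 0), n

decision, used = ordered_decision([2.0, -0.5, 1.2, 0.3, -0.1])
```

By construction, the early decision agrees with the one the full sum of all five LLRs would give; here it is reached after only three of the five transmissions.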
Energy harvesting (EH) is a different approach to address the issue of limited network lifetimes. In WSNs with EH, the sensor nodes harvest energy from natural or man-made sources in the environment, such as solar, thermal, electromagnetic, or mechanical sources. The harvested energy is used to replenish the batteries of the sensor nodes. Therefore, an EH sensor node never permanently runs out of energy in its battery. However, it can occasionally run out of energy, in which case it fails to transmit its messages to the fusion node. These networks, thus, promise perpetual operation and have attracted considerable attention recently.
We focus on the ordered transmissions approach when some of the transmissions can be missing, which is motivated by the use of EH in WSNs. The absence of readings requires a new set of decision rules at the fusion node. We develop a new timer scheme to order the transmissions and use the extra information gained from ordering to set up new decision rules. These new rules not only reduce the average number of transmissions but also give better detection performance compared to the conventional approach in which the transmissions are not ordered.
Members: Sai Kiran P.
The aim of the project is to design and deploy an energy harvesting wireless sensor network for airplane environments. Specifically, we are interested in increasing the security features in an airplane by monitoring the status of the overhead storage using energy harvesting sensor nodes. To save energy further, the sensor nodes use the Bluetooth Low Energy (BLE) protocol for communication. Owing to the lack of professional BLE simulators, our task is to develop a BLE protocol module in Network Simulator-3 (NS-3). Our next objective is to study the feasibility of such an energy efficient network for a given scenario.
Members: Debanjan Sadhukhan, Chinmay Bhat
Antenna selection (AS) provides a low hardware complexity solution for exploiting the spatial diversity benefits of multiple antenna technology. In receive AS, the receiver does not receive and process signals from all its receive antennas. Instead, it dynamically selects a subset comprising antennas with the best instantaneous channel conditions to the transmitter, and only processes signals through them. This enables the receiver to employ fewer expensive radio frequency (RF) chains, which consist of components such as a low noise amplifier (LNA), down-converter, and analog-to-digital converter (ADC). Similarly, in transmit AS, the transmitter employs fewer transmit RF chains than the available number of antennas. In addition to its lower complexity, AS is attractive because it provably achieves full diversity order, i.e., it provides as much robustness against fading in wireless channels as a full complexity receiver.
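The subset selection step itself is simple; the snippet below is a minimal sketch (the function name and gain values are our illustrative assumptions) that keeps one antenna per available RF chain:

```python
def select_antennas(gains, num_rf_chains):
    """Receive AS: keep the antennas with the largest instantaneous channel
    gains, one per RF chain, and process signals only from those."""
    ranked = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)
    return sorted(ranked[:num_rf_chains])

# four antennas but only two RF chains: antennas 1 and 2 are selected
subset = select_antennas([0.2, 1.5, 0.9, 0.1], 2)
```

In practice the gains are not known exactly; they must be estimated via training, which is what the next paragraph addresses.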
Our research has focused on training for receive antenna selection. The training procedure is the crucial first step that enables AS in practice. It is through training that the receiver estimates the channel gains of the various antennas and determines which antenna subset to select. The inevitable presence of noise during the estimation process and the time-varying nature of the wireless channel can lead to inaccurate selection and incorrect data decoding, which together increase the overall symbol error rate (SER) observed in the system.
Cooperative communications is a promising technology in which different wireless nodes help each other at the physical and multiple access layers in forwarding messages to their intended recipients. Doing so enables the wireless networks to exploit their inherent geographically distributed nature and their large coverage.
In a relay-based multihop cooperative wireless network, such as the one shown in Fig. 2, one or more relays forward a message from the source to the destination. Ideally, having more relays forward a message helps improve performance. However, this is practically difficult since tight symbol-level synchronization needs to be ensured across all the geographically separated relays. Selection solves this problem. In selection, depending on the current channel realizations, only the best relay, which is the one with the most favourable channel realization, is selected to forward the message. Selection circumvents the synchronization problem since only the selected node transmits. Yet, despite its simplicity, it achieves the highest diversity order, as was also the case with antenna selection.
Splitting-based selection: The splitting-based selection algorithm, which we analysed, is a time-slotted multiple access contention algorithm in which each node autonomously decides whether or not to transmit in a given time slot. This decision is based on the node's local channel gain knowledge, which is captured in the form of a real-valued metric. In each slot, only those nodes whose metrics lie between two thresholds transmit. At the end of every slot, the controller feeds back a three-state outcome indicating idle (when no node transmitted), success (when exactly one node transmitted and was decoded successfully), or collision (when two or more nodes transmitted and none could be decoded). The nodes update their thresholds in each slot based on this feedback. The algorithm guarantees that the first node to successfully transmit to the controller is the best node, i.e., the one with the highest metric among all the nodes.
The splitting-based selection algorithm is attractive because it is extremely fast. For example, it can select the best node in just 2.47 slots on average, even in the worst case where the number of available nodes tends to infinity. It can be sped up even further by using power control.
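A simplified variant of the algorithm can be simulated as follows, working in CDF space so that the metrics are i.i.d. uniform on (0, 1). The threshold-update rules below are a textbook-style binary-splitting sketch, not necessarily the exact variant we analysed, but they preserve the two properties above: the selected node is always the best one, and selection takes only a few slots on average.

```python
import random

def split_select(u):
    """Select the index of the maximum of u via splitting.
    Nodes with metric in (lo, hi] transmit; per-slot feedback is
    idle / success / collision, inferred here from the transmitter count."""
    n = len(u)
    lo, hi, lo_min = 1.0 - 1.0 / n, 1.0, 0.0
    slots = 0
    while True:
        slots += 1
        tx = [i for i, x in enumerate(u) if lo < x <= hi]
        if len(tx) == 1:                     # success: the best node is isolated
            return tx[0], slots
        if len(tx) >= 2:                     # collision: keep the upper half
            lo_min, lo = lo, (lo + hi) / 2.0
        elif hi == 1.0 and lo_min == 0.0:    # idle, no collision yet: move down
            hi, lo = lo, max(0.0, lo - 1.0 / n)
        else:                                # idle after a collision: backtrack
            hi = lo
            lo = (lo_min + hi) / 2.0

random.seed(2)
results = []
for _ in range(500):
    u = [random.random() for _ in range(20)]
    best, slots = split_select(u)
    results.append((best == u.index(max(u)), slots))
always_best = all(ok for ok, _ in results)
avg_slots = sum(s for _, s in results) / len(results)
```

The 2.47-slot figure quoted above is for the optimally tuned algorithm; this untuned sketch is close but not identical.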
Timer-based selection: The timer-based selection algorithm instead uses a back-off timer mechanism to select the best node. Every node sets its timer as a function of its metric and transmits a packet when its timer expires. The key idea is that the metric-to-timer mapping is a deterministic, monotone non-increasing function. This is enough to ensure that the first node to transmit is the best node. The timer-based selection algorithm is attractive because of its simplicity and its distributed nature. However, packets from two nodes whose timers expire within a time Δ of each other collide and cannot be decoded. Therefore, the performance of the algorithm is fundamentally constrained by the vulnerability window, Δ.
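A minimal sketch of such a staircase mapping and of the collision rule is given below. The uniform stair widths are an illustrative assumption of ours; the optimal stair lengths derived in our work are not uniform.

```python
def timer_selection(metrics, t_max, delta, num_steps):
    """Staircase metric-to-timer mapping: higher metric -> earlier expiry.
    Returns the index of the selected node, or None on a collision, i.e.,
    when the two earliest timers expire within delta of each other."""
    def timer(u):                      # metrics assumed to lie in [0, 1)
        level = min(int(u * num_steps), num_steps - 1)
        return (num_steps - 1 - level) * (t_max / num_steps)

    expiry = sorted((timer(u), i) for i, u in enumerate(metrics))
    if len(expiry) > 1 and expiry[1][0] - expiry[0][0] < delta:
        return None                    # within the vulnerability window
    return expiry[0][1]

winner = timer_selection([0.95, 0.2, 0.5], t_max=10.0, delta=0.5, num_steps=10)
```

Node 0, which has the highest metric, gets the earliest timer and is selected; had another node's metric fallen on the same stair, both packets would have collided.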
Unlike the splitting-based algorithm, it requires no feedback during the selection process. The optimal metric-to-timer mapping, which maximizes the probability of selection success given a pre-specified maximum duration of selection Tmax, was derived by us. It was shown to be a practically feasible staircase mapping, which is depicted in Fig. 3. The optimal 'stair' lengths, as shown in the figure, were also derived.
Selection and Data Transmission Trade-off: In every cooperative system that uses selection, one invariably faces a fundamental trade-off related to the selection duration. The longer the selection duration, the more accurate the selection. However, this reduces the fraction of time available for data transmission by the selected node. On the other hand, shrinking the selection duration increases the odds that the selection algorithm fails to select the best node. As a result, the system loses out on the spatial diversity benefits of selection. We addressed this trade-off by considering both non-adaptive and adaptive rate and power modes of transmission, and showed that the selection process is often far from perfect at the optimal system operating point. This is unlike what has been assumed in the literature.
Energy harvesting (EH) networks offer the possibility of maintenance-free, perpetual network operation. In these networks, the nodes harvest energy from the environment using solar, vibration, thermoelectric, and other physical phenomena. Unlike conventional battery-powered nodes that die once their pre-charged batteries drain out, an energy harvesting node can replenish its battery and remain available essentially forever, provided the communication protocols are engineered to ensure that no more energy is drained than is harvested. The energy harvesting functionality motivates a significant redesign of the physical and multiple access communication protocols, because minimizing energy consumption ceases to be the dominant design goal. Instead, the protocols must handle the randomness in the energy harvested and ensure that the energy required for communication and sensing, if any, can be met. Further, the inclusion of even a few energy harvesting nodes promises significant improvements in the lifetime of conventional wireless networks.
A simple theoretical model that captures the interactions between the important parameters governing the communication link performance of a single EH node was developed. In this work, each EH node is allowed a fixed number of retransmission attempts to send information to a destination, as shown in Fig. 4. Unlike for conventional nodes, a packet may not reach the destination (outage) not only because of fading and noise in the communication channel but also because a transmission did not occur due to insufficient energy stored in the battery. The outage probability analysis brings out the critical importance of the energy profile and the energy storage capability on EH link performance, and the insights turn out to be different for slow and fast fading channels. The analysis showed that properly tuning the transmission parameters of the EH node and having even a small energy storage capability considerably improves EH link performance.
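The interplay between the energy profile, the storage capability, and retransmissions can be mimicked with a small Monte Carlo sketch. The exponentially distributed per-slot energy arrivals and the i.i.d. per-attempt success probability are our illustrative assumptions, not the analytical model of the work above:

```python
import random

def outage_probability(num_packets, max_attempts, p_success, e_tx,
                       battery_cap, harvest_mean, seed=0):
    """Fraction of packets in outage: a packet is lost if every allowed
    attempt either fails over the channel or is skipped because the
    battery holds less energy than one transmission needs."""
    rng = random.Random(seed)
    battery, outages = battery_cap, 0
    for _ in range(num_packets):
        delivered = False
        for _ in range(max_attempts):
            # harvest first, capped by the finite battery capacity
            battery = min(battery_cap, battery + rng.expovariate(1.0 / harvest_mean))
            if battery >= e_tx:
                battery -= e_tx
                if rng.random() < p_success:
                    delivered = True
                    break
        if not delivered:
            outages += 1
    return outages / num_packets
```

Sweeping `battery_cap` and `harvest_mean` lets one explore the qualitative effect described above, namely how the energy profile and storage capability shape the outage probability.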
Interference plays a crucial role in code division multiple access (CDMA) based cellular communication systems, which use pseudo-random spreading codes for transmitting data. Therefore, an accurate characterization of the statistics of interference is an essential first step in cellular system design and analysis. While downlink co-channel interference is well characterised, the same is not true for uplink interference from neighbouring cells. This is because the interfering signals from the neighbouring cells undergo shadowing and fading in their respective wireless channels. The number of interfering signals is random since the number of users per cell that generate them is random. Power control and cell selection, which affect the interfering users' transmit powers, further exacerbate this randomness. We developed an alternate characterization of the uplink co-channel interference, which showed that the lognormal distribution characterizes the uplink interference statistics well, and significantly better than the conventionally assumed Gaussian distribution.
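The intuition that a sum of lognormally distributed interference powers is itself well approximated by a lognormal can be illustrated with the classical Fenton-Wilkinson moment-matching approximation. This is a standard textbook technique, not the characterization developed in our work; it simply picks the lognormal with the same mean and variance as the sum:

```python
import math

def fenton_wilkinson(mus, sigmas):
    """Fit a single lognormal exp(N(mu, sigma^2)) to a sum of independent
    lognormals exp(N(mus[i], sigmas[i]^2)) by matching mean and variance.
    Returns (mu, sigma) of the fitted lognormal."""
    mean = sum(math.exp(m + s * s / 2.0) for m, s in zip(mus, sigmas))
    var = sum((math.exp(s * s) - 1.0) * math.exp(2.0 * m + s * s)
              for m, s in zip(mus, sigmas))
    sigma2 = math.log(1.0 + var / mean ** 2)
    return math.log(mean) - sigma2 / 2.0, math.sqrt(sigma2)

# sanity check: a "sum" of one lognormal is returned unchanged
mu, sigma = fenton_wilkinson([0.5], [0.8])
```

By construction, the fitted lognormal preserves the exact mean and variance of the interference sum, which is why it tracks the aggregate uplink interference far better than a Gaussian with the same two moments.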