Research

Curriculum Learning for mmWave Beam Selection

Curriculum Learning [1] is a biologically inspired procedure for training machine learning models. While it is common practice to train neural networks (NNs) on batches of data sampled uniformly at random from the training dataset, curriculum learning mimics the way humans learn: it starts from simpler training samples that unveil general and coarse concepts,…
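The easy-to-hard scheduling in the excerpt can be sketched as a data-ordering scheme. Everything below is an illustrative assumption, not the post's actual implementation; for beam selection, the difficulty score might, for instance, be derived from SNR:

```python
import random

def curriculum_batches(samples, difficulty, n_stages=3, batch_size=4, seed=0):
    """Yield one batch per stage from a training pool that grows from easy to hard."""
    rng = random.Random(seed)
    ordered = sorted(samples, key=difficulty)          # easiest first
    for stage in range(1, n_stages + 1):
        # At stage k the pool holds the easiest k/n_stages fraction of the data.
        pool = list(ordered[: max(batch_size, len(ordered) * stage // n_stages)])
        rng.shuffle(pool)                              # still sample randomly within the pool
        yield pool[:batch_size]

# Toy data in which each sample's value is also its difficulty score.
batches = list(curriculum_batches(list(range(12)), difficulty=lambda s: s))
```

In the final stage the pool is the full dataset, so training ends in the usual uniform-sampling regime.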

Anomaly Detection

Managing and monitoring the performance of Internet of Things (IoT) systems is a chore, albeit a necessary one in today’s life. With hundreds of thousands of things to monitor, anomaly detection can help identify where an error is occurring, improving root-cause investigation and allowing for faster technical assistance. Anomaly detection aids in monitoring…

More humanoid agents with Hierarchical Reinforcement Learning

As you may have guessed from its name, Hierarchical Reinforcement Learning (HRL) is a family of reinforcement learning algorithms that decompose a problem into a hierarchy of subproblems or subtasks, in which the higher-level tasks invoke the primitive lower-level tasks. The goal of HRL is to learn a multilayer policy that performs control at different levels…
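The two-level control structure described above can be sketched with hand-written policies standing in for learned ones; in actual HRL both levels would be trained. The 1-D corridor task and all function names are assumptions for illustration only:

```python
def low_level_policy(pos, subgoal):
    """Primitive action: one unit step toward the current sub-goal."""
    return 1 if subgoal > pos else -1

def high_level_policy(pos, final_goal, horizon=3):
    """Higher-level task: propose a sub-goal at most `horizon` steps away."""
    step = max(-horizon, min(horizon, final_goal - pos))
    return pos + step

def run_episode(start, final_goal):
    pos, trajectory = start, [start]
    while pos != final_goal:
        subgoal = high_level_policy(pos, final_goal)   # the manager acts rarely
        while pos != subgoal:                          # the worker acts every step
            pos += low_level_policy(pos, subgoal)
            trajectory.append(pos)
    return trajectory

path = run_episode(0, 7)
```

Note the temporal abstraction: the high-level policy makes only three decisions here, while the low-level policy issues all seven primitive moves.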

Collaborative Sensing in Cellular Networks

One potential solution for tackling the problem of RF congestion is integrating sensing and communication infrastructures in a manner where they share the same frequency band in a controlled way. In our latest paper, submitted to the IEEE Journal on Selected Areas in Communications (JSAC), we have proposed a mechanism to perform multi-radar…

Grassmann Manifolds – Subspace Comparisons

In this article we will review the basic concepts related to Grassmann manifolds and how they are applied to subspace comparison. The original motivation for this post comes from the field of clustering in multi-antenna wireless communication [1], which is the focus of my research in the context of the Windmill project. Quick…
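As a warm-up for the comparisons the post discusses, here is a minimal sketch for the simplest Grassmannian, Gr(1, n), whose points are lines through the origin: the distance between two lines is their single principal angle, arccos of |⟨u, v⟩| for unit spanning vectors (function names below are illustrative):

```python
import math

def unit(v):
    """Normalize a vector to unit length."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def grassmann_distance_1d(u, v):
    """Geodesic distance on Gr(1, n): the principal angle between two lines."""
    u, v = unit(u), unit(v)
    cos_theta = abs(sum(a * b for a, b in zip(u, v)))  # |cos|: the span's sign is irrelevant
    return math.acos(min(1.0, cos_theta))              # clamp against round-off

d_same = grassmann_distance_1d([1.0, 0.0], [-2.0, 0.0])  # same line, opposite direction
d_orth = grassmann_distance_1d([1.0, 0.0], [0.0, 3.0])   # orthogonal lines
```

For k-dimensional subspaces the same idea generalizes: the singular values of Uᵀ V, with U and V orthonormal bases, are the cosines of the k principal angles.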

What Is Open RAN, and How Does ML Play a Big Role in Its Development?

In this post we will be talking about the following topics: What is Open RAN? Open RAN and 5G. Current challenges and limitations. The importance of AI/ML for Open RAN. Open RAN vs. legacy network architecture To start talking about Open RAN, we can first have a look at how a traditional network architecture is designed:…

Beyond mMIMO Large-scale Wireless Sensing

The aim of this post is to describe the work “Assessing Wireless Sensing Potential with Large Intelligent Surface”, published in the IEEE Open Journal of the Communications Society. I will try to present the main ideas behind it in a simplified manner. Please note that for a more detailed description of the methods and the…

Slotted ALOHA with Reinforcement Learning

In this article we will discuss the potential of reinforcement learning (RL) to learn a backoff control policy for slotted ALOHA-type random access. We will use deep reinforcement learning (DRL) to learn a policy for a multi-user random access system. Slotted ALOHA Background The slotted ALOHA (sALOHA) protocol [1] for random access in wireless networks has…
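The DRL approach in the full post is far more involved; as a toy sketch of the underlying idea, here is a tabular epsilon-greedy bandit that learns a transmit probability for one node against a single non-adapting background node. All parameters and names are assumptions, not the article's setup:

```python
import random

def learn_tx_probability(n_slots=20000, actions=(0.1, 0.3, 0.5, 0.9),
                         eps=0.1, lr=0.05, p_other=0.3, seed=1):
    """Epsilon-greedy bandit over candidate transmit probabilities."""
    rng = random.Random(seed)
    q = [0.0] * len(actions)                       # running value estimate per action
    for _ in range(n_slots):
        if rng.random() < eps:                     # explore a random action
            a = rng.randrange(len(actions))
        else:                                      # exploit the current best estimate
            a = max(range(len(actions)), key=q.__getitem__)
        me = rng.random() < actions[a]             # do we transmit in this slot?
        other = rng.random() < p_other             # does the background node?
        reward = 1.0 if (me and not other) else 0.0  # success = sole transmitter
        q[a] += lr * (reward - q[a])               # exponential moving average update
    return actions[max(range(len(actions)), key=q.__getitem__)]

best_p = learn_tx_probability()
```

In this toy setting, transmitting aggressively is optimal because the contender never adapts; with multiple learning users sharing the channel, collisions between learners are what make a genuine backoff policy emerge.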

Emergence of MAC Protocols with Multi-Agent Reinforcement Learning

In this post we will discuss how we can allow medium access control (MAC) protocols to emerge with multi-agent reinforcement learning (MARL). Current MAC protocols are designed by engineers as a predefined set of rules, but have we addressed the question of what happens if we let the network come up with its own MAC…

Optimizing On-demand Slices Selection in Core Networks

The evolution of wireless core networks provides a flexible framework for network traffic management, the most prominent concept being network slicing. In this article, an on-demand network slice selection problem in the core network will be introduced in Section 1, and the optimization formulation of the slice selection problem will be defined in Section 2.…
