Resource management in fog computing: Overview and mathematical foundation
Source Title: Swarm Intelligence: Theory and Applications in Fog Computing, Beyond 5G Networks, and Information Security
Fog computing is a distributed computing paradigm that extends the capabilities of cloud computing to the edge of the network, closer to the data source or user. Resource management in fog computing plays a crucial role in optimizing system performance and efficiency, yet it is a complex task: it must contend with device heterogeneity, dynamic workloads, and limited resources while also addressing energy efficiency, task offloading, load balancing, quality of service (QoS) management, and security and privacy concerns. The chapter delves into the challenges posed by the diverse nature of devices, dynamic workloads, and the distributed architecture, emphasizing the need for adaptive resource allocation strategies. It provides a systematic, mathematical approach to resource management, including the formulation of classical optimization problems such as the Knapsack Problem, Traveling Salesman Problem, Transportation Problem, Vehicle Routing Problem, and N-Queens Problem. Furthermore, it underscores the significance of load balancing, task offloading, and resource provisioning as adaptive strategies for dynamically allocating resources, ensuring effective utilization while avoiding underutilization. The chapter offers valuable insights into the complexities of managing resources in fog computing and provides a holistic view of the challenges, strategies, and mathematical formulations involved in resource management across various contexts.
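As a rough illustration of the kind of formulation the chapter refers to, task admission on a single fog node can be cast as a 0/1 Knapsack Problem. The formulation below is a minimal sketch, not taken from the chapter itself; the symbols (task utilities v_i, resource demands w_i, node capacity C, binary admission decisions x_i) are assumed for illustration.

```latex
% Illustrative 0/1 Knapsack formulation of task admission on one fog node.
% v_i: utility of admitting task i, w_i: resource demand of task i,
% C: node capacity, x_i in {0,1}: admission decision (all symbols assumed).
\begin{aligned}
\max_{x} \quad & \sum_{i=1}^{n} v_i x_i \\
\text{s.t.} \quad & \sum_{i=1}^{n} w_i x_i \le C, \\
& x_i \in \{0, 1\}, \qquad i = 1, \dots, n.
\end{aligned}
```

Analogous sketches apply to the other problems named above, e.g. the Transportation Problem for distributing workload between fog and cloud nodes, or the Vehicle Routing Problem for service placement along mobile paths.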
Evolutionary Algorithms for Edge Server Placement in Vehicular Edge Computing
Source Title: IEEE Access, Quartile: Q1
Vehicular Edge Computing (VEC) is a critical enabler for intelligent transportation systems (ITS), providing low-latency and energy-efficient services by offloading computation to the network edge. Effective edge server placement is essential for optimizing system performance, particularly in dynamic vehicular environments characterized by mobility and variability. The Edge Server Placement Problem (ESPP) addresses the challenge of minimizing latency and energy consumption while ensuring scalability and adaptability in real-world scenarios. This paper proposes a framework to solve the ESPP using real-world vehicular mobility traces to simulate realistic conditions. To achieve optimal server placement, we evaluate the effectiveness of several advanced evolutionary algorithms: the Genetic Algorithm (GA), Non-dominated Sorting Genetic Algorithm II (NSGA-II), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Teaching-Learning-Based Optimization (TLBO). Each algorithm is analyzed for its ability to optimize multiple objectives under varying network conditions. Our results show that ACO performs best, producing well-distributed Pareto-optimal solutions and balancing trade-offs effectively, while GA and PSO exhibit faster convergence and better energy efficiency, making them suitable for scenarios requiring rapid decisions. The proposed framework is validated through extensive simulations and consistently outperforms state-of-the-art methods in reducing latency and energy consumption. This study provides actionable insights into algorithm selection and deployment strategies for VEC, addressing mobility, scalability, and resource optimization challenges. The findings contribute to the development of robust, scalable VEC infrastructures, enabling the efficient implementation of next-generation ITS applications.
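To make the evolutionary approach concrete, the sketch below shows a minimal single-objective Genetic Algorithm that picks K server sites from candidate locations to minimize a distance-based latency proxy over vehicle positions. It is an illustrative toy, not the paper's framework: the random coordinates, the single objective, and the parameters K, POP, and GENS are all assumptions made for the example.

```python
# Minimal GA sketch for an edge-server placement problem: choose K sites from
# candidate locations to minimize mean vehicle-to-nearest-server distance.
# Illustrative only; coordinates, objective, and parameters are assumed.
import math
import random

random.seed(0)
CANDIDATES = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(30)]
VEHICLES = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
K, POP, GENS = 5, 40, 60

def fitness(placement):
    """Mean distance from each vehicle to its nearest selected server (lower is better)."""
    sites = [CANDIDATES[i] for i in placement]
    return sum(min(math.dist(v, s) for s in sites) for v in VEHICLES) / len(VEHICLES)

def crossover(a, b):
    """Combine two parents by sampling K distinct sites from their union."""
    pool = list(set(a) | set(b))
    return tuple(sorted(random.sample(pool, K)))

def mutate(p):
    """Swap one selected site for a random unused candidate."""
    p = list(p)
    unused = [i for i in range(len(CANDIDATES)) if i not in p]
    p[random.randrange(K)] = random.choice(unused)
    return tuple(sorted(p))

population = [tuple(sorted(random.sample(range(len(CANDIDATES)), K))) for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness)                 # rank by latency proxy
    parents = population[:POP // 2]              # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = min(population, key=fitness)
print("best placement:", best, "mean distance:", round(fitness(best), 3))
```

The multi-objective variants studied in the paper (e.g. NSGA-II, or ACO with Pareto archiving) would replace the single scalar fitness with joint latency and energy objectives and a non-dominated sorting or pheromone-update step, respectively.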