2nd Workshop on Flexible Resource and Application Management on the Edge


CHARITY project partner Hanna Kavalionak (National Research Council of Italy, ISTI-CNR) will chair the 2nd Workshop on Flexible Resource and Application Management on the Edge (FRAME). This online workshop is co-located with ACM HPDC 2022 (the 31st International Symposium on High-Performance Parallel and Distributed Computing), which will take place in Minneapolis, Minnesota, United States.

The Workshop agenda is available here: https://www.accordion-project.eu/frame-2nd-workshop-on-flexible-resource-and-application-management-on-the-edge/ 

You may register for FRAME here: https://www.hpdc.org/2022/registration/ 

The project partners will present two papers: the first analyses work performed under CHARITY in the field of Storage, the second in the field of Fault Tolerance.

Paper “Towards a Distributed Storage Framework for Edge Computing Infrastructures”

Due to the continuous development of the Internet of Things (IoT), the volume of data these devices generate is expected to grow dramatically in the future. As a result, managing and processing such massive amounts of data at the edge becomes a vital issue. Edge computing moves data and computation closer to the client, enabling latency- and bandwidth-sensitive applications that would not be feasible using cloud and remote processing alone. Nevertheless, implementing an efficient edge-enabled storage system is challenging due to the distributed and heterogeneous nature of the edge and its limited resource capabilities. To this end, we propose a lightweight hybrid distributed edge/cloud storage framework which aims to improve the Quality of Experience (QoE) of end-users by migrating data close to them, thus reducing data transfer delays and network utilization. The proposed edge storage component (ESC) exploits the Dynamic Lifecycle Framework in order to enable transparent and automated access for containerized applications to remote workloads. The effectiveness of the ESC is evaluated using a number of resource utilization and Quality of Service (QoS) metrics.
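To illustrate the kind of decision such a hybrid edge/cloud storage framework faces, the sketch below picks a location for a data item by preferring the lowest-latency edge node with spare capacity and falling back to the cloud otherwise. This is a hypothetical, simplified example; the node names, capacity model, and placement rule are assumptions for illustration, not the actual ESC design.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float  # measured client-to-node latency
    free_mb: int       # remaining storage capacity

def place(item_mb: int, edge_nodes: list[Node], cloud: Node) -> Node:
    """Pick the lowest-latency edge node that can hold the item;
    fall back to cloud storage when no edge node has capacity."""
    candidates = [n for n in edge_nodes if n.free_mb >= item_mb]
    if not candidates:
        return cloud  # no edge capacity: keep the item in the cloud
    return min(candidates, key=lambda n: n.latency_ms)

edges = [Node("edge-a", 8.0, 512), Node("edge-b", 3.5, 64)]
cloud = Node("cloud", 45.0, 10**6)
print(place(128, edges, cloud).name)  # edge-b lacks capacity -> edge-a
```

A real system would also weigh access frequency and migration cost, but even this toy rule captures why moving data toward its clients cuts transfer delays and wide-area network traffic.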

Paper “An Automated Pipeline for Advanced Fault Tolerance in Edge Computing Infrastructures”

The very fabric of Edge Computing is intertwined with the need to orchestrate and manage a huge number of heterogeneous computational resources. On top of that, the demanding Quality of Service (QoS) requirements of the Internet of Things (IoT) applications that run on these resources dictate that robust Fault Tolerance mechanisms be established. These mechanisms should guarantee that the requirements are upheld regardless of any changes in task production rate. To that end, we propose an Automated Pipeline for Advanced Fault Tolerance (APAFT), whose components are designed to operate as functional blocks of an automated closed control loop. Furthermore, the proposed pipeline can carry out Horizontal Scaling operations in a proactive manner. These Proactive Scaling capabilities are achieved through a dedicated Deep Learning (DL)-based component that performs multi-step prediction. Our work introduces a number of mechanisms that leverage the benefits of the multi-step format in a refined manner: having access to information about multiple future instances allows us to design automated resource orchestration strategies that cater to the specific characteristics of each type of computational node in the Edge infrastructure.
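A minimal sketch of how a multi-step forecast can drive proactive horizontal scaling: provision for the peak predicted load over the forecast horizon, so capacity is in place before the load arrives. In the paper the forecast comes from a DL model; here it is simply a list of predicted task-arrival rates, and the function name, per-replica capacity, and control-interval framing are illustrative assumptions rather than APAFT's implementation.

```python
import math

def replicas_needed(forecast: list[float], per_replica_rate: float,
                    min_replicas: int = 1) -> int:
    """Size the replica set for the peak predicted task rate over the
    multi-step horizon, never dropping below a minimum."""
    peak = max(forecast, default=0.0)  # worst predicted step
    return max(min_replicas, math.ceil(peak / per_replica_rate))

# Predicted tasks/s for the next 5 control intervals:
forecast = [120.0, 180.0, 260.0, 240.0, 150.0]
print(replicas_needed(forecast, per_replica_rate=100.0))  # ceil(260/100) = 3
```

Scaling on the whole horizon rather than the next step alone is what makes the loop proactive: replicas can be started early enough to absorb the predicted peak, and different provisioning rules (e.g. mean vs. peak) can be chosen per node type.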