H. Morshedlou; A.R. Tajary
Abstract
Edge computing is an evolving approach for meeting the growing computing and networking demands of end devices and smart things. It allows computation to be offloaded from cloud data centers to the network edge for lower latency, security, and privacy preservation. Although energy efficiency in cloud data centers has been widely studied, energy efficiency in edge computing has remained largely uninvestigated. In this paper, a new adaptive and decentralized approach is proposed for improving energy efficiency in edge environments. In the proposed approach, edge servers collaborate with each other to achieve an efficient plan. The approach is adaptive and considers workload status in local, neighboring, and global areas. The results of the conducted experiments show that the proposed approach can improve energy efficiency at the network edge; for example, at a task completion rate of 100%, it decreases the energy consumption of edge servers from 1053 kWh to 902 kWh.
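As one illustration of the kind of three-level decision such a decentralized approach might make, the Python sketch below shows a toy placement rule that checks local headroom first, then neighboring servers, then falls back to the cloud. The threshold, the names, and the rule itself are illustrative assumptions, not the paper's algorithm.

    from dataclasses import dataclass, field

    @dataclass
    class EdgeServer:
        name: str
        load: float                                  # utilization in [0, 1]
        neighbors: list = field(default_factory=list)

        def place_task(self, high=0.8):
            # Local area: keep the task if this server has headroom.
            if self.load < high:
                return self.name
            # Neighboring area: offload to the least-loaded neighbor
            # that still has headroom.
            idle = [n for n in self.neighbors if n.load < high]
            if idle:
                return min(idle, key=lambda n: n.load).name
            # Global fallback: everything nearby is saturated, so send
            # the task to the cloud.
            return "cloud"

    # Example: a saturated server hands the task to its least-loaded neighbor.
    a, b = EdgeServer("edge-a", 0.9), EdgeServer("edge-b", 0.3)
    a.neighbors = [b]
    print(a.place_task())   # -> edge-b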
A.R. Tajary; H. Morshedlou
Abstract
With the advent of many processor cores on a single chip in many-core processors, the demand for exploiting these on-chip resources to boost application performance has increased. Task mapping is the problem of assigning application tasks to these processor cores to achieve lower latency and better performance. Much research has focused on minimizing the path length between tasks that demand high communication bandwidth. Although these methods can result in lower latency, they can also create congestion in the network, which lowers the network throughput. In this paper, a throughput-aware method is proposed that uses simulated annealing for task mapping. The method is evaluated on several real-world applications, and simulations are conducted on a cycle-accurate network-on-chip (NoC) simulator. The results illustrate that the proposed method can achieve higher throughput while maintaining the delay in the NoC.
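To make the simulated-annealing formulation concrete, the Python sketch below anneals a task-to-core mapping on a 2D mesh. The bandwidth-weighted hop-count cost, the swap move, and the geometric cooling schedule are standard textbook choices that stand in for the paper's throughput-aware objective, which is not reproduced here.

    import math
    import random

    def hop_distance(a, b, w):
        # Manhattan hop count between cores a and b on a w-wide 2D mesh.
        return abs(a % w - b % w) + abs(a // w - b // w)

    def mapping_cost(mapping, comm, w):
        # Bandwidth-weighted total hops; a stand-in for the paper's
        # throughput-aware objective.
        return sum(bw * hop_distance(mapping[i], mapping[j], w)
                   for (i, j), bw in comm.items())

    def anneal_mapping(num_tasks, comm, w, t=10.0, cooling=0.95, steps=100):
        mapping = list(range(num_tasks))            # task k starts on core k
        cur = mapping_cost(mapping, comm, w)
        best, best_cost = mapping[:], cur
        while t > 1e-3:
            for _ in range(steps):
                i, j = random.sample(range(num_tasks), 2)
                mapping[i], mapping[j] = mapping[j], mapping[i]  # try a swap
                new = mapping_cost(mapping, comm, w)
                # Always accept improvements; accept worse moves with
                # Boltzmann probability so the search can escape local minima.
                if new <= cur or random.random() < math.exp((cur - new) / t):
                    cur = new
                    if cur < best_cost:
                        best, best_cost = mapping[:], cur
                else:
                    mapping[i], mapping[j] = mapping[j], mapping[i]  # undo
            t *= cooling                                            # cool down
        return best, best_cost

    # Example: four tasks on a 2x2 mesh with one high-bandwidth pair.
    comm = {(0, 1): 10.0, (1, 2): 2.0, (2, 3): 1.0}
    print(anneal_mapping(4, comm, w=2))

A swap of two tasks' core assignments is the usual move for mapping problems because it keeps the mapping a valid permutation without any repair step.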