Network Function Virtualization (NFV) is an emerging technology that consolidates network functions onto high-volume servers, storage, and switches located anywhere in the network. Virtual Network Functions (VNFs) are chained together to provide specific network services, called Service Function Chains (SFCs). Given Quality of Service (QoS) requirements and network features and states, SFCs are served by performing two tasks: VNF placement and link embedding on the substrate network. Reducing deployment cost is a desired objective for all service providers in cloud/edge environments, as it increases their profit from the demanded services. However, increasing resource utilization in order to decrease deployment cost may increase service latency and, consequently, increase SLA violations and decrease user satisfaction. To this end, we formulate a multi-objective optimization model for joint VNF placement and link embedding that reduces deployment cost and service latency subject to a variety of constraints. We then solve the optimization problem using two heuristic-based algorithms that perform close to the optimum for large-scale cloud/edge environments. Since the optimization model involves conflicting objectives, we also investigate a Pareto-optimal solution that optimizes the multiple objectives as much as possible. The efficiency of the proposed algorithms is evaluated using both simulation and emulation. The evaluation results show that the proposed optimization approach succeeds in minimizing both cost and latency, with results within 5% of the optimal solution obtained by Gurobi.
Over the past few years, user traffic and the use of virtualization technologies have been growing very fast in communication networks. The excessive need to develop new services and deploy the required network resources, as well as to maintain, upgrade, and expand physical infrastructures, imposes remarkable operational expenditure (OPEX) and capital expenditure (CAPEX) on network service providers. NFV is able to significantly decrease capital and operational costs and make resource allocation more efficient and flexible. An efficient deployment of SFCs plays a major role in decreasing the monetary costs of service providers. However, there can be a tradeoff between different objectives, as they may be contradictory. For example, although increasing resource utilization in order to minimize the number of active physical nodes can reduce deployment cost, at the same time it may increase the latency of demanded services. This can happen because the aggregated traffic on physical nodes and links increases, which degrades the latency objective. On the other hand, minimizing network latency can result in an increase in deployment and network cost because more resources are needed for providing the services.
In this paper, we propose a multi-objective optimization model to minimize joint deployment cost of service providers and end-to-end latency of SFCs. We formulate the problem of SFC placement using Mixed Integer Programming (MIP) model which takes into account a range of constraints such as resource capacity, acceptable latency requirements, affinity and anti-affinity. The proposed model enables service providers to accept and serve more user requests with strict latency needs while keeping overall costs low.
Although much research attention has been given to VNF placement, to the best of our knowledge, none of the previous works investigated the optimization problem of joint deployment cost and end-to-end latency of SFCs to find a Pareto-optimal solution that can be used to make a tradeoff between monetary cost and network delay in cloud/edge service providers. In addition, to achieve the efficient and accurate solutions needed for real-world networks, we propose comprehensive cost and latency models taking into consideration different effective factors such as energy consumption, software licenses, computation resources, and network usage (to define the cost model), as well as queuing delay, processing and virtualization delay, and propagation delay (to define the latency model).
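To make the structure of these two models concrete, the following is a minimal sketch of cost and latency functions built from the factors listed above. All prices, field names, and the placement schema are illustrative assumptions, not the paper's actual formulation or values.

```python
def deployment_cost(placement, energy_price=0.1, license_fee=1.0,
                    cpu_price=0.5, bw_price=0.05):
    """Total cost = energy + software licenses + compute + network usage.
    Prices and the placement schema are assumed for illustration."""
    energy = sum(v["watts"] for v in placement["vnfs"]) * energy_price
    licenses = len(placement["vnfs"]) * license_fee
    compute = sum(v["cpu"] for v in placement["vnfs"]) * cpu_price
    network = sum(l["bw"] for l in placement["links"]) * bw_price
    return energy + licenses + compute + network

def e2e_latency(placement):
    """End-to-end latency = queuing + processing/virtualization + propagation."""
    queuing = sum(v["queue_ms"] for v in placement["vnfs"])
    processing = sum(v["proc_ms"] for v in placement["vnfs"])
    propagation = sum(l["prop_ms"] for l in placement["links"])
    return queuing + processing + propagation
```

The point of keeping both functions over the same placement structure is that any candidate solution can be scored on both objectives at once.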
We propose a multi-objective optimization model, formulated as a MIP, to jointly minimize the deployment cost and the end-to-end latency subject to a range of constraints such as routing, capacity, delay, and location constraints.
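To illustrate the shape of this joint objective, the toy solver below exhaustively searches VNF-to-node assignments for one chain, minimizing a weighted sum of cost and latency under a CPU-capacity constraint. It is a brute-force sketch of the weighted-sum idea only, not the paper's MIP or heuristics; the node names, costs, and delays are invented for the example.

```python
from itertools import product

# Hypothetical toy substrate: values are illustrative assumptions.
NODES = {"edge1": {"cost": 2.0, "delay": 1.0, "cpu": 4},
         "edge2": {"cost": 3.0, "delay": 2.0, "cpu": 4},
         "cloud": {"cost": 1.0, "delay": 8.0, "cpu": 16}}
CHAIN = [{"name": "fw", "cpu": 2}, {"name": "nat", "cpu": 2}]  # one SFC

def solve_joint(alpha):
    """Search all assignments, minimizing alpha*cost + (1-alpha)*latency
    subject to per-node CPU capacity."""
    best, best_obj = None, float("inf")
    for assign in product(NODES, repeat=len(CHAIN)):
        used = {n: 0 for n in NODES}
        for vnf, node in zip(CHAIN, assign):
            used[node] += vnf["cpu"]
        if any(used[n] > NODES[n]["cpu"] for n in NODES):
            continue  # capacity constraint violated
        cost = sum(NODES[n]["cost"] for n in assign)
        latency = sum(NODES[n]["delay"] for n in assign)
        obj = alpha * cost + (1 - alpha) * latency
        if obj < best_obj:
            best, best_obj = assign, obj
    return best, best_obj
```

Setting `alpha=1.0` recovers pure cost minimization (everything consolidated on the cheap cloud node), while `alpha=0.0` recovers pure latency minimization (everything on the low-delay edge node), mirroring the tradeoff discussed above.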
The rest of the paper is organized as follows. The related works are discussed in Section 2. In Section 3, we formulate the problem of SFC placement in order to jointly minimize the deployment cost and network latency. In Section 4, we illustrate our proposed algorithms to solve the optimization problem in detail. Section 5 provides the performance evaluation of the proposed algorithms compared with the benchmark algorithms. Finally, we conclude the paper in Section 6.
VNF placement has recently received much attention in the literature. The related works can be categorized by various aspects such as system models, objectives, and proposed solutions. A number of studies have formulated the placement problem as an integer program and solved it using optimization solvers, or applied heuristic or greedy approaches to place VNFs. Bari et al.  proposed a Mixed Integer Linear Programming (MILP) model to optimize utilization and reduce network operational costs, and solved it using CPLEX. The authors in  proposed an ILP formulation to decrease the number of deployed instances. In [6, 7], the authors proposed a MILP and a Mixed Integer Quadratically Constrained Program (MIQCP), respectively, for VNF placement in data centers. These approaches are effective for finding exact or near-optimal solutions, but only for small infrastructures, and are not suitable for networks with a large set of physical nodes. A polynomial-complexity heuristic has been proposed for VNF placement in . The placement problem is modeled as a multi-stage directed graph with associated costs, and VNFs are then placed by leveraging a Viterbi algorithm . Agrawal et al.  introduced a queuing-based system model along with a heuristic to minimize network latency, but they considered only CPU in their work and neglected other resources. The authors in [10, 11, 12] proposed QoS prediction strategies for mobile e-commerce environments, edge environments, and IoT services, respectively.
An online application placement for mobile edge computing (MEC) has been proposed by Wang et al. . The authors modeled the placement problem using Markov Decision Processes (MDPs) and tried to reduce the state space of the problem. They derived a new MDP model in which only the distance between servers and users defines the states, and then proposed a greedy algorithm to place the applications. The authors in  introduce an edge server placement approach to maximize energy efficiency in MEC. They proposed a particle swarm optimization to find the optimal solution. Another work  proposed a service chain placement system to minimize the queuing delay on nodes and the use of infrastructure resources. Two techniques are proposed for the placement problem: a round-robin-based heuristic and a MIP-based optimization method. The results were compared with the general MIP. However, this work only considers physical nodes and neglects the impact of routing on service latency.
The previous works either consider cost and latency as separate objectives or treat latency as a constraint in the optimization model. In contrast, we propose a multi-objective optimization model that jointly considers cost and latency for the problem of VNF placement and link embedding in cloud/edge systems. The proposed model is able to provide a near-optimal solution for cost-only optimization, latency-only optimization, or a Pareto-optimal solution considering joint cost and latency. Therefore, service providers can set a weight factor based on their needs to make a tradeoff between cost and latency. In addition, the previous works considered some important factors in modeling the cost and, especially, the latency functions while neglecting others, which may lead to SLA violations in real networks. In contrast, we propose a comprehensive model covering various effective factors, which makes it more practical for real edge/cloud systems.
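The weight-factor mechanism can be sketched as follows: given a set of candidate placements scored on both objectives, sweeping the weight selects different points, and the non-dominated points form the Pareto front. The candidate names and (cost, latency) values below are invented for illustration, not results from the paper.

```python
# Hypothetical candidate placements with (cost, latency) outcomes.
CANDIDATES = {"consolidated": (10.0, 40.0),
              "balanced":     (14.0, 25.0),
              "distributed":  (22.0, 12.0),
              "wasteful":     (25.0, 30.0)}

def best_for_weight(alpha):
    """Pick the candidate minimizing alpha*cost + (1-alpha)*latency."""
    return min(CANDIDATES, key=lambda k: alpha * CANDIDATES[k][0]
                                         + (1 - alpha) * CANDIDATES[k][1])

def pareto_front():
    """Candidates not dominated by another in both cost and latency."""
    front = []
    for name, (c, l) in CANDIDATES.items():
        dominated = any(c2 <= c and l2 <= l and (c2, l2) != (c, l)
                        for c2, l2 in CANDIDATES.values())
        if not dominated:
            front.append(name)
    return front
```

Here "wasteful" is dominated by "distributed" (worse on both objectives) and never selected for any weight, which is the sense in which the weight factor lets a provider move along the Pareto front.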
In this section, the service function chain placement problem in a cloud/edge network is formulated as an optimization problem. The goal of the optimization model is to map the service chains onto physical nodes so that cost and latency are minimized while the placement constraints are satisfied. Tables 2 and 3 list the variables used in this paper.
Considering the above-mentioned issues, we present an optimization model for the VNF placement and link embedding problem which aims to 1) minimize the deployment cost of service chains, 2) minimize the end-to-end service latency, and 3) minimize joint cost and latency. In the latter case, we aim to obtain Pareto optimality between cost and delay.
VNFs in each service chain need to communicate with each other and forward packets according to the virtual links between them. These virtual links are mapped onto physical paths comprising one or several physical links. Depending on the topology of the cloud/edge system, these physical connections may have different hop counts and hop distances, which can incur varying amounts of delay while a service chain is processed. On the other hand, consolidating VNFs on a limited number of nodes can increase traffic congestion in the network and thus the latency of service chains. Based on the expected QoS and the Service Level Agreement (SLA) between users and service providers, each service chain may have a deadline by which it must be executed. Moreover, latency-critical services such as remote surgery impose stringent latency requirements on NFV infrastructures.
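The mapping of a virtual link onto a multi-hop physical path can be sketched as a shortest-path search over per-link propagation delays. This is a generic Dijkstra sketch of link embedding, not the paper's embedding algorithm; the three-node topology and delay values are assumptions for the example.

```python
import heapq

# Illustrative substrate topology: per-link propagation delays (ms).
LINKS = {("a", "b"): 2.0, ("b", "c"): 3.0, ("a", "c"): 10.0}

def _adjacency():
    """Build an undirected adjacency list from the link table."""
    adj = {}
    for (u, v), d in LINKS.items():
        adj.setdefault(u, []).append((v, d))
        adj.setdefault(v, []).append((u, d))
    return adj

def embed_virtual_link(src, dst):
    """Map a virtual link onto the minimum-propagation-delay physical path
    (Dijkstra); returns (node path, total propagation delay)."""
    adj, settled = _adjacency(), set()
    heap = [(0.0, src, [src])]
    while heap:
        d, u, path = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        if u == dst:
            return path, d
        for v, w in adj.get(u, []):
            if v not in settled:
                heapq.heappush(heap, (d + w, v, path + [v]))
    return None, float("inf")
```

Note that the two-hop path a-b-c (5 ms) is preferred over the direct a-c link (10 ms), showing that hop count alone does not determine the embedding; accumulated link delay does.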
In this section, we propose a model which aims to minimize the end-to-end latency of service chains in the system. As shown in Eq. (6), the latency model is formulated as the sum of the propagation delay, the processing and virtualization delay, and the queuing delay. Neglecting any of these delay components may result in violation of the end-to-end latency requirements.
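Since the delay symbols did not survive extraction here, a plausible reconstruction of the decomposition in Eq. (6) is sketched below with assumed notation (the superscripts and the index $s$ over service chains are not necessarily the paper's original symbols):

```latex
% Assumed notation: D_s denotes the end-to-end latency of service chain s.
D_s^{\mathrm{e2e}} \;=\; D_s^{\mathrm{prop}} \;+\; D_s^{\mathrm{proc}} \;+\; D_s^{\mathrm{queue}}
```

Each term is accumulated over the physical links and nodes hosting chain $s$: propagation over the embedded path, processing and virtualization on the hosting nodes, and queuing at loaded nodes and links.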