Telecommunication cloud (telco cloud)

Telecommunication systems scalability

As we discussed in our problem statement, telecommunication-specific concerns in the cloud have not yet reached maturity. The telecommunication domain brings its own set of requirements that are not usually found in the general IT cloud. To better understand these specificities, we first look at the scalability of telecommunication systems in the cloud and at attempts to cloudify telecommunication systems.
The Third Generation Partnership Project (3GPP) standardization effort has attempted to design the IMS so that its core functionality is to some extent scalable, as (Glitho, 2014) reminds us, although scalability does not translate into elasticity. Some work has addressed the scalability of parts of the IMS.
In (Lu et al., 2013), the authors propose a resource allocation scheme that satisfies the timing requirements of a telecommunication network. They achieve this by using static and dynamic groups for the assignment of virtual computation units to physical computation units.
The approach allows coarse-grained elasticity through the dynamic allocation of VMs providing node-based functionality of the IMS core. It is an interesting foray into cloud territory, but it relies mainly on virtualization technology and is not a cloud-native deployment.
In (Yang et al., 2011), the authors focus on the scalability of an individual functional unit of the IMS: the Home Subscriber Server (HSS) is made scalable through distributed-database concepts, but no other nodes are addressed.

Cloud and QoS

A lot of research has been done on the IT cloud, and some of it addresses QoS issues for large-scale systems. Once again, to understand the problems of the telecommunication cloud, we look at some of the work done for the IT cloud and at how QoS can be ensured there.
Work has been done on ensuring QoS on a per-user basis for IT clouds (Turner, 2013). However, the approach is applied to web technologies, specifically e-commerce, and would require adaptations for the stateful telecommunication cloud.
In (Verma et al., 2015), the authors describe Borg, a large-scale cluster management system used at Google. The paper explains how jobs are distributed on a large-scale cluster using a master manager (the Borgmaster) and multiple instances of slave resource supervisors (the Borglets). Jobs are scheduled and deployed on the cluster by these entities based on configuration files, command-line input or web browser input. The large-scale management concepts are interesting, and we can certainly learn from their implementation. However, batch deployment is not particularly suitable for the telecommunication domain, where requests must be handled as they arrive rather than in bundled batches. In general, the whole approach would require a redesign for the telecommunication domain.

Heterogeneous cloud research

One of the problems we identified is the need to deploy a single software base on many different cloud platforms. In this section, we look at research on heterogeneous clouds to determine what is missing for the telecommunication domain.
Some work has been done on the heterogeneous cloud. In (Xu, Wang and Li, 2011), the authors propose a solution that schedules tasks to the hardware that best fits their computing, memory, storage and other requirements. In (Crago et al., 2011), the authors propose a cloud built from a mix of Central Processing Unit (CPU) based and Graphics Processing Unit (GPU) based computing resources through virtualization. Another CPU/GPU study, described in (Lee, Chun and Katz, 2011), looks at how proper allocation and scheduling on such heterogeneous clouds can benefit Hadoop workloads. All these works propose interesting ideas but are not specifically adapted to the telecommunication application space, where the definition of a heterogeneous cloud is different; they would require adaptation for that domain.
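To make the idea of requirement-aware scheduling concrete, the following is a minimal sketch, not the cited algorithms: each task is matched to the node that satisfies all of its requirements with the least leftover capacity. The node names and resource keys ("cpu", "mem", "gpu") are illustrative assumptions.

```python
# Hypothetical heterogeneous cluster: a CPU-heavy node and a GPU node.
hardware = {
    "cpu-node": {"cpu": 16, "mem": 64, "gpu": 0},
    "gpu-node": {"cpu": 8, "mem": 32, "gpu": 4},
}

def best_fit(task, hardware):
    """Return the node that satisfies every requirement of `task`
    with the smallest leftover capacity (a best-fit heuristic),
    or None if no node can host the task."""
    feasible = {
        name: caps for name, caps in hardware.items()
        if all(caps.get(k, 0) >= v for k, v in task.items())
    }
    if not feasible:
        return None
    # Smallest total slack across the requested resources wins.
    return min(
        feasible,
        key=lambda n: sum(feasible[n][k] - task[k] for k in task),
    )

print(best_fit({"cpu": 4, "gpu": 2}, hardware))   # gpu-node
print(best_fit({"cpu": 12, "mem": 16}, hardware)) # cpu-node
```

A real scheduler would also track remaining capacity as tasks are placed and released; this sketch only shows the matching step.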

Actor model

One of the problems we identified is determining which cloud architectural patterns are good matches for telecommunication cloud software architecture. We found that the actor model has attributes well suited to the nature of telecommunication systems, and that it provides the potential to be distributed on a cloud. As evidence of the actor model's suitability, we can mention that Erlang (Armstrong, 1997), a programming language developed at Ericsson for the telecommunication domain, was built on the principles of the actor model (Vermeersch, 2009). In this section we describe the actor model.
Hewitt, Bishop and Steiger (1973) define the formalism of the actor model, in which actors communicate through message-sending primitives. Their work was geared toward artificial intelligence; however, the actor model fits well the separation of functionality found in telecommunication networks when we look at how telecommunication standards are defined.
Kheyrollahi (Kheyrollahi, 2014) abridges the definition of actors. In response to a received message, an actor can:
- send a finite number of messages to the addresses of other actors (actors are decoupled and know of each other only through addresses);
- create a finite number of actors;
- set the behavior for the next messages it will receive (state memory).
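These three responses can be sketched in a few lines of code. The following is a minimal, single-threaded illustration (an assumption of this text, not Hewitt's formalism or any production actor library): each actor owns a mailbox and a replaceable behavior, and a counter actor demonstrates "setting the behavior for the next message" by swapping in a new closure after every message.

```python
import queue

class Actor:
    """Minimal actor: a mailbox plus a current behavior.

    In response to one message the behavior may send messages to
    other actors' addresses, create new actors, and replace
    self.behavior for subsequent messages."""
    def __init__(self, behavior):
        self.mailbox = queue.Queue()
        self.behavior = behavior

    def send(self, msg):
        self.mailbox.put(msg)

    def process_one(self):
        msg = self.mailbox.get()
        self.behavior(self, msg)

def counter(count):
    """Behavior factory: the actor's state lives in the closure."""
    def behavior(self, msg):
        msg["reply_to"].send({"count": count + 1})
        self.behavior = counter(count + 1)  # new behavior for next message
    return behavior

sink_results = []
sink = Actor(lambda self, msg: sink_results.append(msg["count"]))

c = Actor(counter(0))
for _ in range(3):
    c.send({"reply_to": sink})
    c.process_one()
    sink.process_one()

print(sink_results)  # [1, 2, 3]
```

In a real deployment each actor would run concurrently and its address could resolve to a different machine, which is what makes the model attractive for distribution.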

Microservices and actor model (y-axis)

The actor model is meant to enable concurrent computation, with actors as the central primitives. An actor can create more actors and send messages based on local decisions in response to a message it receives. Messages are exchanged between actors via addresses. From the communication point of view, microservices can thus be seen as a subset of the actor model: each microservice can send messages based on a local decision in response to a message it receives, but it normally will not create another microservice. Messages are exchanged between microservices via an address that is known or discovered through another service. Microservices are usually responsible for their own elasticity and scaling, and are thus independently responsible for creating their own instances if needed. Beyond this difference, microservices are also a means to enable concurrent computation. One could say that the actor model is a computation model while the microservice model is an implementation model. Microservices are well covered in (Newman, 2015).
Decomposing an application into actors or microservices enables distributing its computational load among different hardware instances or even different geographical locations. This in turn allows maximum flexibility in the management of computational resources, which can deploy an actor where it fits best.
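One simple way such a placement can work, offered here only as an illustrative sketch (the hash-based scheme and all names are assumptions, not the architecture described later in this document), is to derive each actor's hosting node deterministically from its address, so any party can locate an actor without central bookkeeping:

```python
import hashlib

# Hypothetical pool of hardware instances (or sites).
nodes = ["node-a", "node-b", "node-c"]

def place(actor_address, nodes):
    """Map an actor's address to a node via a stable hash, so the
    same address always resolves to the same node."""
    digest = hashlib.sha256(actor_address.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Every actor gets a deterministic home; messages to its address
# can be routed there by any sender.
placement = {a: place(a, nodes) for a in ["call-1", "call-2", "call-3"]}
```

Production systems typically refine this with consistent or rendezvous hashing so that adding or removing a node relocates only a fraction of the actors.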


Table of contents

1.1 Context 
1.1.1 Traditional telecommunication networks
1.1.2 Network Function Virtualisation
1.1.3 Cloud technology
1.1.4 The Ericsson Software Model
1.2 Problem statement
1.2.1 Telecommunication cloud (telco cloud)
1.2.2 Heterogeneous deployments
1.2.3 Cloud programming paradigm
1.3 Research questions
1.3.1 Elasticity issue
1.3.2 Quality of Service provisioning issue
1.3.3 Statefulness issue
1.3.4 Communication issue
1.3.5 Heterogeneous deployment issue
1.3.6 Cloud architectural pattern selection issue
1.4 Objectives
2.1 Telecommunication systems scalability 
2.2 Cloud and QoS 
2.3 Heterogeneous cloud research 
2.4 Actor model 
2.5 Cloud architectural patterns 
2.5.1 Scaling axes
2.5.2 Microservices and actor model (y-axis)
2.5.3 Horizontal scaling (x-axis)
2.5.4 Data sharding (z-axis)
2.5.5 Round robin scheduling
2.5.6 Modulo hashing
2.5.7 Consistent hashing
2.5.8 Rendezvous hashing
2.5.9 Auto-scaling and busy signal pattern
2.5.10 MapReduce principles
2.5.11 Node failure
2.5.12 Collocate pattern
2.6 Discussion 
3.1 Requirements for the proposed architecture 
3.2 Application of architectural patterns 
3.2.1 Application of horizontal scaling and sharding patterns
3.2.2 Application of auto-scaling and busy signal patterns
3.2.3 Application of the MapReduce pattern
3.2.4 Application of the collocate pattern
3.2.5 Application of the actor model
3.3 The pouch concept for resource allocation 
3.3.1 Platform service pouches
3.3.2 Application pouches
3.4 Software architecture high level view 
3.5 The platform framework
3.5.1 Unit start-up
3.5.2 Unit configuration
3.5.3 Unit shutdown
3.5.4 Unit hibernation
3.5.5 Unit state storage
3.5.6 Unit wakeup
3.5.7 Maintaining pool of units
3.5.8 Communication
3.5.9 Logging
3.5.10 Service resolving
3.6 Other software architectural elements 
3.6.1 Meta Management and Orchestrator (MMO)
3.6.2 Element Manager (EM)
3.6.3 Communication Middleware (CMW)
3.6.4 Load Distribution Service (LDS)
3.6.5 Node Selection Service (NSS)
3.6.6 Information Distribution Service (IDS)
3.6.7 Deployment Database Service (DDS)
3.6.8 Log Gathering Service (LGS)
3.6.9 Persistence Service (PS)
3.6.10 State Database Slave (SDS)
3.7 Telecommunication application software architectural elements 
3.7.1 SIP Handler (SIPh)
3.7.2 Call Session (C)
3.7.3 Orchestrator (O)
3.7.4 HSS Front-End (H)
3.7.5 Database (DB)
3.7.6 Diameter protocol Handler (Diah)
3.7.7 Media Processor (M)
3.7.8 Anchor Point Controller (A)
3.7.9 Telephony Server (T)
3.8 Discussion 
3.8.1 Service orientation and re-usability
3.8.2 Scalability from one computing unit to infinity
3.8.3 Model driven friendliness
3.8.4 Testing
3.8.5 Software problems lessened
3.8.6 Upgradability
3.8.7 QoS
3.8.8 Integration with legacy systems
4.1 Raspberry Pi deployment 
4.1.1 Pouch allocation on Raspberry Pi
4.2 OpenStack managed cluster deployment 
4.2.1 Pouch allocation on OpenStack managed cluster
4.3 Hybrid deployment on OpenStack and Apcera Cloud Platform 
4.3.1 Pouch allocation for the hybrid deployment
4.4 Activity display 
4.5 Measurement scenario 
4.6 SIPp 
4.7 Discussion 
5.1 Measurements on Raspberry Pi 
5.1.1 Control plane measurements
5.1.2 Data plane measurements
5.1.3 CPU usage vs number of calls
5.1.4 Memory usage vs number of calls
5.2 OpenStack measurements
5.2.1 Control plane measurements
5.2.2 Data plane measurements
5.2.3 CPU usage vs number of calls
5.2.4 Memory usage vs number of calls
5.3 Comparison of direct and tunnelled communication 
5.4 Comparison of HTTP and TCP tunnelling 
5.5 Comparison of distributed and node-based architecture 
5.6 Hibernation demonstration 
5.7 Elasticity demonstration 
5.8 Hybrid deployment
5.9 Discussion
