Verification

What is the V&V Platform?

The V&V Platform is a central pillar of the SONATA DevOps workflow: it performs automatic verification and validation of services and manages the test results to form a continuous improvement feedback loop. It allows the qualification of services across multiple orchestration platforms, testing both the functional and non-functional aspects of their deployments. The process is fully automated, so it can take place with little or no human intervention.

The V&V Platform is designed to be generic and easily adaptable to any NFV-compliant infrastructure so it can be easily replicated in any operator's NFV infrastructure or even multi-operator infrastructures. 

The V&V Platform's main goal

Service Operators need to be confident that services will behave as expected when launched in production, before they are offered to customers. Comprehensive testing in a production-like environment improves the quality and reliability of the services, which benefits both service developers and service providers and also leads to a superior end-user experience. The V&V Platform addresses this very real problem of validation and verification of services for service operators.

V&V Components description

The V&V Platform consists of three main V&V-specific components (Planner, Curator, and Executor), several components shared with the Service Platform and the SDK, and an additional component that makes it possible to execute tests over different Service Platforms.

Planner

The Planner manages the overall planning of the test execution. The Planner's responsibilities are as follows:

  • Accept new package notification.
  • Find Test Descriptors (TDs) that match Network Service Descriptors (NSDs) received in new packages.
  • Find NSDs that match TDs received in new packages.
  • Generate a test plan for each (NSD, TD) pair.
  • Send a test plan execution request to Curator.
  • Accept test-plan status updates from Curator.
  • Accept changes in the plans' ordering.
  • Accept ad hoc requests for new test plans.
  • Accept deletion of existing test plans.
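The Planner's matching steps can be sketched as follows. This is a minimal, hypothetical illustration: the `testing_tags` field and the plan structure are assumptions, not the actual SONATA descriptor schema.

```python
# Illustrative sketch of the Planner's matching step: pair each Network
# Service Descriptor (NSD) with every Test Descriptor (TD) that applies
# to it, and emit one test plan per (NSD, TD) pair.
# NOTE: "testing_tags" is an assumed matching key, not the real schema.

def generate_test_plans(nsds, tds):
    """Return one test plan for each (NSD, TD) pair with overlapping tags."""
    plans = []
    for nsd in nsds:
        for td in tds:
            if set(nsd["testing_tags"]) & set(td["testing_tags"]):
                plans.append({"nsd": nsd["name"], "td": td["name"],
                              "status": "SCHEDULED"})
    return plans

nsds = [{"name": "ns-proxy", "testing_tags": ["proxy"]}]
tds = [{"name": "td-http-check", "testing_tags": ["proxy", "web"]},
       {"name": "td-voip", "testing_tags": ["voip"]}]

plans = generate_test_plans(nsds, tds)
# One plan is generated: (ns-proxy, td-http-check); td-voip does not match.
```

Each generated plan would then be sent to the Curator as a test plan execution request.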

Curator 

The Curator receives the test plan request from the Planner and sets up the test environment for the Executor to launch the tests. To prepare the test environment, it instantiates the NSD to be tested in the corresponding Service Platform, gets the probe images from the repository, and receives the instantiation parameters from the Platform Adapter (PA) to send the required information to the Executor. These parameters are encoded in the TD and allow the developer to map a parameter that is only known at execution time into a test. The Curator also releases resources at the end of each test.
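The sequence above can be sketched as a single orchestration function. All names here (`instantiate`, `instantiation_params`, `terminate`, `execute`) are illustrative stand-ins for the real internal interfaces, and the fake classes exist only to make the sketch self-contained.

```python
# Hedged sketch of the Curator's flow: instantiate the NSD via the
# Platform Adapter, obtain runtime instantiation parameters, hand off to
# the Executor, and always release resources when the test ends.

def run_test_plan(plan, platform_adapter, executor):
    instance = platform_adapter.instantiate(plan["nsd"])          # deploy NSD on the SP
    try:
        params = platform_adapter.instantiation_params(instance)  # values known only at runtime
        return executor.execute(plan["td"], params)               # launch the test
    finally:
        platform_adapter.terminate(instance)                      # release resources

# Minimal stand-ins so the sketch runs without a real platform:
class FakeAdapter:
    def instantiate(self, nsd): return {"id": "inst-1", "nsd": nsd}
    def instantiation_params(self, inst): return {"mgmt_ip": "10.0.0.5"}
    def terminate(self, inst): self.released = True

class FakeExecutor:
    def execute(self, td, params):
        return {"td": td, "params": params, "status": "PASSED"}

adapter = FakeAdapter()
result = run_test_plan({"nsd": "ns-proxy", "td": "td-http-check"},
                       adapter, FakeExecutor())
```

The `try/finally` mirrors the Curator's guarantee that resources are released at the end of each test, whether it passes or fails.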
  
Executor

The Executor is responsible for executing the test against a target Service Platform. It receives a test request with the associated TD, a file that contains the test configurations, dependencies, validation and verification conditions, etc. With all this information, this module generates a docker-compose file and executes the test sequence using the docker-compose tools. For every test plan, the Executor launches a set of probes. Once the test is finished, the Executor checks the validation and verification conditions, stores the results in the V&V repository, and generates a "Completion Test" response that is sent to the Curator.
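The translation from a TD's probe list into a docker-compose structure can be sketched as below. The field names (`probes`, `image`, `parameters`) are assumptions for illustration, not the actual TD schema; the real Executor would serialise this structure to YAML and run it with the docker-compose tools.

```python
# Sketch: build a docker-compose service map from a Test Descriptor's
# probe list, as the Executor does before launching the test sequence.
# NOTE: the TD field names used here are illustrative assumptions.

def compose_from_td(td):
    """Map each probe in the TD to a docker-compose service entry."""
    services = {}
    for probe in td["probes"]:
        services[probe["name"]] = {
            "image": probe["image"],
            "environment": probe.get("parameters", {}),
        }
    return {"version": "3", "services": services}

td = {"probes": [{"name": "http-probe",
                  "image": "registry.example/http-probe:latest",
                  "parameters": {"TARGET_IP": "10.0.0.5"}}]}

compose = compose_from_td(td)
# In the real Executor this dict would be written out as YAML and run
# with `docker-compose up`; here we only build the structure.
```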

Platform Adapter

The Platform Adapter manages the lifecycle of the service instances, from creation to termination, once the order is received from the Curator.  It can be adapted and extended to work with any Service Platform. Currently, it supports SONATA, OSM and ONAP Service Platforms.
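Because the Platform Adapter is designed to be extended, its structure can be pictured as a common lifecycle interface with one driver per Service Platform. The class and method names below are hypothetical, chosen only to illustrate the extension pattern.

```python
# Sketch of the Platform Adapter's extensibility: a shared lifecycle
# interface plus one concrete driver per supported Service Platform
# (SONATA, OSM, ONAP). Names are illustrative assumptions.
from abc import ABC, abstractmethod

class ServicePlatformDriver(ABC):
    @abstractmethod
    def instantiate(self, nsd_id): ...
    @abstractmethod
    def terminate(self, instance_id): ...

class SonataDriver(ServicePlatformDriver):
    def instantiate(self, nsd_id): return f"sonata-inst-{nsd_id}"
    def terminate(self, instance_id): return True

class OsmDriver(ServicePlatformDriver):
    def instantiate(self, nsd_id): return f"osm-inst-{nsd_id}"
    def terminate(self, instance_id): return True

DRIVERS = {"sonata": SonataDriver, "osm": OsmDriver}

def get_driver(platform):
    """Supporting a new Service Platform only requires a new driver entry."""
    return DRIVERS[platform]()

inst = get_driver("osm").instantiate("ns-proxy")
```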

Analytics Engine

The Analytics Engine is responsible for performing analysis and extracting insights and profiles at VNF / NS level from the results of tests executed in either an experimental or an operational context. From this perspective, the Analytics Engine can perform the analysis on results coming either from the SDK benchmarking tool or from the V&V Platform. In the first case, the analysis results are mainly given as feedback to software developers to help them identify performance issues, capacity limits, etc. in the developed VNF / NS, so they can implement corrective actions or appropriately dimension the requirements for efficient deployment and operation of the software. In the second case, the results can also lead to the design and specification of effective policies or the incorporation of machine learning models for forecasting purposes.

The resulting insights cover many different aspects, including: Resource Efficiency Analysis, Elasticity Efficiency Analysis, Correlation Analysis and Machine Learning Models, Time Series Decomposition and Forecasting, and Graph Analysis. They are made available in the form of URLs and are also stored in the Analytics Engine database, so they can be easily consumed by interested parties.

The Analytics Engine is based on OpenCPU, a system for embedded scientific computing and reproducible research. The OpenCPU server provides a reliable and interoperable HTTP API for data analysis based on the R programming language, while big data analysis can also be realised with SparkR. Furthermore, there is potential for incorporating other Python analysis scripts by using R packages that support embedded Python code.
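A client could invoke an analysis function through OpenCPU's standard HTTP endpoint scheme (`POST /ocpu/library/<package>/R/<function>`). In the sketch below, only that URL scheme comes from OpenCPU; the package name, function name, and arguments are hypothetical placeholders for whatever analysis the engine exposes.

```python
# Sketch of building an OpenCPU call for the Analytics Engine.
# Only the /ocpu/library/<pkg>/R/<fn> path pattern is OpenCPU's own;
# the package, function, and argument names below are assumptions.

def opencpu_call(base_url, package, function, args):
    """Compose the OpenCPU request URL and payload for an R function call."""
    url = f"{base_url}/ocpu/library/{package}/R/{function}"
    # A real client would now POST the arguments, e.g. with
    # requests.post(url, data=args); here we only build the request.
    return url, args

url, args = opencpu_call("http://analytics.example:5656",
                         "vnvAnalytics",          # hypothetical R package
                         "resource_efficiency",   # hypothetical function
                         {"test_uuid": "abc-123"})
```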

Decision Support Engine

This tool aims to unburden test developers from selecting which tests to run to check the functionality of their Network Services. Its goal is to automate test selection by providing recommendations based on users' previous activity. The Decision Support Engine is thus a recommendation system that uses collaborative filtering: it collects and analyses large amounts of information on users' behaviours, activities, or preferences, and predicts what a user will prefer based on their similarity to other users. Singular Value Decomposition (SVD) is used to measure user/item similarity.

The tool can be used as a standalone micro-service. However, the V&V Platform needs to be installed and configured in order to make an effective use of the tool. An API has been implemented for interacting with it.
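The SVD-based collaborative filtering idea can be shown on a toy user/test rating matrix. The matrix and test names below are fabricated purely for illustration; the real engine works on collected user activity data.

```python
# Minimal sketch of SVD-based collaborative filtering as described
# above: factor the user/test interaction matrix, rebuild a low-rank
# approximation, and score unseen tests for a user.
# The tiny rating matrix below is fabricated for illustration.
import numpy as np

ratings = np.array([[5.0, 4.0, 0.0],   # user 0: used tests 0 and 1
                    [4.0, 5.0, 0.0],   # user 1: similar to user 0
                    [0.0, 0.0, 5.0]])  # user 2: used test 2 only

U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2                                   # keep the top-k latent factors
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def recommend(user, seen):
    """Return the index of the highest-scored test the user has not run."""
    scores = approx[user].copy()
    scores[list(seen)] = -np.inf        # never re-recommend seen tests
    return int(np.argmax(scores))

best = recommend(0, seen={0, 1})        # -> test 2, the only unseen one
```

The low-rank approximation is what lets the engine generalise: a user's predicted preference for an unseen test is driven by the latent factors they share with similar users.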

Repository

The V&V Repository is responsible for storing the results of the tests and the test plans, and for keeping track of the runtime tests deployed on the Service Platform.

The Repository consists of a REST API on the northbound interface that exposes a backend database engine to external requests. The database engine used by the API is MongoDB, and the API supports CRUD operations on test suite results and test plans. Additionally, the REST API validates each record added to the database against the SONATA information model.

The component is developed in Ruby 2.4.3 and can be used as a microservice.
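The Repository's validate-then-store behaviour can be sketched as below. The required field names and the in-memory store are illustrative stand-ins: the real component validates against the full SONATA information model and persists to MongoDB.

```python
# Sketch of the Repository's flow: validate each incoming test-result
# record against a (heavily simplified) schema before storing it.
# NOTE: the field list is an assumption, not the SONATA information model.

REQUIRED_FIELDS = {"uuid", "test_plan_uuid", "status", "details"}

class TestResultStore:
    def __init__(self):
        self._db = {}                    # stands in for MongoDB

    def create(self, record):
        """Reject records that fail validation; store valid ones."""
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            raise ValueError(f"invalid record, missing: {sorted(missing)}")
        self._db[record["uuid"]] = record
        return record["uuid"]

    def read(self, uuid):
        return self._db[uuid]

store = TestResultStore()
uid = store.create({"uuid": "r-1", "test_plan_uuid": "p-1",
                    "status": "PASSED", "details": {}})
```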

Gatekeeper

The Gatekeeper is the component responsible for exposing the V&V Platform's APIs to the outside, ensuring that only authenticated and authorised users can access the platform. The Gatekeeper is a shared component, also used by the Service Platform.

Portal

The Portal is a user-friendly Graphical User Interface (GUI) developed in Angular 7, which allows users to access the different functionalities of the SONATA integrated platform depending on their role.

Focusing on the V&V Platform's features, it allows:

  • The management of the Service Platforms on which the V&V Platform performs the tests. Note that, to extend its usability, the V&V Platform is designed to work with different Service Platforms (SONATA, OSM, ONAP).
  • The execution of tests and the visualisation of test results.
  • The display of information about packages, Network Functions, and Services.
  • The visualisation, by authorised users and roles, of monitoring data related to the Service Platform and the deployed services.

The Portal interacts with the Gatekeeper through its set of exposed APIs.

Monitoring Manager

The Monitoring Manager is responsible for collecting the monitoring data related to the VNFs / NSs under qualification and instantiated in the V&V test environment.

The Monitoring Manager instance within the V&V environment is based on the implementation of the Monitoring Manager used in the Service Platform, enhanced according to the specific requirements of the V&V component. For example, in the V&V environment the Platform Adapter interacts with the Monitoring Manager on every test execution, and the test results are stored with specific parameters so that other V&V components (Analytics Engine, Curator) can use them after test execution.