By contrast, the control plane functions in an IMS network are not packet- or bandwidth-intensive. Their performance tends to be compute-limited, not network-limited, and they therefore do not tend to suffer performance degradation introduced by limitations in the network virtualization infrastructure.

It is therefore proposed that the first phase of OPNFV performance testing in the vIMS domain be focused on vSBC.

===== vSBC Testing =====

In this proposed first phase of vIMS testing, a single VNF comprising a virtualized session border controller is deployed on the OPNFV platform, with the aim of characterizing how different approaches to network virtualization within the platform affect the throughput of the vSBC.
  
It is worth noting that vSBC is unique among the proposed test cases for performance characterization of OPNFV in that its user plane function handles exclusively small packets. Other proposed test cases such as vEPC, vBRAS and vPE all handle a mix of small and large packets that is typical of broadband user traffic. With that traffic mix, a 1 Gbps load in the user plane comprises on the order of 150k packets per second, whereas with a vSBC relaying RTP streams carrying audio payload, a 1 Gbps load in the user plane comprises on the order of 1.25M packets per second. The vSBC use case therefore puts particular emphasis on the effect of per-packet overheads in the network virtualization implementation within OPNFV. Real-world experience with virtualization of the SBC function suggests that this presents significant challenges.
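The packet-rate arithmetic behind these figures can be checked with a short calculation. The packet sizes used below are illustrative assumptions, not figures from this proposal: roughly 100 bytes for a small RTP audio packet, and roughly 830 bytes as an average for a broadband traffic mix.

```python
# Rough packet-rate arithmetic for a 1 Gbps user-plane load.
# The packet sizes are illustrative assumptions, not measured values.

LINK_BPS = 1_000_000_000  # 1 Gbps offered load

def packets_per_second(link_bps: float, packet_bytes: float) -> float:
    """Packets per second needed to fill link_bps with packets of packet_bytes."""
    return link_bps / (packet_bytes * 8)

# Broadband mix: assume an average packet size of ~830 bytes.
broadband_pps = packets_per_second(LINK_BPS, 830)

# RTP audio (e.g. G.711 with 20 ms packetization): 160 B payload
# + 12 B RTP + 8 B UDP + 20 B IP headers, i.e. small packets in the
# 100-200 B range. Assuming ~100 B reproduces the 1.25M pps figure.
rtp_pps = packets_per_second(LINK_BPS, 100)

print(f"broadband mix: ~{broadband_pps / 1e3:.0f}k pps")  # ~151k pps
print(f"RTP audio:     ~{rtp_pps / 1e6:.2f}M pps")        # ~1.25M pps
```

The order-of-magnitude gap in packets per second at the same bit rate is what makes per-packet virtualization overhead dominate in the vSBC case.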
Load is applied to the vSBC with the aid of a SIP traffic generator, which simulates a large number of IMS endpoints establishing audio sessions. The traffic generator emulates both ends of a voice call, sending SIP INVITE requests towards the vSBC and answering these call setup attempts with 200 OK responses. It then sends bi-directional RTP streams via the vSBC to simulate the user plane load. The number of concurrent sessions is ramped up until an appreciable level of packet loss is experienced in the user plane; the session count at which this occurs provides an indication of the maximum effective user plane throughput of the vSBC. While performing the test, the latency and latency variation of RTP packet transmission should also be monitored to verify that they remain within acceptable limits.
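The ramp-up procedure can be sketched as a simple control loop. Everything here is hypothetical scaffolding: the `offer_load` and `measure_loss` callbacks, the 0.1% loss threshold and the 500-session step are illustrative assumptions; a real harness would drive SIPp or a commercial generator and read loss counters from it.

```python
# Sketch of the ramp-up procedure described above. The traffic-generator
# interface (offer_load / measure_loss) is hypothetical; a real harness
# would drive SIPp or a commercial traffic generator.

LOSS_THRESHOLD = 0.001   # 0.1% RTP loss deemed "appreciable" (assumption)
STEP_SESSIONS = 500      # sessions added per ramp step (assumption)
MAX_SESSIONS = 100_000   # safety ceiling for the ramp (assumption)

def find_max_sessions(offer_load, measure_loss, step=STEP_SESSIONS):
    """Ramp concurrent sessions until RTP packet loss exceeds the threshold.

    offer_load(n)   -- hypothetical: hold n concurrent audio sessions via the vSBC
    measure_loss(n) -- hypothetical: observed RTP packet-loss ratio at n sessions
    Returns the last session count at which loss stayed within limits.
    """
    last_good = 0
    for n in range(step, MAX_SESSIONS + 1, step):
        offer_load(n)
        if measure_loss(n) > LOSS_THRESHOLD:
            return last_good
        last_good = n
        # RTP latency and latency variation should also be checked at
        # each step against acceptable limits, per the test description.
    return last_good

# Example with a stand-in loss model: loss appears beyond 20k sessions.
fake_loss = lambda n: 0.0 if n <= 20_000 else 0.05
print(find_max_sessions(lambda n: None, fake_loss))  # 20000
```

A binary search over the session count could replace the linear ramp once the rough capacity region is known, shortening long test runs.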
The performance of the vSBC software should first be characterized on bare metal, to take the network virtualization element of the OPNFV platform out of the equation. The same compute node hardware as is used in the OPNFV platform should be used for this purpose.
The vSBC should then be deployed on the network virtualization solution to be tested in the OPNFV platform. This may be based on Open vSwitch (OVS) or some fork of OVS that incorporates data plane acceleration, for example via DPDK. It may be deemed useful to test the performance of the vSBC using hypervisor bypass techniques such as Single Root I/O Virtualization (SR-IOV) for comparison purposes. It may also be useful to test the performance of the vSBC on the OPNFV platform with commercial software solutions for data plane acceleration.
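One way to organize these comparison runs is as a matrix of datapath configurations, each measured against the bare-metal baseline. The entries and the sample throughput numbers below are purely illustrative stand-ins to show the shape of the comparison; the actual configurations depend on what the OPNFV platform under test supports.

```python
# Illustrative matrix of datapath configurations for the vSBC comparison.
# Entries and sample numbers are stand-ins, not measured results.

DATAPATH_CONFIGS = [
    {"name": "bare-metal", "vswitch": None,       "acceleration": None},
    {"name": "ovs",        "vswitch": "OVS",      "acceleration": None},
    {"name": "ovs-dpdk",   "vswitch": "OVS fork", "acceleration": "DPDK"},
    {"name": "sr-iov",     "vswitch": None,       "acceleration": "SR-IOV bypass"},
]

def relative_throughput(results: dict) -> dict:
    """Express each configuration's max user-plane pps as a fraction
    of the bare-metal baseline measured on the same hardware."""
    baseline = results["bare-metal"]
    return {name: pps / baseline for name, pps in results.items()}

# Stand-in numbers purely to show the shape of the comparison:
sample = {"bare-metal": 1_250_000, "ovs": 300_000,
          "ovs-dpdk": 900_000, "sr-iov": 1_100_000}
for name, ratio in relative_throughput(sample).items():
    print(f"{name:10s} {ratio:.0%} of bare metal")
```

Normalizing to the bare-metal run on identical hardware isolates the cost attributable to the network virtualization layer itself.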
The vSBC implementation proposed for this testing is the Perimeta SBC from Metaswitch Networks. Key points about this implementation:

  * The software is mature and production quality. It is deployed in hundreds of networks today (on bare metal COTS x86 platforms).
  * Its software data plane is highly optimized for x86 hardware and leverages DPDK to deliver excellent performance on bare metal COTS without any hardware acceleration.
  * Its implementation of SBC functionality is fully featured and therefore provides a realistic view of the performance of a real-world VNF.
  * It is available today packaged and supported as a VNF, and is deployed as such in at least one production network.
  * The software is available under a commercial license.

A SIP traffic generator is required to test vSBC performance. Commercial solutions are available from vendors such as Ixia and Spirent. Alternatively, a solution based on open source software (e.g. SIPp) may be deployed on COTS hardware.
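For the open source route, a SIPp invocation for this kind of test might be assembled as below. The option values (target address, call rate, session limit, call duration) are illustrative assumptions, not figures from this proposal; `-sn uac` (built-in UAC scenario), `-r` (call rate), `-l` (max concurrent calls) and `-d` (call duration in ms) are standard SIPp options.

```python
# Build an illustrative SIPp UAC command line for driving calls through
# the vSBC. All numeric values and the target address are assumptions.

def sipp_uac_command(target, rate, max_sessions, call_duration_ms=60_000):
    """Return a SIPp command (as an argv list) using the built-in UAC scenario."""
    return [
        "sipp",
        "-sn", "uac",                  # built-in UAC scenario
        "-r", str(rate),               # call attempts per second
        "-l", str(max_sessions),       # ceiling on concurrent calls
        "-d", str(call_duration_ms),   # per-call duration in milliseconds
        target,                        # signaling address of the vSBC
    ]

cmd = sipp_uac_command("192.0.2.10:5060", rate=100, max_sessions=10_000)
print(" ".join(cmd))
```

For the user plane, SIPp on its own generates signaling only; bidirectional RTP would need to be added, for example via SIPp's PCAP play feature or a separate RTP generator.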

===== Subsequent Phases of vIMS Testing =====
As described above, the proposed vSBC test case is specifically intended to explore the performance of OPNFV in the IMS user plane. For a broader characterization of OPNFV in the context of vIMS, a more complete implementation of vIMS should be tested on OPNFV.

The vIMS test case can be progressively expanded by adding more of the distinct functions defined by the IMS architecture. A logical next step beyond vSBC testing would be to add a virtualized IMS core. The VNF implementation proposed for this purpose is Clearwater Core, an open source implementation of the IMS core functions from Metaswitch Networks. See [[http://www.projectclearwater.org/]].

Clearwater is implemented with a scale-out architecture using stateless SIP processing elements allied with a distributed, scalable, fault-tolerant state store based on well-known open source components. With Perimeta SBC providing a virtualized P-CSCF and Clearwater Core providing virtualized S-CSCF, I-CSCF, BGCF and (optionally) AS functions, all the key call processing functions of a vIMS could be tested. The same SIP traffic generation capabilities used to test vSBC would also be needed to test this more complete implementation of vIMS. With this test setup, the performance of OPNFV could be characterized in the context of vIMS in both the control plane and the user plane.

An even more complete test of vIMS on OPNFV could be accomplished by adding further functional elements such as vHSS, vPCRF etc. The vIMS could also be layered on top of vEPC to enable testing of a complete virtualized VoLTE solution.

ip_multimedia_subsystem.1415904081.txt.gz · Last modified: 2014/11/13 18:41 by Martin Taylor