This tutorial creates an L3VPN service with an auto-assigned route distinguisher. It also performs an OAM test to verify that the service is correctly configured, and enables statistics collection to monitor performance.
The basic steps are:
This tutorial requires at least two model-driven, SROS-based network elements that have been discovered by NSP. There must be an MPLS topology connecting the nodes, including LSPs and paths. BGP peering must be enabled and configured, with VPN peering enabled. The examples require that hardware ports 1/1/10 and 1/1/11 are available on both nodes and can be configured as Ethernet access ports with QinQ encapsulation.
This tutorial has been tested with and is supported in NSP 22.11.
All steps in this tutorial are provided in the End to End Service Tutorial Postman Collection. Example responses for each request are also included.
The collection can be run using "Postman Runner" to perform the complete configuration. The following environment variables must be defined in order to run the collection:
Where required, "tests" are provided in the Postman examples that retrieve identifiers from responses for use in subsequent requests. Tests are also provided that retry certain requests for polling purposes (for example, retrying until the service is in a fully deployed state) or that introduce a short delay to ensure objects are created and propagated through the system.
The following artifacts from ALED are required to run this tutorial:
The BGP auto RD Range intent type artifact is also required and is available from intent-bgp-auto-rd-range-v1_2022-11-10 21_19_13.zip
This tutorial is meant as a proof of concept and as an example method for configuring a service. It is not designed to be implemented in, and must not be used as-is in, a production network. It is intended to guide the development of an OSS.
The intent type for BGP auto RD range creation in particular has the following limitations:
Many of the requests for this tutorial are asynchronous, meaning a response is received before the request has been fully processed. For the purposes of the Postman collection, polling is used to monitor the creation of objects and their current status.
Instead of polling, an application can use Kafka notifications to monitor these events. Where applicable, example notifications are provided below in order to facilitate a Kafka integration.
For further details on using Kafka, see Kafka Notification Service.
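As an illustration only, the following minimal Python sketch (using the kafka-python package) shows the general pattern of consuming NSP notifications. The broker address, security settings, and topic name are deployment-specific placeholders, and the exact message envelope may differ from the trimmed examples shown in this tutorial.

# Minimal sketch of consuming NSP notifications with kafka-python.
# KAFKA_BROKER, NOTIFICATION_TOPIC and the security settings are placeholders;
# substitute the values documented for your NSP deployment.
import json
from kafka import KafkaConsumer

KAFKA_BROKER = "<kafka-broker>:9092"          # placeholder
NOTIFICATION_TOPIC = "<notification-topic>"   # placeholder; see the Kafka Notification Service documentation

consumer = KafkaConsumer(
    NOTIFICATION_TOPIC,
    bootstrap_servers=KAFKA_BROKER,
    security_protocol="SSL",                  # adjust to your deployment (certificates, SASL, etc.)
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for record in consumer:
    event = record.value
    # Each record carries a notification such as the examples shown later in this tutorial;
    # the envelope around the payload may vary by topic.
    notification = event.get("nsp-model-notification:object-modification", event)
    print(notification.get("instance-id"), notification.get("event-time"))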
This tutorial makes use of the following intent types:
All of these need to be imported into Intent Manager. The VPRN and Service Tunnel intent types are subsequently exported to Service Fulfillment, and the Access Port and BGP Auto RD Range intent types are imported into Intent Configuration Manager (ICM). Before executing the Postman examples for this step, they must be updated to point to files on the local file system.
Intent types for VPRNs, Service Tunnels, Access Ports, and BGP Auto RD Ranges can be imported into Intent Manager with the "import" API:
POST /mdt/rest/ibn/import HTTP/1.1
Content-Disposition: form-data; name=""; filename="vprn_2210.zip"
Content-Type: application/zip
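Outside of Postman, the same import can be issued with any HTTP client. The following Python sketch (using the requests package) mirrors the multipart request above; the base URL, token handling, and file path are placeholders.

# Sketch of the intent type import request using the requests package.
# BASE_URL and TOKEN are placeholders; authentication handling is environment-specific.
import requests

BASE_URL = "https://nsp-server"   # placeholder
TOKEN = "<bearer-token>"          # placeholder; obtain via the NSP authentication API

with open("vprn_2210.zip", "rb") as archive:
    response = requests.post(
        f"{BASE_URL}/mdt/rest/ibn/import",
        headers={"Authorization": f"Bearer {TOKEN}"},
        # form-data part with an empty name and the zip file, mirroring the request above;
        # some deployments may accept any part name
        files={"": ("vprn_2210.zip", archive, "application/zip")},
        verify=False,  # lab setting only; use proper certificates in practice
    )
response.raise_for_status()
print(response.status_code)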
The VPRN and Service Tunnel intent types need to be exported to Service Fulfillment using the "exportToIbsf" API:
POST /mdt/exportToIbsf/vprn/1 HTTP/1.1
The RD Range and Access Port intent types can be imported to ICM using the import-intent-types API:
POST /restconf/operations/nsp-icm:import-intent-types HTTP/1.1
{
"input": {
"imported-intent-types": [
{
"name": "bgp-auto-rd-range",
"version": 1
},
{
"name": "icm-equipment-port-access",
"version": 1
}
]
}
}
In order to synchronize services between the network and Service Fulfillment, data mapping files must be installed. The mapping files are part of the predefined intents and can be installed from the command line on the deployer host:
.../tools/mdm/bin/json-files.bash --user admin --pass NokiaNsp1! --add ...pathTo/intentType/data-sync-mapping/MDM/MDC/Sros/operational-model/
NSP is preinstalled with a limited set of statistics, which can be augmented by adding the YANG model that represents the statistics of interest. This is done from the command line on the deployer host, with reference to the corresponding YANG file:
.../tools/mdm/bin/json-files.bash --user admin --pass NokiaNsp1! --add .../pathTo/telemetryMapping
.../tools/mdm/bin/yang-files.bash --user admin --pass NokiaNsp1! --add .../pathTo/telemetryMapping/sros/sros-service-vprn-telemetry.yang --modulesetname telemetry
This tutorial creates a service that takes advantage of SR OS functionality to automatically assign a route distinguisher. This requires a BGP auto RD range to be created on each network element in the service. ICM can be used to create these ranges on the nodes.
To create the Configuration Template for BGP auto RD range, use the "templates" API:
POST /restconf/data/nsp-icm:icm/templates/ HTTP/1.1
{
"template": [
{
"name": "BGP Auto RD Range",
"description": "",
"life-cycle-state": "released",
"intent-type": "bgp-auto-rd-range",
"intent-type-version": 1,
"schema-form-name": "default.schemaForm"
}
]
}
The required BGP auto RD ranges can then be configured on the nodes by creating configuration deployments using the "create-deployments" API:
POST /restconf/operations/nsp-icm:create-deployments HTTP/1.1
{
"input": {
"deployments": [
{
"template-name": "BGP Auto RD Range",
"target-data": "{\"bgp-auto-rd-range\":{\"ip-address\":\"10.0.0.100\",\"community-value\":{\"start\":1500,\"end\":3000}}}",
"targets": [
{
"target": "/nsp-equipment:network/network-element[ne-id='{{nodeA}}']",
"target-identifier-value": "bgprange"
},
{
"target": "/nsp-equipment:network/network-element[ne-id='{{nodeB}}']",
"target-identifier-value": "bgprange"
}
],
"deployment-action": "deploy"
}
]
}
}
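Note that target-data is itself a JSON document serialized into a string, which is why it appears escaped in the request. The following Python sketch shows one way such a request body could be assembled; the helper name and example node IDs are illustrative only.

# Illustrative helper for building a create-deployments body for the BGP auto RD range.
# The structure mirrors the request above; node IDs and range values are examples.
import json

def build_rd_range_deployment(node_ids, ip_address="10.0.0.100", start=1500, end=3000):
    # target-data is itself a JSON document, serialized into a string
    target_data = json.dumps({
        "bgp-auto-rd-range": {
            "ip-address": ip_address,
            "community-value": {"start": start, "end": end},
        }
    })
    targets = [
        {
            "target": f"/nsp-equipment:network/network-element[ne-id='{ne_id}']",
            "target-identifier-value": "bgprange",
        }
        for ne_id in node_ids
    ]
    return {
        "input": {
            "deployments": [{
                "template-name": "BGP Auto RD Range",
                "target-data": target_data,
                "targets": targets,
                "deployment-action": "deploy",
            }]
        }
    }

print(json.dumps(build_rd_range_deployment(["10.0.0.1", "10.0.0.2"]), indent=2))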
Service Fulfillment can run a workflow at various stages of the service lifecycle. This tutorial uses that capability to administratively enable the endpoint ports before the service is deployed. This step creates and publishes the workflow that sets the admin state on the network elements.
For the purposes of this tutorial, it is assumed that access ports are administratively disabled until services are provisioned on them.
The "workflow" API accepts as input a YAML document that describes the workflow. The adminUpPort workflow iterates over the endpoints of the service being configured and sends a NETCONF request to each applicable node to administratively enable the access ports:
POST /wfm/api/v1/workflow HTTP/1.1
{
"yaml": "version: '2.0'\n\nadminUpPort:\n type: direct\n\n input:\n - serviceName\n - payload\n - intentType\n - user: \"admin\"\n - pass: \"admin\"\n - token_auth: \"\"\n \n tags:\n - ServiceFulfillment\n \n vars:\n restconfUrl: https://restconf-gateway/restconf\n\n tasks: \n\n getPorts:\n action: std.noop\n publish:\n systemId: <% $.payload[\"nsp-service-intent:intent\"][0][\"intent-specific-data\"][\"vprn:vprn\"][\"site-details\"].site.select({site => $.get(\"device-id\"), saps => $[\"interface-details\"].interface.select($.sap[\"port-id\"])}) %>\n on-success: getIpAddress\n \n getIpAddress:\n with-items: neId in <% $.systemId %>\n action: nsp.https \n input:\n method: POST\n url: <% $.restconfUrl %>/operations/nsp-inventory:find\n body:\n input:\n xpath-filter: \"/nsp-equipment:network/network-element[ne-id='<% $.neId.site %>']\"\n depth: \"2\"\n fields: \"ip-address;ne-id\"\n publish:\n nsSites: <% dict(task().result.select($.content[\"nsp-inventory:output\"].data.select([$[\"ne-id\"],$[\"ip-address\"]]).flatten())) %>\n on-success:\n - genXML\n\n genXML:\n with-items: neId in <% $.systemId %>\n action: std.js\n input:\n context: <% $ %>\n script: |\n var xml = \"\"\n ports = <% $.neId.saps %>\n for (port in ports) {\n xml = xml + \"<port><port-id>\" + ports[port] + \"</port-id><admin-state>enable</admin-state></port>\";\n }\n var result = [\"<% $.nsSites[$.neId.site] %>\", xml ]\n return result\n publish: \n configXml: <% dict(task().result) %>\n on-success: adminUpPort\n\n adminUpPort:\n with-items: \n - neId in <% $.configXml.keys() %>\n - xml in <% $.configXml.values() %>\n action: netconf.configure\n input:\n connectInfo:\n host: <% $.neId %>\n username: <% $.user %>\n password: <% $.pass %>\n hostkey_verify: False\n content: <configure xmlns=\"urn:nokia.com:sros:ns:yang:sr:conf\"><% $.xml %></configure>\n publish:\n response: <% task().result %>\n"
}
Before it can be executed, the workflow must be published, which can be done with the "status" API:
PUT /wfm/api/v1/workflow/{{workflowId}}/status HTTP/1.1
{
"status": "PUBLISHED"
}
Before services can be configured on a port, the port must first be configured as an access (or hybrid) port with the required Ethernet parameters. ICM will be used to configure the access ports with:
Use the "templates" API to create a configuration template that will allow for Access Port configuration:
POST /restconf/data/nsp-icm:icm/templates/ HTTP/1.1
{
"template": [
{
"name": "Default Port",
"description": "Configure an access port",
"life-cycle-state": "released",
"intent-type": "icm-equipment-port-access",
"intent-type-version": 1,
"schema-form-name": "default.schemaForm"
}
]
}
Use the "create-deployments" API to deploy the correct access port configuration to the nodes:
POST /restconf/operations/nsp-icm:create-deployments HTTP/1.1
{
"input": {
"deployments": [
{
"deployment-action": "deploy",
"template-name": "Default Port",
"target-data": "{\"port\":{\"description\":null,\"ethernet\":{\"dot1q-etype\":null,\"pbb-etype\":null,\"qinq-etype\":null,\"speed\":\"100\",\"hold-time\":{},\"down-when-looped\":{},\"lldp\":{\"dest-mac\":[]},\"autonegotiate\":\"limited\",\"encap-type\":\"qinq\",\"mtu\":1492,\"mode\":\"access\"},\"admin-state\":\"disable\"}}",
"targets": [
{
"target": "/nsp-equipment:network/network-element[ne-id='{{nodeA}}']/hardware-component/port[component-id='shelf=1/cardSlot=1/card=1/mdaSlot=1/mda=1/port=1/1/11']",
"target-identifier-value": "1/1/11"
},
{
"target": "/nsp-equipment:network/network-element[ne-id='{{nodeA}}']/hardware-component/port[component-id='shelf=1/cardSlot=1/card=1/mdaSlot=1/mda=1/port=1/1/10']",
"target-identifier-value": "1/1/10"
},
{
"target": "/nsp-equipment:network/network-element[ne-id='{{nodeB}}']/hardware-component/port[component-id='shelf=1/cardSlot=1/card=1/mdaSlot=1/mda=1/port=1/1/11']",
"target-identifier-value": "1/1/11"
},
{
"target": "/nsp-equipment:network/network-element[ne-id='{{nodeB}}']/hardware-component/port[component-id='shelf=1/cardSlot=1/card=1/mdaSlot=1/mda=1/port=1/1/10']",
"target-identifier-value": "1/1/10"
}
]
}
]
}
}
Creating deployments can take a significant amount of time, especially if many ports are being configured at once. Kafka notifications can be used to monitor when each port has been aligned with the intended configuration and to confirm that the configuration is correct.
{
"nsp-model-notification:object-modification": {
"changes": [
{
"name": "status",
"old-value": "intent-aligning",
"new-value": "intent-aligned"
},
{
"name": "message",
"old-value": "alignment started",
"new-value": "Alignment Successful"
},
{
"name": "detailed-message",
"old-value": "alignment started",
"new-value": "Alignment Successful"
},
{
"name": "deployment-status",
"old-value": "aligning",
"new-value": "deployed-aligned"
},
{
"name": "deployment-status-message",
"old-value": "alignment is in progress",
"new-value": "Alignment Successful"
},
{
"name": "last-modified-time",
"old-value": "2022-11-22T14:37:16.193Z",
"new-value": "2022-11-22T14:37:20.454Z"
},
{
"name": "align-end-time",
"old-value": "2022-11-22T14:37:16.193Z",
"new-value": "2022-11-22T14:37:20.454Z"
}
],
"schema-nodeid": "/nsp-icm:icm/deployments/deployment",
"instance-id": "/nsp-icm:icm/deployments/deployment[template-name='Default Port'][target='/nsp-equipment:network/network-element[ne-id='92.168.96.46']/hardware-component/port[component-id='shelf=1/cardSlot=1/card=1/mdaSlot=1/mda=1/port=1/1/10']'][target-identifier-value='1/1/10']",
"context": "nsp-infra-config-server-app",
"tree": {
"/nsp-icm:icm/deployments/deployment": {
"@": {
"nsp-model:schema-nodeid": "/nsp-icm:icm/deployments/deployment",
"nsp-model:identifier": "/nsp-icm:icm/deployments/deployment[template-name='Default Port'][target='/nsp-equipment:network/network-element[ne-id='92.168.96.46']/hardware-component/port[component-id='shelf=1/cardSlot=1/card=1/mdaSlot=1/mda=1/port=1/1/10']'][target-identifier-value='1/1/10']"
}
}
},
"event-time": "2022-11-22T14:37:20.475Z"
}
}
{
"nsp-model-notification:object-modification": {
"changes": [
{
"name": "encap-type",
"old-value": "nullEncap",
"new-value": "qinq"
},
{
"name": "mtu-value",
"old-value": "null",
"new-value": "1492"
},
{
"name": "port-index",
"old-value": "10",
"new-value": "11"
},
{
"name": "port-mode",
"old-value": "trunk",
"new-value": "access"
},
{
"name": "rate",
"old-value": "null",
"new-value": "100"
}
],
"schema-nodeid": "/nsp-equipment:network/network-element/hardware-component/port/port-details",
"instance-id": "/nsp-equipment:network/network-element[ne-id='92.168.96.46']/hardware-component/port[component-id='shelf=1/cardSlot=1/card=1/mdaSlot=1/mda=1/port=1/1/10']/port-details",
"context": "nsp-db-synchronizer",
"tree": {
"/nsp-equipment:network/network-element/hardware-component/port/port-details": {
"@": {
"nsp-model:schema-nodeid": "/nsp-equipment:network/network-element/hardware-component/port/port-details",
"nsp-model:identifier": "/nsp-equipment:network/network-element[ne-id='92.168.96.46']/hardware-component/port[component-id='shelf=1/cardSlot=1/card=1/mdaSlot=1/mda=1/port=1/1/10']/port-details",
"nsp-model:sources": [
"fdn:yang:nsp-network:/nsp-network:network/node[node-id='92.168.96.46']/node-root/nokia-state:state/port[port-id='1/1/10']",
"fdn:app:mdm-ami-cmodel:92.168.96.46:equipment:PortDetails:/port[port-id='1/1/10']",
"fdn:yang:nsp-network:/nsp-network:network/node[node-id='92.168.96.46']/node-root/nokia-conf:configure/port[port-id='1/1/10']"
]
},
"rate": "100",
"port-mode": "access",
"actual-rate-units": "mbps",
"mtu-value": 1492,
"duplex": "unknown",
"port-type": "ethernet-port",
"auto-negotiate": "unknown",
"operational-duplex": "unknown",
"port-index": 11,
"actual-rate": 0,
"encap-type": "qinq"
}
},
"event-time": "2022-11-22T14:37:21.844Z"
}
}
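When consuming these notifications programmatically, a deployment can be treated as complete once its deployment-status changes to deployed-aligned, as in the first notification above. A minimal Python check of that condition is shown below; the event argument is assumed to be the parsed JSON of one Kafka record.

# Illustrative check against an ICM deployment notification (first example above).
def deployment_is_aligned(event):
    notification = event.get("nsp-model-notification:object-modification", {})
    if notification.get("schema-nodeid") != "/nsp-icm:icm/deployments/deployment":
        return False
    for change in notification.get("changes", []):
        if change.get("name") == "deployment-status" and change.get("new-value") == "deployed-aligned":
            return True
    return False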
Before service creation, the following must also be configured:
Service tunnels are created by first configuring a tunnel template using the "templates" API:
POST /restconf/data/tunnel-template:templates HTTP/1.1
{
"template": [
{
"name": "TunnelTemplate",
"description": "This is a service template for creating Service Tunnels",
"intent-type": "tunnel",
"intent-version": 1,
"state": "released",
"ui-config": "default",
"workflows": []
}
]
}
The service tunnel is then created using the "intent-base" API (note that this step is performed twice, with the endpoints reversed, because tunnels are required in both directions of the two-site VPRN being created):
POST /restconf/data/nsp-tunnel-intent:intent-base HTTP/1.1
{
"nsp-tunnel-intent:intent": [
{
"source-ne-id": "{{nodeA}}",
"sdp-id": "5000",
"intent-type": "tunnel",
"intent-type-version": "1",
"olc-state": "deployed",
"template-name": "TunnelTemplate",
"intent-specific-data": {
"tunnel:tunnel": {
"destination-ne-id": "{{nodeB}}",
"admin-state": "unlocked",
"transport-type": "mpls",
"signaling": "tldp",
"mpls": {
"mixed-lsp-mode": false,
"enable-ldp": false,
"enable-bgp-tunnel": false,
"sr-isis": false,
"sr-ospf": false,
"lsp": "{{lspAtoB}}"
},
"hello-parameters": {
"keep-alive-enabled": false
},
"name": "SDP5000"
}
}
}
]
}
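Since the same request is issued twice with the endpoints reversed, both payloads can be generated from a single helper. The Python sketch below is illustrative only: the base URL, token handling, and the reverse-direction LSP name (shown here as lspBtoA, analogous to lspAtoB) are assumptions.

# Sketch of creating the service tunnels in both directions.
# BASE_URL, TOKEN and the reverse-direction LSP name are assumptions.
import requests

BASE_URL = "https://nsp-server"   # placeholder
TOKEN = "<bearer-token>"          # placeholder

def tunnel_intent(source_ne, dest_ne, lsp_name, sdp_id="5000"):
    # Mirrors the intent-base payload shown above.
    return {
        "nsp-tunnel-intent:intent": [{
            "source-ne-id": source_ne,
            "sdp-id": sdp_id,
            "intent-type": "tunnel",
            "intent-type-version": "1",
            "olc-state": "deployed",
            "template-name": "TunnelTemplate",
            "intent-specific-data": {
                "tunnel:tunnel": {
                    "destination-ne-id": dest_ne,
                    "admin-state": "unlocked",
                    "transport-type": "mpls",
                    "signaling": "tldp",
                    "mpls": {
                        "mixed-lsp-mode": False,
                        "enable-ldp": False,
                        "enable-bgp-tunnel": False,
                        "sr-isis": False,
                        "sr-ospf": False,
                        "lsp": lsp_name,
                    },
                    "hello-parameters": {"keep-alive-enabled": False},
                    "name": f"SDP{sdp_id}",
                }
            },
        }]
    }

# One request per direction; node IDs and LSP names are environment values.
for src, dst, lsp in [("<nodeA>", "<nodeB>", "<lspAtoB>"), ("<nodeB>", "<nodeA>", "<lspBtoA>")]:
    requests.post(
        f"{BASE_URL}/restconf/data/nsp-tunnel-intent:intent-base",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=tunnel_intent(src, dst, lsp),
        verify=False,  # lab setting only
    ).raise_for_status()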
Service tunnels are created asynchronously, so a response is received before the request has been fully processed. The Postman collection implements a delay to wait for the service tunnel to be created, but Kafka notifications also indicate when the tunnels have been successfully deployed. The following notification is an example of a service tunnel being updated to "deployed" after it has been configured in the network:
{
"nsp-model-notification:object-modification": {
"changes": [
{
"name": "last-updated-time",
"old-value": "2022-11-22T14:24:57.153Z",
"new-value": "2022-11-22T14:25:01.579Z"
},
{
"name": "occurred-at",
"old-value": "2022-11-22T14:24:57.153Z",
"new-value": "2022-11-22T14:25:01.579Z"
},
{
"name": "from-state",
"old-value": "saved",
"new-value": "planned"
},
{
"name": "to-state",
"old-value": "planned",
"new-value": "deployed"
},
{
"name": "logs",
"old-value": [
"Successfully updated the tunnel slc state from saved to planned"
],
"new-value": [
"Successfully deployed tunnel to network",
"Successfully updated the tunnel slc state from planned to deployed"
]
}
],
"schema-nodeid": "/nsp-service:services/tunnel-layer/mpls-tunnel/service-extension:mpls-tunnel-ext/slc-details",
"instance-id": "/nsp-service:services/tunnel-layer/mpls-tunnel[id='5000'][source-ne-id='92.168.96.46']/service-extension:mpls-tunnel-ext/slc-details",
"context": "sf_app",
"tree": {
"/nsp-service:services/tunnel-layer/mpls-tunnel/service-extension:mpls-tunnel-ext/slc-details": {
"@": {
"nsp-model:schema-nodeid": "/nsp-service:services/tunnel-layer/mpls-tunnel/service-extension:mpls-tunnel-ext/slc-details",
"nsp-model:identifier": "/nsp-service:services/tunnel-layer/mpls-tunnel[id='5000'][source-ne-id='92.168.96.46']/service-extension:mpls-tunnel-ext/slc-details",
"nsp-model:sources": []
}
}
},
"event-time": "2022-11-22T14:25:01.640Z"
}
}
The "customers" API is used to create the customer:
POST /restconf/data/nsp-customer:customers HTTP/1.1
{
"customer": [
{
"id" : 25,
"name" : "Sample Customer",
"description" : "Customer for tutorial purposes",
"phone-number" : "1-800-555-1234",
"contact" : "Customer Name"
}
]
}
As with the service tunnel, creating the service requires both a template and the service itself. The "templates" API can be used to create the service template:
POST /restconf/data/service-template:templates HTTP/1.1
{
"template": [
{
"name": "VprnServiceTemplate",
"description": "This is a service template for creating VPRNs",
"intent-type": "vprn",
"intent-version": 1,
"state": "released",
"ui-config": "default",
"workflows": [
{
"service-lifecycle-state": "saved",
"service-lifecycle-case": "success",
"workflow-id": "adminUpPort",
"blocking": true,
"execution-timeout": 60
}
]
}
]
}
The "intent-base" API is then used to create the service itself:
POST /restconf/data/nsp-service-intent:intent-base HTTP/1.1
{
"nsp-service-intent:intent": [
{
"service-name": "E2E_Sample_VPRN",
"intent-type": "vprn",
"intent-type-version": "1",
"olc-state": "deployed",
"template-name": "VprnServiceTemplate",
"intent-specific-data": {
"vprn:vprn": {
"admin-state": "unlocked",
"customer-id": 25,
"site-details": {
"site": [
{
"device-id": "{{nodeA}}",
"site-name": "E2E_Sample_VPRN",
"export-inactive-bgp": false,
"enable-ospf": false,
"auto-bind-tunnel": {
"resolution": "any",
"enforce-strict-tunnel-tagging": false
},
"autonomous-system": 2000,
"route-target": [
{
"target-type": "import-export",
"target-value": "2000:100"
}
],
"enable-max-routes": false,
"enable-ebgp": false,
"enable-static-route": false,
"ne-service-id": 325,
"route-distinguisher": "auto-rd",
"interface-details": {
"interface": [
{
"interface-name": "if_1",
"admin-state": "unlocked",
"loopback": false,
"ingress-stats": true,
"sap": {
"admin-state": "unlocked",
"enable-qos": false,
"enable-filter": false,
"port-id": "1/1/10",
"inner-vlan-tag": 325,
"outer-vlan-tag": 325
},
"vpls": {
"evpn-tunnel": false,
"evpn": {
"arp": {
"learn-dynamic": true,
"advertise-static": false,
"advertise-static-route-tag": 0,
"advertise-dynamic": false,
"advertise-dynamic-route-tag": 0
}
}
},
"ip-mtu": 1500,
"ipv4": {
"primary": {
"address": "192.168.25.10",
"prefix-length": 24
}
}
}
]
}
},
{
"device-id": "{{nodeB}}",
"site-name": "E2E_Sample_VPRN",
"export-inactive-bgp": false,
"enable-ospf": false,
"auto-bind-tunnel": {
"resolution": "any",
"enforce-strict-tunnel-tagging": false
},
"autonomous-system": 2000,
"route-target": [
{
"target-type": "import-export",
"target-value": "2000:100"
}
],
"enable-max-routes": false,
"enable-ebgp": false,
"enable-static-route": false,
"ne-service-id": 325,
"route-distinguisher": "auto-rd",
"interface-details": {
"interface": [
{
"interface-name": "if_2",
"admin-state": "unlocked",
"loopback": false,
"ingress-stats": true,
"sap": {
"admin-state": "unlocked",
"enable-qos": false,
"enable-filter": false,
"port-id": "1/1/11",
"inner-vlan-tag": 325,
"outer-vlan-tag": 325
},
"vpls": {
"evpn-tunnel": false,
"evpn": {
"arp": {
"learn-dynamic": true,
"advertise-static": false,
"advertise-static-route-tag": 0,
"advertise-dynamic": false,
"advertise-dynamic-route-tag": 0
}
}
},
"ip-mtu": 1500,
"ipv4": {
"primary": {
"address": "192.168.10.10",
"prefix-length": 24
}
}
},
{
"interface-name": "if_1",
"admin-state": "unlocked",
"loopback": false,
"ingress-stats": true,
"sap": {
"admin-state": "unlocked",
"enable-qos": false,
"enable-filter": false,
"port-id": "1/1/10",
"inner-vlan-tag": 325,
"outer-vlan-tag": 325
},
"vpls": {
"evpn-tunnel": false,
"evpn": {
"arp": {
"learn-dynamic": true,
"advertise-static": false,
"advertise-static-route-tag": 0,
"advertise-dynamic": false,
"advertise-dynamic-route-tag": 0
}
}
},
"ip-mtu": 1500,
"ipv4": {
"primary": {
"address": "192.168.1.10",
"prefix-length": 24
}
}
}
]
}
}
]
},
"sdp-details": {
"sdp": [
{
"source-device-id": "{{nodeA}}",
"sdp-id": "5000",
"override-vc-id": false,
"destination-device-id": "{{nodeB}}"
},
{
"source-device-id": "{{nodeB}}",
"sdp-id": "5000",
"override-vc-id": false,
"destination-device-id": "{{nodeA}}"
}
]
}
}
}
}
]
}
Poll the service to determine its status and whether it is fully configured:
GET /restconf/data/nsp-service:services/service-layer/l3vpn=E2E_Sample_VPRN?fields=service-extension:l3vpn-svc/slc-state HTTP/1.1
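A minimal polling sketch in Python is shown below; it repeats the GET above until the response reports the "deployed" state. The base URL and token handling are placeholders, and the response is checked as text to avoid assuming its exact structure.

# Sketch of polling the service state until it reports "deployed".
import time
import requests

BASE_URL = "https://nsp-server"   # placeholder
TOKEN = "<bearer-token>"          # placeholder
URL = (f"{BASE_URL}/restconf/data/nsp-service:services/service-layer/"
       "l3vpn=E2E_Sample_VPRN?fields=service-extension:l3vpn-svc/slc-state")

for _ in range(30):                       # up to roughly five minutes
    response = requests.get(URL, headers={"Authorization": f"Bearer {TOKEN}"}, verify=False)
    if response.ok and "deployed" in response.text:
        print("service deployed")
        break
    time.sleep(10)
else:
    print("service did not reach the deployed state in time")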
Instead of polling for successful service creation, Kafka notifications can be monitored to confirm that the service lifecycle state enters the "deployed" state. If there are errors during deployment, the "logs" attribute in this same notification will indicate what errors occurred:
{
"nsp-model-notification:object-modification": {
"changes": [
{
"name": "last-updated-time",
"old-value": "2022-11-22T14:45:22.052Z",
"new-value": "2022-11-22T14:45:27.094Z"
},
{
"name": "occurred-at",
"old-value": "2022-11-22T14:45:22.052Z",
"new-value": "2022-11-22T14:45:27.094Z"
},
{
"name": "from-state",
"old-value": "saved",
"new-value": "planned"
},
{
"name": "to-state",
"old-value": "planned",
"new-value": "deployed"
},
{
"name": "logs",
"old-value": [
"Successfully updated the service slc state from saved to planned"
],
"new-value": [
"Successfully deployed service to network",
"Successfully updated the service slc state from planned to deployed"
]
}
],
"schema-nodeid": "/nsp-service:services/service-layer/l3vpn/service-extension:l3vpn-svc/slc-details",
"instance-id": "/nsp-service:services/service-layer/l3vpn[service-id='E2E_Sample_VPRN']/service-extension:l3vpn-svc/slc-details",
"context": "sf_app",
"tree": {
"/nsp-service:services/service-layer/l3vpn/service-extension:l3vpn-svc/slc-details": {
"@": {
"nsp-model:schema-nodeid": "/nsp-service:services/service-layer/l3vpn/service-extension:l3vpn-svc/slc-details",
"nsp-model:identifier": "/nsp-service:services/service-layer/l3vpn[service-id='E2E_Sample_VPRN']/service-extension:l3vpn-svc/slc-details",
"nsp-model:sources": []
}
}
},
"event-time": "2022-11-22T14:45:27.150Z"
}
}
A TWAMP-light OAM test can be executed to validate that the service was correctly configured and that traffic is passing through it. At this time, TWAMP-light reflectors must be manually created on each service site. Test suites can be used to automatically generate and execute the tests that send the TWAMP-light packets.
Use the "oam-pm" API to create reflectors on each site. Each reflector requires a separate creation request:
POST /restconf/data/nsp-oam-config:oam-pm HTTP/1.1
{
"twamp-light-reflector-svc": [
{
"ne-id": "{{nodeA}}",
"service-name": "E2E_Sample_VPRN",
"admin-state": "enable",
"description": "twamp-light reflector rtrA",
"udp-port": 64364,
"prefix": [
{
"ip-prefix": "192.168.1.10/32",
"description": "subnet1"
},
{
"ip-prefix": "192.168.10.10/32",
"description": "subnet1"
}
]
}
]
}
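Because each reflector requires its own request, a simple loop over the service sites can be used. The Python sketch below reuses the payload above; the node identifiers, the prefix list for node B, and the connection details are placeholders to be adapted.

# Sketch of creating one TWAMP-light reflector per service site.
import requests

BASE_URL = "https://nsp-server"   # placeholder
TOKEN = "<bearer-token>"          # placeholder

reflectors = {
    "<nodeA>": ["192.168.1.10/32", "192.168.10.10/32"],   # as in the request above
    "<nodeB>": ["192.168.1.10/32", "192.168.10.10/32"],   # adjust to the prefixes each reflector should answer for
}

for ne_id, prefixes in reflectors.items():
    payload = {
        "twamp-light-reflector-svc": [{
            "ne-id": ne_id,
            "service-name": "E2E_Sample_VPRN",
            "admin-state": "enable",
            "description": f"twamp-light reflector {ne_id}",
            "udp-port": 64364,
            "prefix": [{"ip-prefix": p, "description": "subnet1"} for p in prefixes],
        }]
    }
    requests.post(
        f"{BASE_URL}/restconf/data/nsp-oam-config:oam-pm",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=payload,
        verify=False,  # lab setting only
    ).raise_for_status()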
The "generate-tests" API is used to create a Test Suite and to have the tests automatically generated. Setting "execute" to true will cause the tests to start executing once they have all been created:
POST /restconf/operations/nsp-oam:generate-tests HTTP/1.1
{
"input": {
"test-type": "/nsp-oam:tests/oam-test:tests/twamp-light",
"template": "/nsp-oam:templates/oam-test:templates/twamp-light[name='Delay Streaming (on-demand)']",
"name": "E2E Test Suite",
"app-id": "NSP",
"bidirectional": true,
"execution-type": "on-demand",
"execute": true,
"entities": [
"/nsp-service:services/service-layer/l3vpn[service-id='E2E_Sample_VPRN']/endpoint[endpoint-id='{{nodeA}}-E2E_Sample_VPRN-if_1']",
"/nsp-service:services/service-layer/l3vpn[service-id='E2E_Sample_VPRN']/endpoint[endpoint-id='{{nodeB}}-E2E_Sample_VPRN-if_1']",
"/nsp-service:services/service-layer/l3vpn[service-id='E2E_Sample_VPRN']/endpoint[endpoint-id='{{nodeB}}-E2E_Sample_VPRN-if_2']"
]
}
}
The "list-executing-test-suites" API returns a list of all test suites that are currently running. It can be used to determine when all of the tests associated with the test suite that was just created have completed:
POST /restconf/operations/nsp-oam:list-executing-test-suites HTTP/1.1
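A simple Python polling sketch is shown below; it repeats the request until the test suite no longer appears in the response. Connection details are placeholders, and the response is searched as text rather than assuming its structure.

# Sketch of polling list-executing-test-suites until "E2E Test Suite" no longer appears.
import time
import requests

BASE_URL = "https://nsp-server"   # placeholder
TOKEN = "<bearer-token>"          # placeholder

while True:
    response = requests.post(
        f"{BASE_URL}/restconf/operations/nsp-oam:list-executing-test-suites",
        headers={"Authorization": f"Bearer {TOKEN}"},
        verify=False,  # lab setting only; an empty JSON input body may be required in some deployments
    )
    if response.ok and "E2E Test Suite" not in response.text:
        print("test suite execution finished")
        break
    time.sleep(5)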
Once all tests have completed, the test suite results can be retrieved using the inventory "find" API. The most recent execution-id contains the results of this test run. If the same test suite name is used for multiple service validations, multiple results with different execution-ids may be returned, even though the test suite is executed only once during this procedure.
POST /restconf/operations/nsp-inventory:find HTTP/1.1
{
"input": {
"xpath-filter": "/nsp-oam:test-suite-results/test-suite-result[test-suite-name='E2E Test Suite']",
"offset": 0,
"limit": 100,
"sort-by": [
"-execution-id"
]
}
}
Retrieve the results of the latest execution using the "get-results" API. Note that this step is not strictly required, as all results were already retrieved in the previous step:
POST /restconf/data/nsp-oam:test-suites/test-suite=E2E Test Suite/get-results HTTP/1.1
{
"input": {
"execution-id": 8
}
}
Results of test suite executions are also published to the oam.test_execution Kafka topic:
{
"time-issued-iso" : "2022-11-22T14:54:57.369Z",
"time-issued" : 1669128897369,
"app-id" : "NSP",
"event-type" : "TEST_SUITE_COMPLETED",
"result-classifier" : "default",
"tests-results-succeeded" : 6,
"execution-id" : 3,
"tests-results-unclassified" : 0,
"start-time" : 1669128884869,
"tests-results-failed" : 0,
"tests-execution-failed" : 0,
"success-rate" : "100.000000",
"result-status" : "finished",
"tests-timed-out" : 0,
"tests-skipped" : 0,
"test-suite-name" : "E2E Test Suite",
"finish-time" : 1669128897368,
"tests-executed" : 6,
"no-first-result" : 0,
"tests-deleted" : 0,
"tests-stopped" : 0,
"results-completed" : true
}
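A minimal Python check of such a message against the fields shown above (purely illustrative):

# Illustrative check of an oam.test_execution message.
def test_suite_passed(event, suite_name="E2E Test Suite"):
    return (
        event.get("event-type") == "TEST_SUITE_COMPLETED"
        and event.get("test-suite-name") == suite_name
        and event.get("result-status") == "finished"
        and event.get("tests-results-failed", 0) == 0
        and event.get("tests-execution-failed", 0) == 0
    )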
Clean up the test artifacts by deleting the test suite and the TWAMP-light reflectors:
DELETE /restconf/data/nsp-oam:test-suites/test-suite=E2E Test Suite HTTP/1.1
DELETE /restconf/data/nsp-oam-config:oam-pm/twamp-light-reflector-svc={{nodeA}},E2E_Sample_VPRN HTTP/1.1
Statistics can be enabled to monitor the service and provide metrics on service usage. SAP egress queue statistics are enabled using the "subscription" API:
POST /restconf/data/md-subscription:/subscriptions/subscription HTTP/1.1
{
"subscription": [
{
"type": "telemetry:/base/sros-service-vprn/service_vprn_interface_sap_egress_qos_sap-egress_queue_statistics",
"name": "VPRN Queue Stats",
"period": 900,
"db": "enabled",
"state": "enabled",
"notification": "enabled",
"fields": [],
"sync-time": "00:00",
"filter": null
}
]
}
Kafka notifications are the recommended way to monitor statistics. The topic on which the statistics are published can be determined by retrieving the subscription after it is created:
GET /restconf/data/md-subscription:/subscriptions/subscription=VPRN Queue Stats HTTP/1.1
An example of the Kafka notification is:
{
"data": {
"ietf-restconf:notification": {
"eventTime": "2022-11-22T15:30:01Z",
"nsp-kpi:real_time_kpi-event": {
"profile_in-inplus-profile-forwarded-packets": 2,
"profile_in-inplus-profile-forwarded-octets": 144,
"profile_out-exceed-profile-forwarded-packets": 0,
"profile_out-exceed-profile-forwarded-octets": 0,
"profile_in-inplus-profile-dropped-packets": 0,
"profile_in-inplus-profile-dropped-octets": 0,
"profile_out-exceed-profile-dropped-packets": 0,
"profile_out-exceed-profile-dropped-octets": 0,
"system-id": "92.168.96.46",
"time-captured": 1669130982379,
"profile_out-exceed-profile-forwarded-packets-periodic": 0,
"time-captured-periodic": 0,
"profile_out-exceed-profile-forwarded-octets-periodic": 0,
"profile_in-inplus-profile-dropped-octets-periodic": 0,
"profile_out-exceed-profile-dropped-packets-periodic": 0,
"profile_in-inplus-profile-forwarded-octets-periodic": 0,
"profile_out-exceed-profile-dropped-octets-periodic": 0,
"profile_in-inplus-profile-forwarded-packets-periodic": 0,
"profile_in-inplus-profile-dropped-packets-periodic": 0,
"neId": "92.168.96.46",
"kpiType": "telemetry:/base/sros-service-vprn/service_vprn_interface_sap_egress_qos_sap-egress_queue_statistics",
"objectId": "/state/service/vprn[service-name='E2E_Sample_VPRN']/interface[interface-name='if_1']/sap[sap-id='1/1/10:325.325']/egress/qos/sap-egress/queue[queue-id='1']/statistics/profile",
"dataType": 1
}
}
}
}
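As an illustration, the counters in this notification could be summarized with a small Python helper like the following; the field names are taken from the example above, and the message argument is assumed to be the parsed JSON of one Kafka record.

# Illustrative extraction of forwarded/dropped counters from the KPI event above.
def summarize_queue_stats(message):
    event = message["data"]["ietf-restconf:notification"]["nsp-kpi:real_time_kpi-event"]
    return {
        "ne-id": event.get("neId"),
        "object-id": event.get("objectId"),
        "forwarded-packets": event.get("profile_in-inplus-profile-forwarded-packets", 0)
                             + event.get("profile_out-exceed-profile-forwarded-packets", 0),
        "dropped-packets": event.get("profile_in-inplus-profile-dropped-packets", 0)
                           + event.get("profile_out-exceed-profile-dropped-packets", 0),
    }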