
IDS reconfiguration


Reconfiguration based on an intrusion alarm

NS2 + IDS artifacts:

Concept

After successful deployment, an alarm triggered by the IDS should lead to a reconfiguration of the MDC container to connect to the quarantine instance of NS1. This requires a series of 5GTANGO components playing together.

(Figure: ids_reconfig)

Alternatively, the reconfiguration may also be triggered manually from the FMP/SMPortal:

(Figure: reconfig_fmp)

Closing the loop

  1. Tested & confirmed: The IDS triggers an alarm if and only if an intrusion (wrong host or user) is detected. The alarm leads to a corresponding log entry in Elasticsearch (visible in Kibana).

  2. Tested & confirmed: The HTTP server (H) exposes this alarm via a REST interface to the monitoring framework. This leads to a monitoring metric being set.

    When an alarm is triggered, the metric ip0 changes from 0 to some positive number for around 20s (see the polling sketch after this list).

  3. Tested & not working consistently: The policy manager picks up the change in the specified custom metric and triggers a reconfiguration request to the SLM. This request also contains the service ID, since the policy is bound to the corresponding service instance.

    Open issues:

  4. Tested & confirmed: The SLM contacts the SSM, which sends back the reconfiguration payload; the payload is then sent via the FLM to the FSM of the MDC container.

  5. Tested & confirmed: The FSM requests the MANO to restart the MDC container with a new environment variable, which should reconfigure the connection of the MDC from the old NS1 to the quarantine NS1 instance.

  6. Tested & confirmed: After the FSM's response, the MDC pod is restarted with the new env var, and the IMMS traffic should stop arriving at the old NS1 and start arriving at the quarantine NS1.
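
To verify step 2 without waiting for the dashboard, the custom metric can be polled directly. Below is a minimal sketch against the Prometheus HTTP API (the instance linked in the "Other" section at the bottom); the metric name ip0 comes from this page, while the polling interval and duration are arbitrary choices.

```python
# Minimal sketch: poll Prometheus for the custom metric "ip0" to check
# whether an IDS alarm is currently visible to the monitoring framework.
import time
import requests

PROM_URL = "http://pre-int-sp-ath.5gtango.eu:9090"  # Prometheus from the "Other" section

def ip0_value():
    """Return the current value of the ip0 metric via the Prometheus HTTP API."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "ip0"})
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    # No sample -> treat as 0 (no alarm).
    return float(results[0]["value"][1]) if results else 0.0

# The alarm window is only ~20s, so sample every 2s for about a minute.
for _ in range(30):
    value = ip0_value()
    print("ip0 =", value, "-> ALARM" if value > 0 else "")
    time.sleep(2)
```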

Useful info

Creating, activating, and using a policy

All policy APIs are documented in the Swagger API.

To create a new policy and verify it for the industrial NS, follow these steps:

  1. Create the policy. Policy creation should eventually happen in the portal, but it is not yet available from the UI. For now, you can create a new policy by POSTing the policy available here to the REST API: http://pre-int-sp-ath.5gtango.eu:8081/api/v1 (a minimal client sketch follows after this list)
  2. Define the policy as default (screenshot: Screenshot_20191014_171301)
  3. Activate the policy (you will then see the corresponding Prometheus rule activated). The policy is activated automatically upon NS deployment. Alternatively, steps 1 to 3 can be executed by running the following robot test
  4. Trigger the ip0 metric to a value greater than 0.
    This can be done by connecting to the external IP of msf-vnf1: smbclient -L <external-IP>
  5. The Prometheus rule fires and the monitoring manager sends the alert to the pub/sub broker.
    (Screenshot: Screenshot_20191014_170231)
  6. The policy manager reads the alert and creates the following alert action.
    You should be able to see the policy alert action in the portal (screenshot: image_2019_10_14T11_38_23_741Z)
  7. You can also check the payload in the pub/sub logs to confirm that the triggering worked:
2019-10-14 11:32:41:519: Message published

Node:         rabbit@1beebaffb872
Connection:   172.18.0.39:49626 -> 172.18.0.7:5672
Virtual host: /
User:         guest
Channel:      6
Exchange:     son-kernel
Routing keys: [<<"service.instance.reconfigure">>]
Routed queues: [<<"service.instance.reconfigure">>]
Properties:   [{<<"app_id">>,longstr,<<"tng-policy-mngr">>},
               {<<"reply_to">>,longstr,<<"service.instance.reconfigure">>},
               {<<"correlation_id">>,longstr,<<"5da45cd976e1730001b7e2b9">>},
               {<<"priority">>,signedint,0},
               {<<"delivery_mode">>,signedint,2},
               {<<"headers">>,table,[]},
               {<<"content_encoding">>,longstr,<<"UTF-8">>},
               {<<"content_type">>,longstr,<<"text/plain">>}]
Payload: 
service_instance_uuid: f3a9af69-e42e-4b13-9b02-b6e900d7beb4
reconfiguration_payload: {vnf_name: lhc-vnf2, vnfd_uuid: 602c67d2-4080-436b-95e7-5828a57f0f85,
  log_message: intrusion, value: '1'}

You can check the Graylog logs of the tng-policy-mngr at http://logs.sonata-nfv.eu using the search query: source:pre-int-sp-ath AND container_name:tng-policy-mngr AND message:reconfiguration*

  • Alternatively/additionally, you can create a trace log of the broker messages at http://int-sp-ath.5gtango.eu:15672/#/traces (screenshot: trace_steps)
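
For step 1 above, the policy descriptor can be POSTed with a few lines of Python. This is a sketch only: the /policies resource path and the local file name policy_descriptor.json are assumptions, so check the Swagger API for the authoritative paths and schema.

```python
# Minimal sketch: create a policy by POSTing its descriptor to the
# policy manager's REST API (resource path assumed; see the Swagger API).
import json
import requests

API_BASE = "http://pre-int-sp-ath.5gtango.eu:8081/api/v1"

with open("policy_descriptor.json") as f:  # hypothetical local descriptor file
    policy = json.load(f)

resp = requests.post(f"{API_BASE}/policies", json=policy)  # assumed path
resp.raise_for_status()
print("Created policy:", resp.json())
```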

SSM/FSM

  • The policy manager triggers the reconfiguration_event of the SSM, and the SSM triggers the reconfiguration_event of the FSM
  • You need to overwrite the corresponding functions in the SSM/FSM code to return the correct response

FSM

SSM

  • The incoming content argument, which can be used to extract the VNFR IDs, is of this format: https://github.com/sonata-nfv/tng-sdk-sm/blob/master/src/tngsdksm/examples/payloads/ssm/configure_event.yml#L101

  • The response dict to return should have the following format, but as a Python dict (a Python version is sketched below the YAML)

    ---
    vnf:
    - configure:
        payload:
          message: 'alert 1'
        trigger: True
      id: <uuid of the vnf instance that the fsm is associated to>
    - configure:
        trigger: False
      id: <uuid of the vnf instance that doesn't need reconfiguration>
    - configure:
        trigger: False
      id: <uuid of the vnf instance that doesn't need reconfiguration>
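A minimal sketch of that response built directly as a Python dict, equivalent to the YAML above. How the VNFR UUIDs are pulled out of the incoming content is an assumption here (a vnfrs list with id fields, and the MDC's VNFR first); check the linked configure_event.yml for the actual layout.

```python
# Sketch of the reconfigure_event response as a Python dict.
def reconfigure_event(content):
    # Assumed layout of `content`: a "vnfrs" list whose entries carry an "id".
    vnfr_ids = [vnfr["id"] for vnfr in content.get("vnfrs", [])]
    mdc_id, other_ids = vnfr_ids[0], vnfr_ids[1:]  # assume the MDC's VNFR comes first

    vnf = [{
        "configure": {
            "payload": {"message": "alert 1"},
            "trigger": True,
        },
        "id": mdc_id,  # the VNF instance whose FSM should reconfigure
    }]
    for vnf_id in other_ids:  # all remaining VNFs need no reconfiguration
        vnf.append({"configure": {"trigger": False}, "id": vnf_id})
    return {"vnf": vnf}
```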

Testing

Other

  • Prometheus with the monitoring metric ip0: http://pre-int-sp-ath.5gtango.eu:9090/graph?g0.range_input=1h&g0.expr=ip0&g0.tab=0

    (Figure: ids_ip0_prometheus)

    When an alarm is triggered, the metric ip0 changes from 0 to some positive number, here 183762988, for around 20s.

  • List all container names in Kubernetes to identify the correct container to look for in Prometheus:

    kubectl get pods --all-namespaces -o=custom-columns=NameSpace:.metadata.namespace,NAME:.metadata.name,CONTAINERS:.spec.containers[*].name
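
    The same listing can be produced programmatically. A sketch using the official Kubernetes Python client (pip install kubernetes), assuming a working kubeconfig:

```python
# Sketch: list namespace, pod name, and container names for all pods,
# mirroring the kubectl one-liner above.
from kubernetes import client, config

config.load_kube_config()  # assumes ~/.kube/config points at the cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    containers = ",".join(c.name for c in pod.spec.containers)
    print(pod.metadata.namespace, pod.metadata.name, containers)
```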