Log collection and alerting sound simple: gather events, send an alert when something is wrong. Unfortunately, it isn't. Collecting the right data and alerting appropriately is a major challenge for most IT organizations today. Many events are not logged properly; some teams are flooded by events that don't matter while missing the important data. How to monitor is often the easy question; what to monitor is far more difficult. To address this, DNIF introduces a lightweight component called PICO (also referred to as PC). This guide walks you through using this native component to build a scalable, flexible monitoring infrastructure that can detect, filter, and alert on operational issues.
PICO is a lightweight, standalone Docker container that collects, queues, stores, filters, and forwards log events. It acts as a collector for event logs from multiple servers, desktops, and other sources, and forwards them to one or more configured destinations or servers.
PICO (abbreviated PC) is typically deployed at remote sites, where events are transported over a WAN link to the central DNIF deployment. It can collect and filter events before forwarding them, queue events locally if the link fails, and flush the queued events once the link comes back up.
PICO provides the following benefits, improving the overall performance of an organization's log management systems:
- Collection of high-volume system log events.
- Filtering of system log events based on source address or text signature.
- Logging persistence: event logs are collected and written to disk in case of a link failure.
- Ability to forward to multiple Adapters (AD).
- Forward syslog events to multiple syslog servers in the raw syslog format.
- Compression of log events.
- Encryption of log events, allowing data transfer over an insecure link or the Internet.
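The compression and encryption steps above can be sketched in Python. This is a conceptual illustration only, not PICO's actual wire format: zlib stands in for the compressor, and a keyed XOR keystream stands in for a real cipher (a real deployment would use an authenticated cipher such as AES-GCM).

```python
import hashlib
import zlib

def pack_event(event: bytes, key: bytes) -> bytes:
    """Compress, then encrypt an event for transport over an insecure link.

    The XOR keystream below is a stand-in for illustration only,
    NOT a secure cipher.
    """
    compressed = zlib.compress(event)
    keystream = hashlib.sha256(key).digest()
    # XOR each byte with a repeating keystream derived from the shared key
    return bytes(b ^ keystream[i % len(keystream)] for i, b in enumerate(compressed))

def unpack_event(payload: bytes, key: bytes) -> bytes:
    """Reverse pack_event: decrypt, then decompress."""
    keystream = hashlib.sha256(key).digest()
    compressed = bytes(b ^ keystream[i % len(keystream)] for i, b in enumerate(payload))
    return zlib.decompress(compressed)

event = b"<134>Oct 11 22:14:15 host app: user login failed " * 10
key = b"example-shared-key"  # hypothetical key, like encryption-key in pico_config.yaml
packed = pack_event(event, key)
assert unpack_event(packed, key) == event
assert len(packed) < len(event)  # repetitive log lines compress well
```

The point of compressing before encrypting is that encrypted data has no redundancy left to compress, so the order of the two steps matters.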
The PICO Architecture explained
- PICO receives system log events from multiple sources such as servers and devices. Syslog events are sent to PICO on port 514.
- PICO consists of the following services, which process all the event logs:
- System Log Server and Native Listener
- Inward Queue
- System Log Engine
- The Syslog Server listens for all logs arriving at PICO.
- The logs are then published to an inward queue called pico_inward_queue.
- A component called the Native Listener is present at the Syslog Server level; it listens for logs on TCP port 1514 (the port number is configurable). This enables various connectors provided by DNIF, such as o365, GSuite, and RDBMS, to send their logs to PICO instead of sending them directly to the Adapter.
- Logs are filtered based on the configurations defined in the Syslog Engine. These configurations are stored in a YAML file and also determine the target to which the logs are forwarded. In addition, multiple Syslog Engine processes can be spawned for faster filtering.
Logs can be filtered by source IP address or by event matching. The Syslog Engine sends the filtered logs out to all forwarders.
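The source-IP and event-matching filters can be sketched as follows. This is an illustrative reading of the configuration semantics, not PICO's actual implementation; the dictionary keys mirror the host-filter and string-filter sections of pico_config.yaml, and the assumption that default-policy is the action applied to matching events is ours.

```python
# Simplified sketch of PICO-style host and string filtering.
# Keys mirror pico_config.yaml; the matching semantics are assumed.
HOST_FILTER = {
    "default-policy": "deny",
    "host-address": ["192.168.1.5", "192.168.1.6"],
}
STRING_FILTER = {
    "default-policy": "allow",
    "search-string": ["64112", "not signed in"],
}

def passes_filters(src_ip: str, message: str,
                   host_filter: dict, string_filter: dict) -> bool:
    """Return True if an event should be forwarded.

    Assumed semantics: default-policy is the action applied to events
    that match the listed entries.
    """
    if src_ip in host_filter["host-address"]:
        if host_filter["default-policy"] == "deny":
            return False
    if any(s in message for s in string_filter["search-string"]):
        if string_filter["default-policy"] == "deny":
            return False
    return True

# Events from a denied source address are dropped; others pass through.
assert not passes_filters("192.168.1.5", "user not signed in", HOST_FILTER, STRING_FILTER)
assert passes_filters("10.0.0.1", "routine heartbeat", HOST_FILTER, STRING_FILTER)
```

Running several such filter workers in parallel is what the system-procs setting controls.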
- Logs are forwarded using Native Forwarding and Raw Forwarding techniques.
- Logs sent via the Native Forwarder are compressed (using zmq) and encrypted before being sent to the Adapter, whereas the Raw Forwarder sends unencrypted raw logs from the raw queue to the syslog servers.
- The Native TCP/UDP Forwarder forwards logs to one or more Adapters, while the Raw Forwarder sends logs to the syslog servers.
- The Adapter runs a Native TCP Listener on a configurable port, which decompresses the logs and sends them to the Adapter's ingestion queue in the desired format.
It is mandatory to whitelist the Native TCP Listener IP address on the configured port in the Adapter for all logs received through the Native TCP Listener.
- Install Docker on the host where PICO is to be set up.
Setting up Docker on the host can be customized; refer to the Docker documentation for installation instructions.
- Install Docker Compose
Docker is used to create and run the container, while Docker Compose is used to define and start it from the compose file.
- Set the following parameters in /etc/sysctl.conf on the host before the container starts:
```
# memory & file settings
fs.file-max=1000000
vm.overcommit_memory=1
vm.max_map_count=262144
# n/w receive buffer
net.core.rmem_default=33554432
net.core.rmem_max=33554432
```
- Reload the settings via `sysctl -p`.
- Enable swap space equal to 50% of RAM on the host system where PICO is to be set up.
- Once Docker and Docker Compose are successfully installed, perform the following configurations:
- Obtain or create the docker-compose.yaml file in a particular directory.
- The docker-compose file has the following format:
```yaml
version: "3"
services:
  pico:
    image: dnif/ship-pico:8.7.1
    hostname: pico
    network_mode: "host"
    cap_add:
      - NET_ADMIN
    ports:
      - "514:514/udp"
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "15671:15671"
      - "15672:15672"
      - "25672:25672"
    volumes:
      - /data:/pico
```
| Key | Description |
| --- | --- |
| version | Defines the Compose file format version |
| services | Defines the services in Docker |
| image | PICO version to be installed |
| hostname | Hostname of the system |
| network_mode | Applies the host's network configuration to the Docker container |
| cap_add | Adds container capabilities |
| ports | List of port mappings from host to container. For example, in "4369:4369" the left port is the host port and the right port is the container port, so logs arriving on the host port are delivered to the container through the shared port. |
| volumes | Volumes shared between the host machine and the Docker container. Directories on the host are mapped into the container, so files in the mapped host directory are also visible inside the container. |
Next, pull the image version specified in the YAML file and start the container by executing the following command:
docker-compose up -d
This command pulls the image (if it is not already present) and starts the container in detached mode.
After setup, a shell session can be opened in the rabbitmq and pico containers by running the following command for each container:
docker exec -it <container_id> sh
After opening a shell in the pico container, make configuration changes in the pico_config.yaml file in the directory /pico/conf/ and set values for the fields below:
```yaml
event-filtering:
  system-procs: 2
  host-filter:
    default-policy: "deny"
    host-address:
      - "192.168.1.5"
      - "192.168.1.6"
      - "192.168.1.7"
      - "192.168.1.8"
  string-filter:
    default-policy: "allow"
    search-string:
      - "64112"
      - "not signed in"
native-forwarder:
  - cpc-site:
      primary-adapter:
        - "10.20.1.4:1514"
        - "10.20.1.5:1514"
      failover-adapter:
        - "10.20.2.4:1514"
        - "10.20.2.5:1514"
      system-procs: 3
      enable-compression: True
      encryption-key: "KJKJJKEN*#<AJKJS(AL"
      socket-timeout: 300
      optimize-bunch: 5000
      optimize-timeout: 10
  - blr-site:
      primary-adapter:
        - "10.50.2.4:1514"
      system-procs: 3
      enable-backoff: True
raw-syslog-forwarder:
  - local-syslog:
      syslog-server:
        - "172.16.2.1:514"
        - "172.16.2.2:514"
      optimize-bunch: 5000
  - server-group:
      syslog-server:
        - "192.168.1.44:514"
      optimize-bunch: 5000
configuration:
  logfile-path: "/pico/log/"
  log-level: 0
  scope-id: 'AUz_fv6YDDS9_qEFkivb'
  pico-id: '1'
  queue-persistence: True
  backoff: 10
  maxeps: 6000
  listener-port: 1514
```
| Field Name | Sub Field | Description |
| --- | --- | --- |
| event-filtering (IP and Event Filter Configuration) | system-procs | Number of Syslog Engine processes running in PICO. The more Syslog Engine processes, the better the filtering performance. The minimum value is 1. |
| host-filter (IP Filter Configuration) | | |
| string-filter (Event Filter Configuration) | | |
| native-forwarder (Native Forwarder Configuration, TCP/UDP) | cpc-site | Indicates an Adapter site address |
| | blr-site | Indicates an Adapter site address |
| raw-syslog-forwarder (Raw Syslog Forwarder Configuration) | local-syslog | Indicates a syslog server name |
| | server-group | Indicates a syslog server name |
| configuration | logfile-path | Defines the path where logs of all internal operations performed in PICO are stored. Note: this path contains log files for all components available in PICO, for troubleshooting and observation purposes. |
| | log-level | Log level for PICO's internal queue logging, used to track the status of logs before they are forwarded to the next queue. Valid integer values: 0 - DEBUG, 1 - INFO, 4 - WARNING, 5 - ERROR, 6 - CRITICAL |
| | scope-id | The ScopeID, added as a tag to each forwarded log. The value is a string. |
| | pico-id | The PicoID, added as a tag to each forwarded log. The value is a string. |
| | queue-persistence | Enables or disables queue persistence, i.e. writing logs to the hard disk so they survive a restart of the container. Valid values: True/False. The default is True. |
| | backoff | Applies to the Native Listener, which accepts all logs coming from connectors. If the maxeps threshold is exceeded, the listener asks the connector to back off, reducing the load on PC. |
| | maxeps | Maximum number of events per second to be ingested into PICO. |
| | listener-port | The port (1514) on which the Native Listener receives TCP logs. Note: ensure the Native Listener port 1514 is also configured in the ports section of the docker-compose.yaml file. |
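The interaction between maxeps and backoff can be illustrated with a simple per-second counter. This is a sketch of the concept only, not PICO's implementation; the class and method names are hypothetical.

```python
import time

class EpsThrottle:
    """Signal backoff when ingestion exceeds maxeps within one second.

    Illustrative sketch of the maxeps/backoff concept, not PICO's code.
    """
    def __init__(self, maxeps: int):
        self.maxeps = maxeps
        self.window = 0   # which second we are currently counting in
        self.count = 0    # events seen in that second

    def ingest(self, now: float) -> bool:
        """Record one event; return True if the sender should back off."""
        second = int(now)
        if second != self.window:
            # New second: reset the counter
            self.window, self.count = second, 0
        self.count += 1
        return self.count > self.maxeps

throttle = EpsThrottle(maxeps=3)
t = time.time()
# The first three events in a second are accepted; the rest trigger backoff.
results = [throttle.ingest(t) for _ in range(5)]
assert results == [False, False, False, True, True]
```

In PICO the backoff signal is sent to the connector feeding the Native Listener, so the queue on the PICO side does not grow without bound.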
Once the above configuration is defined, reload the configuration and restart the services so that the changes in the pico_config.yaml file are loaded into PICO.
The running services can be viewed by executing the supervisorctl command.
The service daemons running in PICO, such as the syslog server and syslog engine, can be managed and monitored efficiently with the help of Supervisor.
Supervisor has two parts:
Supervisord is the server service of the supervisor. It is responsible for starting child programs at its own invocation. This is already configured during initialization of Pico.
Supervisorctl is the command-line client piece of the supervisor. The user can control a number of services of Pico using supervisorctl. This makes it easy for the user to interact with each service individually.
supervisorctl lists all the services whose configurations have been read from the pico_config.yaml file, along with the status of each service.
To restart a specific service: `supervisorctl restart <service_name>`
To stop a specific service process: `supervisorctl stop <service_name>:<process_name>`
pico_ctl is a service running in PICO that monitors system statistics and the queuing status of rabbitmq, and can control the queue. The screenshots below display the functions and commands for using pico_ctl.
To check system statistics:
To check queue statistics and control functions for the queue:
- To access the pico and rabbitmq Docker containers, use the following command:
docker exec -it <container_id> sh
- To view services, execute the supervisorctl command.
- Following services should be running in the services list.
- If any services are missing, run the individual Python files for those services and check again for issues.
- If no native queue has been formed:
- The Native TCP Forwarder is not running, since the native forwarder is responsible for creating the native queue.
- If no logs are available in the native queue:
- There is an issue with the Syslog Engine, since the syslog engine is responsible for sending logs to the native queue.
- To understand the Events Per Second (EPS) value viewed in the logs:
- The EPS number represents the events coming into the system from the assets in your network. The EPS value shown in the logs should be interpreted in multiples of 100, because logs are bunched in batches of 100 during processing.
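As a quick sketch of this interpretation (assuming the batch size of 100 from the note above; the function name is ours):

```python
BATCH_SIZE = 100  # logs are bunched in groups of 100 during processing

def actual_eps(displayed_eps: int, batch_size: int = BATCH_SIZE) -> int:
    """Convert the batch-counted EPS shown in the logs to real events/sec."""
    return displayed_eps * batch_size

# A displayed rate of 60 batches/sec corresponds to 6,000 events/sec,
# which matches the maxeps: 6000 example in pico_config.yaml.
assert actual_eps(60) == 6000
```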
- To continuously monitor the queues along with their publish and delivery rates, use the following command:
rabbitmqadmin list queues vhost name node messages message_stats.publish_details.rate message_stats.deliver_details.rate
| Sr.No. | Number of Sources | Event Length | Source Parameter (180s) | Max eps at NL Pico | Buffer at NL Pico |
| --- | --- | --- | --- | --- | --- |