Zabbix and Pure Storage monitoring V2

At the end of 2017, I shared a post about Zabbix and Pure Storage monitoring. At the time, we were using the array as a POC. A few weeks later, we were so happy with the product that we decided to use it in production.

So it was an opportunity to rewrite my monitoring script from scratch, and it's now time to share it with you.
If you want more information about the purpose, the way I made the integration, etc., don't hesitate to read my previous post: the only changes are the code and the type of information the script can gather from the array.

Instead of writing a script for each part of the monitoring (volumes, array, disks,…), I’ve put everything in one file called “PureStorageMonitoring.py”.
The dependencies are still the same : Python 3.x and the purestorage and urllib3 Python libraries

A complete list of the data and triggers gathered by the script is available at the end of this post.

Important note : This script has been tested with Zabbix 3.4.x and 4.0.x and Pure Storage Purity 4.10.8 (REST API version 1.12). I cannot guarantee that it will work with any other version, even though I did my best on this point. The available template is exported from Zabbix 4.0.x, so you will have to delete some settings to import it into Zabbix 3.4.x. Don't hesitate to contact me in case of trouble.

Setup the proxy or server

To gather data, we need to do some work on the Zabbix Proxy or Zabbix Server.

  • Be sure that you have Python 3.x installed on the server
  • Install the pip packages
pip3 install purestorage urllib3 py-zabbix

Get API Token

To be able to use the Pure Storage REST API, you have to find your API Token in the web interface.
Go to "SYSTEM", then "Users"; under the Users menu, you will find "API Tokens". Click on the three dots on the right of the screen and use "Show API Token" to display your API Token.
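Under the hood, the purestorage library uses this token to open a REST session against the array. The stdlib sketch below only shows what happens on the wire (the script itself relies on the library, not raw urllib, and the IP and token here are placeholders):

```python
import json
import urllib.request

API_VERSION = "1.12"  # REST API version matching Purity 4.10.8

def build_session_request(array_ip, api_token):
    """Build the session-opening request: the API token is POSTed
    to /auth/session and the array answers with a session cookie."""
    url = "https://{}/api/{}/auth/session".format(array_ip, API_VERSION)
    payload = json.dumps({"api_token": api_token}).encode()
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

# Opening the session for real requires a reachable array:
# urllib.request.urlopen(build_session_request("192.0.2.10", "<your-token>"))
```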

Download the scripts

The script is available on my GitHub repository.
You can download it on the server using the following steps :

  • Find the directory where you have to download it by looking at the value of the "ExternalScripts" setting of your proxy or server (by default it's "/usr/local/share/zabbix/externalscripts", but if, like me, you are using the official Debian packages from Zabbix, it will be "/usr/lib/zabbix/externalscripts")
  • Go to this directory
cd /usr/lib/zabbix/externalscripts
  • Download the scripts
wget https://raw.githubusercontent.com/jeremyverda/zabbix-public/master/Pure%20Storage/PureStorageMonitoring.py
  • Set the rights
chmod +x PureStorageMonitoring.py
chown zabbix:zabbix PureStorageMonitoring.py

Import the Zabbix template

You can find the Zabbix template on the Github repository.
You have to import it in Zabbix before you continue.

Configure the Zabbix host

In the Zabbix frontend, add a new host and set the information as follows :

  • Agent interface IP : Fill in with the IP address of your Pure Storage
  • Macros
    • {$PURE_TOKEN_API} : Fill in with the API Token that you copied previously
    • {$PURE_ZBX} : The IP of your Zabbix Proxy or Server that the Zabbix Sender will use. By default it will be 127.0.0.1
  • Monitored by proxy : Set this field if you are going to use a Zabbix proxy
  • Template : select the previously imported template : “Template Pure Storage”
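For reference, the values pushed to the trapper items travel over the Zabbix sender protocol. The stdlib sketch below frames one value the way the py-zabbix library does; the host name and item key are illustrative, not the ones the template actually defines:

```python
import json
import struct

def build_sender_packet(host, key, value):
    """Frame one value in the Zabbix sender protocol:
    a 'ZBXD\\x01' header, a little-endian 64-bit body length, then JSON."""
    body = json.dumps({
        "request": "sender data",
        "data": [{"host": host, "key": key, "value": str(value)}],
    }).encode()
    return b"ZBXD\x01" + struct.pack("<Q", len(body)) + body

# Illustrative host/key, matching the macros above conceptually:
packet = build_sender_packet("PureStorage-Array", "pure.array.capacity", 1024)
```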

Wait for the data

Once you have completed all the previous steps, you have to wait until the data are gathered by the script.

Data and triggers

Here is the list of data and triggers the script will gather from your FlashArray :

Data

  • Global
    • latencies
    • read and write
    • data reduction ratio
    • array capacity, volume size, used space and shared space
    • version
    • remote assist
    • phone home
    • hostname
    • etc.
  • Volumes
    • size and used size
    • read and write
    • latencies
    • data reduction
    • etc.
  • Hosts (values for VMware or any other host connected to the array)
    • latencies
    • read and write
  • Disks
    • capacity
    • last failure (currently this value won't be empty, but this will be corrected in a future API and Purity version)
    • protocol
    • status
    • type

Triggers

  • Global
    • Phone home is disabled
    • Remote assist is enabled
  • Volume
    • Volume size has changed
  • Disks
    • Error on disk status

This list will be updated each time a data point or trigger is added to the template. Don't hesitate to ask me to add something if you need it.

Update

  • 01/11/2018 :
    • Replaced the external Zabbix Sender binary with a Python library
    • Increased the delay between each check
    • Each check now gathers more values, instead of requiring several checks to get the same amount of data

Zabbix and Pure Storage monitoring

WARNING : A complete rewrite of this script has been made.
You can find it here : Zabbix and Pure Storage monitoring V2

A few weeks ago, at work, we started a new POC for one of our infrastructures with a Pure Storage FlashArray.

To share information about this POC with everybody inside the team, we needed a dashboard. So, with the help of the FlashArray REST API, Zabbix and Grafana, I produced this dashboard :

Please bear in mind : these scripts and the Zabbix template are a work in progress. We are currently running a POC and are not yet in production, so other elements will be added (triggers, items, etc.) and this post will be updated.

Zabbix integration

The Zabbix integration is made of :

  • Python scripts and libraries
  • FlashArray REST API
  • Zabbix Sender (with Trapper items)
  • Zabbix ExternalScripts (executed on Zabbix Server or Proxy)
  • Low-Level discovery
  • Zabbix Macro

There are four scripts :

  • LLD_PureVolume.py
  • PureArrayInfo.py
  • PureArrayMonitor.py
  • PureVolumeMonitor.py

How it works

There are two parts in the Zabbix template :
  • Discovery rules
  • Static items
The LLD_PureVolume.py script is responsible for the discovery rule : Zabbix uses it to get the list of existing volumes on the FlashArray and creates the corresponding items.

PureArrayInfo.py and PureArrayMonitor.py are responsible for gathering data from the array's point of view, while PureVolumeMonitor.py gathers data from the volume's point of view, using the items created by the discovery rule.

As all the data come back in the same JSON output when we query the FlashArray API, the scripts gather everything at once and use the Zabbix Sender to populate all the Trapper items on each execution, instead of launching a script for each item. At the end, each script returns "1" and stores it in an item called "Pure Array Info Launch", "Pure Array Monitoring Launch" or "VolumeName Monitoring Launch".
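The discovery side can be sketched in a few lines: the script prints a JSON document listing one object per volume, and Zabbix substitutes the macro into item prototypes. The {#VOLNAME} macro name is illustrative here; check the template for the actual one:

```python
import json

def build_volume_discovery(volume_names):
    """Build the JSON that a Zabbix low-level discovery rule expects:
    one object per discovered volume, keyed by an LLD macro."""
    return json.dumps({"data": [{"{#VOLNAME}": n} for n in volume_names]})

# The external script would print this for Zabbix to consume:
# print(build_volume_discovery([v["name"] for v in array.list_volumes()]))
```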

Be ready

Important note : We are using Zabbix 3.4.x, so you could face some problems while importing the template. Let me know if that's the case.

Before we start with the integration into Zabbix, be sure to have the following tools installed on your Zabbix Proxy or Server :

  • Python 3.x
  • purestorage and urllib3 libraries. You can install them on your system by running the following command :
    pip install purestorage urllib3
    or (if your default Python version is not Python 3.x)
    pip3 install purestorage urllib3
  • Zabbix Sender. You can install it with the following command :
    apt install zabbix-sender

Get the necessary information

Before going further, you will need to gather the following information from your Pure Storage FlashArray :

  • IP
  • API Token

To get the API Token, open your FlashArray web interface and go to the "API Tokens" menu under the "Users" menu, which you will find in the "SYSTEM" tab.

You will see a list of users and three dots on the right of the screen. Click on them and select "Show API Token".


Let's start

If your system is ready and you have all the information at hand, let's start the integration on your system.

Download the scripts and the template

All the scripts and the template are available on my GitHub repository. Download those files and put the scripts in the ExternalScripts directory of your Zabbix Proxy or Server. By default this directory is /usr/local/share/zabbix/externalscripts, but check your Zabbix configuration file if you have any doubt. For example, we are using the official Zabbix packages for Debian, and this directory is /usr/lib/zabbix/externalscripts

Set the right permissions

To allow Zabbix to execute those scripts, they must be executable and owned by the Zabbix user on your system. You can run the following commands (from the ExternalScripts directory) :

chmod +x LLD_PureVolume.py PureArrayInfo.py PureArrayMonitor.py PureVolumeMonitor.py
chown zabbix:zabbix LLD_PureVolume.py PureArrayInfo.py PureArrayMonitor.py PureVolumeMonitor.py

Create a new Zabbix host

Once you have imported the template, you can create a new host and fill in the following information :
  • Agent interfaces IP address : Fill in with the IP address of your Pure Storage FlashArray
  • Macros
    • {$PURE_TOKEN_API} : Fill in with the API Token that you have gathered from the FlashArray web interface
    • {$PURE_ZBX} : Will be the IP of your Zabbix Proxy or Server that will be used by the Zabbix Sender. By default it’s set to “127.0.0.1”
  • Monitored by proxy : Set this field if you put the scripts on a proxy.

It’s done

Once you have done all those steps, you have to wait until the data are gathered from the Pure Storage FlashArray.

Items explanation

If you have questions about the meaning of some items, you will find an explanation in each item's description field. Example :

Feedback

If you have any remarks or questions about those scripts or the Zabbix template, you are welcome to use the comments section or the contact form.

Installing Gitea on Debian

Gitea is a self-hosted Git service forked from Gogs. It’s very easy to deploy and has a low system usage impact.

It has features like :

  • web interface
  • wiki
  • issues management
  • admin dashboard

Gitea web interface

In this howto, we will use SQLite as the database backend and the Gitea binary (as there is no Debian package available at this time).

This howto is based on Debian Jessie, but it should also work with Debian Stretch (the current testing release).

Installing the basics

To be able to run Gitea on Debian you will need a local user without root rights and the git package.

Install git :

sudo aptitude install git

Create a local user with no root rights and no ability to log in :

sudo adduser --disabled-login --gecos 'gitea' git

Here is the explanation of the two options that I've added to the "adduser" command :

  • "--disabled-login" won't ask for any password; as long as no password is set for this user, the user won't be allowed to log in
  • "--gecos", followed by a comment, will populate the finger information automatically (First Name, Last Name, …)

Installing Gitea

Switch to the git user that you have previously created :

sudo su - git

Download the Gitea binary (get the latest version here or here) :

wget -O gitea https://dl.gitea.io/gitea/1.4.1/gitea-1.4.1-linux-amd64

Make the downloaded file executable :

chmod +x gitea

Create the necessary directories :

mkdir -p custom/conf
mkdir data

Launch Gitea for the first time :

./gitea web

Open a webpage in your browser using the IP of your server on TCP port 3000 (for example : 192.168.1.200:3000) and complete all the information asked by Gitea.

Your configuration file will be "app.ini" in the "custom/conf/" directory. You can find the standard configuration file here if you want to check what you can change.

If you want to change some advanced settings, you’ll want to read the “Configuration Cheat Sheet“.

Installing Supervisor

To be able to run Gitea as a service, we will use Supervisor.

Install it :

sudo aptitude install supervisor

Create the log directory for the Gitea service :

sudo mkdir /var/log/gitea

Create the configuration file for the Gitea service :

sudo vi /etc/supervisor/conf.d/gitea.conf

Put the following information in this file :

[program:gitea]
directory=/home/git/
command=/home/git/gitea web
autostart=true
autorestart=true
startsecs=10
stdout_logfile=/var/log/gitea/stdout.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stderr_logfile=/var/log/gitea/stderr.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB
environment = HOME="/home/git", USER="git"

Restart the supervisor service :

sudo service supervisor restart

Check the content of the Gitea log file :

sudo tail /var/log/gitea/stdout.log

Check if the service is running :

ps -ef | grep gitea

You should have something like this :

Use Gitea with Nginx as a Reverse Proxy

If you want to use Gitea behind a reverse proxy using Nginx with SSL (using Let’s Encrypt with certbot) you can use the following configuration file :

server{
        listen 80;
        server_name mygitserver.tld;

        location /.well-known/acme-challenge {
                root /var/www/git;
        }
        location / {
                return 301 https://$host$request_uri;
        }
}

server{
        listen 443 ssl;
        server_name mygitserver.tld;
        ssl_certificate /etc/letsencrypt/live/mygitserver.tld/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/mygitserver.tld/privkey.pem;
        location / {
                proxy_pass http://127.0.0.1:3000;
                proxy_set_header Host             $host;
                proxy_set_header X-Real-IP        $remote_addr;
                proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
                #client_max_body_size allows you to upload a large git repository
                client_max_body_size 100M;
        }
}

In this case, Nginx is configured to allow the Let's Encrypt server to fetch the challenge file from the "/.well-known/acme-challenge" location. To do that, you will have to create the "/var/www/git" directory and give Nginx the right to read files in it.

Then you will have to install certbot. You can find it in Debian Backports.

sudo apt-get install certbot -t jessie-backports

To get the certificate file, you can use the following command :

certbot certonly --webroot -w /var/www/git -d mygitserver.tld

Remember that a Let's Encrypt certificate is only valid for 90 days, so put these lines in your crontab to automatically renew the certificate before it expires :

0       1       1       *       *       certbot renew --quiet
36      8       2       *       *       certbot renew --quiet

You will find the following settings in the “[server]” part of the Gitea “app.ini” file :

[server]
PROTOCOL = http
DOMAIN = localhost
ROOT_URL = %(PROTOCOL)s://%(DOMAIN)s:%(HTTP_PORT)s/
HTTP_ADDR = 0.0.0.0
HTTP_PORT = 3000

Change them to :

[server]
PROTOCOL               = http
DOMAIN                 = mygitserver.tld
ROOT_URL               = https://mygitserver.tld
HTTP_ADDR              = 0.0.0.0
HTTP_PORT              = 3000

Then, to apply, restart the supervisor service :

sudo service supervisor restart

Backup your WinSCP configuration file

If, like me, you are a WinSCP user, maybe you would like to back up your configuration.
The WinSCP configuration contains all the information needed to access your servers (IP, protocol, …). It can be valuable to save this information in case of a computer crash, or if you want to sync the configuration between two computers.

To do this, you have two choices. You can manually export the configuration file after each change or you can add the WinSCP file to your backup or synchronization software.

If you want to use the second method, you can get all the settings from the Windows registry or from a text file stored in AppData.

To use this last method, go to the settings and select the "Storage" sub-menu.

In this menu, you have to select “INI file”.

WinSCP Preferences

As soon as this setting is validated, you will find a "WinSCP.ini" file under the "Roaming" directory of the AppData folder linked to your Windows profile.

WinSCP's configuration file