SSH : avoid “Connection reset by peer”

If you are a regular SSH user, chances are that you have sometimes faced the error message “Connection reset by peer”. To avoid it, you can change a few settings on the server side or on the client side. In this post I will show you both cases.

Client side

Putty

If you are using Putty, there is a setting under the “Connection” tab that allows you to set the keepalive value. By default it’s disabled (the value is 0).

You can set the value to 240 : every 240 seconds, Putty will send a null packet to keep the connection open.

Be sure to make this value a default each time you launch Putty by following this post : Change Putty default settings

OpenSSH

If you are using OpenSSH (from a Linux machine or using WSL under Windows 10), you can change this setting for your user by editing the config file in the “.ssh” directory of your home directory.

You can follow these steps :

  • Edit the config file
vi ~/.ssh/config
  • Add the setting to enable the keepalive
Host *
ServerAliveInterval 240
  • Set the correct permissions on the file
chmod 600 ~/.ssh/config

With this setting, your client will send a keepalive packet every 240 seconds to the remote host.
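If you just want to test the behaviour before editing the file, the same option can be passed directly on the command line (the host is just an example) :

ssh -o ServerAliveInterval=240 user@myserver.tld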

Server side

On the server side, you can edit the SSH server configuration file. If you are using Debian, it is “/etc/ssh/sshd_config”.

First, uncomment the following line :

#ClientAliveInterval 0

Then replace the value 0 with 240, save the file and restart the service.
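For example, on Debian, here is a minimal sketch of the whole change, assuming the default commented line is still present in the file :

sudo sed -i 's/^#ClientAliveInterval 0/ClientAliveInterval 240/' /etc/ssh/sshd_config
sudo service ssh restart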

Change Putty default settings

If you are using Putty on a regular basis, there may be some settings that you would like to apply by default at each launch.

To set default settings, there are only a few steps to follow.

  • Launch Putty. You will see that there is a default profile called “Default Settings”.
  • Set the settings that you want to change. For example, the logging settings.
  • Once you have set all the settings, go back to the Session tab and select the “Default Settings” profile. Then click “Save”.

From now on, each time you launch Putty, all the values will be set to those you have saved in the “Default Settings” profile.

Zabbix and Pure Storage monitoring V2

At the end of 2017, I shared a post about Zabbix and Pure Storage monitoring. At that time we were using it as a POC. A few weeks later, we were so happy with the product that we decided to use it in production.

So it was an opportunity to rewrite my monitoring script from scratch.
It’s now time to share it with you.
If you want more information about the purpose, the way I’ve made the integration, etc., don’t hesitate to read my previous post, as the only changes are the code and the type of information the script can gather from the array.

Instead of writing a script for each part of the monitoring (volumes, array, disks,…), I’ve put everything in one file called “PureStorageMonitoring.py”.
The dependencies are still the same : Python 3.x, and the purestorage and urllib3 Python libraries.

A complete list of the data and triggers gathered by the script is available at the end of this post.

Important note : This script has been tested with Zabbix 3.4.x and Pure Storage 4.10.8 (REST API version 1.12). I cannot guarantee that it will work with any other version, even though I did my best on this point.

Set up the proxy or server

To gather data, we need to do some work on the Zabbix Proxy or Zabbix Server.

  • Be sure that you have Python 3.x installed on the server
  • Install the Python libraries and the Zabbix sender
pip3 install purestorage urllib3
apt install zabbix-sender
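You can quickly check that everything is in place with the following commands (simple sanity checks; the first one only verifies that the libraries can be imported) :

python3 -c 'import purestorage, urllib3; print("libraries OK")'
which zabbix_sender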

Get API Token

To be able to use the Pure Storage REST API, you have to find your API Token in the web interface.
Go to “SYSTEM”, then “Users”; under the Users menu, you will find “API Tokens”. Click on the three dots on the right of the screen and use “Show API Token” to get your API Token.

Download the script

The script is available on my Github repository.
You can download it on the server using the following steps :

  • Find the directory where you have to download it by looking at the value of the “ExternalScripts” setting of your proxy or server (by default it’s “/usr/local/share/zabbix/externalscripts”, but if, like me, you are using the official Debian packages from Zabbix, it will be “/usr/lib/zabbix/externalscripts”)
  • Go to this directory
cd /usr/lib/zabbix/externalscripts
  • Download the script
wget https://raw.githubusercontent.com/jeremyverda/zabbix-public/master/Pure%20Storage/PureStorageMonitoring.py
  • Set the rights
chmod +x PureStorageMonitoring.py
chown zabbix:zabbix PureStorageMonitoring.py

Import the Zabbix template

You can find the Zabbix template on the Github repository.
You have to import it in Zabbix before you continue.

Configure the Zabbix host

In the Zabbix frontend, add a new host and set the information as follows :

  • Agent interface IP : fill in the IP address of your Pure Storage
  • Macros
    • {$PURE_TOKEN_API} : fill in the API Token that you have copied previously
    • {$PURE_ZBX} : the IP of your Zabbix Proxy or Server that will be used by the Zabbix Sender. By default it is 127.0.0.1
  • Monitored by proxy : set this field if you are going to use a Zabbix proxy
  • Template : select the previously imported template, “Template Pure Storage”

Wait for the data

Once you have completed all the previous steps, you have to wait until the data are gathered by the script.

Data and triggers

Here is the list of data and triggers the script will gather from your FlashArray :

Data

  • Global
    • latencies
    • read and write
    • data reduction ratio
    • array capacity, volume size, used space and shared space
    • version
    • remote assist
    • phone home
    • hostname
    • etc.
  • Volumes
    • size and used size
    • read and write
    • latencies
    • data reduction
    • etc.
  • Hosts (values corresponding to VMware or any other hosts connected to the array)
    • latencies
    • read and write
  • Disks
    • capacity
    • last failure (currently this value won’t be empty, but this will be corrected in a future API and Purity version)
    • protocol
    • status
    • type

Triggers

  • Global
    • Phone home is disabled
    • Remote assist is enabled
  • Volume
    • Volume size has changed
  • Disks
    • Error on disk status

This list will be updated each time data or a trigger is added to the template. Don’t hesitate to ask me to add something if you need it.

Zabbix and Pure Storage monitoring

WARNING : A complete rewrite of this script has been made.
You can find it here : Zabbix and Pure Storage monitoring V2

A few weeks ago, at work, we started a new POC for one of our infrastructures with a Pure Storage FlashArray.

To be able to give information about this POC to everybody inside the team, we needed a dashboard. So, with the help of the FlashArray REST API, Zabbix and Grafana, I produced a dashboard.

Please bear in mind : these scripts and the Zabbix template are a “work in progress”. We are currently running a POC and are not yet in production, so other elements will be added (triggers, items, etc.) and this post will be updated.

Zabbix integration

The Zabbix integration is made of :

  • Python scripts and libraries
  • FlashArray Rest API
  • Zabbix Sender (with Trapper items)
  • Zabbix ExternalScripts (executed on Zabbix Server or Proxy)
  • Low-Level discovery
  • Zabbix Macro

There are four scripts :

  • LLD_PureVolume.py
  • PureArrayInfo.py
  • PureArrayMonitor.py
  • PureVolumeMonitor.py

How it works

There are two parts in the Zabbix template :
  • Discovery rules
  • Static items
The LLD_PureVolume.py script is responsible for the discovery rule : Zabbix will use it to get the list of existing volumes on the FlashArray and will create the corresponding items. PureArrayInfo.py and PureArrayMonitor.py are responsible for gathering the data from the array point of view. PureVolumeMonitor.py is responsible for gathering data from the volume point of view, using the items created by the discovery rule.

As all the data are in the same JSON output when we query the FlashArray API, the scripts gather all the information at the same time and use the Zabbix Sender to populate all the Trapper items at each execution, instead of launching a script for each item. Each script returns “1” at the end and stores it in an item called “Pure Array Info Launch”, “Pure Array Monitoring Launch” or “VolumeName Monitoring Launch”.
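To illustrate the “one query, many values” pattern used by the scripts, here is a hedged sketch of a Zabbix Sender batch call (the host name and item keys are made-up placeholders, not the real keys from the template) :

# one line per value, format : <zabbix host> <item key> <value>
cat > /tmp/pure.batch <<'EOF'
"Pure Storage" pure.volume.size[vol01] 1099511627776
"Pure Storage" pure.volume.used[vol01] 214748364800
EOF
# send everything to the server or proxy in a single connection
zabbix_sender -z 127.0.0.1 -i /tmp/pure.batch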

Be ready

Important note : We are using Zabbix 3.4.x, so you could face some problems while importing the template. Let me know if it’s the case.

Before we start with the integration into Zabbix, be sure to have the following tools installed on your Zabbix Proxy or Server :

  • Python 3.x
  • purestorage and urllib3 libraries. You can install them on your system by running the following command :
    pip install purestorage urllib3
    or (if your default Python version is not Python 3.x) :
    pip3 install purestorage urllib3
  • Zabbix Sender. You can install it with the following command :
    apt install zabbix-sender

Get the necessary information

Before going further, you will need to gather the following information from your Pure Storage FlashArray :

  • IP
  • API Token

To get the API Token, open your FlashArray web interface and go to the “API Tokens” menu under the “Users” menu, which you will find in the “SYSTEM” tab.

You will see a list of users and three dots on the right of the screen. Click on them and select “Show API Token”.


Let's start

If your system is ready and you have all the information at hand, let's start the integration on your system.

Download the scripts and the template

All the scripts and the template are available on my GitHub repository. Download all those files and put the scripts in the ExternalScripts directory of your Zabbix Proxy or Server. By default this directory is /usr/local/share/zabbix/externalscripts, but check your Zabbix configuration file if you have any doubt. For example, we are using the official Zabbix packages for Debian and this directory is /usr/lib/zabbix/externalscripts

Set the correct rights

To allow Zabbix to execute those scripts, you must make them executable and owned by the Zabbix user of your system. You can run the following commands (from the ExternalScripts directory) :

chmod +x LLD_PureVolume.py PureArrayInfo.py PureArrayMonitor.py PureVolumeMonitor.py
chown zabbix:zabbix LLD_PureVolume.py PureArrayInfo.py PureArrayMonitor.py PureVolumeMonitor.py

Create a new Zabbix host

Once you have imported the template, you can create a new host and fill in the following information :
  • Agent interface IP address : fill in the IP address of your Pure Storage FlashArray
  • Macros
    • {$PURE_TOKEN_API} : fill in the API Token that you have gathered from the FlashArray web interface
    • {$PURE_ZBX} : the IP of your Zabbix Proxy or Server that will be used by the Zabbix Sender. By default it’s set to “127.0.0.1”
  • Monitored by proxy : set this field if you put the scripts on a proxy.

It’s done

Once you have completed all those steps, you just have to wait until the data are gathered from the Pure Storage FlashArray.

Items explanation

If you have a question about the meaning of an item, you will find a description of each one in its “Description” field.

Feedback

If you have any remarks or questions about those scripts or the Zabbix template, you are welcome to use the comments section or the contact form.

Bulk links #1

The “Bulk links” category will be a series of posts where I drop links from my Pocket, Feedly, Twitter and Facebook accounts.

So, here is the first list.

Export Synology NAS system logs

If you are a Synology NAS user, you may want to read some logs to understand what happened when you face a problem.

There are two ways to do that :

  • Using Log Center package
  • Using Support Center

If you want to read logs about backups, user connections, etc., the first solution should be good enough, but if you are looking for more specific logs about a system problem, slowness, etc., the second solution will give you more details.

To export logs from Support Center, follow these steps :

  1. Open Support Center from Synology menu

  2. Go to the “Support Services” tab and, in the “Log Generation” menu, select the logs that you want to check, then click “Generate logs”
  3. Wait a few minutes while your NAS is generating the logs
  4. The file generated by the NAS will be named “debug.dat”; you can open it using 7-Zip (see the command-line example after this list).
  5. In the file you will find a directory named “dsm”; all the primary logs will be under the directory hierarchy “dsm -> var -> log”.
  6. Enjoy reading all those logs!
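If you prefer the command line on a Linux machine, the same archive can also be extracted with p7zip (package “p7zip-full” on Debian; the exact log file names are an assumption, as they vary between DSM versions) :

7z x debug.dat -odebug
less debug/dsm/var/log/messages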

Traceroute with Zabbix and MTR

At the company I work for, we are using Zabbix to monitor our infrastructure, and in some cases we are using “traceroute” to keep an eye on our network from different locations.

By keeping a traceroute history, you can understand some networking problems more easily, for example when you are facing packet loss from some location.

To keep this history in Zabbix, we are using “MTR” instead of the traditional “traceroute” command.

Here is a small explanation of what MTR is, taken from the Github page of the project :

mtr combines the functionality of the ‘traceroute’ and ‘ping’ programs
in a single network diagnostic tool.

As mtr starts, it investigates the network connection between the host
mtr runs on and a user-specified destination host. After it
determines the address of each network hop between the machines,
it sends a sequence of ICMP ECHO requests to each one to determine the
quality of the link to each machine. As it does this, it prints
running statistics about each machine. 


MTR is installed on our Zabbix proxies and on the main server.

Under Debian, the command to install MTR is :

sudo aptitude install mtr
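You can test it right away from the shell (8.8.8.8 is only a sample destination; the options are the same as those used in the script below) :

mtr -r -c3 -w -b 8.8.8.8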

As we are using proxies and active agents, remote commands are not supported. So we have deployed a local script on the server and the proxies.

Here is the local script :

#!/usr/bin/env bash
# Report mode (-r), 3 pings per hop (-c3), wide report (-w),
# and show both hostnames and IP addresses (-b)
IP="$1"
mtr -r -c3 -w -b "$IP"

This local script is named “mtrtrace.sh” and is saved in the “externalscripts” directory of Zabbix. The path can be found in the proxy or server configuration file under the “ExternalScripts” configuration key. Under Debian it’s “/usr/lib/zabbix/externalscripts”.

Once you have created the file, you have to allow Zabbix to execute it by using the following commands :

chmod +x mtrtrace.sh
chown zabbix:zabbix mtrtrace.sh
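You can then test the script exactly the way Zabbix will run it (again, 8.8.8.8 is only a sample destination) :

sudo -u zabbix ./mtrtrace.sh 8.8.8.8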

Then import the following template into the Zabbix web interface and link it to a host : MTR History (exported from Zabbix 3.2.x)

If you prefer to add it to an existing template, here are the settings :
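In short, it is an “External check” item with “Text” as type of information, whose key passes the host IP to the script. A plausible key (an assumption on my side, check the template for the exact one) would be :

mtrtrace.sh["{HOST.IP}"]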

It will use the first “IP address” in the “Agent interfaces” part of the host configuration.

You will find the history under the “Traceroute” application, in the “MTR Trace” item on the “Latest data” tab of the configured host.


Please note that it’s recommended to increase the “Timeout” value in the Zabbix Server and Proxy configuration files when using this script. On my side, it takes 10 seconds on average to execute completely.
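For example, with the official Debian packages, the files are “/etc/zabbix/zabbix_server.conf” and “/etc/zabbix/zabbix_proxy.conf”, and 30 seconds is the maximum value accepted by Zabbix :

Timeout=30

Don’t forget to restart the Zabbix server or proxy service after changing this value.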

Veeam Backup & Replication 9.5 Update 1 with vSphere 6.5 support is available

A few weeks ago, I wrote about the availability of Veeam Backup & Replication 9.5 Update 1 RC and how to get it. Today, you can get the official release from the Veeam website. After the upgrade, your build version will be 9.5.0.823. Before making this upgrade, be sure that you are running build 9.5.0.580, 9.5.0.711 or 9.5.0.802.

What’s new


Using VMware tags and Veeam for easy backup configuration

When you have a certain number of VMs, there is one step that you can easily forget or that takes time : configuring the backup job.

Each time you add a VM, you have to open the Veeam interface and add the VM to a specific job.

But today I have an easy way to add some automation and solve this problem : VMware tags and Veeam backup jobs based on tags.

Create tags in VMware vCenter

In the web interface of the vCenter, you will find a link named “Tags” in the left sidebar.

Click on this link, then on “Items”, and select the “Tags” tab. You will find all the existing tags of your vCenter. To create a new tag, simply click on the “New Tag” button.

Repeat this step for each tag you want to use.

Adding tags to VMs

Once you have created all the necessary tags, it’s time to add them to your VMs.

Right-click on your VM, then select “Tags & Custom Attributes” and pick the option “Assign Tag…”.

Select the tag that you want to add to this VM and click on “Assign”.


The tag that you have assigned will be visible on the “Summary” tab of the given VM.


Adding the backup job to Veeam

To use a tag in a Veeam backup job, you have to create a new job.


Then give it a name, for example the name of the tag.


At the “Virtual Machines” step of the wizard, click on the tag button in the top-right toolbar and select the tag from the list.


You will find the tag in the “Virtual Machines” list of the wizard.


Then configure all the next steps in the Veeam wizard and click “Finish”.

You will find the backup job in the jobs list.


At each job launch, Veeam will search for all VMs with the given tag. So you’ll never have to open the Veeam wizard again to add a VM to a backup job.

Full restore of a virtual machine

When you want to make a full restore of a VM with a tag, you have to select the “Restore VM tags” option in the “Full VM Restore Wizard”. If you don’t do this, the VM will lose its tag.


Installing Gitea on Debian

Gitea is a self-hosted Git service forked from Gogs. It’s very easy to deploy and has a low impact on system resources.

It has features like :

  • web interface
  • wiki
  • issues management
  • admin dashboard

Gitea web interface


In this howto, we will use SQLite as the database backend and the Gitea binary (as there is no Debian package available at this time).

This howto is based on Debian Jessie, but it should also work with Debian Stretch (currently testing).

Installing the basics

To be able to run Gitea on Debian, you will need a local user without root rights, and the git package.

Install git :

sudo aptitude install git

Create a local user without root rights and without the right to log on :

sudo adduser --disabled-login --gecos 'gitea' git

Here is the explanation of the two options added to the “adduser” command :

  • “--disabled-login” won’t ask for any password; as long as there is no password set for this user, it won’t be allowed to log on
  • “--gecos”, followed by a comment, will automatically populate the finger information (First Name, Last Name, …)

Installing Gitea

Switch to the git user that you have previously created :

sudo su - git

Download the Gitea binary (get the latest version here or here) :

wget -O gitea https://dl.gitea.io/gitea/1.4.1/gitea-1.4.1-linux-amd64

Make the downloaded file executable :

chmod +x gitea

Create the necessary directories :

mkdir -p custom/conf
mkdir data

Launch Gitea for the first time :

./gitea web

Open a page in your browser using the IP of your server on TCP port 3000 (for example : 192.168.1.200:3000) and complete all the information asked by Gitea.

Your configuration file will be “app.ini” in the “custom/conf/” directory. You can find the standard configuration file here if you want to check what you can change.

If you want to change some advanced settings, you’ll want to read the “Configuration Cheat Sheet”.

Installing Supervisor

To be able to run Gitea as a service, we will use Supervisor.

Install it :

sudo aptitude install supervisor

Create the log directory for the Gitea service :

sudo mkdir /var/log/gitea

Create the configuration file for the Gitea service :

sudo vi /etc/supervisor/conf.d/gitea.conf

Put the following information in this file :

[program:gitea]
directory=/home/git/
command=/home/git/gitea web
autostart=true
autorestart=true
startsecs=10
stdout_logfile=/var/log/gitea/stdout.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stderr_logfile=/var/log/gitea/stderr.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB
environment = HOME="/home/git", USER="git"

Restart the supervisor service :

sudo service supervisor restart
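If you prefer not to restart the whole Supervisor daemon, you can also load only the new program definition (both are standard supervisorctl commands) :

sudo supervisorctl reread
sudo supervisorctl update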

Check the content of the Gitea log file :

sudo tail /var/log/gitea/stdout.log

Check if the service is running :

ps -ef | grep gitea

You should see a gitea process running as the git user.
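You can also ask Supervisor directly; the program name matches the “[program:gitea]” section of the configuration file :

sudo supervisorctl status gitea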

Use Gitea with Nginx as a Reverse Proxy

If you want to run Gitea behind a reverse proxy using Nginx with SSL (using Let’s Encrypt with certbot), you can use the following configuration file :

server{
        listen 80;
        server_name mygitserver.tld;

        location /.well-known/acme-challenge {
                root /var/www/git;
        }
        location / {
                return 301 https://$host$request_uri;
        }
}

server{
        listen 443;
        ssl on;
        server_name mygitserver.tld;
        ssl_certificate /etc/letsencrypt/live/mygitserver.tld/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/mygitserver.tld/privkey.pem;
        location / {
                proxy_pass http://127.0.0.1:3000;
                proxy_set_header Host             $host;
                proxy_set_header X-Real-IP        $remote_addr;
                proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
                # client_max_body_size will allow you to push a large Git repository
                client_max_body_size 100M;
        }
}

In this case, Nginx is configured to allow the Let’s Encrypt server to get the challenge file from the “/.well-known/acme-challenge” location. To do that, you will have to create the “/var/www/git” directory and give Nginx the right to read files in it.
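For example (www-data is the user Nginx runs as on Debian) :

sudo mkdir -p /var/www/git
sudo chown www-data:www-data /var/www/git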

Then you will have to install certbot. You can find it in Debian Backports.

sudo apt-get install certbot -t jessie-backports

To get the certificate file, you can use the following command :

certbot certonly --webroot -w /var/www/git -d mygitserver.tld
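Once the certificate has been issued, check the Nginx configuration and reload it so the HTTPS server block picks up the new files :

sudo nginx -t && sudo service nginx reload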

Remember that a Let’s Encrypt certificate is only valid for 90 days, so put this in your crontab to automatically renew the certificate before it expires :

0       1       1       *       *       certbot renew --quiet
36      8       2       *       *       certbot renew --quiet

You will find the following settings in the “[server]” section of the Gitea “app.ini” file :

[server]
PROTOCOL = http
DOMAIN = localhost
ROOT_URL = %(PROTOCOL)s://%(DOMAIN)s:%(HTTP_PORT)s/
HTTP_ADDR = 0.0.0.0
HTTP_PORT = 3000

Change them to :

[server]
PROTOCOL               = http
DOMAIN                 = mygitserver.tld
ROOT_URL               = https://mygitserver.tld
HTTP_ADDR              = 0.0.0.0
HTTP_PORT              = 3000

Then, to apply the changes, restart the supervisor service :

sudo service supervisor restart