Integration of pmacct with ElasticSearch and Kibana

In this post I want to show a solution based on pmacct-to-elasticsearch, a script I made to gather data from pmacct and visualize them using ElasticSearch and Kibana. It's far from being the state of the art of IP accounting solutions, but it may be used as a starting point for further customizations and developments.

I plan to write another post with some ideas to integrate pmacct with the canonical ELK stack (ElasticSearch/Logstash/Kibana). As usual, add my RSS feed to your reader or follow me on Twitter to stay updated!

The big picture

This is the big picture of the proposed solution:

pmacct-to-elasticsearch - The big picture

There are four main actors: the pmacct daemons (we already saw how to install and configure them), which collect accounting data; pmacct-to-elasticsearch, which reads pmacct's output, processes it and sends it to ElasticSearch; ElasticSearch itself, where data are stored and organized into indices; and, finally, Kibana, which is used to chart them on a web frontend.
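To give an idea of the middle step, here is a minimal sketch of what a pmacct-to-ElasticSearch bridge does: it takes JSON lines as emitted by pmacct's print plugin and turns them into a body for the ElasticSearch bulk API. Function and index names are my own illustrative choices, not the script's actual code (the `_type` field reflects the ElasticSearch versions of that era):

```python
import json

def build_bulk_payload(pmacct_lines, index_name, doc_type="flow"):
    """Convert pmacct JSON output lines into an ElasticSearch bulk-API body."""
    actions = []
    for line in pmacct_lines:
        line = line.strip()
        if not line:
            continue
        # a pmacct record looks like {"ip_src": "...", "ip_dst": "...", "bytes": ...}
        flow = json.loads(line)
        actions.append(json.dumps({"index": {"_index": index_name, "_type": doc_type}}))
        actions.append(json.dumps(flow))
    # the bulk API requires a trailing newline
    return "\n".join(actions) + "\n"
```

The resulting payload would then be POSTed to the cluster's `/_bulk` endpoint, one batch per pmacct output cycle.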


Installing pmacct on a fresh Ubuntu setup

This is a simple, quick-and-dirty, copy/paste guide to install a great piece of software, pmacct, on a fresh Ubuntu 14.04.1 LTS (Trusty Tahr) setup. I'll use this simple setup as the basis for other related posts I plan to publish soon.


Tl;dr: pmacct is a suite of tools to collect, filter and aggregate IP accounting data, which works with live traffic (libpcap), NetFlow v1/v5/v7/v8/v9, IPFIX, sFlow and ULOG.
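To give a taste of how it is driven, here is a minimal configuration sketch for pmacctd (the libpcap-based daemon); the interface name and aggregation keys are just example values, not a recommended production setup:

```
! minimal pmacctd sketch: account src/dst host pairs seen on eth0,
! keeping counters in the in-memory table (queried via the pmacct client)
daemonize: true
interface: eth0
aggregate: src_host, dst_host
plugins: memory
```

The same aggregation model applies to the NetFlow/sFlow daemons (nfacctd, sfacctd); only the data source changes.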

A blog post is not enough to show all the great features and possibilities this tool offers, so I really recommend that anyone interested read the author's documentation on the official web site.

In an upcoming post I plan to show some ideas on deploying pmacct together with ElasticSearch and Kibana, in order to build useful dashboards full of graphs. Add my RSS feed to your reader or follow me on Twitter to stay updated!

EDIT: the Integration of pmacct with ElasticSearch and Kibana post has been published.

Let’s start from a really simple setup here.


GMail fails SPF checks on POP3 fetched messages


It seems that, under certain conditions, GMail reports failed SPF checks for messages fetched via POP3 from other mail servers.

I noticed this behaviour on messages received, for example, by mail servers that use an internal relay, like the following message sent from PayPal (which uses a hard-fail policy):

Received-SPF: fail ( domain of does not
        designate A.B.C.D as permitted sender) client-ip=A.B.C.D;
Received: by with POP3 ...
Return-Path: <>
Received: from server1.MYPROVIDER.TLD (A.B.C.D)
        by server2.MYPROVIDER.TLD with SMTP; ...
Received: from (
        by mx1.MYPROVIDER.TLD with SMTP; ...
Return-Path: <>
From: "PayPal" <>

This is the SPF record for

;        IN   TXT
;; ANSWER SECTION:   3600  IN   TXT  "v=spf1 -all"

It authorizes every IP address resolved by the A record.
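To make the mechanism explicit, this is a deliberately simplified sketch of how a `v=spf1 a -all` record is evaluated: the check passes only if the client IP is among the A records of the domain, and the `-all` mechanism hard-fails everything else. The function name is mine, the resolver is injectable so the sketch works without live DNS, and real SPF evaluation (RFC 7208) handles many more mechanisms:

```python
import socket

def check_spf_a_mechanism(client_ip, domain, resolve=None):
    """Simplified evaluation of a 'v=spf1 a -all' record: 'pass' if
    client_ip is one of the A records of `domain`, otherwise the '-all'
    mechanism yields a hard 'fail'. `resolve` may be injected for testing;
    by default it performs a real DNS lookup."""
    if resolve is None:
        resolve = lambda d: socket.gethostbyname_ex(d)[2]  # list of A records
    return "pass" if client_ip in resolve(domain) else "fail"
```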

In the example, MyProvider receives the message from, which is one of the many IP addresses resolved by and authorized by

;       IN   A
...  300   IN	A

After receiving the message, MyProvider uses an internal relay and adds a new header:

Received: from server1.MYPROVIDER.TLD (A.B.C.D)
        by server2.MYPROVIDER.TLD with SMTP; ...

Unfortunately, when GMail fetches the message from MyProvider, it runs the SPF check against the IP address in the last Received header (A.B.C.D), that is, the IP address of MyProvider's internal relay, and not the one authorized by PayPal for outbound email, resulting in an SPF fail.
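A crude sketch of what such a check ends up doing: extract the client IP from the topmost (most recently added) Received header, which after an internal relay or a POP3 fetch is no longer the sender's border IP. This is my own naive illustration, not GMail's actual parser:

```python
import re

def last_hop_ip(raw_headers):
    """Return the first IPv4 address found in the topmost 'Received:'
    header, i.e. the address a post-delivery SPF check would mistakenly
    evaluate. Very naive: only looks for a bracketed/parenthesized IPv4."""
    # unfold headers: continuation lines start with whitespace
    unfolded = re.sub(r"\r?\n[ \t]+", " ", raw_headers)
    for line in unfolded.splitlines():
        if line.lower().startswith("received:"):
            m = re.search(r"[\[\(](\d{1,3}(?:\.\d{1,3}){3})[\]\)]", line)
            if m:
                return m.group(1)  # topmost Received header wins
    return None
```

Run against the PayPal example above, this would return the internal relay's address rather than PayPal's authorized outbound IP.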

Even when messages are not internally relayed by the receiving provider, and thus present only one Received header, GMail fails the SPF check with the error “best guess record for domain of transitioning postmaster@MYOTHERDOMAIN.TLD does not designate as permitted sender”.

From the Common receiver mistakes FAQ of…

SPF is designed to work at the border of your network…

… to accept or reject messages as soon as they try to enter, not after processing, relaying or forwarding have been performed. Moreover, it's intended to be used during SMTP sessions, not on POP3 fetches.

A good email reputation is hard to achieve; a lot of effort is spent on methods like SPF, DKIM and DMARC to help senders reach users' inbox folders and to increase users' confidence in their content. Surely the GMail folks had more than good intentions when they decided to adopt this policy but, IMHO, a wrong use of these techniques may lead to the opposite of the intended results.

A quick glance at longer than /24 IPv4 prefixes


Yesterday RIPE Labs announced a new test to measure the propagation of longer-than-/24 IPv4 prefixes. The purpose of these prefixes is to allow small allocations from the 23.128/10 block that ARIN reserved to facilitate IPv6 deployment, but their propagation could be impaired by filters and routing policies commonly deployed all around the net.

While looking forward to the RIPE results I couldn’t help glancing at the current situation as seen by a bunch of RIPE Atlas probes, so I performed some small tests on my own.


The measurements I created are based on 500 random probes selected worldwide and have been used to ping the IP addresses announced by RIPE RIS (AS12654). These IP addresses are split into two groups: one group has three addresses, each from a prefix (/24, /25 and /28) with a registered route object; the other group has similar addresses, but from prefixes without a registered route object.

Prefix            Pingable IP     Route object?   My Atlas measurement ID
                                  yes             1767799
                                  yes             1767800
                                  yes             1767801
                                  no              1767802
                                  no              1767803
                                  no              1767804

All measurements use the same set of probes, taken from the first measurement (the /24 prefix with registered route object).

Of course, mine is only a partial view (500 probes) taken from a bigger partial view (all the RIPE Atlas probes) of the global Internet; please bear this in mind when considering my results.


Data collected by RIPE Atlas have been parsed with a small Python script that I used to match the results. It uses the RIPE Atlas Sagan library.
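The matching logic boils down to intersecting the sets of responding probes across measurements. This sketch works directly on the raw Atlas ping result JSON (fields `prb_id` and `result`, where successful replies carry an `rtt` key and timeouts an `x` key) rather than through Sagan, and the function names are mine:

```python
def responding_probes(results):
    """Given parsed RIPE Atlas ping results (one dict per probe), return
    the set of probe IDs that received at least one reply; timeout
    entries in 'result' have no 'rtt' key."""
    reached = set()
    for r in results:
        if any("rtt" in reply for reply in r.get("result", [])):
            reached.add(r["prb_id"])
    return reached

def reachability_overlap(baseline_results, longer_prefix_results):
    """Probes that reached the /24 baseline and, of those, the longer prefix too."""
    base = responding_probes(baseline_results)
    return base, base & responding_probes(longer_prefix_results)
```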


From the first three measurements, 497 probes received at least one response from the /24 prefix with the registered route object; of those, only 57 also received responses from the /25 prefix and 41 from the /28 prefix.

Likewise, 497 probes received responses from the /24 prefix without a registered route object; of those, 52 also received responses from the /25 prefix and 38 from the /28 prefix.

Longer-than-/24 prefixes reachability


For those probes which reach the /25 and the /28 prefix the average RTT seems to be pretty stable regardless of the prefix size.

Longer-than-/24 prefixes with route object – average RTT

Longer-than-/24 prefixes without route object – average RTT

Probes distribution

Here it is:

Distribution of probes which reached the /25 prefix with registered route object

More and deeper tests may be useful to reveal the impact of AS paths and peering/transit relationships on these measurements; maybe I'll write another post as soon as possible, or – better yet – maybe the RIPE Labs folks will cover them in their results.


As far as I can see from the measurements I ran (based on a small sample of probes), reachability of longer-than-/24 prefixes is almost unaffected by the presence of registered route objects; whoever receives the registered prefix also receives the unregistered one. From the performance point of view, when the longer prefixes are reached, RTTs are neither better nor worse than those of /24 prefixes.

The real question is: what portion of the Internet currently succeeds in reaching these prefixes?

I guess that some effort will be needed to get ISPs to review their routing policies and accept the new small prefixes from the reserved pool.


RIPE Labs, Propagation of Longer-than-/24 IPv4 Prefixes –

ARIN, Policy Proposal 2008-5 – Dedicated IPv4 block to facilitate IPv6 deployment –

ARIN, Number Resource Policy Manual (NRPM), adoption of Dedicated IPv4 block to facilitate IPv6 Deployment –

ARIN, announcement of reservation –

RIPE Atlas: a script to show ASes traversed in traceroute

I released a small Python script, ripeatlastracepath, which reads results from RIPE Atlas traceroute measurements and shows the Autonomous Systems traversed by probes to reach the target.

It uses a library that I wrote, ipdetailscache, to cache RIPEstat results about IP address details (ASN, prefix, …), in order to improve performance and avoid flooding the service with requests.
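The idea behind such a cache is simple: query RIPEstat for each address at most once and reuse the answer afterwards. This is a tiny sketch in the spirit of ipdetailscache, not its actual code; the class name is mine and the fetch function is injectable so the cache can be exercised without network access:

```python
class IPDetailsCache(object):
    """Minimal cache for IP -> details lookups (ASN, prefix, ...).
    `fetch` is the callable doing the real RIPEstat query; it is
    injectable so the cache logic can be tested with a stub."""
    def __init__(self, fetch):
        self._fetch = fetch
        self._cache = {}

    def lookup(self, ip):
        # query the backend only on a cache miss
        if ip not in self._cache:
            self._cache[ip] = self._fetch(ip)
        return self._cache[ip]
```

A persistent version would also serialize `_cache` to disk between runs, which is what makes repeated processing of the same measurement fast.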

More details can be found on my GitHub profile page.

A demo can be found here. It can't be used to process other measurements; it only shows results from measurement ID 1674977, a traceroute from 50 probes all over the world toward . You can drag&drop ASes to build the layout that best describes your scenario and, once done, you can “save” the graph for later use. In this demo the “Load graph” button gives a preset of JSON data representing the example graph below:

Graph of traceroute to

These scripts are not so elegant, but they do the job! ;) They are on, feel free to use/edit/fork/improve them as you wish!