Welcome to my website. Here you can find my two blogs: this one in English and another one in Italian, where I share my thoughts about computers, electronics, software development, information/cyber security, and technology in general.

Website migrated to a new server

This site, like some others I own, was hosted on a DirectAdmin-managed virtual server that I started renting in 2015.

I have used many server management panels over the years, mainly because running a mail server is a hassle. But now I have decided to use a cloud service for my e-mail, and I can easily configure and manage the web servers using Ansible, so I don't really need a panel for server management anymore.

Farewell DirectAdmin, you served me well.

Due to the DNS transfer there could be some minor issues with the domain until the info propagates.



Starting a honeypot

From time to time I read honeypot statistics published by various security researchers, and those stats made me curious to look into one myself, so a few weeks ago I built one using Cowrie.

Honeypot Image

I have already shared some of its stats on my Twitter account, but in the future I plan to share a more comprehensive analysis of the data I log.

Meanwhile, here is a list of the top ten credentials used in log-in attempts so far:

Table 1. Top 10 most used credentials
Username Password Attempts
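
A list like this can be pulled straight out of Cowrie's JSON log. This is a minimal sketch, assuming a default install where attempts land in cowrie.json and that jq is available; I'm not claiming this is how the table above was produced:

```shell
# Count the most-used username/password pairs in Cowrie's JSON log.
# The file path and eventid values match Cowrie's defaults; adjust as needed.
jq -r 'select(.eventid == "cowrie.login.success" or .eventid == "cowrie.login.failed")
       | "\(.username) \(.password)"' cowrie.json \
  | sort | uniq -c | sort -rn | head -10
```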

Cloudflare doesn't cache files without an extension in the URL

Yesterday, while debugging some performance issues on one of my websites, I discovered that Cloudflare didn't cache some images even though they were of cacheable types.

Checking the headers this was the result:

curl -svo /dev/null <url>/server/images/logo/28
 < date: Sat, 01 Jun 2018 06:27:54 GMT
 < content-type: image/png
 < content-length: 17612
 < set-cookie: __cfduid=<omissis>; expires=Sun, 01-Jun-19 06:27:53 GMT; path=/; domain=<omissis>; HttpOnly; Secure
 < cache-control: max-age=21600
 < expires: Sat, 01 Jun 2018 12:27:54 GMT
 < content-disposition: inline; filename="logo28.png"
 < cache-control: public
 < expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
 < server: cloudflare
 < cf-ray: <omissis>

As you can see, the "cf-cache-status" header is missing, which should only happen when the file type is not something Cloudflare would ordinarily cache (see this article).

The resource showed the proper headers and was marked public, just like other website resources that Cloudflare cached correctly. So I tried enforcing a "Cache Everything" page rule, without any effect.

Then I spent more time looking for differences in the headers without noticing anything new until, after a while, rereading that article, I noticed this phrase:

caches the following types of static content by extension

So I realized that the only difference between cached and uncached content was the presence of the file extension in the URL! This even though the files have the correct MIME types and the headers contain the file name with its extension.

So I ran an experiment: I changed the URL of the previous resource to include the extension, and the result was this:

curl -svo /dev/null <url>/server/images/logo/28.png
 < HTTP/2 200
 < date: Sat, 01 Jun 2018 09:29:38 GMT
 < content-type: image/png
 < content-length: 17612
 < set-cookie: __cfduid=<omissis>; expires=Sun, 01-Jun-19 09:29:38 GMT; path=/; domain=<omissis>; HttpOnly; Secure
 < cache-control: public, max-age=28800
 < expires: Sat, 01 Jun 2018 17:29:38 GMT
 < content-disposition: inline; filename="logo28.png"
 < cf-cache-status: HIT
 < expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
 < server: cloudflare
 < cf-ray: <omissis>

The resource was now cacheable by Cloudflare. What remains strange, though, is that the page rule had not been able to enforce caching on its own.

My hypothesis is that the function that decides cacheability first extracts the file extension from the URL and then does the actual evaluation; if the resource has no file extension, it simply skips all the other phases, no matter what page rule you put there.

This is not a bug, just something I think is useful to be aware of, especially if you serve a lot of "cacheable" content in a REST fashion.
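
If you want to quickly check whether Cloudflare is serving a given resource from cache, a small sketch (the URL below is a placeholder) that extracts the cf-cache-status header with curl:

```shell
# Print a URL's cf-cache-status header, or MISSING when Cloudflare
# never considered the resource cacheable at all.
url="https://example.com/server/images/logo/28.png"   # placeholder URL
status=$(curl -sI "$url" | tr -d '\r' \
  | awk 'tolower($1) == "cf-cache-status:" {print $2}')
echo "${status:-MISSING}"
```

A first request typically shows MISS, a repeated one HIT once the object is in cache; an extensionless URL like the one above simply shows MISSING.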

And by the way I really think that Cloudflare offers a great service.



Privacy is not just a word

Privacy is one of those concepts that most people don't really understand.

Most people don't read the privacy policies and terms of service of the websites and apps they use; of those who do read them, many don't understand their meaning; and of those who do understand them, most don't really have the freedom to refuse them.

Most of the time, ToS and policies are endured rather than accepted, because if you don't use those services you are cut off from the world. I have neither a WhatsApp account nor a Facebook account, and this means I am cut off from most social interaction. When asked why I don't use WhatsApp "since it's free", I answer that I wouldn't mind using WhatsApp if the price were only a fair amount of money; for me, the price of using those services is too high, and there are also more privacy-minded alternatives, like Signal.

There is a person who never contacts me simply because she only communicates via WhatsApp and Facebook, so I understand the price of my choices, and why most people just don't want to know what happens to their data. Nevertheless, I consider this a form of violence from those companies, which use the unawareness of the many to force the others into submission.

Be aware of your privacy choices and choose consciously.



Disconnected from the cloud

I had a network outage at home from the 1st of February until the 11th. It was a long period (the longest since at least 2010), and the event made me think about how much we depend on the internet and on the cloud for so many things.

Luckily most of my home systems don't need cloud services to work (for example, I use ownCloud for file sync and Mercurial for code versioning, on a server at home), but of course I could not watch Netflix or download games from PSN, and I had only my mobile for news sites and casual browsing (and I almost depleted its bandwidth).

But I was thinking of those people who buy systems that depend heavily on the cloud to work (there are even lamps that need the cloud to be turned on and off!): what will happen to them during an outage?

The internet has become a service we depend on like electricity, and it relies on standards that make it fairly easy to replace one provider with another.

On the other hand, most cloud services are not based on standards, so it is not trivial to move from one to another. Many of them do provide some export functionality, but again not based on any standard, so you don't see as much import functionality. There are of course exceptions: for services like Dropbox it is just a matter of moving files from one directory to another (losing history, though), but most are not as easy.

Let's return to the cloud lamps: if the company that provides the service ceases operations, the lights will stop working and you will have to replace them all. And what if all your house lights are based on it? You will be left alone in the dark…​

The cloud is a valuable resource, but also a risk due to the lack of standards and to security/privacy concerns. I'm not saying that people should avoid it, but I think we should all be aware of the related risks.



Resolutions for 2017

2016 is coming to an end, and my resolution for 2017 is to pick this blog up again to share my thoughts about technology and computer security.

For now I've started with a small reorganization of the website: I introduced tags and merged the news section and the English blog. Not much, but it's a start.

Happy new year to everyone.



Let's Encrypt


Since 2019 this website has been using Cloudflare, so the certificate for this domain is no longer provided by Let's Encrypt, but other sites I own still use the service, which I still fully support!

This website is composed only of static files, but I've nevertheless decided to go full HTTPS thanks to Let's Encrypt.

jBaking a new website

Finally I’ve started to rebuild the website (again) from scratch.

The truth is that I abandoned it in 2013 due to lack of time, and until today all the time I spent on it went into Drupal maintenance. So I decided to drop Drupal and move to something that requires little maintenance effort.

My choice fell on jBake, which generates a static website that needs no patching or similar security maintenance. Another advantage of jBake is that it uses a template system common in the JEE world, which I already know.

Just a final note: Drupal is a really good platform, but it is just too powerful for a simple website like this is now.

Querying Job status on SQL Server 2005 without using OPENROWSET

Where I work, the main DB engine is SQL Server 2005. Today we had to find a way to check the status of a job started from a stored procedure (following this tutorial).

The tutorial shows the usage of OPENROWSET for checking the job status, but for several reasons we could not use that function in our environment, so we had to find another way.

After many experiments I've written a query that can replace the OPENROWSET call without incurring the "nesting problem" of calling sp_help_job directly.

declare @CurrentJobs table
(
    [Job ID] uniqueidentifier,
    [Last Run Date] varchar(255),
    [Last Run Time] varchar(255),
    [Next Run Date] varchar(255),
    [Next Run Time] varchar(255),
    [Next Run Schedule ID] varchar(255),
    [Requested To Run] varchar(255),
    [Request Source] varchar(255),
    [Request Source ID] varchar(255),
    [Running] varchar(255),
    [Current Step] varchar(255),
    [Current Retry Attempt] varchar(255),
    [State] varchar(255)
)

insert into @CurrentJobs
EXECUTE master.dbo.xp_sqlagent_enum_jobs 1,''

select *
from @CurrentJobs cj
join msdb.dbo.sysjobs sj on cj.[Job ID] = sj.job_id
cross apply (
        -- latest history row for each job
        SELECT  TOP 1 *
        FROM    msdb.dbo.sysjobhistory hj
        WHERE   cj.[Job ID] = hj.job_id
        ORDER BY
                [run_date] DESC, [run_time] DESC
        ) h
WHERE sj.name = 'my_job_name'

I hope someone finds this useful.


Pass the same params to all drives of a Linux mdraid

I have two raids on my home server, one of them is used just as a backup staging area.

So I wanted to apply more aggressive power-saving settings to all the disks that compose that raid.

The result is this simple script:

# $1 is the md device; every following argument is passed to hdparm
mdadm --detail "$1" | grep -o '/dev/sd.' | xargs hdparm "${@:2}"

It makes no checks, but it gets the job done.



A new home server

At home I have a server that I use for file serving, development, and experiments, but it is really old, has no built-in virtualization support (I'm using VirtualBox, but it is desktop-oriented and I need something more server-oriented), and its HDDs are almost full.

So my requirements list is:

  • at least 4 TB of RAID-protected HDD space
  • low power consumption at idle (at least no higher than the current setup)
  • the lowest possible noise
  • full HW virtualization support
  • two gigabit Ethernet cards
  • full CentOS 6 compatibility (that is the hardest part)

It was not easy, especially the noise requirement, since that information is seldom provided in the hardware specifications. My main source for this kind of info is Silent PC Review, but you can still get bad surprises.

My current server configuration has 3 HDDs (RAID 5), plus a 2 TB HDD for "slow-changing data" and an SSD for booting.

To achieve larger capacity and lower consumption I decided to use only 2.5" HDDs. This setup is now common for enterprise servers too. Obviously I don't want to buy enterprise-grade 2.5" HDDs (those are pricey), and it also calls for a proper backplane to ensure good heat dissipation (so no plastic case).

I selected the Samsung Spinpoint M8 1 TB HDD; the Spinpoint series has always been a silent one, so I ordered a sample and ran some tests, and the result in a USB enclosure was quite satisfactory. To reach 4 TB in RAID 5 I need 5 HDDs, so to host those plus the boot SSD I needed a motherboard with at least 6 SATA ports.


Another important subject was the CPU. This is not a gaming PC; I just want to host some virtual guests for clustering experiments, so two physical cores should be enough, and power consumption has to be as low as possible. This seemed simple, but I soon discovered that there are not many low-power desktop CPUs.

After a long research I found two candidates:

I have to admit that I liked the idea of using the E3, especially for the ECC memory support, but I was unable to find a motherboard that officially supported it (and complied with my other requirements), so I had to buy the i5.


Happy Easter

Happy Easter to all

Migration of the content still in progress

Most of the content has been migrated and reorganized. During the process I lost all the tags of the old posts, but this is a minor issue for me because this site needs new content ;)

Now I need to decide the best way to use Drupal for my projects.

Cya K.

Migration to drupal in progress

Joomla 1.5 has reached end of life, so I had to migrate to a new product. Joomla is really nice, but I found that I like Drupal much more, so I started the migration. It'll take some time to complete.


Google removed the share functionality from Google Reader to push Google+. I'm really upset by this choice because I used that functionality to share news and articles I found interesting on this website.
Today I realized that Twitter could be an effective replacement for it, so from now on I'll be a Twitter user too.


Rilego status report

The first steps of the Rilego project are done: most of the features of JE-Comics have been ported to Rilego 0.4.0, plus the multi-threading engine is active.

Now, before continuing to add new features (the next planned feature is ePub output support), I need to consolidate the code (clean-up, comments, and bug fixes).

Meanwhile I hope to get some testers soon ;)



Older posts are available in the archive.