Blog

Cleanup and layout updates

This is not a good period for me, but I finally had the chance to complete some of the things I tend to procrastinate on for a long time.

One of these is a digital cleanup that I've finally started. I have so many projects that I started and never completed, but this time I had the courage to simply drop all the ones I know I'll never finish.

Meanwhile, I also had the chance to work on the layout of this website. I'm still not fully satisfied, but I probably never will be.

Bye

K.


Resuming: New Year, New Server

I've just migrated the website to a new server

About 3 years have passed since my last post, and this one is almost a clone of that one!

I’ve just migrated the website to a new server: this one runs Rocky Linux 8 (CentOS 8 went EOL) and is currently hosted on Linode.

I also revamped the layout a little (still not satisfied with the look, though) and got rid of all the code that set tracking cookies on this website; the only cookie left is the one that hides the privacy policy popup.

Bye

K.


Website migrated to a new server

This site, like some others I own, was hosted on a DirectAdmin-managed virtual server that I started renting in 2015.

I have used many server management panels over the years, mainly because running a mail server is a hassle, but now I've decided to use a cloud service for my e-mail, and I can easily configure and manage the web servers with Ansible, so I don't really need a management panel anymore.

Farewell, DirectAdmin: you served me well.

Due to the DNS transfer there could be some minor issues with the domain until the records propagate.

Bye

K.


Starting a honeypot

From time to time I read honeypot statistics published by various security researchers, and they made me curious to run one myself, so a few weeks ago I built one using Cowrie.
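
For anyone curious, getting a basic Cowrie instance running is quick. Below is a minimal sketch based on the project's standard setup instructions (the repository URL and defaults are Cowrie's; adapt paths and firewall rules to your own system):

# Minimal Cowrie setup sketch, following the project's standard instructions
git clone https://github.com/cowrie/cowrie.git
cd cowrie
python3 -m venv cowrie-env          # isolated Python environment
. cowrie-env/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
bin/cowrie start                    # the SSH honeypot listens on port 2222 by default
# Redirect the real SSH port to the honeypot, e.g.:
# iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-port 2222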


I have already shared some of its stats on my Twitter account, but in the future I plan to publish a more comprehensive analysis of the data I log.

Meanwhile, here is a list of the top ten credentials used in login attempts so far:

Table 1. Top 10 most used credentials

Username   Password   Attempts
--------   --------   --------
guest      guest      154600
enable     system     32556
shell      sh         31182
root       password   6869
admin      admin      2187
support    support    1986
root       vizxv      1583
default    OxhlwSG8   1561
default    S2fGqNFs   1479
root       admin      1430

Bye

K.


Cloudflare doesn't cache files without an extension in the URL

Yesterday, while debugging some performance issues on one of my websites, I discovered that Cloudflare wasn't caching some images even though they were of a cacheable type.

Checking the headers, this was the result:

curl -svo /dev/null <url>/server/images/logo/28
Output
 < date: Sat, 01 Jun 2018 06:27:54 GMT
 < content-type: image/png
 < content-length: 17612
 < set-cookie: __cfduid=<omissis>; expires=Sun, 01-Jun-19 06:27:53 GMT; path=/; domain=<omissis>; HttpOnly; Secure
 < cache-control: max-age=21600
 < expires: Sat, 01 Jun 2018 12:27:54 GMT
 < content-disposition: inline; filename="logo28.png"
 < cache-control: public
 < expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
 < server: cloudflare
 < cf-ray: <omissis>

As you can see, the "cf-cache-status" header is missing, which should only happen when the file type is not something Cloudflare would ordinarily cache (see this article).

The resource showed the proper headers and was set as public, just like the other website resources that Cloudflare cached correctly. So I tried enforcing a "Cache Everything" page rule, without any effect.

Then I spent some more time looking for differences in the headers without noticing anything new until, rereading the same article after a while, I noticed this phrase:

caches the following types of static content by extension

So I realized that the only difference between cached and uncached content was the presence of the file extension in the URL, even though the files had the correct MIME types and the Content-Disposition header contained a file name with the extension.

So I experimented: I changed the URL of the previous resource to include the extension, and the result was this:

curl -svo /dev/null <url>/server/images/logo/28.png
Output
 < HTTP/2 200
 < date: Sat, 01 Jun 2018 09:29:38 GMT
 < content-type: image/png
 < content-length: 17612
 < set-cookie: __cfduid=<omissis>; expires=Sun, 01-Jun-19 09:29:38 GMT; path=/; domain=<omissis>; HttpOnly; Secure
 < cache-control: public, max-age=28800
 < expires: Sat, 01 Jun 2018 17:29:38 GMT
 < content-disposition: inline; filename="logo28.png"
 < cf-cache-status: HIT
 < expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
 < server: cloudflare
 < cf-ray: <omissis>

The resource was now cached by Cloudflare. What was strange, though, was that the page rules had not been able to force caching of the extensionless resource.

My hypothesis is that the function that decides cacheability first extracts the file extension from the URL and only then does the actual evaluation; if the resource has no file extension, it just skips all the other phases, no matter what page rule you put there.
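
If my guess is right, the decision would look something like this hypothetical sketch (pure speculation on my part, not Cloudflare's actual code; the extension list is just illustrative):

# Hypothetical sketch of the suspected decision order (speculation, not Cloudflare code)
is_cacheable() {
    path="${1%%\?*}"                  # drop any query string
    file="${path##*/}"                # keep only the last path segment
    ext="${file##*.}"
    [ "$ext" = "$file" ] && return 1  # no extension: bail out before page rules are even considered
    case "$ext" in
        png|jpg|jpeg|gif|css|js|ico) return 0 ;;  # illustrative subset of the static extensions
        *) return 1 ;;
    esac
}

is_cacheable "/server/images/logo/28"     && echo cached || echo skipped   # -> skipped
is_cacheable "/server/images/logo/28.png" && echo cached || echo skipped   # -> cached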

This is not a bug, just something I think is useful to be aware of, especially if you serve a lot of "cacheable" content in a REST fashion, where extensionless URLs are common.

And by the way, I really think Cloudflare offers a great service.

Bye

K.


Privacy is not just a word

Privacy is one of those concepts that most people don't really understand.

Most people don’t read the privacy policies and terms of service of the websites and apps they use; moreover, many of those who do read them don’t understand their meaning, and among those who do understand them, most don’t really have the freedom to refuse them.

Most of the time, terms of service and policies are endured, not accepted, because if you don’t use those services you are cut out of the world. I have neither a WhatsApp account nor a Facebook account, and this means I am cut out of most social interaction. When asked why I don’t have WhatsApp “since it is free”, I answer that I would not mind using WhatsApp if the price were only a fair amount of money; for me the price of using those services is too high, and there are also more privacy-friendly alternatives, like Signal.

There is a person who never contacts me simply because she only communicates via WhatsApp and Facebook, so I understand the price of my choices, and why most people just don’t want to know what happens to their data. Nevertheless, I consider this a form of violence by those companies, which use the unawareness of the many to force the others into submission.

Be aware of your privacy choices and choose consciously.

Bye

K.


Disconnected from the cloud

I had a network outage at home from the 1st of February until the 11th. It was a long period (the longest since at least 2010), and the event made me think about how much we depend on the Internet and on the cloud for so many things.

Luckily most of my home systems don’t need cloud services to work (for example, I use ownCloud for file sync and Mercurial for version control on a server at home), but of course I could not watch Netflix or download games from PSN, and I had only my mobile connection for news sites and casual browsing (I almost depleted its data allowance).

But I was thinking of the people who buy systems that depend heavily on the cloud to work (there are even lamps that need the cloud to be turned on and off!): what happens to them during an outage?

The Internet has become a service we depend on like electricity, and it relies on standards that make it quite easy to replace one provider with another.

On the other hand, most cloud services are not based on standards, so it is not trivial to move from one to another. Many of them provide some export functionality, but again it is not based on any standard, so you rarely see matching import functionality. There are of course exceptions: for services like Dropbox it is just a matter of moving files from one directory to another (losing history, though), but most are not that easy.

Let’s return to the cloud lamps: if the company that provides the service ceases operations, the lights will stop working and you will have to replace them all. And what if all the lights in your house depend on it? You will be left alone in the dark…​

The cloud is a valuable resource, but also a risk due to the lack of standards and to security/privacy concerns. I’m not saying that people should avoid it, but I think we should all be aware of the related risks.

Bye

K.


Resolutions for 2017

2016 is coming to an end, and my resolution for 2017 is to pick up this blog again and share my thoughts about technology and computer security.

For now I’ve started with a small reorganization of the website: I introduced tags and merged the news and the English blog. Not much, but it’s a start.

Happy new year to everyone.

Bye

K.


Let's Encrypt

I've decided to go full HTTPS thanks to Let's Encrypt

Note
Since 2019 this website has been behind Cloudflare, so the certificate for this domain is no longer provided by Let’s Encrypt, but other sites I own still use this service, which I still fully support!

This website is composed only of static files, but I’ve nevertheless decided to go full HTTPS thanks to Let’s Encrypt.
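
For reference, on a static site like this one, obtaining a certificate can be as simple as using certbot's webroot plugin; a minimal sketch (the domain and webroot path are placeholders):

# Minimal sketch using certbot's webroot plugin; domain and webroot are placeholders
certbot certonly --webroot -w /var/www/example.com -d example.com -d www.example.com
# Renewals can then be automated, e.g. with a cron entry like:
# 0 3 * * * certbot renew --quiet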


jBaking a new website

Finally I’ve started to rebuild the website (again) from scratch.

The truth is that I abandoned it in 2013 due to lack of time, and until today all the time I spent on it went into Drupal maintenance. So I decided to drop Drupal and move to something that requires little maintenance effort.

My choice fell on jBake, which generates a static website that needs no patching or similar security maintenance. Another advantage of jBake is that it uses a template system common in the JEE world, which I already know.

Just a final note: Drupal is a really good platform, but it is simply too powerful for a site as simple as this one is now.


Querying Job status on SQL Server 2005 without using OPENROWSET

Where I work, the main DB engine is SQL Server 2005. Today we had to find a way to check the status of a job started from a stored procedure (following this tutorial).

The tutorial shows how to use OPENROWSET to check the job status, but for several reasons we could not use that function in our environment, so we had to find another way.

After many experiments I've written a query that can replace the OPENROWSET call without incurring the "nesting problem" of calling sp_help_job directly.

-- Table variable shaped like the result set of xp_sqlagent_enum_jobs
declare @CurrentJobs table
(
    [Job ID] uniqueidentifier,
    [Last Run Date] varchar(255),
    [Last Run Time] varchar(255),
    [Next Run Date] varchar(255),
    [Next Run Time] varchar(255),
    [Next Run Schedule ID] varchar(255),
    [Requested To Run] varchar(255),
    [Request Source] varchar(255),
    [Request Source ID] varchar(255),
    [Running] varchar(255),
    [Current Step] varchar(255),
    [Current Retry Attempt] varchar(255),
    [State] varchar(255)
)

-- Capture the live agent status of every job
-- (first argument 1 = sysadmin view of all jobs, second = no owner filter)
insert into @CurrentJobs
execute master.dbo.xp_sqlagent_enum_jobs 1, ''

-- Resolve the job name via sysjobs and attach the most recent
-- history row to see how the last execution ended
select *
from @CurrentJobs cj
join msdb.dbo.sysjobs sj on cj.[Job ID] = sj.job_id
outer apply
(
    select top 1 *
    from msdb.dbo.sysjobhistory hj
    where hj.job_id = cj.[Job ID]
    order by hj.run_date desc, hj.run_time desc
) h
where sj.name = 'my_job_name'

I hope someone finds this useful.

Bye
Kirys


Older posts are available in the archive.