speeding-up-your-initial-git-clone

I’ve been working with an Open Source project called [NetHunter.](http://www.nethunter.com) For those who are into the InfoSec side of things, you may have heard of [Kali-Linux.](http://kali.org) NetHunter is a project to bring Kali to select Android devices. The project is run by [Offensive Security](http://www.offensive-security.com), the same organization that develops and funds Kali Linux.

My goal was to set up [Jenkins](http://www.jenkins-ci.org) for [continuous integration.](http://en.wikipedia.org/wiki/Continuous_integration) While tweaking the setup/configuration, the Jenkins installation was running in a virtual machine within my home lab. Unfortunately, my home internet is horrendously slow (I live in a sparsely populated area), and the initial git clone takes a fair amount of time. I have Jenkins configured to start with a clean environment each time, which means it has to do a full git clone for every job it runs. Due to the bandwidth constraints, this quickly became fairly painful.

While exploring ways to speed up this initial clone, I stumbled across `git clone --reference` in one of the git manpages.
It took a few minutes of experimentation to get it to work, but work it did! I was now able to do the initial clone from a local git cache on the machine's hard drive!

To set up the git cache:
{% highlight bash %}
mkdir /home/gitcache
cd /home/gitcache
git init --bare

git remote add offensive-security/kali-nethunter https://github.com/offensive-security/kali-nethunter
git remote add offensive-security/gcc-arm-linux-gnueabihf-4.7 https://github.com/offensive-security/gcc-arm-linux-gnueabihf-4.7.git
git remote add binkybear/kernel_samsung_manta https://github.com/binkybear/kernel_samsung_manta.git
git remote add binkybear/kangaroo https://github.com/binkybear/kangaroo.git
git remote add binkybear/kernel_msm https://github.com/binkybear/kernel_msm.git
git remote add binkybear/flo https://github.com/binkybear/flo.git
git remote add binkybear/furnace_kernel_lge_hammerhead https://github.com/binkybear/furnace_kernel_lge_hammerhead.git
git remote add binkybear/KTSGS5 https://github.com/binkybear/KTSGS5.git
git remote add binkybear/android_kernel_samsung_jf https://github.com/binkybear/android_kernel_samsung_jf.git
git remote add binkybear/android_kernel_samsung_exynos5410 https://github.com/binkybear/android_samsung_exynos5410.git

git fetch --all
{% endhighlight %}

The `git fetch --all` command should be run occasionally to update the cache with the latest upstream commits. I do it daily, via crontab.
You’ll notice I have multiple git remotes in the cache. This allows the same cache directory to be used for multiple projects and repos at the same time. It’s not limited to just one!
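For example, a crontab entry to refresh the cache nightly could look like this (the schedule and path are illustrative):

{% highlight bash %}
# Refresh the git cache every night at 3:00 AM
0 3 * * * cd /home/gitcache && git fetch --all --quiet
{% endhighlight %}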

After the cache has been established, you can use `git clone --reference /home/gitcache https://github.com/offensive-security/kali-nethunter.git` to do the initial clone while using the locally stored cache. After the clone, you are free to use other git commands like `git pull` or `git push` as you normally would.

The only drawback I’ve found is that the newly cloned repo requires the cache to always be available. If you’d like the resulting repository to be standalone and independent of the cache after it is cloned, cd into the new repo directory, run `git repack -a -d`, and then `rm .git/objects/info/alternates`.
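The detach steps above can be sketched end-to-end like this (the repo path is illustrative):

{% highlight bash %}
# Inside a repo that was cloned with --reference (path is illustrative)
cd kali-nethunter
git repack -a -d                    # copy every borrowed object into this repo's own pack
rm -f .git/objects/info/alternates  # stop borrowing objects from the cache
git fsck                            # optional sanity check: the repo is now self-contained
{% endhighlight %}

After this, the cache directory can be moved or deleted without breaking the clone.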


controlling-the-website-cache-with-nginx-and-cloudflare

My good friend [David](https://davidcastellani.com) told me about an amazing service called [Cloudflare](http://www.cloudflare.com). They offer a ton of features including [DNS](https://www.cloudflare.com/dns), [CDN](https://www.cloudflare.com/features-cdn), and other cool stuff.

After recently moving my site to Jekyll / Octopress, I was looking for a way to programmatically expire the cache for my [index.html](https://palmerit.net/index.html) page. I mean, what good is it to update your site, if nobody can see the new content?

In nginx, I added the following to my server block:

{% highlight nginx %}
expires 30d;
add_header Cache-Control "public";
{% endhighlight %}
The above sets an expiry date 30 days out and a `Cache-Control` header of `public` with `max-age=2592000`.

The only problem is, the [index.html](https://palmerit.net/index.html) page would also be cached for this duration. Thankfully, Cloudflare has a pretty solid [API](http://en.wikipedia.org/wiki/Application_programming_interface) where I found information on [invalidating a specific page](https://www.cloudflare.com/docs/client-api.html?#s4.5). So, I added new `invalidate` and `purge` tasks to the Octopress Rakefile that looked like this:

{% highlight ruby %}
desc "Purge all Cloudflare-cached assets"
task :purge do
  CFtoken = ENV['CFtoken']
  CFemail = ENV['CFemail']
  CFdomain = ENV['CFdomain']
  sh("curl https://www.cloudflare.com/api_json.html -d a=fpurge_ts -d tkn=#{CFtoken} -d email=#{CFemail} -d z=#{CFdomain} -d v=1")
end

desc "Invalidate index.html"
task :invalidate do
  CFtoken = ENV['CFtoken']
  CFemail = ENV['CFemail']
  CFdomain = ENV['CFdomain']
  CFurl = ENV['CFurl']
  sh("curl https://www.cloudflare.com/api_json.html -d a=zone_file_purge -d tkn=#{CFtoken} -d email=#{CFemail} -d z=#{CFdomain} -d url=#{CFurl}")
end
{% endhighlight %}

Now when you `rake deploy` it’ll do the usual deploy, but then also invalidate the index.html file at Cloudflare.
`rake purge` will invalidate *ALL* assets that Cloudflare has cached for your site, and must be called specifically. You probably don’t want to use this feature that often.

And then I added the following to the end of the :deploy task:
{% highlight ruby %}
Rake::Task[:invalidate].execute
{% endhighlight %}

Since I store my Rakefile along with my site content in a private [git](http://git-scm.com/) repository on [Bitbucket](https://bitbucket.org), I didn’t want the Cloudflare credentials directly in the Rakefile, because I may eventually make the repo public. So instead, I added them as environment variables in my `~/.profile` and have the Rakefile pull that information from my shell environment.
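For reference, the `~/.profile` entries look something like this (the values are placeholders; the variable names just need to match what the Rakefile reads):

{% highlight bash %}
# Cloudflare credentials for the Rakefile tasks (values are placeholders)
export CFtoken="your-cloudflare-api-token"
export CFemail="you@example.com"
export CFdomain="example.com"
export CFurl="https://example.com/index.html"
{% endhighlight %}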


harden-ssl-on-nginx

I’ve decided to enable SSL on my personal site.

I installed nginx on an Ubuntu 14.04 LTS server, generated a private SSL key, created a SHA-256 certificate signing request, and then went to NameCheap to have it signed. (As a side note, I can’t wait for Let's Encrypt to launch.)

I enabled SSL on nginx, and decided to check out which ciphers were allowed out of the box.

{% highlight bash %}
nmap --script ssl-enum-ciphers -p 443 palmerit.net
{% endhighlight %}
I’ve snipped the output for brevity, but of particular concern was this section:

{% highlight text %}
ssl-enum-ciphers:
  SSLv3
    ciphers:
      TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA - strong
      TLS_DHE_RSA_WITH_AES_128_CBC_SHA - strong
      TLS_DHE_RSA_WITH_AES_256_CBC_SHA - strong
      TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA - strong
      TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA - strong
      TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA - strong
      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
      TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
      TLS_RSA_WITH_AES_128_CBC_SHA - strong
      TLS_RSA_WITH_AES_256_CBC_SHA - strong
      TLS_RSA_WITH_CAMELLIA_128_CBC_SHA - strong
      TLS_RSA_WITH_CAMELLIA_256_CBC_SHA - strong
    compressors:
      NULL
{% endhighlight %}
You may have read about POODLE recently, and the best way to prevent it is to disable SSLv3.
After seeing this result, I went to SSL Labs to see what else I needed to disable.

After the initial scan, I had a `C` rating. I knew at a minimum I needed to disable SSLv3, but I also decided to enable some of the newer technologies such as SPDY, HSTS, and OCSP stapling.

The end result was a configuration that looked like this:
In the main server block for palmerit.net, I added:
{% highlight nginx %}
listen 443 ssl spdy;
include ssl.inc;
{% endhighlight %}
I then created `/etc/nginx/ssl.inc` which contained the following:
{% highlight nginx %}

ssl_ciphers 'AES256+EECDH:AES256+EDH:!aNULL';
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_session_cache shared:SSL:10m;

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.4.4 8.8.8.8 valid=300s;
resolver_timeout 10s;

ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;

add_header Strict-Transport-Security max-age=63072000;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
{% endhighlight %}
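One thing this config assumes: the `ssl_dhparam` file must exist before nginx will start. It can be generated with openssl (2048-bit generation can take a few minutes):

{% highlight bash %}
# Generate Diffie-Hellman parameters for ssl_dhparam
openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
{% endhighlight %}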

I decided to create this as an include in case I later decide to add additional [“nginx server blocks”](http://wiki.nginx.org/ServerBlockExample).

The final result from an SSL Labs scan: `A+`

Keep in mind, I used a very restrictive cipher suite. This will block older clients from being able to connect. I don’t mind that (I think people should be using modern browsers and software), but *you* might not want to lock older clients out of your site.
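To see exactly which ciphers a string like this expands to, you can ask openssl locally:

{% highlight bash %}
# List the ciphers the restrictive string actually enables
openssl ciphers -v 'AES256+EECDH:AES256+EDH:!aNULL'
{% endhighlight %}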

References:

- Raymii.org - Strong SSL on nginx
