How to license your software properly

(Disclaimer: I am not a lawyer, everything in this post is probably wrong)

Too many times I’ve stumbled across a really useful library or framework that is licensed in a ridiculously prohibitive way. The thing is that most people are simply oblivious to what the license entails and just slap on GPLv3 (because everyone is using GPLv2, and of course you want the latest version… right?).

The problem with the GPL is that it includes a copyleft clause. This means that any distributed work that uses anything licensed under the GPL must be released under the GPL as well. The reason for this is to force organizations that develop proprietary software to release their improvements back into the wild.

This is a really good thing in most cases. The Linux kernel uses this license to make sure that any company using its code is required to release any bug fixes or features for free to everyone. The thing with GPLv2 is that this only applies to your entire project if you include GPLv2-licensed code in your project’s code base. If you use a third-party library or module which is distinct from your code you’re fine and are free to ignore the limitations of the license.

However, this was recognized by GNU and GPLv3 was soon released to fix this “problem”.

So when you license your new awesome library under GPLv3 you are essentially forcing everyone else who uses your library to license their code under the same terms. And now nobody can use your code for anything without releasing all of their own code for free.

If you want to prevent people from profiting from (and using) your code you should go ahead with a GPL license. Otherwise pick something else; I explain some of the most common licenses below, sorted from least to most complex.

WTFPL

This is the “(Do) What The Fuck (You Want To) Public License”, which is effectively the same as releasing your software into the public domain (useful in countries where the public domain isn’t a thing). By releasing your code under the WTFPL you allow people to do anything they want with it.

MIT

The MIT license was developed at MIT (duh!) and is almost the same as the WTFPL, except that it includes two important points. The first is that the copyright notice must remain intact, i.e. nobody is allowed to remove your license from your code. Note that this is not the same as being given any notable recognition for the use of your code in a released product.

The second point, which is far more important, is that it disclaims any liability arising from the use of your code. If your code accidentally deletes important files or causes a company financial losses, they can’t claim any damages from you.

If you don’t know what license to pick, pick this one.

BSD

Since MIT had their own license it was only a matter of time before Berkeley created one of their own as well. The BSD license is almost identical to the MIT license except for one additional point.

This license explicitly prohibits the use of your name or your organization’s name to endorse the end product. Anyone using your code is not allowed to claim that you endorse their product just because you let them use your code.

A rather important thing about the BSD license is that there are two versions: one with four clauses and one with three. The three-clause version is the one usually meant by “the BSD license” and the one you should use if you choose this license.

Apache License

This is the most complex license and I admit that I don’t entirely understand everything it means. Note that the Apache License is different from Apache (which is a web server) and from the Apache Software Foundation (which maintains the web server and this license, among other things).

What makes Apache different from the other licenses (apart from its far more complex legalese) is that it grants anyone using your code a patent license for any patents that cover your source code. Because copyright and patents are entirely different animals, this protects the licensee from patent infringement claims, which makes your code safer to use.

I hope that this gives some clarity in the jungle that is licensing and copyright.

What deployment tools can do for you

I restarted work on one of my older hobby projects. Though I’m not really sure what my end goal is yet, I have a vague idea of what I want to build and it’s nice to have something of my own to code on.

While setting this project up I took some extra time to make sure deployments were automated from the start. Proper configuration and use of tools saves a lot of time, but it also takes anywhere from a few hours to a day or two to set up, depending on the project of course.

There are a ton of guides on the web on how to do this so I won’t reiterate too much here. The project is version controlled with git, which allows me to set up a repository directly on the server. In the post-receive hook I can add any commands necessary to properly manage the project.

As an added bonus I have fully integrated dependency management with Composer, automated asset optimization with Assetic and database migrations (that write themselves!) with Doctrine. The result is that I write some code, run a command to generate migrations if the code touches the database, and commit before pushing to live.

And that’s it.

I don’t have to SSH into the server to manually do a pull. There’s no need to optimize and minify CSS and JavaScript on my dev machine. I don’t have to worry about out of date dependencies. Heck, I don’t even have to write any SQL. It just works.
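For the curious, the post-receive hook boils down to something like the sketch below. The paths and the Symfony-style console commands are placeholders for illustration rather than my exact setup; the point is that every push triggers checkout, dependency installation, migrations and asset dumping in one go.

#!/bin/sh
# Sketch of a post-receive hook: check the pushed code out into the
# web root and run the build steps there. All paths are placeholders.
DEPLOY_DIR=/var/www/myproject

GIT_WORK_TREE="$DEPLOY_DIR" git checkout -f master
cd "$DEPLOY_DIR" || exit 1

# Install the dependencies declared in composer.json
composer install --no-dev --optimize-autoloader

# Apply any pending Doctrine migrations (assuming a Symfony-style console)
php app/console doctrine:migrations:migrate --no-interaction

# Dump optimized and minified assets with Assetic
php app/console assetic:dump --env=prod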

How I set up Pelican for blogging pt. 2

So the whole point of running a static website (besides that it’s cool) is the performance aspect. And I personally think that if you’re doing something for performance you might as well go all the way. So here’s how I optimized my static website.

First I downloaded the plugins and themes I would need into the folder where I keep my configuration files.

git clone https://github.com/getpelican/pelican-plugins ~/Projects/Pelican/plugins
git clone https://github.com/getpelican/pelican-themes ~/Projects/Pelican/themes

Then I added them to the main configuration file by appending these lines at the bottom.

# Plugins
PLUGIN_PATHS = ['/home/rasmus/Projects/Pelican/plugins']
PLUGINS = ['assets', 'gzip_cache']

# Theme
THEME = '/home/rasmus/Projects/Pelican/themes/monospace'

I picked “monospace” for the theme since it’s fairly lightweight and only requires one small CSS file. Normally I would write a theme myself, but for now I just wanted everything up and running. The two plugins added in the configuration handle asset management (assets) and generate a pre-compressed static copy of the website (gzip_cache).

The assets plugin requires some additional Python dependencies, which I installed with pip: sudo pip install webassets cssmin. To get the asset management working properly I had to make some modifications to the theme source code. By changing the CSS links in the theme to the block below, and removing the @import in the CSS file, I can minify both CSS files into a single stylesheet and serve them in one request, minimizing round trips to the server.

{% assets filters="cssmin", output="css/style.min.css", "css/main.css", "css/pygment.css" %}
{% endassets %}

My Apache server automatically started serving the gzip-compressed files, so I didn’t have to add anything else for the compression to work. Normally I would have to configure Apache to compress the files as they are being requested, but having the files compressed beforehand shaves a few additional milliseconds off the wait time from the server.

And then, to cheat a little bit, I set up client-side caching with Expires headers and disabled ETags. This means that the website is downloaded once in 10-20 milliseconds depending on network connection speed, but after that all resources are loaded from the local cache and the rendering time drops to about 3 milliseconds. I do this by first enabling the modules with a2enmod headers expires and then adding this to the vhost directive (I have .htaccess disabled for even better performance).

Header unset ETag
FileETag None
ExpiresActive On
ExpiresDefault A300
ExpiresByType text/css A526000
ExpiresByType image/gif A526000
ExpiresByType image/png A526000
ExpiresByType image/jpeg A526000

All in all the site is rather fast now. I’ll keep this blog updated if I discover anything else to improve.

How I set up Pelican for blogging pt. 1

No, this blog still uses WordPress (now Hugo!) because of its convenience and ease of use. But I needed a way to document my personal server that I use for Mumble, IRC and my small projects, and I decided to test out static blog generators for that.

Normally people use Octopress (based on Jekyll), which labels itself as “A blogging framework for hackers”. That’s cool and all, but I really don’t like Ruby and I had heard a lot of good things about Pelican, so I went with that.

This post merely describes what’s different in my own approach and isn’t very detailed in itself; for a proper tutorial on setting things up I recommend the documentation pages on how to get started.

Installing Pelican was a breeze: running sudo pip install pelican markdown installs both Pelican and the packages required to write in Markdown (normally you would do this in a virtual environment, but since I don’t usually work in Python that isn’t a concern for me). Following this up with pelican-quickstart generates a good basic template for getting started in the current directory.

I set up my blog in ~/Projects/Pelican and instead of using the make tools included with the quickstart package I set up my own alias in .bash_aliases like so:

echo "alias blog='pelican ~/Blog -o ~/Projects/Pelican/web -s ~/Projects/Pelican/pelicanconf.py'" >> ~/.bash_aliases

This allows me to write my blog posts in the “Blog” folder in my home directory and then just call blog to re-generate the blog when I’m done.

Of course this requires me to create a virtual host in Nginx that points to the output folder, but I prefer this over running a dedicated Python server since it allows for some better caching options as well as better performance and less resource hogging on the server.
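A minimal vhost along those lines might look something like this (the server name is just a placeholder, and the root is the output folder from the alias above):

server {
    listen 80;
    server_name pelican.example.com;  # placeholder name

    # output folder generated by the blog alias above
    root /home/rasmus/Projects/Pelican/web;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}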

On two factor authentication

I’m studying computer security this term and it has a way of making you very paranoid about security matters, and recent articles like this and this really don’t help either. Therefore I’ve decided to set up two-factor authentication everywhere possible to help protect myself, to some degree, from the uselessness of passwords.

Two-factor authentication essentially means that you use two authentication factors to log in instead of only one. An authentication factor is one of three things: something you know, something you have or something you are. A password is a good example of the first, while a card or cell phone is in the second category.

What this means is that for someone to hijack one of my accounts they will not only need to know my password, they will also need my cell phone to generate a temporary one-time key to log in. My phone can be remotely tracked and locked down in case it’s stolen, and through backed-up recovery keys I will still be able to access my accounts.

It might sound complex and difficult but it really isn’t, and the major security gain is a worthwhile trade off. To enable two-factor authentication you merely have to download an app (like Google Authenticator or Authy), use it to scan a QR code for the account you want secured and then you’re done. The next time you log in on a new computer you open your app, get a key to type in and you’re logged in as usual.

There’s also a fairly comprehensive list of services which support two-factor auth.

On Vagrant

Vagrant enables a developer to isolate their project to a dedicated virtual machine while still coding in the same environment they use for other projects. You can essentially edit your project files in Windows and access the result through Windows while everything is running on Linux without having to do any of the tedious work of setting up and installing a virtual machine.

The cool thing about Vagrant is how the configuration file for the project can be redistributed with the rest of the code base to give other developers access to an exact replica of the original development environment.

The really cool thing about Vagrant is how ridiculously easy it is: they have a guide for setting up a first project which takes about 30 minutes to complete and covers all aspects of setting everything up.
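The whole workflow essentially boils down to a handful of commands (the box name below is just an example, not a requirement):

$ vagrant init hashicorp/precise64    # write a Vagrantfile for an example box
$ vagrant up                          # download the box and boot the VM
$ vagrant ssh                         # open a shell inside the machine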

The major thing that bothers me is that it’s somewhat slow; a virtual machine has a huge overhead compared to running directly on the host machine. It also takes about fifteen minutes to set up a Vagrant box the first time, which is actually negligible compared to the many hours it would take to do it manually but still feels like a long time.

Provisioning could also have been made simpler, but there are a lot of alternatives and even more examples for setting up any imaginable environment so it isn’t really a problem per se.

Fucking sudo

I stumbled across this comment a while ago and thought it was pretty funny, so I wrote a basic one-liner to add the “feature” to my shell. Basically what it does is allow you to write “fucking” instead of “sudo” for the humorous effect of it; example below.

$ make install  
No.  
$ fucking make install

Here’s the code for setting it up. The specified configuration file needs to be changed for it to work in shells other than bash.

$ echo "alias fucking='sudo'" >> ~/.bashrc && . ~/.bashrc

Automatically setting height of textarea to height of its contents on page load

jQuery makes everything so ridiculously simple. To make sure a textarea is automatically resized so it fits its content, one could calculate the number of rows of text and the approximate height of the font and set the height of the textarea to the product of that. Or you could just set the height to the scroll height with jQuery.

<script>
    $(function() {
        $('textarea').height($('textarea').prop('scrollHeight'));
    });
</script>

Admittedly, it’s not a complete solution if you need it on a page with more than one textarea, since it would set every textarea to the scroll height of the first one. Then it goes from a one-liner to a three-liner.

<script>
    $(function() {
        $('textarea').each(function() {
            $(this).height($(this).prop('scrollHeight'));
        });
    });
</script>

Is [language] worth learning?

This is a really short response to a question I’ve stumbled across twice today: “Is [language] worth learning?” All languages have some worth, but they are all good in different areas. It all depends on what you want to do and how much you want to learn about programming.

C is very good if you want to learn how computers work without delving into the inaccessible mess that is assembly programming. However, it takes quite a bit of effort and understanding to do a lot with C.

Java is a good choice if you want to get into computer science and work with algorithms and data structures since the language has a lot of support for this in its standard library. However, I think the language is quite boring and doesn’t leave much room to quickly hack together things.

Python probably offers the best balance between accessibility for beginners and features for more complex projects. It’s also multi-paradigm, which means it supports several different programming styles. If someone is just starting out with programming, Python is probably the way to go and Learn Python the Hard Way is probably the best way to learn the language.

How to put checkboxes in Bootstrap dropdowns

Bootstrap is an extremely useful set of tools which, I personally believe, everyone should know about and use for their own internal projects. It’s ridiculous what a time saver it is, especially combined with Font Awesome to get over 360 free, scalable icons to use with Bootstrap.

Bootstrap also has these amazing JavaScript tools which, for instance, allow you to place a dropdown menu on virtually any element. The only problem with these is that you can’t really put forms in them, since the dropdown closes when you click on it. So I looked over Stack Overflow and Google and found all sorts of elaborate solutions to it; some were pretty good, others not so much.

However, they were all very involved to set up and configure, and generally a lot more than I needed, so I was hesitant to implement them and looked a bit more until I stumbled across an issue on GitHub where someone proposed a better solution.
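The gist of that fix is simply to stop click events inside the menu from bubbling up to the document, where Bootstrap’s handler would otherwise close the dropdown. A minimal version (not the exact code from the issue) looks something like this:

<script>
    $(function() {
        // Keep the dropdown open when clicking inside its menu by
        // preventing the click from reaching Bootstrap's close handler.
        $('.dropdown-menu').on('click', function(e) {
            e.stopPropagation();
        });
    });
</script>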

So I added their JavaScript and some padding, and a scrollbar through CSS for appearances, and got a very simple but still solid solution to my problem. I uploaded my final result to JSFiddle (http://jsfiddle.net/VEKYN/). It’s amazing how much time and effort you can save by just looking a little bit harder at what’s already available.