Creating an Awesome Open Source Community

An active community is probably the most important asset an open source software project can have. Leaders of open source projects should actively work to satisfy the members of their community, because without them the project isn’t worth much. Makes sense, right? I recently came across a PhD thesis called “Factors Influencing Participant Satisfaction with Free and Open Source Projects.” If you’re interested, you can read the entire thesis here. In summary, the research question the thesis addressed was: “What influences community member satisfaction with free and open source software projects?”

The Hypotheses

After many interviews and surveys, the following hypotheses remained valid:

  • High-quality and relatively frequent developer communication leads to higher satisfaction.
  • A positive relationship exists between the amount of participation and participant satisfaction.
  • A positive relationship exists between process openness and participant satisfaction.

There were some others too but these cover the main points.

Ways to Create an Awesome Open Source Community

The thesis ends with several recommendations that will, in theory, help create an awesome open source project and, most importantly, an awesome open source community.

Based on my involvement with several open source projects, both big and small, I agree that a project with these 6 characteristics has a good chance of becoming popular. It all comes down to the user base: in my view there is no better accelerator than a happy and involved user base.

The List

  • The “About” page should include information about what types of contributions are most needed. This page should also clearly explain how community members should go about contributing.

This is pretty self-explanatory. If you want people to help, you need to tell them how.

  • Make sure to acknowledge and celebrate contributions. This makes contributors feel appreciated and motivates them to keep helping.

I’ve made several significant contributions to open source projects, and it’s always a little irritating when a Pull Request is merged without even a quick “Thanks for your help.”

  • Monitor the project’s email discussion list and/or forums and answer questions, particularly those from newcomers, to create a great first experience.

There’s almost nothing more annoying than emailing a list with many members and never getting a reply; I’ve experienced that several times. It really creates a welcoming environment when a member of that project’s community takes a minute to reply to your request.

  • Provide information to the project’s community about future development plans, i.e. a roadmap.

For free and public open source projects there’s little reason for the lead developers to hide their plans. Some might fear that other projects will “borrow” (read: steal) their ideas, but that’s not a good reason. A roadmap shows commitment, dedication, and organization.

  • Provide documentation that is up-to-date and clear, especially for the more complex components.

Reading through documentation that is outdated and written for who knows whom isn’t fun for anyone. Creating a group in your open source community that focuses on writing clean documentation is a brilliant idea.

  • Finally, identify what barriers participants encounter when making a contribution to the project, and take steps to decrease or eliminate them.

Making it easy for people to get involved is important. Don’t assume everyone knows the process for submitting Pull Requests.

What do you think are the best practices for creating successful open source communities?

High Performance Ghost Configuration with NGINX

Ghost is an open source platform for blogging founded by John O’Nolan and Hannah Wolfe. It’s a node.js application and therefore works great in conjunction with nginx. This guide will help you create a high performance nginx virtual host configuration for Ghost.

Ghost is a node.js application that runs on a port. We can configure nginx to proxy requests to that port and cache the responses so we don’t have to rely on Express, the default node web application framework, for every request.

To start, we need to tell nginx what port Ghost is running on. Define an upstream in your domain’s virtual host configuration file.

upstream ghost_upstream {
    server 127.0.0.1:2368;
    keepalive 64;
}

This tells nginx that Ghost is running on 127.0.0.1:2368. The keepalive 64 directive keeps up to 64 idle connections to the upstream open so nginx doesn’t have to open a new connection for every request.

Proxy Cache

We want to cache responses from Ghost so we can avoid proxying to the application for every request. The first step is to set a proxy_cache_path. In your configuration, define the cache. The configuration below allocates a 75 megabyte shared memory zone named STATIC for cache keys, caps the on-disk cache at 512 megabytes, and removes cached files after 24 hours if they haven’t been requested.

proxy_cache_path /var/run/cache levels=1:2 keys_zone=STATIC:75m inactive=24h max_size=512m;

Server Block

Now we can start the configuration for the domain that will serve your Ghost blog. Note: if you’re using SSL/TLS for your blog, you will want to use the configuration towards the end of this guide.

1) Location block for blog page requests:

This configuration caches valid 200 responses for 30 minutes and 404 responses for 1 minute from the previously defined upstream in the STATIC proxy_cache. We also want to ignore and/or hide several headers that Ghost sets, since we will be using our own. In addition to the nginx cache, we cache pages in the browser for 10 minutes with expires 10m;.

location / {
        proxy_cache STATIC;
        proxy_cache_valid 200 30m;
        proxy_cache_valid 404 1m;
        proxy_pass http://ghost_upstream;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
        proxy_ignore_headers Set-Cookie;
        proxy_hide_header Set-Cookie;
        proxy_hide_header X-powered-by;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        expires 10m;
    }

It’s also helpful to add a header to your page responses that tells you whether the request hit the nginx cache. This can be done easily with add_header X-Cache $upstream_cache_status;.
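
To confirm caching is working, you can request a page twice and inspect the X-Cache header; the first response should report MISS and the second HIT (domain.com below is a placeholder for your own domain):

curl -sI http://domain.com/ | grep -i x-cache
curl -sI http://domain.com/ | grep -i x-cache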

2) Location block(s) for static file requests like css, js and images:

We are going to tell nginx where to find static files like css, js and images, since the node.js powered Ghost application is on the same server as nginx. To do this we need four location blocks that point to the images folder, assets folder, public folder, and scripts folder. We will cache these static files with expires max; so they remain cached indefinitely in the user’s browser. This is safe to do since Ghost appends a version query string that updates when node.js is reloaded/restarted.

Note: When changing your Ghost theme you will need to change the alias path in the location /assets nginx block.

location /content/images {
        alias /path/to/ghost/content/images;
        access_log off;
        expires max;
    }
    location /assets {
        alias /path/to/ghost/content/themes/(theme-name)/assets;
        access_log off;
        expires max;
    }
    location /public {
        alias /path/to/ghost/core/built/public;
        access_log off;
        expires max;
    }
    location /ghost/scripts {
        alias /path/to/ghost/core/built/scripts;
        access_log off;
        expires max;
    }

3) Location block for the Ghost admin interface:

The administrative interface should definitely not be cached. The location block below applies to the backend and the signout page. It proxies to the previously defined ghost_upstream backend and sets cache headers to ensure nothing is stored. Most importantly, note that we are not defining any proxy_cache settings.

location ~ ^/(?:ghost|signout) { 
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://ghost_upstream;
        add_header Cache-Control "no-cache, private, no-store, must-revalidate, max-stale=0, post-check=0, pre-check=0";
    }

Full HTTP NGINX Configuration for Ghost

If you’ve followed along you will now end up with a working nginx configuration that looks like this:

server {
   server_name domain.com;
   add_header X-Cache $upstream_cache_status;
   location / {
        proxy_cache STATIC;
        proxy_cache_valid 200 30m;
        proxy_cache_valid 404 1m;
        proxy_pass http://ghost_upstream;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
        proxy_ignore_headers Set-Cookie;
        proxy_hide_header Set-Cookie;
        proxy_hide_header X-powered-by;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        expires 10m;
    }
    location /content/images {
        alias /path/to/ghost/content/images;
        access_log off;
        expires max;
    }
    location /assets {
        alias /path/to/ghost/content/themes/uno-master/assets;
        access_log off;
        expires max;
    }
    location /public {
        alias /path/to/ghost/core/built/public;
        access_log off;
        expires max;
    }
    location /ghost/scripts {
        alias /path/to/ghost/core/built/scripts;
        access_log off;
        expires max;
    }
    location ~ ^/(?:ghost|signout) { 
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://ghost_upstream;
        add_header Cache-Control "no-cache, private, no-store, must-revalidate, max-stale=0, post-check=0, pre-check=0";
    }

}

SSL/TLS Configuration for Ghost Blog

You may want to serve your blog over HTTPS with SSL/TLS. The first thing you should do is update the URL in the Ghost config.js file.
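
Here’s a minimal sketch of the relevant part of config.js, assuming a default Ghost install (everything else in the file stays unchanged):

// config.js
var config = {
    production: {
        url: 'https://domain.com',  // change http:// to https://
        // ...the rest of your production settings stay the same
    }
};
module.exports = config;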

The nginx setup for using SSL/TLS with Ghost requires several additional directives. A full sample configuration is below; I will highlight the important differences.

The first several lines of the nginx configuration below establish and optimize HTTPS connections. You can use SPDY and additional settings like spdy_headers_comp, keepalive_timeout, ssl_session_cache, and OCSP stapling. I’m going to assume you know what those are, since the purpose of this guide is to talk about Ghost.

In the location / block it’s very important that you include proxy_set_header X-Forwarded-Proto https; or else your Ghost blog will get stuck in a redirect loop when it loads. You’ll need the same directive in the location ~ ^/(?:ghost|signout) block.

server {
   server_name domain.com;
   listen 443 ssl spdy;
   spdy_headers_comp 6;
   spdy_keepalive_timeout 300;
   keepalive_timeout 300;
   ssl_certificate_key /etc/nginx/ssl/domain.key;
   ssl_certificate /etc/nginx/ssl/domain.crt;
   ssl_session_cache shared:SSL:10m;  
   ssl_session_timeout 24h;           
   ssl_buffer_size 1400;              
   ssl_stapling on;
   ssl_stapling_verify on;
   ssl_trusted_certificate /etc/nginx/ssl/trust.crt;
   resolver 8.8.8.8 8.8.4.4 valid=300s;
   add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';
   add_header X-Cache $upstream_cache_status;
   location / {
        proxy_cache STATIC;
        proxy_cache_valid 200 30m;
        proxy_cache_valid 404 1m;
        proxy_pass http://ghost_upstream;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
        proxy_ignore_headers Set-Cookie;
        proxy_hide_header Set-Cookie;
        proxy_hide_header X-powered-by;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        expires 10m;
    }
    location /content/images {
        alias /path/to/ghost/content/images;
        access_log off;
        expires max;
    }
    location /assets {
        alias /path/to/ghost/content/themes/uno-master/assets;
        access_log off;
        expires max;
    }
    location /public {
        alias /path/to/ghost/core/built/public;
        access_log off;
        expires max;
    }
    location /ghost/scripts {
        alias /path/to/ghost/core/built/scripts;
        access_log off;
        expires max;
    }
    location ~ ^/(?:ghost|signout) { 
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://ghost_upstream;
        add_header Cache-Control "no-cache, private, no-store, must-revalidate, max-stale=0, post-check=0, pre-check=0";
        proxy_set_header X-Forwarded-Proto https;
    }
}

Questions or comments? Post below!

Resource Hints: Preconnect, Preload, Prerender

I was at a meetup this week where Ilya Grigorik, Web Performance Engineer @ Google, gave a great talk about Resource Hints that you can add to your application or website to improve performance. He specifically covered using preconnect, preload, and prerender. Keep in mind these are currently draft specifications.

One thing I learned was that using dns-prefetch is more or less pointless when preconnect is available: it doesn’t make sense to only resolve the DNS name without actually connecting to the server.

<link rel="dns-prefetch" href="//somehost.com" />
<link rel="preconnect" href="//somehost.com" />
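
Preload and prerender follow the same pattern; here’s a sketch with placeholder URLs, based on the current drafts. preload fetches a specific resource needed by the current page, while prerender fetches and renders an entire page the user is likely to visit next:

<link rel="preload" href="/assets/app.js" as="script" />
<link rel="prerender" href="//somehost.com/next-page.html" />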

Here are the slides with much more information.

Using Fail2ban with Nginx and WordPress

Fail2ban is a popular intrusion prevention tool, written in Python, that protects your server from single-source brute force attacks. By default it watches your SSH service on port 22, but it can do much more: you can find many filters on the web, or write your own, that match a specific set of rules against some log. Fail2ban can also be set up to block the IP addresses of people trying to brute-force their way into your WordPress website.

First, add the following code to functions.php in your WordPress theme. This will return a 403 status for failed login attempts.

function my_login_failed_403() {
    status_header( 403 );
}
add_action( 'wp_login_failed', 'my_login_failed_403' );

Then, create a filter with these rules in /etc/fail2ban/filter.d and name it wordpress-ban.conf.

[Definition]
failregex = .*POST.*(wp-login.php|xmlrpc.php).* 403

The last step is to tell Fail2ban what to do when entries in your log match the filter. With the jail below, Fail2ban will block a user’s IP address for 1 hour (bantime = 3600 seconds) after 5 failed login attempts (maxretry = 5).

[wordpress]
enabled = true
port = http,https
filter = wordpress-ban
logpath = /path/to/nginx/access.log
maxretry = 5
bantime = 3600
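
After saving the jail, restart Fail2ban and confirm the new jail is active (the exact restart command depends on your distribution; fail2ban-client ships with Fail2ban):

sudo service fail2ban restart
sudo fail2ban-client status wordpress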

Easy, right?

Velocity Conference June 2014 Santa Clara, California

I was fortunate to attend the O’Reilly Velocity Conference in Santa Clara, California on June 25 and 26. The Velocity Conference is the premier annual conference about web performance and operations. People from all over the world attend to hear industry leaders talk and learn how to build faster applications and streamline their operations. It was one of the smartest groups of people I’ve ever been around. Well over 2,000 developers, engineers, managers, sys admins, programmers, and other industry experts in one convention center. I also got to know some of the NGINX team from Russia.

The Exhibitor’s Hall was filled with companies from across the web performance industry: Cedexis, of course NGINX, RackSpace, Fastly, Dropbox, CacheFly, Akamai, MaxCDN, Linode, New Relic, and the list goes on. It was great to recognize so many of the companies. I especially enjoyed meeting with some of the leaders at MaxCDN and Cedexis, two companies which sponsor jsDelivr.

The Velocity Conference schedule was jam-packed with talks from leading web performance advocates, engineers and evangelists such as Ilya Grigorik and Patrick Meenan from Google, Andrew Fong from Dropbox, and Steve Souders from Fastly and previously Head Performance Engineer at Google and Chief Performance Officer at Yahoo!.

I spent a lot of time at our NGINX booth but did attend a few sessions too. My favorite session, excluding of course Sarah Novotny’s “Things You Didn’t Know about NGINX” (Sarah is an evangelist and community leader at NGINX), was from Ilya Grigorik. One of his talks was called “Is TLS Fast Yet?” Ilya delved into TLS optimization, which was really interesting. A few of his key topics were:

  • Leveraging CDNs and edge nodes to minimize latency.
  • Reducing and eliminating RTTs with abbreviated handshakes.
  • Reducing computational costs with session resumption.
  • Reducing buffering latency for interactive traffic and maximizing throughput for bulk delivery.
  • Optimizing certificate validation, leveraging certificate pinning and HSTS.
  • Configuration and deployment best practices: enabling False Start, Forward Secrecy, and more.

Overall the Velocity Conference 2014 was a really exciting event. I met many new people in the industry and was introduced to some cool new startups. It was also great to see that almost all companies are hiring. It’s definitely a conference I would hope to attend again one day.

Using Autoptimize Plugin for WordPress Performance

A WordPress developer recently suggested the Autoptimize plugin in a Facebook group I follow. The plugin claims to “speed up your website and helps you save bandwidth by aggregating and minimizing JS, CSS and HTML.” I’ve implemented this plugin on several high-traffic WordPress websites and haven’t run into any issues; in all cases I have seen considerable performance improvements. There are quite a few settings in the Autoptimize plugin, so I will go through the options panel step by step.

HTML Options in Autoptimize

Optimizing HTML with Autoptimize removes line breaks, eliminates white space and strips comments. None of these elements are necessary on a production website, and removing them is, in almost all cases, a recommended optimization tactic. Although the performance benefit is not as significant as with JavaScript and CSS optimization, optimizing your HTML reduces the number of bytes transmitted, which means quicker loading times and less load on your server.
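
As a contrived illustration, markup like the snippet below:

<div>
    <!-- greeting -->
    <p>Hello, world.</p>
</div>

is collapsed into something like:

<div><p>Hello, world.</p></div>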

[Screenshot: Autoptimize HTML options]

JavaScript Options in Autoptimize

The options I recommend for the JavaScript Autoptimize section in the Autoptimize plugin are shown in the image below. Activating the first check box “Optimize JavaScript Code?” combines all of the JavaScript files in your WordPress theme into a single file. The Autoptimize plugin then also minifies this file which is essentially the same as the HTML minification process I described earlier. The combined and minified file is served as the last request of your website so that everything else loads first.

[Screenshot: Autoptimize JavaScript options]

Important: If your WordPress theme has special effects powered by JavaScript, your site may not load as intended. Some ThemeForest.net themes, for example, have JavaScript that makes your text fade or slide on entry. If you tell Autoptimize to load JavaScript last, this effect could be delayed, which can cause an awkward user experience. If your website is not loading correctly, try to find the scripts responsible for the missing effects and list them in the “Exclude scripts from Autoptimize” input. If you opt to check “Force JavaScript in <head>?”, then make sure you also check the “Look for scripts only in <head>?” option.

CSS Options in Autoptimize

Most of the time you won’t have any issues using the default CSS settings in Autoptimize; JavaScript optimization is more likely to cause issues in your WordPress theme than CSS optimization. As with HTML and JavaScript optimization, the “Optimize CSS Code?” option combines your theme’s CSS stylesheets into one and minifies it, i.e. eliminates whitespace, comments and line breaks.

The “Generate data: URIs for images” option converts images that are smaller than a set size into data: URIs. A data: URI is essentially a long base64-encoded string that your browser decodes back into the image. The benefit of this method is that it eliminates HTTP requests, which immediately increases performance and reduces the loading time of your WordPress powered website or application.
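
For example, a small icon that would normally be a separate HTTP request can be embedded directly in the stylesheet. The selector below is hypothetical; the string is a 1x1 transparent GIF:

.icon {
    /* image embedded as a data: URI, so no extra HTTP request is made */
    background-image: url("data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");
}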

The “Look for styles only in <head>?” option is potentially necessary if your WordPress theme has many CSS stylesheets. There is a certain point, which depends on your users’ internet connection speeds, where it’s more beneficial to have 2 medium-sized CSS stylesheets instead of 1 large CSS stylesheet. If you have more than 8 or 9 CSS stylesheets, then I would recommend activating this option and performing some performance analysis tests with WebPagetest, an open source project supported and developed by Google.

I would not recommend activating “Defer CSS loading” in almost all cases. Actually, I can’t really think of a scenario where you would want Autoptimize to do this (got one? let me know in the comments). Activating this option makes Autoptimize load everything else on your page, including images, before the CSS stylesheet(s) is requested. This results in your page being rendered unstyled at first, which will leave your site visitors irritated. Similarly, there is no real reason to inline all your CSS with the “Inline all CSS?” option unless you have less than a few dozen lines of CSS for your entire WordPress site, which is highly unlikely.

[Screenshot: Autoptimize CSS options]

CDN Options in Autoptimize

Inputting a “CDN Base URL” in the Autoptimize plugin can further reduce your page load times and, by extension, increase performance. For example, instead of the CSS URL being http(s)://domain.com/.css it can be http(s)://cdn.domain.com/.css. Serving your CSS and JavaScript over a Content Delivery Network, such as MaxCDN, has many benefits, which I won’t get into here.

[Screenshot: Autoptimize CDN options]

Conclusion

Well, that’s all there is to the Autoptimize WordPress plugin. It’s really improved several medium and large-sized WordPress websites I maintain. I urge you to at least give the plugin a try and of course leave your comments and questions below.