Hexo blog development on Docker

Overview

It’s been over a year since my last post, and I wanted to share some of my experiences using Docker for Hexo blog development. In my current consulting position, I have been working extensively with Docker’s technology stack, streamlining various customers’ integration efforts in the API management realm. This post focuses on how to develop your own Hexo blog with Docker.

Docker

Docker encapsulates your application in virtual, containerized environments, enabling you to deploy and run your applications in their own isolated or clustered domains. All of the application’s run-time OS packages, libraries, and dependencies are included with the application binaries/executables when a Docker container is created. These containers can be deployed to single or multiple hosts for repeatable Continuous Integration/Continuous Deployment environments, for replacing physical infrastructure with virtualized infrastructure, or for isolated application development environments. There are many different use cases for Docker containers, and running a Hexo blog is one of them.

Dockerfile

A Dockerfile is a script that defines all of the various commands for creating an image. The Dockerfile for the Hexo blog is pretty straightforward. I start from the core node:5.3.0-slim image, set the HEXO_SERVER_PORT environment variable, install hexo-cli, expose HEXO_SERVER_PORT, then install the packages via NPM and run the server. I have already pre-defined the hexo-server and hexo-admin plugins in app/package.json, so NPM handles those dependencies.

FROM node:5.3.0-slim

MAINTAINER Chris Page <phriscage@gmail.com>

## set HEXO_SERVER_PORT environment default
ENV HEXO_SERVER_PORT=4000

## update the repositories
RUN apt-get update
## install git for deployment
RUN apt-get install git -y

## install hexo-cli globally
RUN npm install -g hexo-cli

## set the workdir
WORKDIR /app

## expose the HEXO_SERVER_PORT
EXPOSE ${HEXO_SERVER_PORT}

#COPY docker-entrypoint.sh /app/.
#ENTRYPOINT ["/app/docker-entrypoint.sh"]

## npm install the latest packages from package.json and run the hexo server
## TODO put this in an appropriate ENTRYPOINT script
#CMD npm install && hexo clean && hexo server -d -p ${HEXO_SERVER_PORT}
CMD npm install; hexo clean; hexo server -d -p ${HEXO_SERVER_PORT}
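The post assumes the image has already been built and pushed as phriscage/hexo-server; if you are working from the Dockerfile yourself, the build step is a single command (tag it under your own namespace as needed):

```
## build the image from the directory containing the Dockerfile
docker build -t phriscage/hexo-server .

## verify the image was created
docker images phriscage/hexo-server
```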

Runtime

When running a Hexo blog Docker container, you need to specify the local Hexo blog volume directory via -v to mount to the container’s /app directory:

  • -v ~/github.com/phriscage/phriscage.github.io_hexo/app:/app

The port flag, -p, maps your exposed container port to the Docker host:

  • -p $HEXO_SERVER_HOST_PORT:$HEXO_SERVER_CONTAINER_PORT

You can also specify the -e HEXO_SERVER_PORT environment variable to change the exposed container port:

  • -e HEXO_SERVER_PORT=$HEXO_SERVER_CONTAINER_PORT
$ BLOG_DIR=~/github.com/phriscage/phriscage.github.io_hexo; HEXO_SERVER_CONTAINER_PORT=4000; HEXO_SERVER_HOST_PORT=4000; docker run -it --rm -p $HEXO_SERVER_HOST_PORT:$HEXO_SERVER_CONTAINER_PORT -e HEXO_SERVER_PORT=$HEXO_SERVER_CONTAINER_PORT -v $BLOG_DIR/app:/app --name hexo_blog phriscage/hexo-server
npm info it worked if it ends with ok
npm info using npm@3.3.12
npm info using node@v5.3.0
npm info attempt registry request try #1 at 2:54:43 AM
npm http request GET https://registry.npmjs.org/fsevents
npm http 304 https://registry.npmjs.org/fsevents
npm WARN install Couldn't install optional dependency: Unsupported
npm WARN install Couldn't install optional dependency: Unsupported
npm info lifecycle phriscage.github.io@0.1.0~preinstall: phriscage.github.io@0.1.0
npm info linkStuff phriscage.github.io@0.1.0
npm info lifecycle phriscage.github.io@0.1.0~install: phriscage.github.io@0.1.0
npm info lifecycle phriscage.github.io@0.1.0~postinstall: phriscage.github.io@0.1.0
npm info lifecycle phriscage.github.io@0.1.0~prepublish: phriscage.github.io@0.1.0
npm info ok
INFO  Deleted database.
INFO  Hexo is running at http://0.0.0.0:4000/. Press Ctrl+C to stop.

Now that your container is running, you can navigate to http://DOCKER_HOST_IP:PORT/admin in your browser and start blogging!

Let me know if you have any comments or questions.

Best,

Chris

Flask NoSQL Authentication Tutorial - Part II

Overview

This is the second part of a tutorial that provides instructions for how to create an authentication mechanism for a web application utilizing Flask as the Python web framework and Elasticsearch (ES) as the NoSQL data store.

The first part of the tutorial covered the prerequisites, the Main API, the User model, and the Users API end point. In this second part of the tutorial, I will be covering the Flask-Login and session management modifications required for the main API, the User model, and the Auth API.

Once again, feel free to ask any questions below and I’ll be happy to respond!

Flask-Login

Flask-Login provides user session management for basic authentication tasks, such as logging users in and out of your application. You can restrict specific views to authenticated users by adding a decorator to your view routes. For this tutorial, I followed the basic configuration and created a custom user_loader for ES.

Main API

In the Main API, we define the ‘login_manager’ and the ‘load_user’ function for the Flask-Login ‘user_loader’ decorator, which sets the callback for reloading a user from the session. The ‘load_user’ function creates a User object, checks whether the user exists in ES, then returns the User object:

main.py
@login_manager.user_loader
def load_user(email_address):
    try:
        user = User(email_address=email_address)
    except ValueError as error:
        message = str(error)
        logger.warn(message)
        return None
    data = {}
    try:
        data = g.db_client.get('example', user.key)
    except (TransportError, Exception) as error:
        if not getattr(error, 'status_code', None) == 404:
            logger.critical(str(error))
            return None
    if not data.get('found', None):
        message = "'%s' does not exist." % email_address
        logger.warn(message)
        return None
    user.set_values(values=data['_source'])
    return user

Then we define the APP_SECRET_KEY as a global variable, assign it to the main app, and initialize the ‘login_manager’:

app.secret_key = APP_SECRET_KEY
login_manager.init_app(app)
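APP_SECRET_KEY should be a long random value, since Flask signs the session cookie with it. A quick way to generate one with the standard library (a hypothetical helper, not part of the tutorial code):

```python
import binascii
import os

def generate_secret_key(n_bytes=24):
    """ return a hex-encoded random string suitable for app.secret_key """
    ## os.urandom provides cryptographically strong random bytes
    return binascii.hexlify(os.urandom(n_bytes)).decode('ascii')

print(len(generate_secret_key()))  # 48 hex characters
```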

Those are all the changes required in ‘main.py’. We also need to modify the User model, but those changes are minor.

User model

For the User model, we need to add a few functions that are required by Flask-Login. The function doc strings should be self-explanatory.

user.py
def is_authenticated(self):
    """ should just return True unless the object represents a user
        that should not be allowed to authenticate for some reason.
    """
    if self.is_anonymous():
        return False
    return True

def is_active(self):
    """ method should return True for users unless they are inactive, for
        example because they have been banned.
    """
    if not self.values.get('is_active', False):
        return False
    return True

def is_anonymous(self):
    """ method should return True only for fake users that are not supposed
        to log in to the system.
    """
    if not self.values.get('is_anonymous', False):
        return False
    return True

def get_id(self):
    """ return the self.key """
    return self.values[KEY_NAME]
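Because each method keys off the values dict, the behavior is easy to reason about in isolation. Here is a stripped-down stand-in (not the tutorial’s User class) showing how the flags interact:

```python
class StubUser(object):
    """ minimal stand-in mirroring the tutorial's Flask-Login methods """

    def __init__(self, values):
        self.values = values

    def is_anonymous(self):
        ## only True for fake users flagged as anonymous
        return bool(self.values.get('is_anonymous', False))

    def is_authenticated(self):
        ## any non-anonymous user is considered authenticated
        return not self.is_anonymous()

    def is_active(self):
        ## inactive (e.g. banned) users default to False
        return bool(self.values.get('is_active', False))

user = StubUser({'is_active': True})
print(user.is_authenticated(), user.is_active())  # True True
print(StubUser({'is_anonymous': True}).is_authenticated())  # False
```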

Auth API

Now for the Auth API, we create a ‘login’ route for authenticating a user and a ‘logout’ route for unauthenticating one. For the ‘login’ route, we first verify the user submitting the request is valid by checking if the user key exists in ES. Next, we check if the request payload includes the correct password by comparing the password value with the hashed password from the database. Finally, we add the valid user to the session via ‘login_user’. The ‘login’ route is almost identical to the ‘new’ user route from the Users API, but with the password check and ‘login_user’ call added:

auth/views.py
...
logger.debug("'%s' successfully found!", request.json['email_address'])
user.set_values(values=data['_source'])
if not user.check_password(request.json['password']):
    logger.warn("'%s' incorrect password", request.json['email_address'])
    message = "Unknown email_address or bad password"
    return jsonify(message=message, success=False), 400
login_user(user)
message = "'%s' successfully logged in!" % request.json['email_address']
logger.info(message)
...

Once a user is authenticated, the active user is stored in the session. For the ‘logout’ route, we simply call the ‘logout_user()’ method to remove the user id from the current session. Now let’s create a test route that is accessible only to authorized users.

Test API

The Test API (https://github.com/phriscage/flask_elasticsearch_auth_example/blob/master/lib/example/v1/api/test/views.py) includes the ‘login_required’ decorator, which restricts access to authenticated users:

test/views.py
...
@test.route('')
@login_required
def index():
...

Import the new auth and test Blueprints and register them with their URL routes to the app in main.py:

main.py
from example.v1.api.auth.views import auth
app.register_blueprint(auth, url_prefix="/v1/auth")
from example.v1.api.users.views import users
app.register_blueprint(users, url_prefix="/v1/users")
from example.v1.api.test.views import test
app.register_blueprint(test, url_prefix="/v1/test")

Start the application again with ‘main.py’ and run curl -X GET -D - http://127.0.0.1:8000/v1/test. You should receive a 401 unauthorized response:

$ curl -X GET -D - http://127.0.0.1:8000/v1/test
HTTP/1.0 401 UNAUTHORIZED
Content-Type: application/json
Content-Length: 294
Set-Cookie: session=eyJfaWQiOnsiIGIiOiJOalk0TldVMU1XWXdaamsyT0Roa1pqVmxOamN3TnpRNU5tSmpNamsxTVRJPSJ9fQ.B6pYAg.q2HbuYgeleBAGU1kKfDCCnGEugg; HttpOnly; Path=/
Server: Werkzeug/0.9.6 Python/2.6.6
Date: Tue, 20 Jan 2015 01:18:19 GMT

{
  "error": "401: Unauthorized",
  "message": "The server could not verify that you are authorized to access the URL requested.  You either supplied the wrong credentials (e.g. a bad password), or your browser doesn't understand how to supply the credentials required.",
  "success": false
}

We need to first authenticate our test user, store the cookie, then send the request again. Let’s authenticate the user we created in Part I, ‘test@abc.com’, and store the cookies in a file, ‘cookies.txt’:

$ curl -X POST -s -D - -c ~/cookies.txt -H 'Content-Type: application/json' -d '{"email_address": "test@abc.com", "password": "test"}' http://127.0.0.1:8000/v1/auth/login
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 360
Set-Cookie: session=eyJfZnJlc2giOnRydWUsIl9pZCI6eyIgYiI6Ik5qWTROV1UxTVdZd1pqazJPRGhrWmpWbE5qY3dOelE1Tm1Kak1qazFNVEk9In0sInVzZXJfaWQiOiJ0ZXN0QGFiYy5jb20ifQ.B58_Qg.Ez4andKJ01l51Ltd5nDg9EyXzTQ; HttpOnly; Path=/
Server: Werkzeug/0.9.6 Python/2.6.6
Date: Tue, 20 Jan 2015 01:22:10 GMT

{
  "data": {
    "_id": "test@abc.com",
    "_index": "example",
    "_source": {
      "_type": "user",
      "created_at": 1417912435.2168,
      "email_address": "test@abc.com",
      "is_active": true
    },
    "_type": "user",
    "_version": 1,
    "found": true
  },
  "message": "'test@abc.com' successfully logged in!",
  "success": true
}

Boom! We’ve successfully authenticated our test user! You can view ‘cookies.txt’ to see the current session cookie. Now we can use that session cookie to send a request to ‘test’ again: curl -X GET -s -D - -b ~/cookies.txt http://127.0.0.1:8000/v1/test

$ curl -X GET -s -D - -b ~/cookies.txt http://127.0.0.1:8000/v1/test
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 273
Set-Cookie: session=eyJfZnJlc2giOnRydWUsIl9pZCI6eyIgYiI6Ik5qWTROV1UxTVdZd1pqazJPRGhrWmpWbE5qY3dOelE1Tm1Kak1qazFNVEk9In0sInVzZXJfaWQiOiJ0ZXN0QGFiYy5jb20ifQ.B58_6Q.JoOanNrX80o0hiBnrwGllvUg1G8; HttpOnly; Path=/
Server: Werkzeug/0.9.6 Python/2.6.6
Date: Tue, 20 Jan 2015 01:24:57 GMT

{
  "data": {
    "cookies": {
      "session": "eyJfZnJlc2giOnRydWUsIl9pZCI6eyIgYiI6Ik5qWTROV1UxTVdZd1pqazJPRGhrWmpWbE5qY3dOelE1Tm1Kak1qazFNVEk9In0sInVzZXJfaWQiOiJ0ZXN0QGFiYy5jb20ifQ.B58_Qg.Ez4andKJ01l51Ltd5nDg9EyXzTQ"
    }
  },
  "message": "Test",
  "success": true
}

That’s it! There’s not a lot to it. You can use the ‘login_required’ decorator on any view that requires authentication. There are also some session expiration options and custom authentication parameters that are configurable in Flask-Login.

I hope you have found this tutorial helpful and maybe even learned a thing or two about Python, Flask, authentication, etc. Let me know if you have any questions.

Best,

Chris

Flask NoSQL Authentication Tutorial - Part I

Overview

This tutorial provides instructions for how to create an authentication mechanism for a web application utilizing Flask as the Python web framework and Elasticsearch (ES) as the NoSQL data store. Many applications utilize ES as the index/search layer, but I chose ES as the primary database as a proof of concept for both persistent and search data layers. ES can be swapped out with almost any available NoSQL document store.

A basic understanding of *NIX systems, Python, and web applications is required; otherwise you may struggle with some of the concepts and context. If you are new to Flask, I highly recommend checking out Miguel Grinberg’s Flask Mega Tutorial or his newly published Flask book from O’Reilly for a complete Flask application how-to. The User Login tutorial actually inspired me to build this tutorial for a NoSQL data store.

In this first part of the tutorial, I will cover the prerequisites, the main API, the User model, and the Users API end point. If you have any questions, feel free to ask below and I’ll be happy to answer.

Let’s get started!

Prerequisites

Below are the specific prerequisites required to set up the working environment and download the necessary packages and files.

  • linux server: This tutorial is based on the CentOS 6.4 x86_64 base image, so package management (and the command instructions below) is via RPM and Yum. sudo or root privileges are required to install the various system packages. If you prefer Debian, you’ll need to substitute the respective DEB packages and apt-get commands.

ssh username@hostname

  • Elasticsearch: The ES server package is downloaded directly from the ES site. Installation with the default configuration is all that is required to get the service running. You can verify ES is running by executing curl -X GET http://127.0.0.1:9200 or navigating to the URL.
    note that version 1.3.2 is used at the time of writing

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.2.noarch.rpm && yum install elasticsearch-1.3.2.noarch.rpm --nogpgcheck -y
service elasticsearch start

$ curl -X GET http://127.0.0.1:9200
{
  "status" : 200,
  "name" : "Sludge",
  "version" : {
    "number" : "1.3.2",
    "build_hash" : "dee175dbe2f254f3f26992f5d7591939aaefd12f",
    "build_timestamp" : "2014-08-13T14:29:30Z",
    "build_snapshot" : false,
    "lucene_version" : "4.9"
  },
  "tagline" : "You Know, for Search"
}
  • Python: Python 2.6.6 is already included in the base CentOS 6.4, so that version will work. We’ll be using Python virtual environments and Pip to handle the Python libraries and dependencies:

yum install python-virtualenv python-virtualenvwrapper python-pip -y

  • Git: We’ll need to install Git and clone the tutorial source code from my GitHub repository.

yum install git -y
git clone https://github.com/phriscage/flask_elasticsearch_auth_example && cd flask_elasticsearch_auth_example

  • Python libraries: Create a new virtual environment and activate it. Then pull the packages from PyPi using Pip and requirements.txt:

mkvirtualenv flask_elasticsearch_auth_example -r requirements.txt

Now we should have all the required dependencies. :)

Main API

Before we create the primary User model, we need to create the basic Flask app API and verify we can connect to ES. I’m using Flask’s global g module to handle the ES client connection for each request. You can tweak the ES connection pool options for the cluster, but for now the default connection object works. I am using the default_error_handle method to return a standard JSON formatted message for all of the relevant HTTP error codes.

main.py
def connect_db():
    """ connect to elasticsearch """
    try:
        db_client = Elasticsearch()
        #[{'host': ELASTICSEARCH_HOST, 'port': ELASTICSEARCH_PORT}],
        #use_ssl=True,)
        #sniff_on_connection_fail=True,)
    except Exception as error:
        logger.critical(error)
        raise
    return db_client

def create_app():
    """ dynamically create the app """
    app = Flask(__name__)
    app.config.from_object(__name__)

    @app.before_request
    def before_request():
        """ create the db_client global if it does not exist """
        if not hasattr(g, 'db_client'):
            g.db_client = connect_db()

    def default_error_handle(error=None):
        """ create a default json error handle """
        return jsonify(error=str(error), message=error.description,
                       success=False), error.code

    ## handle all errors with json output
    for error in range(400, 420) + range(500, 506):
        app.error_handler_spec[None][error] = default_error_handle

The main.py arguments accept a specific hostname or IP and port number. When you start the application, the output should look like this:

$ ./main.py
2014-12-06 22:10:05,770 INFO werkzeug[8640] : _log :  * Running on http://0.0.0.0:8000/
2014-12-06 22:10:05,770 INFO werkzeug[8640] : _log :  * Restarting with reloader

We can verify it works, along with the default_error_handle, by pulling the base URL: curl -X GET -D - http://127.0.0.1:8000/

$ curl -X GET -D - http://127.0.0.1:8000/
HTTP/1.0 404 NOT FOUND
Content-Type: application/json
Content-Length: 191
Server: Werkzeug/0.9.6 Python/2.6.6
Date: Sun, 07 Dec 2014 00:35:23 GMT

{
  "error": "404: Not Found",
  "message": "The requested URL was not found on the server.  If you entered the URL manually please check your spelling and try again.",
  "success": false
}

Great! Now let’s define our User model and how-to store the user document data in ES.

User model

The User model contains the data structure and validation methods for the user metadata that will be passed from the API.

First, we include the system level modules and two password hash functions from werkzeug. We define what the key or ID attribute name will be for our user document and any additional required and/or valid attributes for the document.

user.py
from __future__ import absolute_import
import time
import re
import logging
from werkzeug.security import generate_password_hash, check_password_hash

KEY_NAME = 'email_address'
REQUIRED_ARGS = (KEY_NAME, 'password',)
VALID_ARGS = REQUIRED_ARGS + ('first_name', 'last_name',)

Instantiation of the class executes private class functions to validate the kwargs against the global VALID_ARGS and REQUIRED_ARGS. It also sets the default and required values for the user document:

user.py
class User(object):
    """ encapsulate the user as an object """

    def __init__(self, **kwargs):
        """ instantiate the class """
        self.key = None
        self.values = {}
        self._validate_args(**kwargs)
        self._set_key(kwargs[KEY_NAME])
        self._set_values()
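The private validators themselves are not shown above; here is a plausible sketch of what _validate_args does, reconstructed from the VALID_ARGS and REQUIRED_ARGS globals defined earlier (my reconstruction, not the repository code):

```python
KEY_NAME = 'email_address'
REQUIRED_ARGS = (KEY_NAME, 'password',)
VALID_ARGS = REQUIRED_ARGS + ('first_name', 'last_name',)

def validate_args(**kwargs):
    """ raise ValueError for unknown or missing user attributes """
    ## reject any attribute not in the whitelist
    for arg in kwargs:
        if arg not in VALID_ARGS:
            raise ValueError("'%s' is not a valid attribute." % arg)
    ## ensure every required attribute is present
    for arg in REQUIRED_ARGS:
        if arg not in kwargs:
            raise ValueError("'%s' is required." % arg)

validate_args(email_address='abc@abc.com', password='abc123')  # passes silently
```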

The set_password and check_password functions are how the model generates a password hash and verifies a plain-text password against a hash. Instead of creating our own hashing algorithms, we use werkzeug’s utilities imported above:

user.py
def set_password(self, password):
    """ set the password using werkzeug generate_password_hash """
    self.values['password'] = generate_password_hash(password)

def check_password(self, password):
    """ check the password using werkzeug check_password_hash """
    if not self.values.get('password', None):
        return None
    return check_password_hash(self.values['password'], password)
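werkzeug’s helpers implement the classic salted-hash pattern: store a salt plus a derived digest, never the plain text, and re-derive the digest on login. The same idea can be sketched with only the standard library (an illustration of the pattern, not what werkzeug actually produces):

```python
import binascii
import hashlib
import os

def hash_password(password, salt=None):
    """ return 'salt$digest' using PBKDF2-HMAC-SHA256 """
    if salt is None:
        ## fresh random salt per password
        salt = binascii.hexlify(os.urandom(8)).decode('ascii')
    digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'),
                                 salt.encode('ascii'), 100000)
    return '%s$%s' % (salt, binascii.hexlify(digest).decode('ascii'))

def check_password(stored, password):
    """ re-derive the digest with the stored salt and compare """
    salt = stored.split('$', 1)[0]
    return hash_password(password, salt) == stored

stored = hash_password('test')
print(check_password(stored, 'test'))   # True
print(check_password(stored, 'wrong'))  # False
```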

There’s not a lot going on in the User model for Part I, but we will expand the functionality in the next tutorial.

Users API

Now that we have our basic User model, let’s define the Users API endpoint that enables us to create a new user in ES. I’m using Flask’s Blueprint, jsonify, request and g modules. I created a ‘users’ Blueprint and added the root ‘/new’ route to create new users via HTTP POST. REST API Tutorial provides a great “resource” for learning appropriate naming syntax. For a truly textbook RESTful interface, one can argue about how a new resource is created (‘/users/new’, ‘/user/new’, or ‘/users’) and whether resource pluralization matters, but I’ll save that discussion for a later date…

The overall logic is straightforward. First, we verify the request content type is ‘application/json’. Next, we create the User model and check the payload. Then we check if the User document exists in ES. Finally, we create a new User document if the User key, email_address, does not exist in ES.

users/views.py
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)) +
                '/../../../../../lib')
from example.v1.lib.user import User, KEY_NAME as USER_KEY_NAME
from flask import Blueprint, jsonify, request, g
from elasticsearch import TransportError
import logging

logger = logging.getLogger(__name__)

users = Blueprint('users', __name__)

@users.route('/new', methods=['POST'])
def new():  ## function name assumed; the def line was elided in the original post
    """ create a user and hash their password

    **Example request:**

    .. sourcecode:: http

        GET /users/new HTTP/1.1
        Accept: application/json
        data: {
            'email_address': 'abc@abc.com',
            'password': 'abc123',
            'first_name': 'abc',
            'last_name': '123'
        }

    **Example response:**

    .. sourcecode:: http

        HTTP/1.1 200 OK
        Content-Type: application/json

    :statuscode 200: success
    :statuscode 400: bad data
    :statuscode 409: already exists
    :statuscode 500: server error
    """
    if not request.data:
        message = "Content-Type: 'application/json' required"
        logger.warn(message)
        return jsonify(message=message, success=False), 400
    try:
        user = User(**request.json)
    except ValueError as error:
        message = str(error)
        logger.warn(message)
        return jsonify(message=message, success=False), 400
    data = {}
    try:
        data = g.db_client.get('example', user.key)
    except (TransportError, Exception) as error:
        if not getattr(error, 'status_code', None) == 404:
            logger.critical(str(error))
            message = "Something broke... We are looking into it!"
            return jsonify(message=message, success=False), 500
    if data.get('found', None):
        message = "'%s' already exists." % user.values[USER_KEY_NAME]
        logger.warn(message)
        return jsonify(message=message, success=False), 409
    try:
        args = {
            'index': 'example',
            'id': user.key,
            'body': user.values,
            'doc_type': user.values['_type']
        }
        data = g.db_client.index(**args)
    except Exception as error:
        message = str(error)
        logger.warn(message)
        return jsonify(message=message, success=False), 500
    message = "'%s' added successfully!" % user.values[USER_KEY_NAME]
    logger.debug(message)
    return jsonify(message=message, success=True), 200

Next we need to import the users Blueprint and register it with the URL route to the app in main.py:

main.py
from example.v1.api.users.views import users
app.register_blueprint(users, url_prefix="/v1/users")

If your ‘main.py’ file is not running, restart it. Finally, let’s test creating a new user, ‘test@abc.com’, against the Users API with curl -X POST -H 'Content-Type: application/json' -d '{"email_address": "test@abc.com", "password": "test"}' http://127.0.0.1:8000/v1/users/new:

$ curl -X POST  -D - -H 'Content-Type: application/json' -d '{"email_address": "test@abc.com", "password": "test"}' http://127.0.0.1:8000/v1/users/new
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 73
Server: Werkzeug/0.9.6 Python/2.6.6
Date: Sun, 07 Dec 2014 00:33:55 GMT

{
  "message": "'test@abc.com' added successfully!",
  "success": true
}

Success!

You’ll notice that if we try to add the same user again, we receive a 409 conflict error:

$ curl -X POST  -D - -H 'Content-Type: application/json' -d '{"email_address": "test@abc.com", "password": "test"}' http://127.0.0.1:8000/v1/users/new
HTTP/1.0 409 CONFLICT
Content-Type: application/json
Content-Length: 70
Server: Werkzeug/0.9.6 Python/2.6.6
Date: Sun, 07 Dec 2014 00:34:01 GMT

{
  "message": "'test@abc.com' already exists.",
  "success": false
}

That’s it for Part I. I’ll follow up in a couple of weeks with Part II, which will utilize Flask-Login to handle user session management.

Best,
Chris

Nginx SSL Configuration

Overview

Nginx is becoming one of the more popular event-driven web servers. As of October 2014, Netcraft reports it is used by roughly 20% of the busiest websites. Setting up SSL should not be a daunting task, so I created a default SSL configuration (adapted from Raymii.org) and a Node.js config here.

You can set up multiple sub-domains in the same server config:

server_name
server_name    abc-dev.sample.com abc.sample.com;

Redirect all HTTP requests to HTTPS permanently (301):

HTTP redirect
if ($scheme != "https") {
    rewrite ^ https://$server_name$request_uri? permanent;
}
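Worth noting: the nginx documentation discourages if/rewrite for this kind of redirect; a commonly recommended alternative is a dedicated port-80 server block that issues the 301 itself. A sketch using the same sample domains:

```nginx
server {
    listen 80;
    server_name abc-dev.sample.com abc.sample.com;
    return 301 https://$server_name$request_uri;
}
```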

Also, redirect any unsupported client browsers:

browser redirect
## redirect ie8 and lower
if ($http_user_agent ~ "MSIE\s([1-8])\.") {
    rewrite ^ /unsupported break;
}

Source.

Best,
Chris

Inspired Prototype Application

Overview

When I was involved with the Invid.io team from 2012-2014, one of the initial proof-of-concept projects I created was an application that provides video product metadata and merchant retailers directly to consumers. The Inspired app (source) gave content creators the ability to organize and sell their products to an audience. In this post, I’m going to provide a high-level overview of the technologies utilized for the application and a few specific examples.

Most of the Python web applications I had built previously used the Django web framework. Since Inspired did not require all of the components and functionality that Django provides out of the box, I decided to try Flask for the project. I had created a few stand-alone APIs using Flask in my full-time position, but not a full-blown application.

Data model

Inspired was designed with a high-level relational data model of Artists -> Videos -> Products -> Retailers. The complete Inspired ERD is below:

I was initially tempted to use a NoSQL data store like Cassandra to handle horizontal scaling in the future, but at the time I had minimal experience with denormalizing and duplicating data to fit the specific queries for the user interface. I decided to go with the de facto standard relational data store, MySQL. Instead of writing raw SQL queries, I used the SQLAlchemy ORM to build the queries and model relationships. SQLAlchemy provides some great documentation on how to build the model classes and their respective relationships. Here’s an example of how the Video model uses both One-to-Many and Many-to-Many relationships in its class:

Video model
## video_products join table used to define the bi-directional
## relationship between Video and Product. Creating a separate class is
## overkill unless additional attributes are required.
video_products = Table('video_products', Base.metadata,
    Column('video_id', Integer(unsigned=True),
           ForeignKey('videos.video_id',
                      name='fk_video_products_video_id', ondelete="CASCADE"),
           index=True, nullable=False),
    Column('product_id', Integer(unsigned=True),
           ForeignKey('products.product_id',
                      name='fk_video_products_product_id', ondelete="CASCADE"),
           index=True, nullable=False),
    mysql_engine='InnoDB',
    mysql_charset='utf8'
)

class Video(Base):
    """ Attributes for the Video model. Custom MapperExtension declarative for
        before insert and update methods. The migrate.versioning api does not
        handle sqlalchemy.dialects.mysql for custom column attributes, e.g.
        INTEGER(unsigned=True), so they need to be modified manually.
    """

    __tablename__ = 'videos'
    __table_args__ = {
        'mysql_engine': 'InnoDB',
        'mysql_charset': 'utf8'
    }
    ## mapper extension declarative for before insert and before update
    __mapper_args__ = { 'extension': BaseExtension() }

    id = Column('video_id', Integer(unsigned=True), primary_key=True)
    name = Column(String(120), unique=True, index=True, nullable=False)
    image_url = Column(String(2083))
    video_sources = relationship("VideoSource", backref="video")
    scenes = relationship("Scene", backref="video")
    products = relationship("Product", secondary="video_products",
                            backref="videos")
    created_at = Column(DateTime(), nullable=False)
    updated_at = Column(DateTime(), nullable=False)

When I was building the data models, I wanted a Django-like feature for auto-updating the DateTime fields whenever a row is created or updated (auto_now). SQLAlchemy 0.7.8 did not have this ability, but you could create custom extensions for a SQLAlchemy model through mapper_args. I was able to implement auto_now by extending MapperExtension:

BaseExtension
class BaseExtension(MapperExtension):
    """ Base extension class for all entities """

    def before_insert(self, mapper, connection, instance):
        """ set the created_at """
        datetime_now = datetime.datetime.now()
        instance.created_at = datetime_now
        if not instance.updated_at:
            instance.updated_at = datetime_now

    def before_update(self, mapper, connection, instance):
        """ set the updated_at """
        instance.updated_at = datetime.datetime.now()
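Because the hooks only touch the instance, the timestamp logic can be exercised without a database or SQLAlchemy at all — here mapper and connection are stubbed with None (test scaffolding, not application code):

```python
import datetime

class TimestampExtension(object):
    """ standalone version of the before_insert/before_update hooks """

    def before_insert(self, mapper, connection, instance):
        ## created_at and updated_at start equal on insert
        datetime_now = datetime.datetime.now()
        instance.created_at = datetime_now
        if not getattr(instance, 'updated_at', None):
            instance.updated_at = datetime_now

    def before_update(self, mapper, connection, instance):
        ## only updated_at moves forward on update
        instance.updated_at = datetime.datetime.now()

class Row(object):
    created_at = None
    updated_at = None

row = Row()
TimestampExtension().before_insert(None, None, row)
print(row.created_at == row.updated_at)  # True
TimestampExtension().before_update(None, None, row)
print(row.updated_at >= row.created_at)  # True
```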

Schema Migrations

I was familiar with Ruby on Rails schema migrations and had used a snippet of the Rails migration functionality extensively with Django (South was not mature yet). I decided to give Alembic a try since it has the ability to auto-generate migrations from the SQLAlchemy models. There were some gotchas with the 0.6.0 release, but overall I think it is comparable to Rails migrations, e.g.:

  • explicitly importing sqlalchemy.dialects.mysql.INTEGER for unsigned values
  • version filename length had a fixed limit

I was also able to seed some initial test data (outside unit testing) in a few migrations.

Unit testing

I used Python’s unittest library to test both the SQLAlchemy models and the Flask API end points. For each model and API test case class, I duplicated the MySQL schema and ran the migrations to ensure a clean environment. The nosetests performance was not great, but utilizing MySQL over SQLite provided a more production-like environment for test simulation.

unittest
$ nosetests -s -x
...........................................................................
----------------------------------------------------------------------
Ran 75 tests in 5.823s

OK

Feel free to check out the source and let me know if you have any questions.

Best,
Chris