Gatsby + Cognito Authentication
https://www.bengladwell.com/gatsby-site-with-authentication/ (16 May 2021)

I create videos that I don't want to upload to YouTube or Vimeo. There's nothing sketchy about my videos; I just don't want to share my family videos with the world, nor do I need Google's machine learning dissecting my kids into advertising data points. So, I want to self-host.

Also, I want to share these videos easily with my extended family, and it needs to be easy for them too. Not everyone in my family gets excited by technical challenges, so I need a simple authentication layer. I also want my video hosting site to render differently for authenticated and unauthenticated users. This is because I want Open Graph meta headers to give information about a page on the site, even if the page does not render restricted content to unauthenticated users. In other words, I want links to my content that are posted in social media and messaging apps to be descriptive and interesting. Like so:

[Image: example link preview built from the page's Open Graph metadata]

Finally, I want to ditch the traditional server approach that I have used in the past and leverage AWS services to build my video hosting site serverless.

The Plan

  • Use Gatsby to generate static pages for the site. I like React. I sorta like GraphQL. Gatsby seemed like a useful thing to learn.
  • Use AWS Cognito to manage users and to allow users to log in with Facebook.
  • Somehow manage a list of allowed users. Anyone could attempt to log in with Facebook, but only users on the list would actually succeed.
  • Use client-side security to show unauthenticated users a restricted version of the site (see the sketch after this list). This is obviously not secure, so...
  • Restrict all access to the actual videos to authenticated users. Even if a malicious user circumvented the JavaScript, they still wouldn't be able to access the stuff that I actually want to keep private.
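
Here is a rough sketch of that client-side gate, just to make the idea concrete. The isLoggedIn helper and the component are hypothetical, not the site's actual code, but the shape is the same: unauthenticated visitors still get a real, crawlable page with Open Graph metadata, just not the video.

import React from 'react'
import { isLoggedIn } from '../services/auth' // hypothetical helper that checks for a valid Cognito session

export default function RestrictedVideo ({ title, videoUrl }) {
  if (!isLoggedIn()) {
    // Unauthenticated users see a placeholder instead of the player
    return <p>Log in to watch {title}.</p>
  }

  return <video controls src={videoUrl} />
}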

[Diagram: The Plan]

This proved to be possible with AWS, and also quite a bit more complicated than I imagined. As I worked through the problem, I discovered that AWS is both over-documented and under-documented. AWS services are always completely documented with respect to available API endpoints, parameters, allowed values, etc. But it is often the purpose of a service, or how it is intended to be incorporated into larger projects, that is missing. I think AWS documentation is teleologically under-described. ;)

But I eventually figured it out. I figured out the difference between AWS Cognito User Pools and Identity Pools. I figured out why one should secure content with signed cookies rather than with IAM permissions. I figured out how to use Cognito's various tokens. I figured out the separation of concerns between API Gateway, Lambda, Cognito, and CloudFront. And I figured out how to manage it all with CloudFormation - a good step for learning how to operate with infrastructure as code. You can see the whole project on GitHub.

I configured my Cognito User Pool to allow sign-ups and logins from a third-party identity provider: Facebook. Additionally, because you can configure your user pool to invoke a Lambda function on certain triggers, I was able to use a Lambda function to ensure that the user is on my allow list.
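
Such a trigger can be tiny. Here is a minimal sketch of a "pre sign-up" Lambda trigger that enforces an allow list; the ALLOWED_EMAILS environment variable and the error message are assumptions for illustration, not the project's actual code.

exports.handler = async (event) => {
  // Hypothetical env var: comma-separated list of permitted email addresses
  const allowed = (process.env.ALLOWED_EMAILS || '')
    .split(',')
    .map((address) => address.trim().toLowerCase())

  const email = (event.request.userAttributes.email || '').toLowerCase()

  if (!allowed.includes(email)) {
    // Throwing causes Cognito to reject the sign-up attempt
    throw new Error('User is not on the allow list')
  }

  // Returning the event unchanged lets the sign-up proceed
  return event
}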

My Gatsby content is stored in S3 and distributed through a CloudFront distribution. This distribution is public. I created a second CloudFront distribution to serve the video content. This distribution requires that requests include signed cookies. The user obtains those signed cookies by making a request to an API Gateway endpoint. That request must include an Authorization header containing the access token from Cognito. The API Gateway endpoint integrates with a Lambda function that creates the signed cookies.
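
The heart of that signed-cookie Lambda function is small. A minimal sketch, assuming a Lambda proxy integration and that the key pair ID, private key, and video domain arrive through hypothetical KEY_PAIR_ID, PRIVATE_KEY, and VIDEO_DOMAIN environment variables:

const AWS = require('aws-sdk')

exports.handler = async () => {
  const signer = new AWS.CloudFront.Signer(process.env.KEY_PAIR_ID, process.env.PRIVATE_KEY)

  // Custom policy: allow access to anything on the video distribution for one hour
  const policy = JSON.stringify({
    Statement: [{
      Resource: `https://${process.env.VIDEO_DOMAIN}/*`,
      Condition: {
        DateLessThan: { 'AWS:EpochTime': Math.floor(Date.now() / 1000) + 60 * 60 }
      }
    }]
  })

  const cookies = signer.getSignedCookie({ policy })

  // Hand the three CloudFront-* cookies back to the browser as Set-Cookie headers
  return {
    statusCode: 200,
    multiValueHeaders: {
      'Set-Cookie': Object.entries(cookies).map(
        ([name, value]) => `${name}=${value}; Domain=${process.env.VIDEO_DOMAIN}; Path=/; Secure; HttpOnly`
      )
    },
    body: ''
  }
}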

And the whole thing is described in a CloudFormation template.  The template includes 31 AWS resources! Many of these are access-related. Most resources need to be explicitly given access to the other resources with which they communicate. Access is granted by creating more resources.

I also used a handful of scripts to orchestrate creating the CloudFormation stack from the template and tearing it down. One particularly important function of the setup script is creating a key pair with OpenSSL, passing the private key to the signed cookie lambda function and the public key to the video CloudFront distribution.
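
The project's scripts use OpenSSL for this step, but purely for illustration, roughly the same key pair can be generated with Node's built-in crypto module (the file names here are placeholders):

const { generateKeyPairSync } = require('crypto')
const { writeFileSync } = require('fs')

const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,                                  // CloudFront expects a 2048-bit RSA key
  publicKeyEncoding: { type: 'spki', format: 'pem' },   // goes to the video CloudFront distribution
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' }  // goes to the signed-cookie Lambda function
})

writeFileSync('video-key-public.pem', publicKey)
writeFileSync('video-key-private.pem', privateKey)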

Here are a couple diagrams:

Resource Diagram

[Image: resource diagram]

UML Sequence Diagram

[Image: UML sequence diagram]

Summary

This was definitely not the simplest way to rig up a private video hosting site. But it worked and I learned a lot about Gatsby, AWS, and infrastructure as code.

Easy Video Encoding with AWS
https://www.bengladwell.com/aws-video-encoding/ (19 Apr 2021)

The following CloudFormation template automates video encoding. It creates a stack that provides the following workflow:

  1. A user uploads a video file to the /originals directory of an S3 bucket.
  2. A Lambda function is notified of the newly uploaded file and starts AWS MediaConvert jobs to encode the video in both DASH and HLS.
  3. The resulting encoded files are stored in the /assets directory of the same S3 bucket.

A more detailed explanation follows the template.

I was calling this setup "transcodekit" as I worked on it. Feel free to ignore that label wherever you see it. :)

AWSTemplateFormatVersion: 2010-09-09

Parameters:
  BucketName:
    Type: String
    Default: 'transcodekit'

Resources:
  TranscodekitLambdaPolicy:
    Type: 'AWS::IAM::ManagedPolicy'
    Properties:
      Description: Provides necessary access to MediaConvert and CloudWatch logs
      ManagedPolicyName: !Join
        - '-'
        - - !Ref 'AWS::Region'
          - TranscodekitLambdaExecutor
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - mediaconvert:CreateJob
              - mediaconvert:DescribeEndpoints
            Resource:
              - '*'
          - Effect: Allow
            Action:
              - 'logs:CreateLogGroup'
              - 'logs:CreateLogStream'
              - 'logs:PutLogEvents'
            Resource:
              - '*'
          - Effect: Allow
            Action:
              - 'iam:PassRole'
            Resource:
              - !GetAtt MediaConvertRole.Arn

  LambdaExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: LambdaExecution
      Description: Allows Transcodekit lambda function to start MediaConvert job
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      ManagedPolicyArns:
        - !Ref TranscodekitLambdaPolicy

  TranscodeVideoFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: TranscodeVideo
      Description: Sends uploaded S3 object to MediaConvert for transcoding
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Environment:
        Variables:
          ROLE: !GetAtt MediaConvertRole.Arn
      Code:
        ZipFile: |
          const path = require('path')
          const MediaConvert = require('aws-sdk/clients/mediaconvert')

          exports.handler = async function(event, context, cb) {
            const mediaConvert = new MediaConvert({
              apiVersion: '2017-08-29'
            })
            const s3Record = event.Records[0].s3
            const { base: fileName, name: title } = path.parse(s3Record.object.key)

            try {
              const { Endpoints: [{ Url: endpoint }]} = await mediaConvert.describeEndpoints().promise()
              mediaConvert.endpoint = endpoint

              const hlsResponse = await mediaConvert.createJob({
                Role: process.env.ROLE,
                JobTemplate: 'System-Ott_Hls_Ts_Avc_Aac',
                Settings: {
                  Inputs: [{
                    FileInput: `s3://${s3Record.bucket.name}/${s3Record.object.key}`,
                    AudioSelectors: {
                      'Audio Selector 1': {
                        Offset: 0
                      }
                    },
                  }],
                  OutputGroups: [{
                    OutputGroupSettings: {
                      Type: 'HLS_GROUP_SETTINGS',
                      HlsGroupSettings: {
                        Destination: `s3://${s3Record.bucket.name}/assets/${title}/hls/`
                      }
                    }
                  }]
                },
              }).promise()

              const dashResponse = await mediaConvert.createJob({
                Role: process.env.ROLE,
                JobTemplate: 'System-Ott_Dash_Mp4_Avc_Aac',
                Settings: {
                  Inputs: [{
                    FileInput: `s3://${s3Record.bucket.name}/${s3Record.object.key}`,
                    AudioSelectors: {
                      'Audio Selector 1': {
                        Offset: 0
                      }
                    },
                  }],
                  OutputGroups: [{
                    OutputGroupSettings: {
                      Type: 'DASH_ISO_GROUP_SETTINGS',
                      DashIsoGroupSettings: {
                        Destination: `s3://${s3Record.bucket.name}/assets/${title}/dash/`
                      }
                    }
                  }]
                },
              }).promise()

              cb(null, [hlsResponse, dashResponse])
            } catch (e) {
              cb(e.message)
            }
          }
      Runtime: nodejs12.x

  S3Bucket:
    Type: 'AWS::S3::Bucket'
    DependsOn:
      - S3ExecutionPermission
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function: !GetAtt TranscodeVideoFunction.Arn
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: 'originals/'

  S3ExecutionPermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !GetAtt TranscodeVideoFunction.Arn
      Action: lambda:InvokeFunction
      Principal: s3.amazonaws.com
      SourceAccount: !Ref 'AWS::AccountId'
      SourceArn: !Sub 'arn:aws:s3:::${BucketName}'

  TranscodekitMediaConvertPolicy:
    Type: 'AWS::IAM::ManagedPolicy'
    Properties:
      ManagedPolicyName: !Join
        - '-'
        - - !Ref 'AWS::Region'
          - TranscodekitMediaConverter
      Description: Provides access to S3 for MediaConvert transcode jobs
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - 's3:PutObject'
            Resource:
              - !Sub 'arn:aws:s3:::${BucketName}/assets/*'
          - Effect: Allow
            Action:
              - 's3:GetObject'
            Resource:
              - !Sub 'arn:aws:s3:::${BucketName}/originals/*'

  MediaConvertRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: MediaConvertExecution
      Description: Allows MediaConvert to gain access to S3
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - mediaconvert.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: "/"
      ManagedPolicyArns:
        - !Ref TranscodekitMediaConvertPolicy

Getting the video from S3 to MediaConvert

At the time of writing, S3 does not have a notification that will send an uploaded file directly to MediaConvert. So, we'll have to use a Lambda function. S3 will notify our Lambda function of the file and Lambda will kick off the MediaConvert encoding jobs.

This general setup is relatively simple. The tricky part was the IAM roles and managed policies required to give the different resources permission to interact. The Lambda function has its own role, but it must pass a different role to the MediaConvert jobs it initiates. That passed role gives the MediaConvert jobs permission to do what they need to do: read from and write to S3.

Encoding the Video

We use the System-Ott_Hls_Ts_Avc_Aac and System-Ott_Dash_Mp4_Avc_Aac job templates to simplify the encoding settings. These templates produce HLS- and DASH-encoded video, respectively, in a range of resolutions and bitrates.

The jobs themselves are instructed where to read the original files in S3 and where to write the encoded assets when they are done.

Cost

For a 5-minute video, encoding in both HLS and DASH at the variety of resolutions and bitrates provided by the aforementioned MediaConvert templates costs $2-$3.

Happy encoding!

Testing React Native Components with Mocha
https://www.bengladwell.com/testing-react-native-components-with-mocha/ (15 Oct 2018)

Testing for React Native components is broken


This week, I generated a React Native app with react-native-cli v2.0.1. The resulting Jest-based testing setup does not function.

TL;DR

You can get a nice Mocha testing setup working for React Native using a custom Mocha config and react-native-mock-render. See below for the modules you'll need to add and the configuration.

The problem

Using react-native-cli v2.0.1, I ran react-native init testproj to generate a clean React Native project. It spit out a project directory with React Native v0.57.3, Jest v23.6.0, and metro-react-native-babel-preset v0.48.1. I then added a simple test called App.test.js like so:

describe('a test test', () => {
  it('is true', () => {
    expect(true).toBe(true);
  });
});

However, running yarn test results in:

$ jest
 FAIL  ./App.test.js
  ● Test suite failed to run

    Couldn't find preset "module:metro-react-native-babel-preset" relative to directory "/Users/bgladwell/repos/testproj"

      at node_modules/babel-core/lib/transformation/file/options/option-manager.js:293:19
          at Array.map (<anonymous>)
      at OptionManager.resolvePresets (node_modules/babel-core/lib/transformation/file/options/option-manager.js:275:20)
      at OptionManager.mergePresets (node_modules/babel-core/lib/transformation/file/options/option-manager.js:264:10)
      at OptionManager.mergeOptions (node_modules/babel-core/lib/transformation/file/options/option-manager.js:249:14)
      at OptionManager.init (node_modules/babel-core/lib/transformation/file/options/option-manager.js:368:12)
      at File.initOptions (node_modules/babel-core/lib/transformation/file/index.js:212:65)
      at new File (node_modules/babel-core/lib/transformation/file/index.js:135:24)
      at Pipeline.transform (node_modules/babel-core/lib/transformation/pipeline.js:46:16)

Test Suites: 1 failed, 1 total
Tests:       0 total
Snapshots:   0 total
Time:        0.259s, estimated 1s
Ran all test suites.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

There is some problem with how Jest and Babel are (not) working together. In this case, Jest's out-of-the-box configuration is working against us. Dependencies for Jest and Metro are apparently causing some conflict between Babel 6 and Babel 7, but it's quite difficult to isolate which packages are causing the conflict.

There are open issues for React Native and Metro, but no definitive solutions.

A proposal in both issue threads is to change the babel preset used in .babelrc from metro-react-native-babel-preset to react-native. It doesn't surprise me that this works for some people in some situations, but I don't think it's a good idea. Here's why:

The react-native preset is short for babel-preset-react-native. As you might guess, this is a Babel preset package for working with react native code. What is less obvious is that babel-preset-react-native depends on Babel 6 and has been superseded by metro-react-native-babel-preset, which depends on Babel 7. So to regress back to babel-preset-react-native means forcing our project to continue to depend on Babel 6, something that the react-native maintainers have decided against. Also, we have to worry about more than Jest here; Metro (the React Native bundler) depends on Babel 7. It seems unwise to have our tests compiled with Babel 6 while our development and production code is compiled by Babel 7.

Not really sure how to fix Jest

There may be some way to resolve this issue using Jest. I'm just not sure how, and judging by the React Native and Metro issues, neither is anyone else.

But, I am able to get tests working using Mocha. Here's how:

Mocha

Let's add Mocha and Enzyme to our project so that we can test React components. We also add Chai for assertions. Of course, we'll have to add the Enzyme React 16 adapter. We will configure Enzyme to use the adapter below.

yarn add -D mocha chai enzyme enzyme-adapter-react-16

Next, update App.test.js so that it does something sorta like a real component test:

import React from 'react';
import Enzyme, { shallow } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';
import { Text } from 'react-native';
import { expect } from 'chai';

import App from './App';

Enzyme.configure({ adapter: new Adapter() });

describe('<App>', () => {
  let wrapper;

  beforeEach(() => {
    wrapper = shallow(<App />);
  });
  it('renders default Text', () => {
    expect(wrapper.find(Text)).to.have.lengthOf(3);
  });
});

Our test assumes that App.js renders a simple component with three <Text> component children, which is how react-native init generated it.

Ok, let's try to run our test:

node_modules/.bin/mocha App.test.js

No dice. We get this error:

...
App.test.js:1     
(function (exports, require, module, __filename, __dirname) { import { shallow } from 'enzyme';      
                                                              ^^^^^^                                 

SyntaxError: Unexpected token import
...

That makes sense. We need Babel to compile those import statements, but Babel is not yet involved. Let's change that:

If you check Babel's documentation for how to use Babel with Mocha, it says to do this:

mocha --require babel-register

The --require option loads an npm module at runtime - in this case it is telling Mocha to compile everything with Babel. But this is incorrect in our case! These instructions are out of date. babel-register is the Babel 6 version of what we want to do. We actually need to do:

mocha --require @babel/register

@babel/register is the Babel 7 version. Let's try it:

node_modules/.bin/mocha --require @babel/register  App.test.js

Oh no what's this? Another error:

module.js:491                                     
    throw err;                                    
    ^                                             

Error: Cannot find module 'Platform'
...

Mocking native modules

This error is telling us that Node is not able to load Platform. This is because Platform is a native module provided by React Native - i.e. a non-JavaScript module written in Objective-C or Java. Happily, this problem is solvable by mocking the native modules. If you Google around, most solutions will direct you to a package called react-native-mock. I'm sure this package worked at some point, but it is no longer maintained, and it does not work with React 16.

However, the good people at Root Insurance have forked the project and improved it. Let's use their module: react-native-mock-render.

yarn add -D react-native-mock-render

Now what happens if we also load react-native-mock-render with our tests?

node_modules/.bin/mocha --require @babel/register --require react-native-mock-render/mock App.test.js

OMG! IT WORKED!

Compiling node_modules

If the above is working for you, you're good to go. However, it wasn't enough for my project.

Let's add a fairly large dependency to our project: Native Base. This will bring in a host of other modules.

yarn add native-base

Now add a Native Base component to the App.js file that was generated by react-native init. It should look something like this:

import React, {Component} from 'react';           
import {Platform, StyleSheet, Text, View} from 'react-native';                                       
import { Spinner } from 'native-base';            

const instructions = Platform.select({            
  ios: 'Press Cmd+R to reload,\n' + 'Cmd+D or shake for dev menu',                                   
  android:                                        
    'Double tap R on your keyboard to reload,\n' +                                                   
    'Shake or press menu button for dev menu',    
});                                               

type Props = {};                                  
export default class App extends Component<Props> {                                                  
  render() {                                      
    return (                                      
      <View style={styles.container}>             
        <Spinner />                               
        <Text style={styles.welcome}>Welcome to React Native!</Text>                                 
        <Text style={styles.instructions}>To get started, edit App.js</Text>                         
        <Text style={styles.instructions}>{instructions}</Text>                                      
      </View>                                     
    );                                            
  }                                               
}                                                 

const styles = StyleSheet.create({                
  container: {                                    
    flex: 1,                                      
    justifyContent: 'center',                     
    alignItems: 'center',                         
    backgroundColor: '#F5FCFF',                   
  },                                              
  welcome: {                                      
    fontSize: 20,                                 
    textAlign: 'center',                          
    margin: 10,                                   
  },                                              
  instructions: {                                 
    textAlign: 'center',                          
    color: '#333333',                             
    marginBottom: 5,                              
  },                                              
});

All we added was
import { Spinner } from 'native-base';

and
<Spinner />

Now what happens when we run our test?

node_modules/.bin/mocha --require @babel/register --require react-native-mock-render/mock App.test.js

New errors!

...
node_modules/native-base-shoutem-theme/index.js:1                    
(function (exports, require, module, __filename, __dirname) { import connectStyle from "./src/connectStyle";                                                                                               
                                                              ^^^^^^                                 
                                                                                                     
SyntaxError: Unexpected token import
...

Wait, what? This looks like Babel 7 is no longer doing its job. Why is "import" an unexpected token?

The answer is that Babel 7 no longer compiles node_modules by default. This isn't necessarily bad; if you don't need it, compiling everything in node_modules can add a LOT of overhead to your tests.

But we apparently DO need Babel 7 to compile the modules under node_modules that ship with ES2015 assets. To make that happen, we're going to create a new Mocha configuration that we load at runtime. As a bonus, we can put some of our general test config stuff (like Enzyme's adapter config) in there too.

Create config/mocha.js

const path = require('path');
const Enzyme = require('enzyme');
const Adapter = require('enzyme-adapter-react-16');

Enzyme.configure({ adapter: new Adapter() });

require('react-native-mock-render/mock');

// This file lives in config/, so resolve node_modules relative to the project root
const nodeModules = path.join(__dirname, '..', 'node_modules');

require('@babel/register')({
  presets: [require('metro-react-native-babel-preset')],
  ignore: [
    function (filepath) {
      // Packages under node_modules that ship untranspiled ES2015 code
      const packages = [
        'native-base-shoutem-theme',
      ];
      if (packages.some(p => filepath.startsWith(path.join(nodeModules, p)))) {
        return false;
      }
      // Skip everything else under node_modules; compile our own code
      return filepath.startsWith(nodeModules);
    },
  ],
});

As you can see, at the top of the file we configure Enzyme with the React 16 adapter, so we can remove that stuff from our test file. We also pull in react-native-mock-render, so we can now omit that from the command line when running our tests.

Then, we require @babel/register (the same module that we were loading with --require on the command line) and pass it a configuration object.

presets: [require('metro-react-native-babel-preset')]

This specifies the same Babel preset that is listed in the .babelrc file generated by react-native init. As you may recall, it is the new version of babel-preset-react-native that uses Babel 7.

Next comes the ignore array. You can find documentation for how it works in Babel's options documentation. In our case, we are providing a function that returns false for anything we do want Babel to compile and true for anything it should skip.

The function says: compile anything in the packages array, skip everything else under node_modules, and compile everything outside node_modules. In our case we want to compile native-base-shoutem-theme, the module that produced the error above.

We could simply set ignore to an empty array, but that would mean Babel compiles everything, including all of node_modules. That seems unnecessarily slow. Using the ignore function above, we can simply add the name of any module that ships code with ES2015 import semantics.

Run the new config like this:

node_modules/.bin/mocha --require config/mocha.js App.test.js

And there you go! Working React Native component tests with Mocha and Babel 7.

Summary

We changed the files and commands we used throughout this post. Here are the final versions:

The example test:

import React from 'react';
import { shallow } from 'enzyme';
import { Text } from 'react-native';
import { expect } from 'chai';

import App from './App';

describe('<App>', () => {
  let wrapper;

  beforeEach(() => {
    wrapper = shallow(<App />);
  });

  it('renders default Text', () => {
    expect(wrapper.find(Text)).to.have.lengthOf(3);
  });
});

The Mocha configuration file:

const path = require('path');
const Enzyme = require('enzyme');
const Adapter = require('enzyme-adapter-react-16');

Enzyme.configure({ adapter: new Adapter() });

require('react-native-mock-render/mock');

// This file lives in config/, so resolve node_modules relative to the project root
const nodeModules = path.join(__dirname, '..', 'node_modules');

require('@babel/register')({
  presets: [require('metro-react-native-babel-preset')],
  ignore: [
    function (filepath) {
      // Packages under node_modules that ship untranspiled ES2015 code
      const packages = [
        'native-base-shoutem-theme',
      ];
      if (packages.some(p => filepath.startsWith(path.join(nodeModules, p)))) {
        return false;
      }
      // Skip everything else under node_modules; compile our own code
      return filepath.startsWith(nodeModules);
    },
  ],
});

We added the following modules to make all this work:

yarn add -D mocha chai enzyme enzyme-adapter-react-16 react-native-mock-render

To run a test:

node_modules/.bin/mocha --require config/mocha.js App.test.js

You can (and should) of course put that in an npm script, which lets you omit the node_modules/.bin/ prefix, and change App.test.js to something like **/*.test.js.
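
For example, the scripts section of package.json might look like this (a sketch; adjust the glob to wherever your tests live):

{
  "scripts": {
    "test": "mocha --require config/mocha.js '**/*.test.js'"
  }
}

Then yarn test (or npm test) runs the whole suite.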

P.S.

Snapshot testing with Mocha seems to work with snap-shot-it and enzyme-to-json. So far, I'm quite happy with Mocha as my React Native test framework.

Lessons from Rails: Focus on State in Tests
https://www.bengladwell.com/lessons-from-rails-3-focus-on-state-in-tests/ (28 Aug 2018)

import { lessons } from 'rails'; // part 3

Bringing techniques used in Rails to Node

Better late than never. I'm happy, if not surprised, to be learning so much from Rails in 2018.

This is the third post in a series about techniques I found in Rails that were so good, they needed to be ported to my Node projects.

Technique 3 - In tests, focus on state

This is another technique gleaned from RSpec, a BDD test framework commonly used as an alternative to Rails' default testing framework, Minitest. RSpec's let expression allows you to abstract setup code in a convenient way. I'll try to illustrate by way of example, but first...

A word of caution: this technique may not agree with your testing sensibilities

In my experience, smart people often disagree on test readability and structure. If you like your tests to be extremely readable, even at the expense of repeating yourself, this technique is probably not for you. If, on the other hand, you don't mind a bit of abstraction in your tests to make things more concise, keep reading. For me, readability is important, but I don't mind that my test files feel more like programs and less like stories. I like the balance that RSpec strikes and let is a big part of that.

A quick explanation of RSpec's let

RSpec's let is, simply put, another way of binding the result of an expression to a variable. You might see let expressions like this:

let(:avg_chicken_weight) { 1.2 }
let(:num_chickens) { 50 }
let(:total_chicken_weight) { avg_chicken_weight * num_chickens }

You can see that let expressions can reference other let expressions.

let expressions often serve similar purposes to before blocks; they contain setup code necessary for the initial conditions of your tests. Some of the advantages of let are explained nicely in this Stack Overflow answer.


The real value of let

But, in my opinion, the real value of let is that it allows you to focus on the state that describes your test conditions, rather than the steps required to create that state.

For example, let's say we're setting up an integration test for some classes in a blog system. Specifically, we're going to test a user's ability to edit someone else's blog post when the user is a member of different groups. Users that belong to an admin group should be able to edit other users' posts; users in the default group should not.

We need to initialize a few different entities for the test. Using RSpec and before block semantics, we might do it like so.

describe 'User permission to edit' do
  subject { @post.edit(@user) }

  before do
    @post = create :post
    group = create :group, is_admin: false
    @user = create :user, group: group
  end

  it "cannot edit another user's post" do
    expect{ subject }.to raise_error
  end

  context 'when user is a member of an admin group' do
    before do
      group = create :group, is_admin: true
      @user = create :user, group: group
    end

    it "can edit another user's post" do
      expect{ subject }.not_to raise_error
    end
  end
end

Now let's refactor using let. Notice that the nested context is much cleaner and the setup code has been reduced to a single expression that communicates purpose.

describe 'User permission to edit' do
  subject { post.edit(user) }

  let(:post)   { create :post }
  let(:is_group_admin) { false }
  let(:group)  { create :group, is_admin: is_group_admin }
  let(:user) { create :user, group: group }

  it "cannot edit another user's post" do
    expect{ subject }.to raise_error
  end

  context 'when user is a member of an admin group' do
    let(:is_group_admin) { true }

    it "can edit another user's post" do
      expect{ subject }.not_to raise_error
    end
  end
end

With the before blocks approach, we were forced to rehearse the steps necessary to initialize this new test. But this isn't really important when trying to understand the point of the test. The point is the state: the user is now part of an admin group.

Can we do it in JavaScript?

Sorta, yeah!

describe('User permission to edit', () => {
  let s;
  beforeEach(() => {
    s = new StateDefinition({
      post: create('post'),
      isGroupAdmin: false,
      group: (state) => create('group', { isAdmin: state.isGroupAdmin }),
      user: (state) => create('user', { group: state.group }),
    });
  });

  const subject = () => { s.post.edit(s.user) }

  it("cannot edit another user's post", () => {
    expect(subject).toThrow();
  });

  describe('when user is a member of an admin group', () => {
    beforeEach(() => {
      s.define({ isGroupAdmin: true });
    });

    it("can edit another user's post", () => {
      expect(subject).not.toThrow();
    });
  });
});

Clearly, we snuck in a new construct, StateDefinition. This class was designed by a colleague and me as we were attempting to reduce the cognitive overhead required to switch back and forth between Rails RSpec tests and our frontend React tests.

StateDefinition

The StateDefinition class is used in the JavaScript example as a stand-in for the let expressions from the Ruby example. While not as terse, it has some of the same useful properties.

  • Properties defined can reference other properties in the state definition.
  • Properties can be defined as functions, but are accessed as properties.
  • Properties are evaluated as-needed. This means that if you redefine a state property like we do with isGroupAdmin, its value will bubble up to other properties when they are referenced in tests.
  • Property values are memoized. Once evaluated, functions that define properties will not be called again. This is useful when state definitions involve stateful operations like creating a database entry or rendering a React component with Enzyme.

The StateDefinition API is pretty simple. You pass in an object of property definitions at instantiation - definitions that can be either plain values or functions. Later, you can update those definitions with the .define() method.

All properties are accessed as properties, even if they are defined as functions. This allows you to not worry about how properties were defined and helps with readability of the tests.
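
A condensed sketch of that API, reusing the hypothetical create factory helper from the test above (the require path is a guess, not the real package name):

const StateDefinition = require('state-definition'); // assumed module name

const s = new StateDefinition({
  isGroupAdmin: false,                                                 // a plain value
  group: (state) => create('group', { isAdmin: state.isGroupAdmin }),  // a function that reads other properties
  user: (state) => create('user', { group: state.group }),
});

s.define({ isGroupAdmin: true }); // redefine before anything has been evaluated

s.user; // evaluated lazily on first access: builds the admin group, then the user
s.user; // memoized: the same user instance, no second create() call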

StateDefinition is available as an npm package. Give it a try!

Wrap it up

RSpec's let expressions are fantastic for creating more readable tests and for reducing some repeated setup code. Using StateDefinition, we can enjoy some of the same benefits in Javascript.

There is more work to do on StateDefinition. I think it could be integrated with Mocha to make usage clearer and more seamless.

Lessons from Rails: Generate test data
https://www.bengladwell.com/lessons-from-rails-2-generate-test-data/ (28 Aug 2018)

import { lessons } from 'rails'; // part 2

Bringing techniques used in Rails to Node

Better late than never. I'm happy, if not surprised, to be learning so much from Rails in 2018.

This is the second post in a series about techniques I found in Rails that were so good, they needed to be ported to my Node projects.

Technique 2 - Generate test data with factories

In the RSpec Better Specs documentation, they say, "Do not use fixtures because they are difficult to control, use factories instead. Use them to reduce the verbosity on creating new data."

This is a simple argument, so here it is.

Setup code in your tests like the following isn't pleasant. Nor is it maintainable.

const user1 = new User({
  name: 'Joe User',
  email: 'joe@fake.com',
  birthdate: new Date('1999-03-30'),
});
const user2 = new User({
  name: 'Bill Jones',
  email: 'bill@fake.com',
  birthdate: new Date('1973-12-08'),
});
const message1 = new Message({
  content: 'This is the content of message1',
  user: user1,
});
const message2 = new Message({
  content: 'Message2 content!',
  user: user2,
});

Why manually create the boilerplate data needed to instantiate these test entities every time we need them? And each time the API changes for these entities, we'll need to update dozens of references. Thankfully, there are libraries that can help us with this.

How about code like this instead?

const user1 = casual.user;
const user2 = casual.user;
const message1 = casual.message({user: user1});
const message2 = casual.message({user: user2});

This makes for much cleaner, much more maintainable tests.

Casual

In the examples, we used casual, a fake data generator. There are other options in the JavaScript ecosystem - Chance looks popular - but I like casual because it has a simple, extensible API.

For the above example, you could create casual definitions like so:

const casual = require('casual');

casual.define('user', function () {
  return {
    name: casual.name,
    email: casual.email,
    birthdate: casual.date,
  };
});

casual.define('message', function (opts) {
  return Object.assign({
    content: casual.sentence,
    user: casual.user,
  }, opts);
});

You just need to make sure to put those definitions in a file that is loaded before your tests run. You will then have access to casual.user and casual.message whenever casual is required in your tests.
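
For example, with Mocha you could load the definitions via --require and then use the generators directly in a test (the file paths here are just placeholders):

// Run with: mocha --require ./test/support/factories.js './test/**/*.test.js'
const assert = require('assert');
const casual = require('casual'); // the same module instance the definitions were added to

describe('Message', function () {
  it('is attributed to the given user', function () {
    const user = casual.user;
    const message = casual.message({ user });
    assert.strictEqual(message.user, user);
  });
});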

Wrap it up

So that's it. A pattern that is an accepted norm in the Rails/RSpec community works pretty well in JavaScript. Take the 10 minutes required to set up test data factories and enjoy cleaner, easier to maintain tests!

Lessons from Rails: Tests using DB transactions
https://www.bengladwell.com/run-tests-as-db-transactions-with-objection-js/ (17 Aug 2018)

import { lessons } from 'rails'; // part 1

Bringing techniques used in Rails to Node

I'm about a decade late.
After unknowingly circling it for years, I am now working in Rails. When I started at CoverMyMeds in the beginning of 2017, I wasn't sure what I would think of working with a framework that had clearly crossed peak hype years ago. I expected to be somewhat bored with a system that I assumed wouldn't have much to teach me. But working in Rails has been nothing if not educational. I had no idea how many concepts and practices from Rails permeate the libraries and frameworks I have used throughout my career. The Rails community continually discovers and enshrines best practices in the framework and supporting libraries. From what I can tell, this community generally prefers to converge. The JavaScript community, for all of its strengths, generally does not.

I still love working in Node though. So, as I have learned Rails and RSpec, some of the techniques I discovered there proved so useful that I decided to find a way to bring them into my JavaScript projects.

Technique 1 - In tests, use DB transactions

Rails has always supported running your database operations in tests as transactions that are automatically rolled back after each test. By using this feature, you prevent your test database from being inundated by old or even misleading data.

TBH I'm not sure if a transaction with a rollback is faster than a DB write plus a delete, but it's much, much simpler to manage than manual test teardown statements that restore the state of your database. I highly recommend this practice.

Setting this up in Node is going to depend on your DB abstraction layer. I prefer Objection.js and the example presented here uses that library.

For the testing framework, I typically use Mocha, but I think the semantics used here would work with Jasmine and Jest as well.

Setup

Create a helper file that you will use in your test files. I put mine at test/support/transactional_tests.js. We're going to import this file in each test file in which we want transactional tests.

'use strict';

const dbConfig = require('../../knexfile'),
  knex = require('knex')(dbConfig[process.env.NODE_ENV]),
  Model = require('objection').Model;

let afterDone;

beforeEach('initialize transaction', function (done) {
  knex.transaction(function (newtrx) {
    Model.knex(newtrx);
    done();
  }).catch(function () {
    // call afterEach's done
    afterDone();
  });
});

afterEach('rollback transaction', function (done) {
  afterDone = done;
  Model.knex().rollback();
});

Explanation

It's a fun little hack. Let's break it down.

const dbConfig = require('../../knexfile'),  
  knex = require('knex')(dbConfig[process.env.NODE_ENV]),
  Model = require('objection').Model;

Here we are initializing knex and objection. My knexfile is keyed by environment, hence I pass in process.env.NODE_ENV to the knex initialization function.

Now for the main trick. Rather than analyze the code line by line, it's easier to think about how the tests will interact with individual functional sections.

First:

beforeEach('initialize transaction', function (done) {  
  knex.transaction(function (newtrx) {
    Model.knex(newtrx);
    done();
  })
  ...

In the beforeEach section, which will run before every test, we initialize a transaction and inject it into the base Objection model. All Objection models in the test will now use this transaction. Immediately after injecting the transaction, we call the beforeEach function's done callback, so the current test is now free to run.

After the test, the afterEach executes:

afterEach('rollback transaction', function (done) {
  afterDone = done;
  Model.knex().rollback();
});

The afterDone = done; line is like a bookmark; we're holding on to a reference to afterEach's done function so that we can call it later.

Calling .rollback() rolls the transaction back (obviously), but it also results in an exception being thrown that is caught by the original transaction's catch function (which you will see in the beforeEach callback):

...
  .catch(function () {
    // call afterEach's done
    afterDone();
  });

Here, we call afterEach's done function, which we refer to with afterDone. The teardown code is now complete and the test will not hang.

So, the steps can be summarized as follows:

  1. Initialize a transaction and inject it into Objection's base Model class.
  2. Run test
  3. Rollback transaction
  4. Catch resulting exception
  5. Call afterEach done function

Use it

In any test file in which you want automatic transactions, simply require your helper file.

require('../../support/transactional_tests');

describe('Class under test', function () {
...
});
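
For example, an Objection model can now be used freely inside a test; the row written below never survives past the test, because the surrounding transaction is rolled back. (User here is a hypothetical Objection model; adjust the paths to your project.)

require('../../support/transactional_tests');

const assert = require('assert'),
  User = require('../../../models/user'); // hypothetical Objection model

describe('User', function () {
  it('persists within the test transaction', async function () {
    const user = await User.query().insert({ name: 'Test User' });
    const found = await User.query().findById(user.id);

    assert.ok(found);
    // afterEach rolls the transaction back, so this row never reaches the real table
  });
});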

The end.

If you're using Objection and want cleaner tests, give transactions a try!

Vimscript: Automatic ESLint detection
https://www.bengladwell.com/vimscript-automatic-eslint-detection/ (30 Dec 2016)

Update - as fun as it was to learn some vimscript, I no longer make use of this stuff. I just use ALE and ESLint and all is well.


I have been a vim user for over 15 years, but until recently I had never tried writing any Vimscript. I had occasionally tried (usually in vain) to understand the Vimscript in some of the plugins I use. But actually writing my own seemed inaccessible, or at least, not important enough to try.

That changed recently when I decided that I had too many lines like this in my .vimrc:

autocmd BufNewFile,BufRead some_directory let g:syntastic_javascript_checkers = ['eslint']
autocmd BufNewFile,BufRead some_directory let g:syntastic_javascript_eslint_exec = "some_directory/node_modules/eslint/bin/eslint"
let g:syntastic_javascript_eslint_args = "--rule 'no-console: 1' --rule 'no-debugger: 1'"

where some_directory is the project root directory. Syntastic is a fantastic syntax checking plugin that can use any number of syntax checkers, including ESLint. If the JavaScript checker is set to eslint as above, Syntastic will automatically look for an .eslintrc file and use it to check your code in real time as you write it.

I decided that determining if we should use ESLint for Syntastic should be automated. We could simply look for the presence of eslint under the node_modules directory. If it exists, assume that we are using ESLint.

So, here is my Vimscript code to do that:

function! FindEslint(dir)
    " search up directory tree to node_modules
    let node_modules = finddir('node_modules', a:dir . ';')
    if strlen(node_modules)
        " look for eslint package dir
        let eslint_dir = finddir('eslint', node_modules . '/**1')
        if strlen(eslint_dir)
            return eslint_dir
        endif
        " if no eslint dir, look for gulp-eslint package dir
        let gulp_eslint_dir = finddir('gulp-eslint', node_modules . '/**1')
        if strlen(gulp_eslint_dir)
            return fnamemodify(gulp_eslint_dir . '/node_modules/eslint', ':p')
        endif
    endif
    return 0
endfunction

function! SetLinter(rootdir)
    let eslint_dir = FindEslint(a:rootdir)
    if strlen(eslint_dir)
        let g:syntastic_javascript_checkers = ['eslint']
        let g:syntastic_javascript_eslint_exec = eslint_dir . "/bin/eslint.js"
        let g:syntastic_javascript_eslint_args = "--rule 'no-console: 1' --rule 'no-debugger: 1'"
    endif
endfunction

" set up linting for javascript files
autocmd FileType javascript call SetLinter(expand("<afile>:p:h"))

Starting at the bottom, we first use an autocommand: when the filetype is JavaScript, call the SetLinter function with the file's location. For more info on the expand command see

:help expand

but basically the usage here takes the file name provided by autocmd, expands it to its full path, and removes the file name, resulting in the parent directory.

SetLinter in turn calls FindEslint on this directory.

a:rootdir

means "the variable rootdir that is a function argument". In Vimscript, you have to be explicit about the scope of the variables you are referencing.

FindEslint uses

finddir

to travel up the directory tree, looking for a node_modules directory. If it finds one, it again uses finddir to look for an eslint directory or a gulp-eslint directory. I use gulp-eslint, so this makes sense for me.

If FindEslint finds a directory like this, SetLinter uses that directory to set the appropriate syntastic options.

In the end, I found Vimscript to be somewhat odd compared to other languages I know. On the other hand, the built-in documentation made learning the necessary commands relatively easy.

If you use and love vim like I do, I definitely recommend getting your feet wet with Vimscript!

Serverless Web Apps by Example
https://www.bengladwell.com/serverless-web-apps-by-example/ (31 Aug 2016)

Disappearing Servers

There's a lot of talk about serverless architecture and serverless web apps. But what exactly does a serverless web application look like?

I've built some of my own serverless apps and have been carefully surveying the serverless landscape. As I see it, there are three classes of serverless web architectures being used today. I describe them here, listed from simple to complex.

  1. File hosting services
  2. Backend as a Service
  3. Functions as a Service

Type 1: File hosting service architecture

Provider examples: Amazon S3

It doesn't get much simpler than uploading some pages to a file hosting service. The foundational technology for this approach has been around for decades! Each path on the app's domain is represented by a file. And that's it.

But wait a minute... You can't build an app with static pages! If you could, we wouldn't have made CGI! And PHP! And Rails! And the 90s Tech Bubble! And UX consultants! And "I'd like to add you to my professional network on LinkedIn"!

Well. Maybe that's not completely correct. But we have certainly come much farther than static files in the last 20 years. Why would we want to go back?

1st Reason: JavaScript

You can build dynamic, engaging client applications with JavaScript. You know this. I know this.

However, for years, I overlooked that this is often all you need.

With just an index.html page, a JavaScript app bundle, and a stylesheet, we have everything we need for a basic web app.

Our index.html file might look like this:

<!doctype html>
<html lang="en">
  <head>
    <meta charset='utf-8'/>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>A decent web app</title>

    <link href="/css/app.css" rel="stylesheet" type="text/css">
    <script type="text/javascript" src="/js/app.js"></script>
  </head>

  <body></body>

</html>

Sure, you might throw in an additional stylesheet or a web font. But the point is, everything is a static file or a link to a file on a CDN.

All the complexity is in that app.js file, which can inject the app content into the body tag.

To get started, you could just start writing JavaScript in app.js. Break all the rules! No file-based modules! No transpiling! It's ok - just get something on the page.

Of course, once your app starts to get more complicated, you probably will want to add a build step. You can just use Browserify or webpack on the command line or as an npm script. If you have never tried a JavaScript bundler before, you'll find that these tools are mature, well-documented, and (relatively) easy to use.

2nd Reason: Great APIs

SaaS products of all stripes seem to have an API these days, many of them well-documented. The expectation of an API is a new development, one that enables all kinds of fun and interesting web applications. Many kinds of web apps need only talk to an API.

This first kind of serverless architecture has a big limitation though: web apps based on static file hosting services can't maintain state across sessions. Your users cannot save configuration data specific to your app. There's no database outside of the browser to save to. But you may not need one! If you are visualizing data from an API or building a custom UI, you can simply read from and write to the API.

When deciding if an API can be used for your serverless web app, look for APIs that support in-browser authentication. For OAuth 2.0, this means using an Implicit Grant, not an Authorization Code Grant (which requires a server). Lots of APIs support Implicit Grants, or something like it. For instance, Google's APIs.
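
The browser side of an Implicit Grant is small enough to sketch in a few lines (the client ID, authorize URL, and scope below are placeholders for whatever the provider documents):

var CLIENT_ID = 'your-client-id';
var AUTHORIZE_URL = 'https://provider.example.com/oauth2/authorize';

// Send the user to the provider's login page
function login() {
  window.location = AUTHORIZE_URL +
    '?client_id=' + encodeURIComponent(CLIENT_ID) +
    '&response_type=token' + // "token" means Implicit Grant
    '&redirect_uri=' + encodeURIComponent(window.location.origin) +
    '&scope=' + encodeURIComponent('read');
}

// After the redirect back, the access token arrives in the URL fragment,
// so it never passes through any server we have to run
function tokenFromFragment() {
  var match = window.location.hash.match(/access_token=([^&]+)/);
  return match && decodeURIComponent(match[1]);
}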

A file hosting service example

At Adept Marketing, we use Asana to plan our projects and allocate resources. However, we needed a way to visualize our project timelines and the resources assigned to them. Asana provided no such visualization, but they do provide a great API! They even use OAuth2 Implicit Grant for auth.

So, we built a task visualization app with Backbone.js and D3.js.

The app is simple. After logging in using OAuth, all relevant tasks are fetched from the API.
[Screenshot: loading]
Then, we visualize the tasks and provide some navigation.
[Screenshot: visualization]

When we are ready to release a new version, we run our build step (we're using gulp) and upload the files to S3, where the site is hosted. The gulp deploy task that copies everything under the build/ directory to S3 is shown below for your amusement. But you don't need all that. Just use the AWS CLI to copy the directory.

gulp.task('deploy', ['assets', 'bundle'], function (cb) {
  var findit = require('findit')('build'),
    fs = require('fs'),
    path = require('path'),
    files = [],
    _ = require('lodash'),
    P = require('bluebird'),
    readFile = P.promisify(fs.readFile),
    AWS = require('aws-sdk'),
    config = require('./config.json'),
    s3 = P.promisifyAll(new AWS.S3({
      accessKeyId: config.aws.accessKeyId,
      secretAccessKey: config.aws.secretAccessKey
      //logger: process.stdout
    }));

  findit.on('file', function (f) {
    files.push(f.substr(6));
  });
  findit.on('end', function () {
    P.all(_.map(files, function (fstr) {

      var contentType;
      switch (path.extname(fstr)) {
      case '.html':
        contentType = 'text/html';
        break;
      case '.css':
        contentType = 'text/css';
        break;
      case '.js':
        contentType = 'application/javascript';
        break;
      default:
        contentType = 'application/octet-stream';
      }

      return readFile('build/' + fstr, {encoding: 'utf8'}).then(function (data) {
        return s3.putObjectAsync({
          Bucket: '<BUCKET>',
          Key: fstr,
          Body: data,
          ContentType: contentType
        });
      });
    })).then(function () {
      cb();
    });
  });
});
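
The AWS CLI version of that whole deploy step is a single command (it guesses each file's Content-Type for you):

aws s3 sync build/ s3://<BUCKET>/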

This web app was a huge win for our resource manager, and it is nothing more than one HTML page, a JS bundle, and some CSS.

What else could you build with a file hosting service and an API?

A quick Google search for APIs supporting OAuth2 Implicit Grant turns up a few interesting ones. Spotify. Getty Images. Google. And many others.

Type 2: Backend as a Service (BAAS) Architecture

Provider examples: Firebase

But usually, your app really does need to save state. Your users log in, make changes, and expect to see those changes later in some other browser. File hosting services can't provide this functionality alone. You need a backend.

Mobile app developers have been using Backend as a Service providers like Firebase for years now. But they work really well for web apps too.

Check out my post on building a serverless app using Firebase. I built a word magnet game that uses GitHub for auth and stores all the phrases and user info on Firebase. It was a great experience.

[Screenshot: reword]

If your app has a straightforward data model, why not skip the servers and the databases and just use a BAAS provider like Firebase?

What else could you build as a BAAS serverless app?

Out of the box, Firebase supports Google, Facebook, Twitter, and GitHub for auth. That means that you could create a rich, interactive serverless app that makes use of any of the APIs exposed by those companies. You would use their auth token to both log the user in to Firebase and also to make calls against their API.

Type 3: Functions as a Service (FAAS) Architecture

Provider examples: AWS Lambda, Google Cloud Functions

The final type of serverless architecture considered here uses Functions as a Service to build out a custom backend. FAAS is a relatively new player in the cloud ecosystem - Amazon introduced AWS Lambda in 2014. With FAAS, individual functions can be uploaded to the service or edited directly in the service UI. Those functions are configured to respond to events triggered by other cloud services. The FAAS platform presumably spins up short-lived containers that run the code, then removes the container when it is no longer needed.

At first glance, FAAS seems like a great fit for certain kinds of web apps. Instead of using BAAS to store state, we could design a series of microservices that run on a FAAS platform and provide much more control over our backend data. In conjunction with a cloud service database like DynamoDB, our FAAS functions could be a formidable backend with effortless horizontal scaling.

However, in practice, this doesn't really work very well. Others have written about this, but here are some of the problems I encountered while trying to create a web backend based on AWS Lambda:

Bad developer experience

There is no official way to run AWS Lambda locally while you develop. There are community attempts to mirror the Lambda/API Gateway environment locally. However, configuration of these mock environments is cumbersome.

AWS API Gateway is difficult to configure and maintain

note: AWS has made some updates to API Gateway and Lambda since I wrote this post. See this announcement for details. It is now easier to pass request information from the API Gateway endpoint to a Lambda function.

Lambda functions are not designed to respond to HTTP requests. Some translation layer between the request and the Lambda event is required. You don't realize how essential request headers are until you no longer have access to them!

So, you tie Lambda functions to AWS API Gateway endpoints. These endpoints themselves have lots of options that describe what release they are associated with and what security model they should use. The endpoints can also inject information into the Lambda event that will be received by the Lambda function. This injection step is configured with a DSL. So yes, it is possible to get all the information you need in the Lambda function. But no, it is not simple.
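
To make that concrete, here is roughly what the receiving end looks like once a mapping template has injected the pieces of the request you care about. This is a sketch, not anything API Gateway gives you by default - the headers and body keys exist only because the template put them in the event:

exports.handler = function (event, context, callback) {
  // Only present because the mapping template copied it out of the HTTP request.
  var auth = event.headers && event.headers.Authorization;

  if (!auth) {
    // Turning this into an HTTP 401 takes still more API Gateway configuration
    // (integration responses keyed off the error message).
    return callback(new Error('Missing Authorization header'));
  }

  callback(null, { ok: true, body: event.body });
};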

AWS Lambda functions don't integrate well with other AWS services

Lambda functions appear to be a perfect candidate for a share-nothing, horizontally scaling backend. Of course, we'll need somewhere to read and write state from time to time. How about redis? After all, AWS has a cloud service for that - AWS ElastiCache. I expected to click a few buttons and have a redis endpoint to use in my Lambda functions.

But no. Using AWS ElastiCache requires deploying your Lambda functions in a non-default way. Specifically, you must deploy them into a specially-configured Virtual Private Cloud. Prepare to take a deep dive into VPC configuration. You'll also need to dust off your networking and routing skills. If you want your Lambda functions to continue to be able to reach network destinations on the Internet, your VPC will need to have at least two subnets: one for the Lambda function and one with a VPC NAT Gateway. Oh and that NAT Gateway will cost you. Unlike Lambda, there is no free tier for those.

I trust you're getting the point. FAAS functions are probably really good for the asynchronous tasks that they were originally marketed for. But, so far, they are rather inconvenient for web backends.

note: Google Functions seems to have a better story for web-triggered events, but, as far as I can tell, the other problems exist there too.

Conclusion

Next time you hear about serverless architecture for web apps, ask yourself what kind of serverless design you are dealing with.

For some projects, file hosting is all you need! If the only thing your app needs to do is interact with a good API, you may be able to build an app that is extremely easy to maintain using a file hosting service like S3.

And if your app doesn't need to talk to an API, but just needs to read and save some state, OR if your app happens to be dealing with Google, Facebook, GitHub, or Twitter, why not use Firebase? Building an app with no servers is a real joy.

Finally, take my advice and don't use FAAS yet for your custom backend endpoints. By all means, use FAAS for support tasks, but let's let the major vendors update their offerings before we try to build our own APIs with them.
note: there has been some progress here. See this announcement.

]]>
<![CDATA[Serverless Web Architecture with Firebase]]>https://www.bengladwell.com/serverless-architecture-with-firebase/6418efdb9927d60001ebbe05Wed, 30 Mar 2016 19:07:15 GMT

Cut out the middle man

For a couple of years, I sensed that I was missing something in web architecture. I was hearing more and more that I may not need that web server that played such an important role in my standard design. You know the server I'm talking about. The one that handles authentication, the one that maintains sessions, the one that serves up static assets like CSS and images. The one that proxies to your backend server. I was hearing more and more people say that you could simply have a backend API server and your Javascript app. But for me, this seemed impossible.

You see, like many others, I learned how to build Javascript web apps with Backbone.js. And Backbone seems to presuppose a middle tier web server. From the Backbone documentation (March, 2016):
Backbone Screenshot 1

So Backbone expects you to use your middle tier server - your Rails, PHP, Node, or whatever server - to assist your client code. This is a really nice architecture. Your Javascript app initializes almost instantaneously. And if your model design isn't overly complicated, it's not hard to implement. In addition, that middle tier server is probably using a framework that makes auth and session management easy.

But you know what else is nice?

  • Having one less class of servers to maintain.
  • Having one less code base to maintain.
  • Having an architecture that mirrors the one needed for a mobile app.
  • Having an architecture that scales simply and horizontally.

In short, cutting out the middle tier - the one between the client code and the backend API - could conceivably save lots of time and money.

I realized that this no-middle-tier architecture was becoming much more common early in 2015 when I was experimenting with Ember. It seemed like the Ember ecosystem was steering me toward deploying to a CDN with Ember CLI and using Fastboot to deal with the initial data load. Ember seemed to presuppose just the opposite of Backbone - that there would be no middle tier web server.

Similar buzz about Lambda/Cognito/S3 AWS architecture confirmed my sense that this was at least worth checking out. I decided my next project, reword, would be serverless. No more middle tier web server.

But what about sessions and auth?

Re: sessions - that's easy: there typically are no sessions with this architecture. This architecture usually involves using a horizontally scaling backend. In practice, this means that every request to the backend has enough information in the headers to identify the requester and gather up any data that would otherwise be stored in a session.

Re: auth - the backend will need some way of exchanging credentials from the user for a token or cookie that can be sent with every request. The token or cookie is the identifier used to retrieve information about the requester and authorize the request.
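
In practice that just means attaching the token to every call. A tiny sketch; the endpoint and the storage key are placeholders:

// Every request is self-describing: the token identifies the user, so the
// backend never needs a session to figure out who is asking.
function apiGet(path) {
  return fetch('https://api.example.com' + path, {
    headers: { Authorization: 'Bearer ' + window.localStorage.getItem('authToken') }
  }).then(function (res) { return res.json(); });
}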

But what about the backend and database?

This post is supposed to be about serverless architecture, right? What about that backend API server? And what about the datastore?

Well, it turns out we can get rid of those too.

In October 2014, I heard an episode of Javascript Jabber that discussed Backend as a Service (BAAS) providers. Unlike Platform as a Service offerings like Heroku or Google App Engine, you don't run your own code on your BAAS provider. BAAS platforms typically give you an API with which you can save and fetch data and authenticate users. For reword, this was all I needed.

Firebase seemed like a great choice. And in addition to providing a datastore and auth service, Firebase also provides free file hosting. So my HTML, JS, and CSS can live there too.

So, no servers. JS, CSS, and HTML hosted on Firebase hosting. And Firebase BAAS for auth and the datastore API.

Working with Firebase

Firebase provides an npm module for interacting with their API. Once you understand Firebase's tree-based data structure, the client library is pretty straightforward.

I load data into my Redux store like this.

import Firebase from 'firebase';
const firebase = new Firebase(`https://${config.firebaseApp}.firebaseio.com`);
...

// ask for all data under "phrases"; the "value" event is fired when the data arrives; we only need to respond to it once

firebase.child('phrases').once('value', (data) => {
  // now that we have the data, dispatch an action to the Redux store
  store.dispatch({
    type: 'PHRASE_ADD_MULTIPLE',
    phrases: _map(data.val(), (p, id) => {
      return Object.assign({}, p, {id});
    })
  });
});

And I send data to Firebase like this.

if (phrase.length) {
  // push creates a new entity
  firebase.child('phrases').push({
    words: phrase
  }).then((entity) => {

    // keys are a crucial part of the Firebase data model
    dispatch({
      type: 'PHRASE_ADD',
      id: entity.key(),
      words: phrase
    });

  });
}

All in all, I had very little problem understanding Firebase's data model and using the client library.

Using Firebase does require you to figure out the relationships between your data entities in advance. This is obviously a good practice for any datastore, but because there is no server code between the client and the data, you can't easily paper over convoluted relationships. Firebase has a good guide on how to structure your data. I found myself reorganizing the data structure as I figured out how my app would work.
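
To give a feel for it, here is the kind of flat, key-joined tree that the security rules later in this post assume. This is an illustrative sketch, not the literal reword data:

// Flat, top-level collections joined by keys instead of deep nesting.
var tree = {
  words: {
    w1: 'ocean',
    w2: 'whisper'
  },
  users: {
    'github:686913': { name: 'Ben' }
  },
  phrases: {
    p1: { words: ['w2', 'w1'], user: 'github:686913' }
  }
};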

Firebase auth

Firebase provides a few different mechanisms for authenticating users. I opted for OAuth via Github. To log in, a user is redirected to Github to give permission, then back to the app. After logging in, the user's information is available to the Firebase client.

The code to direct the user to Github is pretty simple.

firebase.authWithOAuthRedirect("github", (err) => {
  if (err) {
    firebase.unauth();
  }
});

The code used to check for an active session is not too bad either. You can see that I am dispatching a USER_ADD action here.

firebase.onAuth((authData) => {
  if (authData) {
    const user = {
      id: authData.uid,
      provider: authData.provider,
      name: authData.github.displayName,
      image: authData.github.profileImageURL
    };
    store.dispatch({
      type: 'USER_ADD',
      user: user
    });
  }
});

Firebase security

Firebase presents a novel solution for security. Security rules are listed in a declarative way, with special variables that allow for precise definitions. In addition, your security rules can be defined in your app's Firebase dashboard, or they can be stored as part of your codebase. Once I figured out how my rules should work, I opted for the latter.

My security rules. Hopefully the comments help.

{
    "rules": {
      "words": {
        ".read": true,
        // Only I can create words!
        ".write": "auth.uid == 'github:686913'"
      },
      "users": {
        "$user_id": {
          ".read": true,
          ".write": "$user_id === auth.uid"
        }
      },
      "phrases": {
        ".read": true,  // anyone can request a listing of all phrases
        "$phrase_id": {
          // !data.exists() - writing new entry
          // data.child('user').val() === auth.uid - the user making the call owns this
          // auth required, anybody can write new entry; deletes and updates require matching user
          ".write": "auth !== null && (!data.exists() || data.child('user').val() === auth.uid)"
        }
      }
    }
}

Final verdict

Firebase is great. I would definitely use it again. Of course, your project needs to be a good fit. If you need complicated processing around your backend data, you should look elsewhere. But if your data needs are relatively straightforward and you want to give serverless architecture a try, look no further than Firebase!

]]>
<![CDATA[On React CSS techniques]]>https://www.bengladwell.com/on-react-css-techniques/6418efdb9927d60001ebbe04Wed, 02 Mar 2016 19:22:03 GMT

TL;DR

Material-UI components are great even though they use inline styles, which are not for me. CSS Modules are great.


When I was designing reword, I had to decide how to handle CSS. As far as I could tell, there were four options.

  • The standard approach. CSS files are not connected in any explicit way to components. BEM or something like it can help organize CSS classes and connect them conceptually with React components. With this approach, a single Sass or Less stylesheet would probably pull in all the individual stylesheets for components resulting in a single CSS file.
  • The webpack approach. Each component's JS file would include a require('./style.css'); line, which would cause that CSS file to be included in a CSS bundle file. This seems better than the first option because only the CSS files that we actually require are included.
  • Inline styles. Using inline styles has become popular in React world. All styling lives right in the component JS file. There are no CSS files. In this way, everything is declared and handled in JS space - application logic, markup (JSX), and styling.
  • CSS modules. CSS Modules are pretty new. After a bit of research, they seemed like a good combination of the other techniques. More on this later.

I also knew I wanted to incorporate Material-UI components into reword. Material-UI's ready-to-use react components looked like they could really speed up development (I was right about that). Material-UI uses inline styling, so that looked like a vote in favor of that technique. But CSS Modules seemed really smart. I decided to go with CSS Modules for my own components and work with inline styles when using Material-UI components.

Why CSS Modules?

With CSS Modules, each component gets its own CSS file. This CSS file gets required or imported in the component's JS file, just like the webpack approach. And as with webpack, requiring the CSS file marks it for inclusion in a final CSS bundle. But in addition, the require or import statement gives you a Javascript object with properties that map to the class names in the CSS. You use those properties to assign classNames to elements in the component. In this way, you have an explicit connection between the CSS rules and your JS code.

So let's say you have component.js and component.css:

component.css

.top {
  font-size: 12px;
  font-weight: 400;
  line-height: 1;
}

component.js

import styles from './component.css';
...
render() {
  return (
    <div className={styles.top}>
      ...
    </div>
  );
}

Here's what the generated CSS and Javascript might look like.

generated CSS

._component__top {
  font-size: 12px;
  font-weight: 400;
  line-height: 1;
}

generated js

...
<div className="_component__top">
...

You never have to deal with those crazy, BEM-like class names. CSS Modules handles that for you. And it removes the problems with a global CSS namespace and gives you a way to extend previously declared CSS rules. It's sorta like BEM without all the boilerplate and with explicit connections to your application code. Read Glen Maddern's write-up. I do not regret choosing CSS Modules and I'm continuing to experiment with it.
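
The extension mechanism is composes. A quick sketch with made-up class names:

.base {
  padding: 8px 12px;
  border-radius: 2px;
}

.primary {
  /* pulls in .base's rules in addition to the ones below */
  composes: base;
  background: #3f51b5;
  color: #fff;
}

In the component, styles.primary then expands to both generated class names.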

Downside to CSS Modules

My main struggle with CSS Modules had to do with everybody's favorite Javascript topic: tooling. Using CSS Modules requires introducing a new link in the chain of code that processes your JS code.

I actually found that the available tool libraries worked really well for my basic needs. I just had to add something like this to my gulp browserify task:

bundle.plugin(cssModulesify, {
  rootDir: './app',
  output: './build/css/app.css'
});

However, at one point I tried to get hot loading to work with browserify. I kept running into problems, and I think it was because of that extra build step. If I were using webpack like everybody else, this probably wouldn't have been a problem. ¯\_(ツ)_/¯

So what about inline styles?

I have to say... I'm not a fan.

Using inline styles was a disorienting experience. If you use stylesheets from any external libraries, you always have a mix of inline style rules and traditional style rules. It was hard to think clearly about what styles belonged where.

Inline styles also make for a pretty lousy dev tools experience. I spend a lot of time in browser dev tools. Inline styles mucked that up. Just having all those style rules jammed in the markup makes everything really cluttered. But much worse, inline style rules take the cascade out of cascading style sheets. I rely on dev tools to make changes to CSS rules and see those changes applied throughout the page. But if you make a change to an inline style rule - something like changing the background color of a featured item - you will not see those changes reflected on all similar items. You are only changing rules for that item, never the rules for an associated class.

Conclusions

  • I won't be adopting inline styles any more than I have to.
  • Material-UI is a fantastic project in spite of the inline styles. I highly recommend it.
  • CSS Modules delivered. I'm going to keep using it and I think you should too.
]]>
<![CDATA[Learning the React ecosystem]]>https://www.bengladwell.com/learning-the-react-ecosystem/6418efdb9927d60001ebbe03Wed, 24 Feb 2016 19:06:25 GMT

A while ago, I realized that I had fallen out of step with current trends in Javascript application design. "I regret nothing!" I shouted at the roiling Javascript masses. But still.

React looked cool. A few people like it. And I did like the server-rendering story I was hearing about. Redux sweetened the deal. Having a FP-style state layer sounded good. Might as well give it a shot.

I also wanted to experiment with a no-frontend-server architecture. Basically treat the Javascript SPA like a mobile app that talks to a backend API server. Statically host a single HTML page that references other statically hosted assets, including the main Javascript payload. If you use a backend as a service, you can avoid servers altogether. AWS Cognito + API Gateway could have been good. Throw in S3 for hosting. But a one-stop BAAS solution like Firebase seemed like a smaller barrier to entry.

While I was just picking stuff, why not throw in CSS Modules? I like the idea of component-scoped CSS.

Well, it took me a while, but I did indeed build something with all that stuff. I dusted off the refrigerator words idea that I used for some Backbone and gulpjs experimentation years ago. I called my new project reword, because everything in the React/Redux ecosystem should start with re and words are involved.

reword in action

I used Firebase hosting, so check it out in action.

The project was an experiment to learn the React/Redux ecosystem, so hopefully it will help someone else learn how to put the many pieces together to build something.

In the end, reword was built with the following stuff:

It may be worth noting that reword does not use webpack. I like Browserify and decided to stick with it.

I liked building with React and Redux. They weren't magic bullets, but I can see why these libraries have a lot of momentum.

I intend to write about my impressions of React, Redux, and the other libraries, services, and techniques used for reword. Until then, take a look or fork it. I would love any feedback, suggestions, or pull requests.

]]>
<![CDATA[Superficial lessons I learned in Ethiopia]]>https://www.bengladwell.com/superficial-lessons-i-learned-in-ethiopia/6418efdb9927d60001ebbe02Fri, 04 Dec 2015 13:46:21 GMT

Superficial lessons I learned in Ethiopia

I recently returned home from a two-week trip to Ethiopia. My family spent some time in the capital, Addis Ababa, as well Awassa in the south of the country and Bahir Dar in the north.

I had many experiences that will probably influence my view of the world, and many others that were interesting or beautiful or difficult.

But that's not what this post is about. If you are looking for a fun and provocative take on Ethiopia, look no further than Anthony Bourdain's Parts Unknown episode; it has the emotional depth and helpful reflection you are looking for. Or, if you know me personally, please ask me about my experiences there. I'm happy to share.

This post is about some of the practical, nerdy lessons that I learned while traveling around a developing country.

Client-side webapps can be better than server-generated content in high-latency environments

Here in the U.S., I am accustomed to high-speed, low-latency networks. Occasionally, I have to deal with low-speed, low-latency connections. In that scenario, everything is pretty much the same; you just wait longer.

But throughout this trip, I was on high-latency networks the whole time. Regardless of the network infrastructure, just trying to reach servers that were probably hosted on a different continent meant long round trips.

So I got to experience first hand what many have been saying for a while: well-implemented, client-side webapps may handle high latency connections better than traditional server-generated pages. Usually this point is made in the context of mobile networks. But it is probably even more applicable for the kind of remote networks that I was dealing with.

Using Gmail illustrated this point perfectly. As you may recall, Gmail provides a link to "Load basic HTML (for slower connections)". If you have a Gmail account, you can try it at mail.google.com/mail/h/. This seemed like the right thing to do over there. After all, my connection felt really slow.

But using the basic HTML, server-generated version didn't feel faster at all. And after I thought about it for a moment, the reason was obvious. In my case, the high-latency network requests weren't necessarily slow, they just sort of behaved erratically; the timing felt off. Sometimes I would make a request and wait 30 seconds before there was any response at all. Sometimes I would get a response almost right away. And this erratic response pattern fits much better with the async architecture of a client-side webapp. The user is already accustomed to initiating an action and then moving on as the action is dealt with in the background. Very often, the speed with which the action is resolved is either not very important or is masked because the user is immediately shown a notification.

So yay for webapps.

Immediate UI feedback is crucial

In addition to Gmail, I also use Yahoo mail. And though Yahoo mail is a webapp with UX similar to Gmail, the experience of using it on a high-latency network was terrible, simply because it does not give immediate UI feedback. When you click to delete a message or to open a different folder, the app waits for the server response to update. This is ok on low-latency, moderately fast networks (except when the response takes longer than usual, which happens not infrequently), but unbearable when dealing with high-latency.

Without immediate UI feedback, you just don't know if anything happened after you clicked. Should you click that button again? Was there an error? What is happening!? Just a simple spinner or some kind of status indicator would alleviate all of that frustration.

I admit that I have built UIs that commit this exact error. I justified it because I thought that the particular response would always be fast. Well, high latency can definitely mess that up. Better to always let the user know when something is happening.
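
In a Redux-style app like the ones I have been writing about, that can be as simple as dispatching a pending action before the request even starts. A sketch; the action types and the api helper are made up:

function deleteMessage(store, api, message) {
  // Tell the UI right away so it can show a spinner or gray the item out.
  store.dispatch({ type: 'MESSAGE_DELETE_PENDING', id: message.id });

  return api.deleteMessage(message.id)
    .then(function () {
      store.dispatch({ type: 'MESSAGE_DELETE_OK', id: message.id });
    })
    .catch(function () {
      // Put the item back and surface the error instead of leaving the user guessing.
      store.dispatch({ type: 'MESSAGE_DELETE_FAILED', id: message.id });
    });
}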

Cheap Chromebooks make for awesome travel dev systems

I needed a development machine to take with me in case something went wrong at work. My normal dev system is a MacBook, but I really didn't want to take it. It seemed too risky, and even though it's a newer model, it seemed too bulky to carry around in a backpack everywhere I went.

So I bought an 11.6" Acer CB3-111 Chromebook. This felt risky at the time. I had never used a Chromebook before. I knew it was possible to run Ubuntu in chroot environments on Chromebooks, but it couldn't possibly work as well as advertised, right? Could I really get my dev environment working on this underpowered little computer? Also, it only has 16GB of SSD space, and some of that is devoted to the Chromebook OS. Would whatever is left over really be enough to run a Linux distribution?

Well, it all worked out; even better than advertised. First of all, the hardware is great. Yeah, it feels sort of plasticy, but the thing costs less than $200! And Chromebook OS is very smart. Boot time is incredibly fast and the UI is fantastic.

But the Linux chroot setup - well, that was a revelation. Chromebooks are built on Linux, so running Ubuntu in a chroot is nothing like running a virtual machine. There is barely any performance penalty. And the lack of disk space was no problem for me. I really wasn't interested in running a full Gnome or Unity or KDE Plasma UI. I just wanted a shell environment to install my development tools. Thanks to years of sysadmining, I'm happy fixing bugs with just bash and vim.

The Chromebook was probably the best $175 I have ever spent on a computer. I imagine it will be my travel computer for years.

iPhone ups and downs

I took my trusty iPhone 4S. No, that's not a typo. If it ain't broke...

More than I expected from the battery

I was surprised how well the battery did. Battery life is one of the main problems with an aging phone. In my normal life, I sometimes barely make it home before I need to recharge. I worried that I would be unable to take pictures at times because I wouldn't be able to recharge often enough.

But in Ethiopia, because I couldn't connect to the local GSM networks, the phone was on airplane mode when we were away from hotels and wifi. Apparently even an old phone can go a long time on airplane mode. And it was probably just my imagination, but recharging on 220V seemed even faster than 110V.

Less than I expected from iCloud and Photos

I did have a problem with not being able to take pictures at all times, but it was more with iCloud's syncing protocol. I use iCloud Photo Library, which means that theoretically all the pictures I take are synced to iCloud whenever the phone is online. And, the Photos app is smart enough to ditch the fullsize images in favor of smaller thumbnails when you are running out of disk space. Since this is an old phone and I have a lot of pictures, I am pretty much always running out of disk space.

What I didn't know is that iCloud will not sync if it can tell you have a poor connection. So there were a few days when I had to frantically delete apps or podcasts or other stuff to make more room for photos even though I did have an Internet connection, albeit a bad one. Once we got to a hotel with a better connection, the Photos app was able to upload, sync, and purge.

New Balance Minimus 10v3s are the perfect shoes for exploring a developing country

These New Balance Minimus trail shoes are one in a series of minimal running shoes I have tried. I got lucky and bought these shortly before the trip. They were perfect. Even in the biggest cities in Ethiopia many of the roads are unpaved or have no sidewalks. Almost everywhere I was walking on dirt and rocks. A shoe designed for trails was ideal both in the cities and in the country side.

This is a great shoe. The Vibram outsole is awesome. Probably the best minimalist shoe I have tried so far.

Superficial lessons I learned in Ethiopia

]]>
<![CDATA[Fridgewords: A Backbone and gulp.js tooling example app]]>

I gave a talk at CodeMash last week on using gulp.js to organize your frontend stack. It was the first time I have ever spoken at a tech conference and it was a lot of fun.

]]>
https://www.bengladwell.com/fridgewords-a-backbone-and-gulp-js-tooling-example-app/6418efdb9927d60001ebbe01Mon, 12 Jan 2015 20:52:25 GMT

I gave a talk at CodeMash last week on using gulp.js to organize your frontend stack. It was the first time I have ever spoken at a tech conference and it was a lot of fun.

You can find the slides in my codemash2015 github repo. It's a reveal.js presentation, so just clone the repo and open the index.html file in a browser.

Fridgewords

I needed an example app for the presentation, so I created fridgewords. It is inspired by those sliding word magnets often seen on refrigerators.

Like this one.
beard poet

gulp.js, browserify, etc

The code is heavily commented throughout. Check it out if you are trying to get a handle on using gulp to build your project.

Backbone with routing

I also made sure to include routing. It seems to me that Backbone example apps rarely demonstrate patterns for routing.

]]>
<![CDATA[Assembling a frontend stack part 4: Bootstrap, Less, and Livereload]]>

]]>
https://www.bengladwell.com/assembling-a-frontend-stack-part-4-bootstrap-less-and-livereload/6418efdb9927d60001ebbe00Sun, 31 Aug 2014 03:01:16 GMT

Edit: Some of the material in the Assembling a Frontend Stack posts is outdated. Specifically, I no longer use Bower in any of my projects.

Bootstrap - save me from making decisions about design

I'm a developer, so I love a design framework like Bootstrap. I'm told that not all designers feel the same way. Whatever. We're using it.

Adding Bootstrap to our build

You already installed Bootstrap in part 2 when you installed the Bower components. Running the bower gulp task also includes Bootstrap's Javascript file in vendor.js.

We will make sure Bootstrap's CSS is included in the following section when we tackle Less. For now, let's just add a simple gulp task that will copy the Bootstrap fonts (read: icons) into a useful place.

Update your gulpfile:

"use strict";

var gulp = require('gulp'),
  mbf = require('main-bower-files'),
  concat = require('gulp-concat'),
  handlebars = require('gulp-handlebars'),
  wrap = require('gulp-wrap'),
  browserify = require('gulp-browserify'),
  jshint = require('gulp-jshint');
  
gulp.task('handlebars', function () {
  return gulp.src('src/hbs/**/*.hbs')
    .pipe(handlebars())
    .pipe(wrap('module.exports = Handlebars.template(<%= contents %>);'))
    .pipe(gulp.dest('src/js/templates/'));
});

gulp.task('browserify', ['handlebars'], function () {
  gulp.src(['src/js/app.js'])
    .pipe(browserify())
    .pipe(gulp.dest('public/js/'));
});

gulp.task('bower', function () {
  gulp.src(mbf({includeDev: true}).filter(function (f) { return f.substr(-2) === 'js'; }))
    .pipe(concat('vendor.js'))
    .pipe(gulp.dest('public/js/'));
});

gulp.task('jshint', function () {
  return gulp.src(['src/js/**/*.js', '!src/js/templates/**/*.js'])
    .pipe(jshint(process.env.NODE_ENV === 'development' ? {devel: true} : {}))
    .pipe(jshint.reporter('jshint-stylish'))
    .pipe(jshint.reporter('fail'));
});

gulp.task('bootstrap', function () {
  gulp.src('bower_components/bootstrap/fonts/*')
    .pipe(gulp.dest('public/fonts/vendor/bootstrap/'));
});

Run the task: gulp bootstrap.
With our Bootstrap icon fonts in place, we can move on to Less.


Less

CSS precompilers are great. If you haven't yet tried Less or Sass, you don't know what you're missing. A CSS precompiler is a must-have in any frontend stack.

Let's use Less. It's simple and its compiler is written in Javascript (Sass comes from the Rails world and is written in Ruby).

Adding Less to our build

Install the gulp plugin:
npm install --save-dev gulp-less

Create your first less file at src/less/main.less.

@import "../../bower_components/bootstrap/less/bootstrap.less";
@icon-font-path: "../fonts/vendor/bootstrap/";

This does nothing but include Bootstrap's main Less file and override one of its Less variables (because we put Bootstrap's icon fonts in a non-standard location). But you could import more of your own Less files here or just fill up this file with Less code.

Now let's add this to the build. Update your gulpfile.

"use strict";

var gulp = require('gulp'),
  mbf = require('main-bower-files'),
  concat = require('gulp-concat'),
  handlebars = require('gulp-handlebars'),
  wrap = require('gulp-wrap'),
  browserify = require('gulp-browserify'),
  jshint = require('gulp-jshint'),
  less = require('gulp-less');
  
gulp.task('handlebars', function () {
  return gulp.src('src/hbs/**/*.hbs')
    .pipe(handlebars())
    .pipe(wrap('module.exports = Handlebars.template(<%= contents %>);'))
    .pipe(gulp.dest('src/js/templates/'));
});

gulp.task('browserify', ['handlebars'], function () {
  gulp.src(['src/js/app.js'])
    .pipe(browserify())
    .pipe(gulp.dest('public/js/'));
});

gulp.task('bower', function () {
  gulp.src(mbf({includeDev: true}).filter(function (f) { return f.substr(-2) === 'js'; }))
    .pipe(concat('vendor.js'))
    .pipe(gulp.dest('public/js/'));
});

gulp.task('jshint', function () {
  return gulp.src(['src/js/**/*.js', '!src/js/templates/**/*.js'])
    .pipe(jshint(process.env.NODE_ENV === 'development' ? {devel: true} : {}))
    .pipe(jshint.reporter('jshint-stylish'))
    .pipe(jshint.reporter('fail'));
});

gulp.task('bootstrap', function () {
  gulp.src('bower_components/bootstrap/fonts/*')
    .pipe(gulp.dest('public/fonts/vendor/bootstrap/'));
});

gulp.task('less', function () {
  gulp.src('src/less/main.less')
    .pipe(less({
      compress: process.env.NODE_ENV === 'development' ? false : true
    }))
    .pipe(gulp.dest('public/css/'));
});

You can see that we are once again using the NODE_ENV environment variable to decide if we should compress the resulting CSS or not.

Run the Less task: gulp less
public/css/main.css now contains all of Bootstrap's CSS, as well as anything you define or import in main.less.


Build all this stuff automatically

Build tools really shine when they are set up to automatically rebuild your project as you update your source files. But we can take it one step further. With gulp, it is trivial to push those rebuilt files to your browser. I used to think, "Yeah big deal. I don't mind pressing Command+R to reload the page." Then I tried livereload in the build and I'll never go back. Speed!

Watch this

gulp's API includes gulp.watch - a simple way to execute actions when files change. Add this task to your gulpfile:

gulp.task('watch', ['handlebars', 'browserify', 'less'], function () {
  gulp.watch('src/js/**/*.js', [ 'browserify' ]);
  gulp.watch('src/less/**/*.less', [ 'less' ]);
  gulp.watch('src/hbs/**/*.hbs', [ 'handlebars' ]);
});

Run this task with
gulp watch

Notice that it first precompiles the templates, assembles the Javascript modules, and precompiles the Less files. That's because of the second parameter in the task function:
['handlebars', 'browserify', 'less']

It's a good idea to build everything once before we start watching. That way we can be sure that everything is in the state we expect.

This watch task instructs gulp to compile your handlebars templates when you edit and save one of your templates. gulp will reassemble your Javascript modules with your browserify task when you edit and save one of your module files. And it will precompile your Less... you get it.

Add livereload to the build

And now, the coup de grâce. (Yes).
Install the livereload plugin:
npm install --save-dev gulp-livereload

and update your gulpfile:

"use strict";

var gulp = require('gulp'),
  mbf = require('main-bower-files'),
  concat = require('gulp-concat'),
  handlebars = require('gulp-handlebars'),
  wrap = require('gulp-wrap'),
  browserify = require('gulp-browserify'),
  jshint = require('gulp-jshint'),
  less = require('gulp-less'),
  livereload = require('gulp-livereload');
  
gulp.task('handlebars', function () {
  return gulp.src('src/hbs/**/*.hbs')
    .pipe(handlebars())
    .pipe(wrap('module.exports = Handlebars.template(<%= contents %>);'))
    .pipe(gulp.dest('src/js/templates/'));
});

gulp.task('browserify', ['handlebars'], function () {
  gulp.src(['src/js/app.js'])
    .pipe(browserify())
    .pipe(gulp.dest('public/js/'));
});

gulp.task('bower', function () {
  gulp.src(mbf({includeDev: true}).filter(function (f) { return f.substr(-2) === 'js'; }))
    .pipe(concat('vendor.js'))
    .pipe(gulp.dest('public/js/'));
});

gulp.task('jshint', function () {
  return gulp.src(['src/js/**/*.js', '!src/js/templates/**/*.js'])
    .pipe(jshint(process.env.NODE_ENV === 'development' ? {devel: true} : {}))
    .pipe(jshint.reporter('jshint-stylish'))
    .pipe(jshint.reporter('fail'));
});

gulp.task('bootstrap', function () {
  gulp.src('bower_components/bootstrap/fonts/*')
    .pipe(gulp.dest('public/fonts/vendor/bootstrap/'));
});

gulp.task('less', function () {
  gulp.src('src/less/main.less')
    .pipe(less({
      compress: process.env.NODE_ENV === 'development' ? false : true
    }))
    .pipe(gulp.dest('public/css/'));
});

gulp.task('watch', ['handlebars', 'browserify', 'less'], function () {
  gulp.watch('src/js/**/*.js', [ 'browserify' ]);
  gulp.watch('src/less/**/*.less', [ 'less' ]);
  gulp.watch('src/hbs/**/*.hbs', [ 'handlebars' ]);
  livereload.listen();
  gulp.watch('public/**').on('change', livereload.changed);
});

We simply imported gulp-livereload and added two lines to the watch task:

livereload.listen();
gulp.watch('public/**').on('change', livereload.changed);

This will spawn a livereload server that will send any file that changes under the public/ directory to a listening browser. Wonderful.

How to make your browser listen? You can attempt to follow the instructions here.

But your server framework probably has some way of testing if it is running in development mode. If so, conditionally add this Javascript snippet to your HTML layout:

<script>document.write('<script src="http://' + (location.host || 'localhost').split(':')[0] + ':35729/livereload.js?snipver=1"></' + 'script>')</script>

And there you have it. gulp is an easy way to get the most out of the legion of frontend build tools and libraries out there. We really just scratched the surface, but if you want to go deeper, I think you'll find that most of this stuff is pretty well documented.

You can find the files described in these posts in this github repo.

]]>
<![CDATA[Assembling a frontend stack part 3: Handlebars and JSHint]]>

]]>
https://www.bengladwell.com/assembling-a-frontend-stack-part-3-handlebars-and-jshint/6418efdb9927d60001ebbdffSun, 31 Aug 2014 03:00:58 GMT

Edit: Some of the material in the Assembling a Frontend Stack posts is outdated. Specifically, I no longer use Bower in any of my projects.

Handlebars is a good templating solution

Handlebars seems to have won the day. I don't have many complaints. And someday soon we'll have HTMLBars and everything will be amazing.

Adding Handlebars to our build

Let's add Handlebars to our build.

First create a new Backbone view that will make use of a Handlebars template at src/js/views/CountriesTable.js like this:

(function () {
  "use strict";
  
  var TableView = require('./Table'),
    template = require('../templates/countries-table');
  
  module.exports = TableView.extend({

    render: function () {
      var rows = [ {
        name: 'Austria',
        capital: 'Vienna',
        region: 'Europe'
      }, {
        name: 'Belarus',
        capital: 'Minsk',
        region: 'Europe'
      }, {
        name: 'Barbados',
        capital: 'Bridgetown',
        region: 'North America'
      }, {
        name: 'Micronesia',
        capital: 'Palikir',
        region: 'Oceania'
      }];

      this.$el.html(template({rows: rows}));

      return this;
    }
    
  });
  
}());

You can see that we are extending our basic table view and using a Handlebars template to render the view.

Let's update src/js/app.js to append a CountriesTable view instead of a simple Table view.

(function () {
  "use strict";
  
  var $ = window.$,
    CountriesTableView = require('./views/CountriesTable');
  
  $(function () {
    $('body').append(new CountriesTableView().render().el);
  });
  
}());

Now let's create that Handlebars template. Create
src/hbs/countries-table.hbs:

<tr>
  <th>Country</th>
  <th>Capital</th>
  <th>Region</th>
</tr>
{{#each rows}}
<tr>
  <td>{{name}}</td>
  <td>{{capital}}</td>
  <td>{{region}}</td>
</tr>
{{/each}}

Notice that we required this template in our CountriesTable view like this: require('../templates/countries-table'). This is possible by using gulp to precompile our Handlebars templates and then stash them at src/js/templates/, where they will be accessible to our JS modules.

Setting up the Handlebars precompilation in gulp

First, install the gulp-handlebars plugin.
npm install --save-dev gulp-handlebars

This plugin precompiles Handlebars templates. It also provides options to transform the compiled templates in various ways, but none do exactly what we want: a single compiled template per file, simply assigned to module.exports.

For that, we'll use a simple gulp plugin: gulp-wrap
npm install --save-dev gulp-wrap
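
With both plugins in place, every file that lands in src/js/templates/ ends up as a tiny CommonJS module along these lines (the precompiled body is elided, and note that the wrapper assumes a global Handlebars runtime is loaded separately, e.g. from the vendor bundle):

// src/js/templates/countries-table.js (generated - don't edit by hand)
module.exports = Handlebars.template(/* precompiled template spec emitted by gulp-handlebars */);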

Now update your gulpfile.js to look like this:

"use strict";

var gulp = require('gulp'),
  mbf = require('main-bower-files'),
  concat = require('gulp-concat'),
  handlebars = require('gulp-handlebars'),
  wrap = require('gulp-wrap'),
  browserify = require('gulp-browserify');
  
gulp.task('bower', function () {
  gulp.src(mbf({includeDev: true}).filter(function (f) { return f.substr(-2) === 'js'; }))
    .pipe(concat('vendor.js'))
    .pipe(gulp.dest('public/js/'));
});

gulp.task('browserify', ['handlebars'], function () {
  gulp.src(['src/js/app.js'])
    .pipe(browserify())
    .pipe(gulp.dest('public/js/'));
});

gulp.task('handlebars', function () {
  return gulp.src('src/hbs/**/*.hbs')
    .pipe(handlebars())
    .pipe(wrap('module.exports = Handlebars.template(<%= contents %>);'))
    .pipe(gulp.dest('src/js/templates/'));
});

Notice that we have a new task called handlebars.
In addition, the browserify task now has a second parameter: ['handlebars']. This means that the browserify task now depends on the handlebars task and will run it before starting. Run the browserify task to both precompile the handlebars templates and assemble the Javascript files.
gulp browserify

public/js/app.js now contains the code from the following files

  • src/js/app.js
  • src/js/views/CountriesTable.js
  • src/js/views/Table.js
  • src/js/templates/countries-table.js (which was compiled from src/hbs/countries-table.hbs)

JSHint

And now for some Javascript linting. Linting catches lots of bugs before you even run your code; very handy.
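
For example, with the undef option from the config below turned on, a typo like this fails the jshint task instead of throwing a ReferenceError in the browser:

function area(width, height) {
  return width * heigth;   // jshint (undef): 'heigth' is not defined.
}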

Create a .jshintrc file

You need a .jshintrc file in the root directory of your project. You can pick whatever options you want. Here's one that I like; it's pretty strict.

{
    "maxerr"        : 50,

    "bitwise"       : true,
    "camelcase"     : false,
    "curly"         : true,
    "eqeqeq"        : true,
    "forin"         : true,
    "immed"         : true,
    "indent"        : 2,
    "latedef"       : true,
    "newcap"        : true,
    "noarg"         : true,
    "noempty"       : true,
    "nonew"         : true,
    "plusplus"      : true,
    "quotmark"      : false,
    "undef"         : true,
    "unused"        : true,
    "strict"        : true,
    "maxparams"     : false,
    "maxdepth"      : false,
    "maxstatements" : false,
    "maxcomplexity" : false,
    "maxlen"        : false,

    "asi"           : false,
    "boss"          : false,
    "debug"         : false,
    "eqnull"        : false,
    "es5"           : false,
    "esnext"        : false,
    "moz"           : false,
    "evil"          : false,
    "expr"          : false,
    "funcscope"     : false,
    "globalstrict"  : false,
    "iterator"      : false,
    "lastsemic"     : false,
    "laxbreak"      : false,
    "laxcomma"      : false,
    "loopfunc"      : false,
    "multistr"      : false,
    "proto"         : false,
    "scripturl"     : false,
    "shadow"        : false,
    "sub"           : false,
    "supernew"      : false,
    "validthis"     : false,

    "browser"       : true,
    "couch"         : false,
    "devel"         : false,
    "dojo"          : false,
    "jquery"        : false,
    "mootools"      : false,
    "node"          : false,
    "nonstandard"   : false,
    "prototypejs"   : false,
    "rhino"         : false,
    "worker"        : false,
    "wsh"           : false,
    "yui"           : false,

    "globals"       : {
      "require": true,
      "module": true
    }
}

Add JSHint to the build

First install the gulp plugin (and a nice formatter):
npm install --save-dev gulp-jshint jshint-stylish

Next update your gulpfile:

"use strict";

var gulp = require('gulp'),
  mbf = require('main-bower-files'),
  concat = require('gulp-concat'),
  handlebars = require('gulp-handlebars'),
  wrap = require('gulp-wrap'),
  browserify = require('gulp-browserify'),
  jshint = require('gulp-jshint');
  
gulp.task('handlebars', function () {
  return gulp.src('src/hbs/**/*.hbs')
    .pipe(handlebars())
    .pipe(wrap('module.exports = Handlebars.template(<%= contents %>);'))
    .pipe(gulp.dest('src/js/templates/'));
});

gulp.task('browserify', ['handlebars'], function () {
  gulp.src(['src/js/app.js'])
    .pipe(browserify())
    .pipe(gulp.dest('public/js/'));
});

gulp.task('bower', function () {
  gulp.src(mbf({includeDev: true}).filter(function (f) { return f.substr(-2) === 'js'; }))
    .pipe(concat('vendor.js'))
    .pipe(gulp.dest('public/js/'));
});

gulp.task('jshint', function () {
  return gulp.src(['src/js/**/*.js', '!src/js/templates/**/*.js'])
    .pipe(jshint(process.env.NODE_ENV === 'development' ? {devel: true} : {}))
    .pipe(jshint.reporter('jshint-stylish'))
    .pipe(jshint.reporter('fail'));
});

The added jshint task makes use of a few plugin options, which you can read about here.

We also add the devel: true option when the NODE_ENV environment variable is set to "development". This allows you to leave stuff like console.log() statements in your code during development.


We're ready to move on to Bootstrap, Less, and livereload.

]]>