Stampit 0.2 Released

Stampit lets you create objects from reusable, composable behaviors. Instead of pretending that JavaScript is class-based, stampit embraces the power and flexibility of prototypes (see “Three Different Kinds of Prototypal OO”). It produces object factories which you can then compose to create more factories. When it’s time to instantiate your objects, just call the function that stampit returns. Out pops an object, with all the properties, methods, and private state you composed.

But it gets even better. Now Stampit makes it even easier to compose exactly the factory you want. You can chain `.methods()` and `.state()`, and it will combine the last object you passed into them with all the previous objects you’ve passed in.

For example:

var obj = stampit().state({
  foo: {bar: 'bar'},
  stateOverride: false
}).state({
  bar: 'bar',
  stateOverride: true
}).create();

obj.foo.bar; // bar
obj.bar; // bar
obj.stateOverride; // true
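If you're curious what that combination does conceptually, the chained calls behave like a last-write-wins shallow merge of everything you pass in. Here's a rough plain-JavaScript sketch of those merge semantics (illustrative only, not stampit's actual implementation):

```javascript
// A rough, last-write-wins shallow merge.
// (Illustrative only -- not stampit's actual implementation.)
function mergeState(objects) {
  return objects.reduce(function (merged, obj) {
    Object.keys(obj).forEach(function (key) {
      merged[key] = obj[key];
    });
    return merged;
  }, {});
}

var state = mergeState([
  { foo: { bar: 'bar' }, stateOverride: false },
  { bar: 'bar', stateOverride: true }
]);

state.stateOverride; // true -- the later object wins
```

Note how `stateOverride` ends up `true`: the object passed in last wins for any colliding keys.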

Download the latest:

$ npm install stampit

You can also grab it directly from Github.

JavaScript, Node

Getting Started With Node and Express

Programming JavaScript Applications

This post is an excerpt from my new book, “Programming JavaScript Applications”. Enjoy!

Node is a server-side JavaScript environment with many attractive features:

  • A fast JavaScript engine (built on V8).
  • Asynchronous by default philosophy (nothing should block).
  • Event-loop design (much like the browser environment).
  • Networking as a first class citizen (create production capable servers with few lines of code).
  • A highly usable streams API.
  • A large, rapidly growing developer community.
  • A simple, CommonJS-based module solution that guarantees module encapsulation (your var declarations are limited to module scope).
  • A developer-friendly package management system with thousands of open-source packages to choose from.

Some of these features might take some getting used to if you are accustomed to server-side environments that allow features such as blocking I/O and a single thread per connection (for convenient state management). However, you’ll find that the incredible performance boost achieved by non-blocking request/response cycles is well worth the learning effort.

Don’t underestimate the value of the asynchronous by default philosophy. That is the key to Node’s incredible performance in production environments.

Where other environments force users to wait in line while files load or network operations take place, Node fires off the request and keeps accepting new connections and executing other code paths while the asynchronous event does its work in the background.

Processes can spend an incredible amount of time waiting for file reads and network requests, especially if they encounter an error. Node just keeps cruising along. It’s like getting out of congested city streets with stop lights at every block and on to an open freeway. Node isn’t fast simply because of the performance of the V8 JavaScript engine (though that does help). It’s fast because it doesn’t waste time waiting around for things to happen.
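You can watch that scheduling behavior in a few lines. In this sketch, the synchronous code path runs to completion before the asynchronous callback gets its turn:

```javascript
// Node keeps executing other code while asynchronous work is pending.
var order = [];

order.push('fire off async request');

setTimeout(function () {
  // Runs on a later tick of the event loop, once the current
  // code path has finished.
  order.push('handle async result');
}, 0);

// Execution does not wait for the timeout above:
order.push('keep accepting connections');
```

Right after this code runs, `order` contains only the two synchronous entries; the callback's entry arrives on a later tick.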

There are other platforms that share some of Node’s performance characteristics: Twisted Python and Tornado spring to mind. They’re fast for the same reason. However, even though they are more mature, they can’t compete with the active membership of the JavaScript developer community.

Node comes packaged with a module management solution called npm. It gives you access to a package registry stocked with thousands of open-source packages, and makes it very easy for you to contribute your own, or use a private git repository for proprietary work. Of course, it’s easy to mix and match open-source and proprietary packages in a single application.

Installing Node and Express

First, make sure you have node installed. There are installers available from the Node homepage, but I like to use nvm so that I can easily switch between different versions of node. To install node with nvm:

$ curl https://raw.github.com/creationix/nvm/master/install.sh | sh

For more on nvm, check out the docs on the Github repository.

With node installed, you’ll need to create a new directory for your project:

$ mkdir my-first-project
$ cd my-first-project

Then initialize your project:

$ npm init

Express is currently the most popular application framework for Node. It’s easy to learn and use, and it has a vibrant developer community. If you’re going to build applications in Node, chances are you’ll eventually use express. There’s no time like the present to get started. Install express:

$ npm install --save express

That’s it. You’re ready to get started!

Node Tips

If this is your first time using Node and Express, it might be helpful to see what some of the community believes are the current set of best practices. Node Bootstrap aims to show new users some common practices in the Node / Express community, using Twitter Bootstrap. Among other things, there’s an example of using the cluster module to manage multiple instances of the server (utilizing all available CPU cores).

Organizing Files in Node

It’s a good idea to follow the emerging file organization trends in existing, popular Node repositories. That way, anybody familiar with Node should be able to find their way around your repository. Here are some common file locations:

  • Main ./index.js, ./server.js, or ./yourentryfile.js in the root.
  • Supporting files in ./lib/
  • Static http files in ./public/
  • Views or templates in ./views/
  • Command-line executables in ./bin/
  • Tests in ./test/ (or ./spec/ if you're a Jasmine Kool-Aid drinker)
  • Npm Scripts in ./scripts/
  • Config in ./config/
  • Documentation in ./doc/
  • Examples in ./examples/
  • Performance analysis in ./benchmarks/
  • Native C/C++ source in ./source/

The npm repository serves as a good example.

Node Libraries

Some of my favorite Node libraries include:

  • Mout – Like Underscore / LoDash. Stuff that should probably be included in JavaScript.
  • Express – Web application framework.
  • Qconf – Application config.
  • Hogan – Mustache for Express.
  • Superagent – Communicate with APIs.
  • Socket.io – Realtime communications (websockets).
  • Q – Promises.
  • Async – Asynchronous functional utilities.
  • Bunyan – Logging.
  • Tape – Testing.
  • Cuid – Better than guid/uuid for web applications.
  • Credential – Easy password hashing and verification.
  • Sails – Rapid application prototyping with socket.io and MVC on Express.
  • Node-http-proxy – Proxy your service APIs.


Don’t include configuration data in your app repository (including secrets, paths to file locations, server hostnames, etc.).

Instead, set up environment files with examples for sane defaults. Check in the examples, but don’t check in the actual configuration. Following this rule of thumb will make deployment / ops support for the app a lot easier.

Check an example file into your app repo:
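For example, an `s3.env.example` with placeholder values (the variable names here are illustrative):

```shell
# s3.env.example -- safe to commit; real values go in s3.env
export S3_API_KEY=YOUR_API_KEY
export S3_SECRET=YOUR_SECRET
```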



Then copy it and fill in the real values when you install the app:

$ cp s3.env.example s3.env

Use a package like nconf to make the environment variables available in your app.

Make sure that the real environment files get added to .gitignore so that you don’t accidentally check them into your repository.

Warning About State

One of the first stumbling blocks you might run into moving from browsers to Node is that you can’t rely on your closure state to be reserved for a single user. You have a single instance of the app, with a single pool of memory, and a potentially unbounded number of incoming connections.

State needs to be kept in a database, or passed as parameters through function calls. For example, each request in an Express application will have corresponding request and response objects. That may be a good place to store in-memory state for a single request/response cycle.

Likewise, singletons are a good way to store state that will be shared for all requests, such as your application configuration, but otherwise, they’re usually an anti-pattern in Node applications.


There are a lot of application frameworks available for Node. One popular framework that I find particularly useful is Express. It’s basically an HTTP server built on top of Node’s http module and Connect middleware.

Create your app

To create an express app instance, you’ll need to require express, and call the function that gets returned:

var express = require('express'),

  // Create app instance.
  app = express();


Express has a built-in app router. It’s pretty simple to use. First, request method names correspond to the methods you call to set up your route. GET is .get(). POST is .post() and so on. To create a route that will handle any request type, use .all().

Pass the route as the first parameter, and a function as the second parameter:

app.get('/', function (req, res) {
  res.setHeader('Content-Type', 'text/plain');

  res.end('Hello, world!');
});

Routes have easy parameter matching:

app.get('/:name', function (req, res) {
  var name = req.params.name;

  res.send('Hello, ' + name);
});

A route can be a regular expression:

app.get(/(Hugh|Kevin)/, function (req, res, next) {
  var name = req.params[0]; // Whitelisted user

  // Write something to output...

  res.send('Hello, ' + name);
});


Middleware

Middleware is software that takes an incoming request, processes it, and passes it on to the next piece of middleware in the chain. Express middleware takes the form:

// Add some data to the request object that your other
// middleware and routes can use.
app.use(function (req, res, next) {
  req.foo = 'bar';
  next();
});

Here’s how it works in the context of an express server:

'use strict';
var express = require('express'),

  // Create app instance.
  app = express(),

  // Use the `PORT` environment variable, or port 44444
  port = process.env.PORT || 44444;

// The new middleware adds the property `foo` to the request
// object and sets it to 'bar'.
app.use(function (req, res, next) {
  req.foo = 'bar';
  next();
});

app.get('/', function (req, res) {
  res.setHeader('Content-Type', 'text/plain');

  // Send the value passed from the middleware, above.
  res.end(req.foo);
});

app.listen(port, function () {
  console.log('Listening on port ' + port);
});

Point a browser at the new server, or just use curl:

$ curl http://localhost:44444/

Handling errors is just as simple. Again, you’ll use middleware:

'use strict';
var express = require('express'),

  // Create app instance.
  app = express(),

  // Use the `PORT` environment variable, or port 44444
  port = process.env.PORT || 44444;

// Some middleware that produces an error:
app.use(function (request, response, next) {
  var bar;

  try {

    // This will throw because `foo` is undefined.
    request.foo = foo.get('bar');

  } catch (error) {

    // Pass the error to the next error handler in the
    // middleware chain. If you forget `return` here,
    // it will continue to process the rest of the
    // function, and probably throw an unhandled exception.

    return next(error);
  }

  // Do something with bar.
  next();
});

// Tell express to process routes before it gets to the error handler.
app.use(app.router);

// Error handlers take four parameters. The first is the error.
// Generally, you'll want to add your error handler to the bottom of
// your app.use() stack.
app.use(function (error, request, response, next) {

  // Log the error.
  console.log(error);

  // Send the user a friendly message:
  response.send(500, 'Your request was not handled successfully. ' +
    'Our smartest fix-it guy has already been alerted. ' +
    'Contact us if you need help.');

  // Use setTimeout to give the app time to log and clean up,
  // but shut down ASAP to avoid unintended behavior.
  // Could also use setImmediate() in recent versions of Node.
  setTimeout(function () {
    process.exit(1);
  }, 0);
});

app.get('/', function (req, res) {
  res.setHeader('Content-Type', 'text/plain');

  // Sadly, nobody will ever see this friendly greeting.
  res.end('Hello, world!');
});

app.listen(port, function () {
  console.log('Listening on port ' + port);
});

You can clean up after a lot of errors. In fact, some errors are expected. For example, there’s a chance that a remote service won’t be available from time to time; you can recover from that condition and try again later. However, sometimes you just won’t get the answer you’re looking for, and there’s nothing you can do to recover. You don’t want to keep your server running with undefined state. In the case of errors that you can’t easily recover from, it’s important to shut down the process as quickly as possible.

Let it Crash

Processes crash. Like all things, your server’s runtime will expire. Don’t sweat it. Log the error, shut down the server, and launch a new instance. You can use Node’s cluster module, forever (a Node module available on npm), or a wide range of other server monitor utilities to detect crashes and repair the service in order to keep things running smoothly, even in the face of unexpected exceptions.


Express comes with some built-in handling of templates, but it must be configured. You have to tell Express which view engine to use in order to process templates, and where to find the views. First, you’ll want to require your template engine. For Mustache templates, you can use Hogan:

var hulk = require('hulk-hogan');

Most of the settings for express are specified with app.set(). You’ll need to use it to configure express to use the template engine of your choice. There are four options that you should be aware of:

// Tell express where to find your templates.
app.set('views', __dirname + '/views');

// By default, Express will use a generic HTML wrapper (a layout)
// to render all your pages. If you don't need that, turn it off.
app.set('view options', {layout: false});

// Tell express which engine to use.
app.set('view engine', 'hulk-hogan');

// Specify the extension you'll use for your views.
app.engine('.html', hulk.__express);

Remember to define a route that uses your new view. Assuming you’ve used your middleware to build a data object on the request object called req.data (see Middleware, above):

app.all('/', function (req, res) {
  res.render('index', req.data, function callback(err, html) {
    // Handle error.

    res.send(html);
  });
});

You can leave off the callback parameter and any errors will be internally passed via next(err) for your generic error handlers to catch. If you pass the callback, that automatic error handling will not occur, and you should handle the error explicitly.

Next Steps

Of course, you want to do a lot more with your app than return a hard-coded message to your users. The good news is that there are drivers for just about any database you can dream of. You can use a variety of template libraries, and of course, serve static files. I encourage you to dive into the Node module playground and take a look around.

For starters, here’s a simple static file server example using the built-in static middleware:

var express = require('express'),

    app = express(), // Create the express app.

    // Try pulling the port from the environment. Or
    // default to 5555 if no environment variable is set.
    port = +process.env.PORT || 5555;

// .bodyParser() parses the request body and creates the
// req.body object.
app.use( express.bodyParser() );

// .methodOverride() lets you simulate DELETE and PUT
// methods with POST methods. Common boilerplate.
app.use( express.methodOverride() );

// .static() creates a static file server, which looks for
// assets in the /public directory, in this case.
app.use( express.static(__dirname + '/public') );

// app.router handles path routing for express apps.
app.use( app.router );

// Express comes with a default error handler that is
// intended for development use. You'll want to implement
// your own for production systems.
app.use( express.errorHandler() );

app.listen(port, function () {
  console.log('Server listening on port ' + port);
});

Have a look at the Express guide and API reference for a lot more useful examples, and the Node Manual for Node API documentation. There are lots of useful gems that you’ll want to learn more about.


Fun With JavaScript Destructuring Assignment

By now you’ve probably heard that JavaScript is getting destructuring assignment in ECMAScript 6. It does just what it sounds like it does — it helps you extract values from structured data (such as arrays and objects). You can play with this right now. Just pop open Firefox, pull up the web console, and tinker (as of writing, these features are not yet supported in Chrome).

The most basic example:

var [a, b] = [1, 2];
a === 1; // true
b === 2; // true

You can return multiple values from a function:

var f = function f() {
  return [1, 2];
};

var [a, b] = f();
a === 1; // true
b === 2; // true

There are several other interesting examples on the New in JavaScript 1.7 MDN page. For example, the variable swap:

var a = 1;
var b = 3;
[a, b] = [b, a];

And a slightly more interesting rotation. Let’s make some candy-cane stripes:

var [a, b, c, d, e, f] =
  ['|   |', '|o  |', '|oo |', '|ooo|', '| oo|', '|  o|'];

for (var i = 0; i < 40; i++) {
  [a, b, c, d, e, f] = [b, c, d, e, f, a];
  console.log(a);
}

But I wrote this post to tell you about my favorite thing about destructuring assignment. You can use it to tame function parameters:

var currentSong = {
  title: 'Higher Love',
  artist: 'Depeche Mode',
  album: {
    title: 'Songs of Faith and Devotion',
    releaseDate: '1993'
  }
};

var logSongInfo = function logSongInfo({ title, artist,
    album: {releaseDate: year} }) {

  console.log('Title: ' + title + '\nArtist: ' + artist
    + '\nYear: ' + year);
};

logSongInfo(currentSong);

Title: Higher Love
Artist: Depeche Mode
Year: 1993

Note that we were able to deep dive into the object passed into logSongInfo, and even specify a new variable name (year) for the nested releaseDate. You don’t have to deep dive into an object to specify a new variable name:

var foo = function foo({ originalName : newName }) {
  console.log(newName);
};

foo({originalName: 'the value'}); // 'the value'

Those are some neat parlor tricks, but what’s the real practical value? Well, for one thing, it makes it easier to reason about objects. For example, say you have a group of babies, and you want to find all of George’s kids:


var babies = [
  {
    name: 'Landon',
    age: 1,
    parents: {
      father: 'Felix',
      mother: 'Ella'
    }
  },
  {
    name: 'Ruby',
    age: 2,
    parents: {
      father: 'George',
      mother: 'Betty'
    }
  }
];

babies.filter(function (baby) {
  return baby.parents.father === 'George';
}); // [{name:"Ruby", age:2, parents:{father:"George", mother:"Betty"}}]


var babies = [
  {
    name: 'Landon',
    age: 1,
    parents: {
      father: 'Felix',
      mother: 'Ella'
    }
  },
  {
    name: 'Ruby',
    age: 2,
    parents: {
      father: 'George',
      mother: 'Betty'
    }
  }
];

babies.filter(function ({ parents: { father } }) {
  return father === 'George';
}); // [{name:"Ruby", age:2, parents:{father:"George", mother:"Betty"}}]

Notice how the semantics are altered. In the first example, you specify the baby object as the formal parameter. In the destructuring example, the function signature tells you exactly what you’re interested in. I find this much more readable, personally.

I’m excited about this new addition to the JavaScript specification. If you don’t want to wait, destructuring assignment is one of several great features that already exist in CoffeeScript. I encourage you to try it out. It’s a bit like getting to explore the future of JavaScript today.

HTML5, JavaScript

h5Validate on cdnjs

h5Validate is now available for quick loading from cdnjs. h5Validate was developed in 2010 in order to bring HTML5 form validation features to browsers. At the time, no browsers supported the features natively. Now, almost all browsers have native support for these features, but the native UI is clunky and awkward for users.

h5Validate implements best practices based on a 1,000-user survey, several usability studies, and the behavior of millions of users in live production environments. h5Validate overrides awkward default browser behavior and allows you to more easily customize the user experience for the required and pattern attributes.

h5Validate on Github
Have a question or comment about h5Validate?


You’re Optimizing the Wrong Things

It seems like every time somebody suggests to the JavaScript world that there’s a different (often better) way to do things than the status quo, the knee-jerk response is to say that the old way is faster, and start pulling out perf scores. That’s all well and good, but by that standard, we should all be writing everything in C. The perf difference between one technique and another in JavaScript is just one of many considerations you should weigh while you’re working. Other major factors that should not be cast aside include developer productivity, code readability, application flexibility, and extensibility. Think about the trade-off before you make it.

Don’t get me wrong, it’s essential that we keep performance in mind while we’re developing web applications, but your app isn’t going to be noticeably impacted by your choice of switch…case vs method lookup, or your choice of inheritance patterns.

Maybe you’ve read Nicholas Zakas’ “High Performance JavaScript (Build Faster Web Application Interfaces)”, or Steve Souders’ “High Performance Websites: Essential Knowledge for Front-End Engineers” and “Even Faster Websites: Performance Best Practices for Web Developers”. You know all the perf tricks. You have a good understanding of which constructs are faster than others in JavaScript. That’s a really good start. You’re doing better than most.

The bottom line is this:

Know what to optimize. If you work too hard for a 1% gain, you miss 99% of your potential.

Concentrate on the big time wasters, first.

What slows down a typical modern web application?

For typical applications, your bottlenecks will be I/O-bound or render-bound, or sometimes memory-bound (much more common a couple of years ago, though even mobile devices are starting to pack a lot more RAM these days). Here’s my web application performance hit-list, in order of priority:

Too many network requests

Hitting the network comes with a high cost. If you’re having performance problems, look here first, beginning with your initial page load.

Your application start-up sequence is unavoidably going to leave a strong first impression with your users. If you make them wait, they may not stick around long enough to see all the results of all your hard work, and even after they have used your app, created an account, and made a time investment, every time you make them wait, you run the risk of driving them to your competitors.

The more separate requests you have to make, the worse it gets, because each request introduces latency. Lots of tiny requests basically turn into a latency queue. Who cares how you instantiate your objects if your page never gets to finish loading before the user hits the back button or closes your app before it’s done booting? YSlow is a great resource to help you optimize page load times.

Inefficient API endpoints can force a lot of separate requests. For example, I recently created a service that delivers a few different types of loosely related data. Sometimes, you may want to fetch each different type individually. You may need one type, but not another. And then there are times that you need more than one type. It’s good to let consumers request all the data they need with one request, or limit data by querying for the specific data that they need. Whether you’re working the full stack, or working along side API specialists, make sure that the needs of the API consumers are driving feature development in the API such that you’re optimizing to trim the number and size of requests based on actual API use cases.

API payload size is also a big factor. I’ve seen a lot of APIs that send down a lot of duplicate data, because they in-line objects that are shared between a potentially large number of parent objects. If you run into that situation, consider referencing those duplicated objects by ID, instead, and sending down a separate hash that you can look them up from. Consider including both object sets in a single request. It makes the client code a little more complicated, but if your API has an SDK (which I recommend), you can smooth that over, too.
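Here’s a minimal sketch of that ID-referencing approach (all field names here are invented for illustration):

```javascript
// Instead of inlining the same author object into every post,
// reference it by ID and ship one lookup hash alongside.
var payload = {
  authors: {
    a1: { id: 'a1', name: 'Ada' }
  },
  posts: [
    { id: 'p1', title: 'First post',  authorId: 'a1' },
    { id: 'p2', title: 'Second post', authorId: 'a1' }
  ]
};

// Client side, resolving the reference is a cheap lookup:
function getAuthor(payload, post) {
  return payload.authors[post.authorId];
}

getAuthor(payload, payload.posts[1]).name; // 'Ada'
```

The author object travels over the wire once, no matter how many posts reference it, and an SDK can hide the lookup entirely.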

It’s harder to say what’s faster for page loads – injecting data into the initial HTML page-load, or building a configuration endpoint that you can asynchronously fetch data from. The former can hurt page caching, and the latter will introduce another request, and add latency overhead. That’s a question you’ll need to answer based on the needs of your particular application. Most of the time I inject data into the initial HTML page-load, but that doesn’t mean that’s the right choice for everyone. However, it should be easy to profile both methods and go with the one that works best for you.


  • Compile and compress scripts to deliver a single JavaScript payload. See Browserify for Node-style modules in the browser, or r.js for AMD modules, and UglifyJS for compression. You’ll find Grunt tasks to automate all of that.
  • Compile and compress CSS.
  • Consider a web font instead of a bunch of small image icons. Bonus – those web fonts can be styled and scaled a lot better than .png files.
  • Make sure expires headers and ETags are set correctly on your servers.
  • If there are parts of your app that relatively few users take advantage of in a typical session (for instance, administration consoles, settings, or user profile sections), consider splitting the assets for them into separate payloads, instead of loading them all at startup.
  • Optimize API endpoints to minimize number of requests and payload size.

Page reflows

Page reflows can cause your app to feel unresponsive and flickery. This can be especially problematic on low-powered mobile devices. There are various causes of reflows, including late-loading CSS, changing CSS rules after page render, image tags without specified dimensions, render order in your JavaScript, etc…


  • Load your CSS in the head (avoid reflows as CSS gets loaded).
  • Specify image tag dimensions to avoid reflows as images load in the page.
  • Load (most) JavaScript at the bottom (less important with client-side rendering).
  • When rendering collections in JavaScript, add items to the collection while the list DOM is detached, and attach it only after all items have been appended. The same goes for collections of sub-views if many sub-views are used to build your complete page.

Too many DOM selections

The top jQuery newbie sin: failing to cache a selection that’s being used repeatedly. Raw metal people, you’re not off the hook. This also stands for direct consumers of the DOM Selectors API, like Document.querySelector() and Document.querySelectorAll().



Try to minimize your DOM access. Cache your selection at the beginning, and then use that cache for further access:

var $foo = $('.foo');

You can also reduce your reliance on selections by separating more of your logic from your DOM interactions. Once you move from building basic websites to building full-fledged applications, you should probably be using something like Backbone.js to help you keep concerns decoupled. That way, you won’t have your data collection algorithms dependent on DOM selections, and vice versa — a common issue that slows down a lot of poorly organized applications.

Too many event listeners

You’ve probably heard this a million times already, but infinite scrolling is getting really popular, so you’d better pay attention this time: Stop binding event listeners to every item in a collection. All those listeners consume resources, and are a potential source of memory leaks.


Delegate events to a parent element. DOM events bubble up, allowing elements earlier in the hierarchy to hook up event handlers.

Blocking the event loop (for the less common CPU-bound cases)

Once in a while you’ll need to do some processing, number crunching, or collection management that could take some time. JavaScript executes in a single event loop, so if you just dive into a loop, you can freeze up everything else until that processing is done. It’s particularly distressing to users when the UI stops responding to their clicks.

Node programmers should go to lengths to avoid any blocking I/O or collection processing that would happen on every connection. It can cause connection attempts to go unanswered.


  • First, make sure you’re using an efficient algorithm. Nothing speeds an application up like selecting efficient algorithms. If you’re blocking in a way that is impacting the performance of your application, take a close look at the implementation. Maybe it can be improved by selecting a different algorithm or data structure.
  • Consider timeslicing. Timeslicing is the process of breaking iterations of an algorithm out of the normal flow control of the application, and deferring subsequent iterations to the subsequent loop cycles (called ticks). You can do that by using a function instead of a normal loop, and calling the function recursively using setTimeout().
  • For really heavy computations, you may need to break the whole job out and let it execute in a completely separate execution context. You can do that by spawning workers. Bonus — multi-core processors will be able to execute that job on a different CPU. Be aware that spawning workers has an associated overhead. You’ll want to be sure that you’re really getting a net win with this solution. For Node servers, it can be a great idea to spawn as many workers as you have CPU cores using the cluster module.
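The timeslicing idea from the list above can be sketched in a few lines, assuming a simple chunked iteration driven by `setTimeout()` (the function name and slice size are illustrative):

```javascript
// Process a large collection in small chunks, yielding back to the
// event loop between chunks so other events (requests, clicks) can
// be handled in the meantime.
function processInSlices(items, sliceSize, work, done) {
  var i = 0;

  function nextSlice() {
    var end = Math.min(i + sliceSize, items.length);

    for (; i < end; i++) {
      work(items[i]);
    }

    if (i < items.length) {
      // Defer the next slice to a later tick of the event loop.
      setTimeout(nextSlice, 0);
    } else {
      done();
    }
  }

  nextSlice();
}

// Usage: sum 1..1000 without hogging the event loop.
var numbers = [];
for (var n = 1; n <= 1000; n++) {
  numbers.push(n);
}

var total = 0;
processInSlices(numbers, 100, function (x) {
  total += x;
}, function () {
  console.log('Total: ' + total); // 500500
});
```

Each `setTimeout(nextSlice, 0)` gives the event loop a chance to service pending I/O and UI events before the next chunk runs.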


First, make sure your code works, and that it’s readable, flexible, and extensible. Then start optimizing. Do some profiling to figure out what’s slowing you down, and tackle the major choke points first: network and I/O, reflows, DOM selections, and blocking operations.

If you’ve done all of that, and you’re still having problems, do some more extensive profiling and figure out which language features and patterns you could swap out to gain the most from your efforts. Find out what’s actually causing you pain, instead of changing whatever you happen to know (or think) is faster.

Happy profiling!


Fluent JavaScript – Three Different Kinds of Prototypal OO

Note: I’ve presented this as a talk a couple of times. You can see the latest version from O’Reilly’s Fluent Conference: JavaScript and Beyond, 2013. The talk is called “Classical Inheritance is Obsolete: How to Think in Prototypal OO”.

In order to claim fluency in JavaScript, it’s important to understand how JavaScript’s native inheritance capabilities work. This is an often neglected area of JavaScript writing and learning, but understanding it can be dramatically empowering.

JavaScript is one of the most expressive programming languages ever created. In particular, its combination of delegate prototypes, runtime object extension, and closures allows you to express three distinct kinds of prototypes in JavaScript. Let’s take a closer look at each of them.

Delegation / Differential Inheritance

A delegate prototype is an object that serves as a base for another object. When you inherit from a delegate prototype, the new object gets a reference to the prototype. When you try to access a property on the new object, it checks the object’s own properties first. If it doesn’t find it there, it checks the prototype, and so on up the chain until it gets back to Object.prototype.

Method delegation is a fantastic way to preserve memory resources, because you only need one copy of each method to be shared by all instances. It’s also a great way to add capabilities at runtime to all objects which share a particular prototype.

There are a couple of ways to set up that relationship in JavaScript. The one you’re likely to see in a lot of books goes something like this:

function Greeter(name) {
  this.name = name || 'John Doe';
}

Greeter.prototype.hello = function hello() {
  return 'Hello, my name is ' + this.name;
};

var george = new Greeter('George');

var msg = george.hello(); // 'Hello, my name is George'

See JavaScript Constructor Functions vs Factory Functions and Stop Using Constructor Functions in JavaScript for my thoughts on why you should ignore this technique. I present it here only because it’s likely to be a familiar point of reference.

I prefer this:

var proto = {
  hello: function hello() {
    return 'Hello, my name is ' + this.name;
  }
};

var george = Object.create(proto);
george.name = 'George';
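Because the prototype is shared by reference, you can add a capability at runtime and every delegating instance picks it up immediately. A minimal sketch, repeating the setup above so it stands alone (the `smile` method is a made-up addition):

```javascript
var proto = {
  hello: function hello() {
    return 'Hello, my name is ' + this.name;
  }
};

var george = Object.create(proto);
george.name = 'George';

var sam = Object.create(proto);
sam.name = 'Sam';

// Extend the prototype after both instances already exist:
proto.smile = function smile() {
  return this.name + ' smiles.';
};

// Both objects delegate to proto, so both pick up the new method:
george.smile(); // 'George smiles.'
sam.smile();    // 'Sam smiles.'
```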

The one major drawback to delegation is that it’s not very good at storing state. In particular, if you try to store state as objects or arrays, mutating any member of the object or array will mutate the member for every instance that shares the prototype. In order to preserve instance safety, you need to make a copy of the state for each object.
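Here’s the shared-state pitfall in a minimal sketch (the `records` property is a made-up example):

```javascript
var proto = {
  records: [] // mutable state on the prototype -- danger!
};

var a = Object.create(proto);
var b = Object.create(proto);

// `a` has no own `records`, so the push lands on the
// shared array up the prototype chain:
a.records.push('first');

b.records; // ['first'] -- b sees the mutation, too

// The fix: give each instance its own copy of the state.
var c = Object.create(proto);
c.records = []; // own property shadows the prototype's array
c.records.push('second');

a.records; // still ['first'] -- c's state is now instance-safe
```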

Cloning / Concatenative Inheritance / Mixins

Prototype cloning is the process of copying the properties from one object to another, without retaining a reference between the two objects. Cloning is a great way to store default state for objects. This process is commonly achieved with methods like Underscore’s .extend() or jQuery’s .extend():

var proto = {
  hello: function hello() {
    return 'Hello, my name is ' + this.name;
  }
};

var george = _.extend({}, proto, {name: 'George'});
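There’s nothing magical about .extend(). If you’d rather avoid the library dependency, a simplified version is just a loop over own properties (a sketch, not Underscore’s actual implementation; it ignores edge cases like deep copies):

```javascript
function extend(target) {
  // Copy own enumerable properties from each source onto target.
  // Later sources win, which is what makes defaults overridable.
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments[i];
    for (var key in source) {
      if (source.hasOwnProperty(key)) {
        target[key] = source[key];
      }
    }
  }
  return target;
}

var proto = {
  hello: function hello() {
    return 'Hello, my name is ' + this.name;
  }
};

var george = extend({}, proto, {name: 'George'});
george.hello(); // 'Hello, my name is George'
```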

It’s common to see this style used for mixins. For example, Backbone users can make any object an event emitter by extending from Backbone.Events:

var foo = _.extend({
  attrs: {},
  set: function (name, value) {
    this.attrs[name] = value;
    this.trigger('change', {
      name: name,
      value: value
    });
  },
  get: function (name) {
    return this.attrs[name];
  }
}, Backbone.Events);

Closure Prototypes / Functional Inheritance

I’m cheating on the name for this one. It’s not really functional, and it’s not an object prototype. It’s a function prototype. Think of it as an alternative to a constructor / init function. It can be copied (inherited) from one factory to another, and combined with other functions like it to completely replace the need for super() (which is a code smell, and should be avoided).

Closure prototypes are functions that can be run against a target object in order to extend it. The primary advantage of this style is that it allows for encapsulation; in other words, you can enforce private state. Douglas Crockford called this style “functional inheritance” in his book, “JavaScript: The Good Parts”. It looks something like this (the same model as foo above, but with private attributes):

var model = function (secret) {
  var attrs = {
    secret: secret
  };

  this.set = function (name, value) {
    attrs[name] = value;
    this.trigger('change', {
      name: name,
      value: value
    });
  };

  this.get = function (name) {
    return attrs[name];
  };

  _.extend(this, Backbone.Events);
};

model.call(george, 'Top secret');

george.on('change', function (e) { console.log(e); });

george.set('name', 'Sam'); // Object {name: "name", value: "Sam"}
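The pattern doesn’t depend on Underscore or Backbone. Here’s a dependency-free sketch of a closure prototype (`plainModel` is a made-up name; the event emitter is left out to isolate the privacy mechanism):

```javascript
var plainModel = function () {
  var attrs = {}; // private: reachable only through the closures below

  this.set = function (name, value) {
    attrs[name] = value;
    return this;
  };

  this.get = function (name) {
    return attrs[name];
  };
};

var sam = {};
plainModel.call(sam);

sam.set('name', 'Sam');
sam.get('name'); // 'Sam'
sam.attrs;       // undefined -- the state really is private
```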

This is all well and good, but there’s an awful lot of jumping through hoops if you want to combine the techniques — so I wrote a little library to jump through the hoops for you. It’s called Stampit.


Create objects from reusable, composable behaviors.


  • Create functions (called factories) which stamp out new objects. All of the new objects inherit all of the prescribed behavior.
  • Compose factories together to create new factories.
  • Inherit methods and default state.
  • Supports composable private state and privileged methods.
  • State is cloned for each instance, so it won’t be accidentally shared.
  • For the curious – it’s great for learning about prototypal OO. It mixes three major types of prototypes:
    1. differential inheritance, aka delegation (for methods),
    2. cloning, aka concatenation/exemplar prototypes (for state),
    3. functional / closure prototypes (for privacy / encapsulation)
  • What’s the Point?

    Prototypal OO is great, and JavaScript’s capabilities give us some really powerful tools to explore it, but it could be easier to use.

    Basic questions like “how do I inherit privileged methods and private data?” and “what are some good alternatives to inheritance hierarchies?” are stumpers for many JavaScript users.

    Let’s answer both of these questions at the same time. First, we’ll use a closure to create data privacy:

    var a = stampit().enclose(function () {
      var a = 'a';
      this.getA = function () {
        return a;
      };
    });

    It uses function scope to encapsulate private data. Note that the getter must be defined inside the function in order to access the closure variables.

    Let’s see if that worked:

    a(); // Object -- so far so good.
    a().getA(); // "a"

    Yes. Got it. In both of these instances, we actually created a brand new object, and then immediately threw it away, because we didn’t assign it to anything. Don’t worry about that.

    Here’s another:

    var b = stampit().enclose(function () {
      var a = 'b';
      this.getB = function () {
        return a;
      };
    });

    Those `a`’s are not a typo. The point is to demonstrate that `a` and `b`’s private variables won’t clash.

    But here’s the real treat:

    var c = stampit.compose(a, b);
    var foo = c(); // we won't throw this one away...
    foo.getA(); // "a"
    foo.getB(); // "b"

    WAT? Yeah. You just inherited privileged methods and private data from two sources at the same time.

    But that’s boring. Let’s see what else is on tap:

    // Some more privileged methods, with some private data.
    // Use stampit.extend() to make this feel declarative:
    var availability = stampit().enclose(function () {
      var isOpen = false; // private

      return stampit.extend(this, {
        open: function open() {
          isOpen = true;
          return this;
        },
        close: function close() {
          isOpen = false;
          return this;
        },
        isOpen: function isOpenMethod() {
          return isOpen;
        }
      });
    });

    // Here's a mixin with public methods, and some state:
    var membership = stampit({
        add: function (member) {
          this.members[member.name] = member;
          return this;
        },
        getMember: function (name) {
          return this.members[name];
        }
      },
      {
        members: {}
      });

    // Let's set some defaults:
    var defaults = stampit().state({
      name: 'The Saloon',
      specials: 'Whisky, Gin, Tequila'
    });

    // Classical inheritance has nothing on this. No parent/child coupling.
    // No deep inheritance hierarchies. Just good, clean code reusability.
    var bar = stampit.compose(defaults, availability, membership);

    // Note that you can override state on instantiation:
    var myBar = bar({name: "Moe's"});

    // Silly, but proves that everything is as it should be.
    myBar.add({name: 'Homer'}).open().getMember('Homer');