Deploying Ember apps on the cheap with Dokku, Docker and Digital Ocean.

The rough idea

With Ember we are spoiled by the excellent ember-cli-deploy tool. Need to deploy somewhere? You can go shopping among its many supported deploy plugins. One company that has made deployment dead simple is Heroku. When I was looking to show off some local Ember apps I wanted something cheap and easy to set up. Heroku would be nice, but I think we can go cheaper.

Enter Dokku. It's a project aimed at providing Heroku compatibility by wrapping Herokuish, a Heroku-friendly Docker project. Dokku gives you a PaaS by fronting containers with an nginx proxy router. It has great settings and plugins that help you extend it for a number of use cases. Because Dokku can detect buildpacks and leverage Herokuish, we can deploy via a git push, using Heroku buildpacks, and get a deployed container. With buildpacks you don't actually need to know Docker or set up the container yourself.

The last piece of the puzzle is Digital Ocean. It provides affordable virtual machine hosting with an easy to understand interface and luckily for us a one-click install of Dokku on a droplet.

With this rough outline let’s get started.

Create your Ember project

Feel free to skip this step if you've already got an Ember project.

We’ll use a stock ember project.
1. Go into a fresh folder, and run ember init
2. Let’s make sure we’re tracking this in a git repo, run git init
3. Let’s commit the empty ember project:
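Those steps look like this in the terminal (the folder name is up to you):

```shell
mkdir my-app && cd my-app
ember init                    # scaffold a stock Ember project
git init                      # start tracking it in git
git add . && git commit -m "Initial commit"
```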

Setup your digital ocean droplet

Now let’s get your Dokku digital ocean droplet going.

  1. Login to Digital Ocean.
  2. Click ‘create droplet’.
  3. Click the “One-click apps” tab.
  4. Choose Dokku 0.8.0 on Ubuntu 16.04
  5. Choose a size at $5/mo (let’s keep this cheap!)
  6. Pick your preferred region
  7. Add your SSH keys if you have them; it'll make SSHing in easier.
  8. Pick 1 droplet, and pick a hostname if you like.
  9. That’s it! Click “Create”

Under Droplets, check that your droplet is being created.


You should get an IP address for your droplet, in my case it gave me 162.243.242.65. Go ahead and ssh into your newly created droplet.
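For example (substituting your droplet's IP):

```shell
ssh root@162.243.242.65
```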

Let's make sure Dokku is installed correctly by running:
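A quick sanity check (the one-click image installs the dokku command system-wide):

```shell
# run on the droplet, over ssh
dokku help
```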

which should return usage information and the available dokku commands.

Setup Dokku

In your browser, go to http://your.ip.address, with "your.ip.address" being the IP address of your Digital Ocean droplet from above.

You should see Dokku's setup screen.

Paste in your public key, which may be the same public key you use for services like GitHub, unless you've generated a different one. It might already be filled in if you supplied Digital Ocean with a public key for the droplet. Make sure something is actually pasted into the public key field; this page is only available once, and after clicking "Finish Setup" it's gone. If you are trying to keep this cheap and plan on only using an IP address, make sure you leave "virtualhost naming" unchecked.

If you ever need to change any of this configuration, you can do so while SSH'ed into your droplet via the dokku command, and by poring over the decently written Dokku documentation. Or ask me on Twitter; I might be able to help, too.

Click “Finish Setup” when everything is configured.

Gotchas

Before we continue let’s take care of a few gotchas.

Firewall

Your security concerns may differ, but so that I don't have to worry about the ports Dokku picks for the running applications, I'm going to go ahead and disable the firewall.
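On Ubuntu that's a single command (run as root on the droplet; this is the permissive route, so only take it if that trade-off suits you):

```shell
ufw disable
```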

It's not hard to manage the ports with ufw; if you're interested, you can read up on managing Ubuntu's firewall.

Memory swap

During your build you may run into memory issues that prevent it from finishing. Since we're going the cheap route I'm going to add some swap, but if you wanted to you could use a droplet with more memory.

Instructions are available via Dokku's advanced installation docs and Digital Ocean's guides.
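A sketch of what those guides walk through, adding a 2 GB swap file (the size is illustrative):

```shell
# run on the droplet as root: create, secure, and enable a swap file
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# persist across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```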

Creating your app on dokku

While we are still SSH'ed into our Digital Ocean box, let's go ahead and set up the application on Dokku.
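Creating the app is a single command; I'm naming it ember here to match the rest of this post:

```shell
# run on the droplet, over ssh
dokku apps:create ember
```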

Configure your Ember project for dokku

This is kind of the cool part. Because Dokku can be treated like Heroku, we can use the wonderful work the people at Heroku have done.

  1. First, let's install ember-cli-deploy by running ember install ember-cli-deploy.
  2. Now install ember-cli-deploy-build by running ember install ember-cli-deploy-build. This is the basic build plugin that takes care of the build process upon deployment.
  3. package.json will have been modified and config/deploy.js added. Let's commit these files.
  4. Dokku does its best to automatically determine the Heroku buildpack for a given application, but since ours is an Ember app it needs a bit more setup than a regular Node app. There are many ways to specify the buildpack for an app with Dokku, but I prefer the .buildpacks file, because then it's checked into git. In your project root run

    which should create a .buildpacks file with the buildpack URL inside. If the file already existed the buildpack URL should be added to the bottom.
  5. Commit your .buildpacks file.
  6. The next step is to tell our project where to deploy. Dokku follows Heroku's easy model of just git push. So we will add our Dokku Digital Ocean droplet to our git remotes by running

    With "your.ip.address" being your Digital Ocean droplet's IP address. Note: the user for the push is dokku, not root. After the IP address comes ":project-name", in our case ":ember". So if you're curious, the breakdown is dokku@your.ip.address:project-name.
  7. The last step is to deploy, run

    You should see lots of scrolling text, and after 3-4 minutes one of the last lines should give the URL where the app is available.

    Again, with “your.ip.address” being your droplet’s IP address.
    And there it is: we can see in the markup that we have an Ember application with our production build's fingerprinted .js files.
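The whole client-side sequence above can be sketched as follows; the buildpack URL shown is the commonly used community ember-cli buildpack and the IP is a placeholder, so substitute your own values:

```shell
# steps 1-3: install the deploy plugins and commit the changes
ember install ember-cli-deploy
ember install ember-cli-deploy-build
git add package.json config/deploy.js && git commit -m "Add ember-cli-deploy"

# steps 4-5: declare the Ember buildpack and commit it
echo "https://github.com/tonycoco/heroku-buildpack-ember-cli" >> .buildpacks
git add .buildpacks && git commit -m "Add buildpacks file"

# step 6: point a git remote at the droplet (user dokku, app name ember)
git remote add dokku dokku@your.ip.address:ember

# step 7: deploy
git push dokku master
```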

Bonus steps

Who wants to remember a random port number? Not me. So let's go ahead and swap that for something we can choose. Log in to your droplet via SSH.

If we wanted to access it on port 80 we would do:
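The exact subcommands vary by Dokku version; on 0.8 it looks something like this (the container port 5000 and the random host port are illustrative, so check dokku proxy:ports ember for your real values first):

```shell
# run on the droplet: map port 80, then drop the randomly assigned port
dokku proxy:ports-add ember http:80:5000
dokku proxy:ports-remove ember http:32771:5000
```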

Each command will reconfigure nginx, and after the second command you should be able to access the application on the given port.

That’s all folks

And that's it! Hopefully you were able to get your Ember application deployed. There are probably easier solutions, like just using Heroku itself, but it's nice to know there are options if you're on a budget. This setup can also scale with you for other projects, across other platforms, and help introduce you to the world of Docker. You can access any of the running containers that Dokku sets up for you, which is pretty neat if you absolutely need to tail some logs or access the environment directly for debugging.

Thanks

Thanks to the developers at dokku, herokuish, heroku, and ember-cli-deploy. This was made pretty easy thanks to the work done by people from these projects. ✌🏽❤️

EmberConf 2017, being realistically optimistic

Update: While I was drafting this post over the last couple of weeks, EmberConf 2017: State of the Union was posted on the Ember Blog. It provides better context, so go read that first.

I discovered Ember early on in the fabled times before ember-cli. Those were the days of globals, views, and lots of googling. As I've grown in my career I've watched carefully, and quite happily, as Ember has risen to a slew of new challenges we've seen on the web.

I was lucky enough to attend the first EmberConf, even before I moved to Portland. From that first EmberConf it seemed as if Ember was always on target, hitting home runs. ember-cli was announced at that first EmberConf, and it was obvious this ecosystem was going to be something special. In no particular order, the following years unveiled things like ES6 modules early on, component-driven design, the addon ecosystem, data down actions up, a powerful first-class router, HTMLBars, Glimmer, Glimmer 2, and a slew of other wonderful magic. Ember even managed to provide a realistic upgrade path from 1.x to 2.0. They said it, and then it happened.

It wasn't until watching meetup talks and seeing releases unfold that it became apparent that this was the year with maybe a few misses. Ember had scaled to a point where it had to slow down and consider the landscape as it continued to build the bark.

To nobody's surprise, things like pods, routable components, and angle-bracket components didn't make it. Glimmer turned out to include a regression in its initial render performance, and Glimmer 2 ended up being a total (amazing) rewrite. That's not to say there weren't huge wins this year. We saw Engines released into core, we have a quite-stable FastBoot nearing 1.0, and there is plenty of evidence of a growing ecosystem and community. We saw new design patterns like ember-concurrency emerge, which was mentioned in almost every EmberConf 2017 talk.

Yehuda made an interesting point about Ember's future, learning from this year's shortcomings. The way I interpreted it, Ember has matured to a point where instead of promising new features it has to be more realistic in its optimism. And instead of new features, by continuing to expose stable internals and primitives, the community is able to grow Ember's bark itself. The most obvious example of this was the announcement of Glimmer (as a library). This would not have been possible without the rewrite of Glimmer 2 as a standalone, drop-in replacement rendering engine. Within the existing walls of their Ember APIs, they built an entirely new "unopinionated" rendering engine that could stand alone. Godfrey is continuing this work with a new RFC that aims at exposing lower-level primitives of the Glimmer engine itself.

Every year EmberConf has given me confidence in the project's ability to tackle new problems. Despite a few missed deadlines, Ember continues to re-evaluate the landscape and adapt to be better suited for the challenges to come. Sometimes it's important to take a step back before you can continue forward, something I think Ember has been exceptional at.

I'll have a follow-up post with some of my favourite talks and specific takeaways from this year's conference.

Ember PSA: modules and scopes

I could have saved an hour of head scratching had I kept in mind a few basic principles. Hopefully this lesson of modules and shared scope will save someone else in the future.

Within Ember we get used to creating modules and exporting these object definitions.

We're lucky because these automatically get picked up and registered the way we need them. Components and models, for example, end up getting registered as factories so that we can easily create many instances on the fly.

We can see in super-component that we have a few properties, label and container. For each instance of the component we can do whatever we want to the label, and only that instance's label will be modified. However, when it comes to doing a pushObject (Ember's version of push, so that it can track changes) into the array, all instances receive the pushed value, since they are all pointing at the same array reference. The same applies if we modify properties on an object that was created in the module's object definition.
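The pitfall reads clearer in plain JavaScript. This is a hedged sketch rather than the post's original example: Object.create stands in for what Ember's factory roughly does, and the property names mirror the prose.

```javascript
// Properties on a shared definition object behave like properties on an
// Ember class definition: primitives are safe, object references are shared.
const definition = {
  label: 'default',
  container: [], // created once, when the module is evaluated
};

// roughly what a factory does: each instance delegates to the definition
function create() {
  return Object.create(definition);
}

const a = create();
const b = create();

a.label = 'a only';       // assignment adds an own property; b is untouched
a.container.push('oops'); // mutation reaches the one shared array

console.log(b.label);     // "default"
console.log(b.container); // ["oops"], every instance sees the push
```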

Another way to look at this is that we aren't maintaining changes to a string, since changes to a string produce a new string in JavaScript; strings are immutable. However, we can maintain the reference to an object or array and change the things it points to, i.e., add another object into the array while maintaining the reference to the array.

We can combat this in a few ways. When we do need a shared reference, be explicit and put it in module scope so that it's obvious, like containerInScope in the example.

When we don't want a shared reference, either pass it in on the component in the template, or use .set. When it's the component's responsibility to have a fresh instance available, set it explicitly on init like:
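In Ember that means calling this.set('container', []) inside init, after this._super(...arguments). Here's a hedged, framework-free sketch of the same idea:

```javascript
// Each instance gets its own fresh array during init, so nothing is shared.
const SuperComponent = {
  container: null, // declared on the definition, but only as a placeholder
  init() {
    // in a real Ember component: this._super(...arguments); this.set('container', []);
    this.container = [];
  },
};

const a = Object.create(SuperComponent);
a.init();
const b = Object.create(SuperComponent);
b.init();

a.container.push('only a');
console.log(b.container); // [], b has its own array
```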

While these lessons aren't specific to Ember, and essentially anybody exporting modules and relying on scoping could run into the same issue, I do feel Ember provides enough magic that it's easy to forget there are still basic JavaScript pitfalls.

Lastly, I can’t blame Ember for this. It’s documented:

Arrays and objects defined directly on any Ember.Object are shared across all instances of that object.

It’s a silly mistake but one that got the best of me and was a good chance to review what is actually retained across module boundaries, and how these files are (re)used.

Why I made Vouch, my own A+ promise library

JavaScript promises have been around for a bit (especially in terms of "internet years"). On both the frontend and the backend we use them for control flow, handling asynchronous craziness, and taming the once-dreaded callback hell. They've been so great that we are starting to explore many other ideas around asynchronicity and complex UIs.

But let's hold up for a second. We've all used promises, but how well do we really understand them? Myself, I thought I had a pretty solid understanding. After seeing a number of design patterns and using them as a first-class citizen in Ember for a while, what secrets could remain?

With that question in mind I went off to make my own promise library. I was lucky to find that Domenic had written a test suite that I could use to TDD my way through my little experiment. Now, if you look a bit closer at the test suite, you'll find folding of functions on functions to generate tests in a way that covers many different use cases. Yes, it's flexible, but it wasn't always the easiest to debug. My solution was to write a simple surface layer of quick gotcha tests that covered some of the baseline functionality I expected. These were easier to debug and quicker to fail in a way that was easy to reason about.

It wasn't as smooth sailing as I had expected. Early on I realized that there were a few architectural roadblocks, and I decided to branch off to explore other ideas within my existing boundaries. One of the awesome things about having a test suite was seeing how tackling one internal concept would unlock blocks of seemingly unrelated passing tests. Over the course of a few weeks I felt my understanding of promises and thenables level up.

Why stop there? So I decided to go through the motions of adding it to npm, setting up Travis CI, and playing with some package release tools. I have a few outstanding issues to publish transpiled versions for older versions of Node and for the browser. It would be great to submit it to the official A+ spec list of libraries, too. I'm also curious how well my implementation holds up performance-wise.

Alright, so it works and the test suite passes, but what was the point? I don't actually expect people to use it; in fact, I hope people don't. Vouch was simply a chance for me to get a peek at what goes into making a library and to appreciate the depths of the spec. And most obviously, I was able to level up my knowledge of thenables and promises. I would highly encourage anyone to borrow a test suite and test their implementation and understanding with a pet project; you might be surprised by what you'll learn.

If you like the idea behind vouch go ahead and give it a star!

Quickly serve html files from node, php, or python

There will be times when you need to serve some files through a browser, and instead of setting up a local instance of Apache or MAMP, it might just be easier to use something in the command line. I would recommend the Node option as it seems to have a few more options. Macs ship with Python and PHP, which makes those easier in some cases. Also, the PHP server will handle PHP files along with standard HTML.

Node

First install http-server globally via npm.
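Assuming npm is already available:

```shell
npm install -g http-server
```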

Then it’s as easy as
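Running it from the folder you want served:

```shell
http-server    # serves the current directory, on port 8080 by default
```

Pass -p to pick a port, e.g. http-server -p 8000.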

PHP

php -S <domain>:<port>

ie:  php -S localhost:8000  (note the capital S)

Python

python -m SimpleHTTPServer <port>

ie:  python -m SimpleHTTPServer 8000  (on Python 3: python -m http.server 8000)

js puzzle

I came across a tweet that had a puzzling bit of sample code.

Most of this made sense to me, except for the part where properties are assigned and are then either accessible or undefined. I had a hunch that it was related to something I blogged about previously.

Turns out that when using .call it actually returns an object. That first line is the equivalent of var five = new Number(5);. This means:
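The original snippet isn't reproduced here, so this is a sketch of the behavior being described:

```javascript
// equivalent to the .call trick: a Number object, not a number primitive
var five = new Number(5);

console.log(typeof five); // "object"
five.wheels = 4;          // objects can hold properties...
console.log(five.wheels); // 4
console.log(five + 1);    // 6, arithmetic unwraps it to a primitive
```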

While it's an object you can add your own properties, but as soon as it's autoboxed/primitive-wrapped by the ++, it loses its ability to hold those properties. This is shown by the fact that the instanceof and typeof values are now different:
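Sketching that difference (again, reconstructed rather than copied from the puzzle):

```javascript
var five = new Number(5);
five.wheels = 4;
console.log(five instanceof Number); // true, still the wrapper object

five++; // ToNumber() unwraps it: five is now the primitive 6

console.log(five instanceof Number); // false
console.log(typeof five);            // "number"
console.log(five.wheels);            // undefined, the properties are gone
```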

The rest of the puzzle is playing with the timing of return values and the difference of an assignment from number++ and ++number.

At least that’s the way I understand it, let me know if I’ve missed anything.

PhantomJS 1.9 and KeyboardEvent

PhantomJS is awesome, and one common use case is using it as a headless browser for running a test suite. I noticed that I was getting different results in tests where code relied on fabricating a KeyboardEvent and dispatching it on an element. Well, it looks like others have noticed that some of their events are missing, too. One proposed solution controls the type of event that is dispatched, but in all other cases I am pretty happy to use new KeyboardEvent(), and I would prefer not to write special code just to appease my tests.

As a workaround I did this:
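My original snippet isn't shown here, so this is a hedged sketch of the shape such a test-only shim can take; the function name and the fields set on the event are illustrative, and should match whatever your code actually reads:

```javascript
// If constructing a KeyboardEvent throws, as it does in PhantomJS 1.9,
// replace the global constructor with one built on the legacy
// createEvent/initEvent API.
function patchKeyboardEvent(global) {
  try {
    new global.KeyboardEvent('keydown');
    return; // native support works; leave it alone
  } catch (e) {
    global.KeyboardEvent = function (type, init) {
      init = init || {};
      var event = global.document.createEvent('Events');
      event.initEvent(type, true, true); // bubbles, cancelable
      event.keyCode = init.keyCode;
      event.which = init.keyCode;
      return event;
    };
  }
}
// call patchKeyboardEvent(window) in test setup, before any code
// fabricates keyboard events
```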

This could be pretty dangerous depending on your use case, but at least it’s isolated to your test. I wasn’t sure what other method to use, but if you have one I would love to hear it in the comments. Also, Phantom 2.x should fix this, but it wasn’t an option in this case.

Javascript: The Good Parts

It's been several months since I read JavaScript: The Good Parts, but I thought it was worth mentioning that this old classic is an excellent read. It also offers a more traditional look at JavaScript, which is important in understanding why certain changes are being made today.

We are really quite lucky that things like package managers, module loaders, and JavaScript features have matured to the point where they are being standardized, and browsers (as well as the spec) are iterating at a rate that is making the language more of a pleasure to use. There are also things being added that are difficult or impossible to polyfill, like WeakMap and generators, that will be fun to play with. Crockford's follow-up coverage, "The Better Parts", is I think best shown at the Nordic.js 2014 conference; check it out:

ps: his take on class as being a "bad part" is interesting. I don't currently have an opinion, since it makes sense how it works behind the scenes. I do like the idea of Object.create, and it's interesting how he views this as a security flaw; by not using this he didn't need to use Object.create, which made things even simpler. This might be a bit more of a "functional" approach, which is made easier with modules and exports.

A future without boot2docker, featuring Docker Machine

Docker has always had a few unofficially documented steps to get things going on non-Linux environments. It usually went along the lines of: if you aren't on Linux, get Linux. This is understandable, as Docker is using LXC behind the scenes, and that requires Linux. A lot of web developers are using Apple hardware with OS X, like myself, and probably felt like it was a little more setup than necessary. Projects like boot2docker made this way easier, but that only solved setting up a Docker host, or engine, for Windows and Mac OS X. What about all those cloud providers and the pre-built images offered by Digital Ocean, etc.?

Luckily, Docker saw this challenge and abstracted away, from the client, the setup of any Docker engine. It's called Docker Machine. They even provide migration steps for the boot2docker folk. Now we have an official resource that will work in tandem with Docker's future plans.

Getting things going on a Mac is easier than ever, and Docker provides a Toolbox installation that is easy to download and run as an installer. I prefer to avoid installers and, as much as possible, let Homebrew handle my dependencies and manage updates. The rest of this post assumes you have Homebrew installed (and if you don't, go get it; your life will get easier).

Easy Installation with Homebrew

Prerequisite: do you have Homebrew Cask installed? Cask lets you install installers via Homebrew.

With Homebrew Cask installed, in your terminal run:
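At the time of writing that meant (newer Homebrew releases spell this brew install --cask virtualbox):

```shell
brew cask install virtualbox
```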

VirtualBox will run the virtual machine that runs Linux, which will run Docker. Docker Machine supports other means of virtualization, but I've only used VirtualBox, as it's free and has been used for similar purposes by projects like Otto, Vagrant, and boot2docker.

With VirtualBox ready, we just need Docker Machine, in your terminal run:
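Another one-liner:

```shell
brew install docker-machine
```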

Now you should have access to docker-machine  on the command line, and we can go ahead and setup a virtual machine that docker can use. Let’s create a docker engine, in your terminal run:
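Here's a sketch using the free VirtualBox driver, naming the machine dev to match the rest of this post:

```shell
docker-machine create --driver virtualbox dev
```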

This will create a Docker machine named dev. You can take a look at the Docker machines at your disposal by running docker-machine ls in your terminal.

Now we have a virtual machine, configured with Docker, and running. If you restart your computer, or notice after running docker-machine ls that the STATE isn't Running, then all you need to do is run docker-machine start dev (in this case dev denotes the name of the engine).

The last step is to be able to actually execute commands against our Docker engine and do things like create containers. As Docker Machine lets you run commands against any number of Docker engines, whether multiple virtual machines or cloud instances, your local docker command needs to be wired up so that its commands are directed at the correct engine. This is an important distinction, so I'll repeat it: your local docker client, accessed via the docker command, is completely agnostic about the Docker engine it runs commands against. The commands could be running against an engine locally or in the cloud; it just needs to be set up to point at the right engine. This makes it really powerful, with one API to manage containers across a slew of engines.

Docker Machine makes setting up your docker client easy:
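That is, point your shell's environment at the dev engine:

```shell
eval "$(docker-machine env dev)"
```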

(dev refers to the Docker engine name that you can get from docker-machine ls). This command sets up environment variables that your local docker uses; to see exactly what the eval is running behind the scenes, put just docker-machine env dev in your terminal.

With all this setup, you should be able to type docker info and see all the information your local docker client has about its current Docker engine. At this point you're free to use the docker command and have fun with containerizing your apps.

Hope this made the Mac OS X with Docker setup a little clearer and easier, and provided a homebrew way of setting things up. If anything didn’t make sense, or if I need to fix something please let me know! Thanks and Happy Dockering!

Getting a handle on Ember’s templating and rendering story

I was at EmberConf 2015 where Tom and Yehuda announced Glimmer in their keynote. They showed off a dbmon implementation of Ember with HTMLbars. While many probably hadn't noticed the performance issues with basic renders before, this demo quite easily showed the problem and that it could be better. React was proof that a solution was already in hand. Luckily the work had already been happening, leading to an Apple-esque (maybe from Tom's days at Apple) announcement of the new rendering engine, Glimmer. Just like that, Ember was back in the rendering game as quickly as I had learned it had fallen behind.

Ember's templating has seen modest improvements in recent history, from removing the annoying metamorph script tags, to HTMLbars, to removing bindAttr and classBinding, and now a diffing engine whose implementation was arguably even better than React's. What was neat is that Ember was able to pull out the specific parts of the template that it could mark as "dynamic" and, when those values change, re-render only the parts that changed, recursively if necessary.

This all seemed like magic, and because the way you wrote your Handlebars syntax hadn't changed (much), it wasn't quite obvious what had changed behind the scenes to make Ember so much better. Luckily, some recent videos have come out that dive deep into the magic. So deep, in fact, that I might have to watch the "HTMLBars Deep Dive" video again to get a better idea of what is going on architecturally. Take a look at these videos, ordered from least to most granularity:

“Inside Glimmer” is a perfect precursor to the “HTMLBars Deep Dive”.

It's obvious that these are not easy problems to solve, and it's great to see Ember continue to evolve. These ideas aren't even completely original, and, giving credit to other frameworks, Ember is able to adapt them in a way that makes them even better. Thanks to everyone on the core team for their continued work on connecting all the pieces of Ember, making it a full-featured front-end stack.