Quickly serve html files from node, php, or python

There will be times when you need to serve some files to a browser, and instead of setting up a local instance of Apache or MAMP, it might just be easier to use something on the command line. I would recommend the node option as it has a few more features. Macs ship with Python and PHP, which makes those easier in some cases. Also, the PHP server will execute php files along with serving standard html.

Node

First install http-server globally via npm.

Then it’s as easy as
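The commands were presumably along these lines (the port in the last line is my example; http-server defaults to 8080):

```shell
npm install -g http-server   # one-time global install
http-server                  # serve the current directory (default port 8080)
http-server -p 8000          # or choose a port
```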

PHP

php -S <domain>:<port>

ie:  php -S localhost:8000 (note the capital -S; lowercase -s does something else entirely)

Python

ie:  python -m SimpleHTTPServer 8000
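For reference, SimpleHTTPServer is the Python 2 module name; on Python 3 the equivalent moved to http.server:

```shell
python -m SimpleHTTPServer 8000   # Python 2
python3 -m http.server 8000       # Python 3
```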

js puzzle

I came across a tweet that had this bit of puzzling sample code:


Most of this made sense to me, except for the part of the properties being assigned and then either accessible or being undefined. I had a hunch that it was related to something I blogged about previously.

Turns out that when using .call it’s actually returning an object. That first line is the equivalent of var five = new Number(5);. This means:

While it’s an object, you can add your own properties, but as soon as it’s autoboxed/primitive-wrapped by the ++, it loses its ability to hold those properties. This is shown by the fact that the instanceof and typeof values are now different:
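To make the object-to-primitive flip concrete, here is a sketch of the behavior described (the tweeted snippet itself isn’t shown here):

```javascript
// An object created via the Number constructor can hold properties:
var five = new Number(5);
five.prop = 'I am a property';

five instanceof Number; // true
typeof five;            // 'object'
five.prop;              // 'I am a property'

five++; // coerced to a primitive: five is now the number 6

five instanceof Number; // false
typeof five;            // 'number'
five.prop;              // undefined: primitives can't hold properties
```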

The rest of the puzzle plays with the timing of return values and the difference between an assignment from number++ and ++number.
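The postfix/prefix difference in a nutshell:

```javascript
// Postfix evaluates to the old value; prefix increments first:
var n = 5;
var post = n++; // post gets 5, n is now 6
var pre = ++n;  // n becomes 7 first, so pre gets 7
```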

At least that’s the way I understand it, let me know if I’ve missed anything.

PhantomJS 1.9 and KeyboardEvent

PhantomJS is awesome, and one common use case is using it as a headless browser for running a test suite. I noticed that I was getting different results in tests where code relied on fabricating a KeyboardEvent and dispatching it on an element. It looks like others have noticed that some of their events go missing, too. One proposed solution controls the type of event that is dispatched, but in all other cases I am pretty happy to use new KeyboardEvent(), and I would prefer not to write special code just to appease my tests.

As a workaround I did this:
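A hedged sketch of the kind of workaround meant here (the post’s actual snippet isn’t shown): since PhantomJS 1.9 can’t construct events with new KeyboardEvent(...), the test setup can fall back to the older document.createEvent API when the constructor is missing. The function name and copied fields are my own invention for illustration.

```javascript
// Install a stand-in KeyboardEvent constructor when the real one is absent.
function installKeyboardEventShim(win, doc) {
  if (typeof win.KeyboardEvent === 'function') return; // real support, leave it
  win.KeyboardEvent = function (type, init) {
    init = init || {};
    var event = doc.createEvent('Events'); // generic event as a stand-in
    event.initEvent(type, !!init.bubbles, !!init.cancelable);
    event.key = init.key;              // copy the fields the tests rely on
    event.keyCode = init.keyCode || 0;
    return event;
  };
}

// In a browser/test you would call: installKeyboardEventShim(window, document);
```

This is dangerous in the same way the post describes: it papers over a missing platform feature, so keep it isolated to the test environment.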

This could be pretty dangerous depending on your use case, but at least it’s isolated to your test. I wasn’t sure what other method to use, but if you have one I would love to hear it in the comments. Also, Phantom 2.x should fix this, but it wasn’t an option in this case.


Javascript: The Good Parts

It’s been several months since I read Javascript: The Good Parts, but I thought it was worth mentioning that this old classic is an excellent read. It also offers a more traditional look at javascript, which is important in understanding why certain changes are being made today.

We are really quite lucky that things like package managers, module loaders, and javascript features have matured to the point where they are being standardized, and browsers (as well as the spec) are iterating at a rate that is making the language more of a pleasure to use. There are also things being added that are difficult or impossible to polyfill, like WeakMap and generators, that will be fun to play with. Crockford’s follow-up coverage, “The Better Parts”, is I think best shown in his Nordic.js 2014 talk; check it out:

ps: his take on class as being a “bad part” is interesting. I don’t currently have a strong opinion, since it makes sense how it works behind the scenes. I do like the idea of Object.create, and it’s interesting how he sees this as a security flaw; by not using this he didn’t need to use Object.create either, which made things even simpler. This might be a bit more of a “functional” approach, which is made easier with modules and exports.

A future without boot2docker, featuring Docker Machine

Docker has always had a few unofficially documented steps for getting things going on non-linux environments. It usually went along the lines of: if you aren’t on linux, get linux. This is understandable, as docker uses LXC behind the scenes, and that requires linux. A lot of web developers are using Apple hardware with OS X, myself included, and probably felt like it was a little more setup than necessary. Projects like boot2docker made this way easier, but that only solved setting up a docker host, or engine, for Windows and Mac OS X. What about all those cloud providers and the pre-built images offered by Digital Ocean, etc.?

Luckily Docker saw this challenge and built a way to easily set up any docker engine from the client. It’s called Docker Machine. They even provide migration steps for the boot2docker folks. Now we have an official tool that will work in tandem with docker’s future plans.

Getting things going on a Mac is easier than ever, and Docker provides a Toolbox installation that is easy to download and run as an installer. I prefer to avoid installers and, as much as possible, let homebrew handle my dependencies and manage updates. The steps below assume you have homebrew already installed (and if you don’t, go get it; your life will get easier).

Easy Installation with Homebrew

Prerequisite – Homebrew Cask installed? Cask lets you install installers via homebrew.

With Homebrew Cask installed, in your terminal run:
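Presumably the command here was the cask install of VirtualBox:

```shell
brew cask install virtualbox
```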

VirtualBox will run the virtual machine that runs linux, which in turn runs Docker. Docker Machine supports other means of virtualization, but I’ve only used VirtualBox, as it’s free and has been used for similar purposes by projects like Otto, Vagrant, and boot2docker.

With VirtualBox ready, we just need Docker Machine; in your terminal run:
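Presumably something like the following (the docker client is included so the docker command itself is available locally):

```shell
brew install docker docker-machine
```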

Now you should have access to docker-machine on the command line, and we can go ahead and set up a virtual machine that docker can use. Let’s create a docker engine; in your terminal run:
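The create command was presumably the VirtualBox-driver variant, with dev as the machine name used throughout the rest of the post:

```shell
docker-machine create --driver virtualbox dev
```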

This will create a docker machine named dev. You can take a look at the docker machines at your disposal by running docker-machine ls in your terminal.

Now we have a virtual machine, configured with docker, and running. If you restart your computer, or notice after running docker-machine ls that the STATE isn’t Running, then all you need to do is run docker-machine start dev (where dev is the name of the engine).

The last step is to be able to actually execute commands against our Docker engine and do things like create containers. Docker Machine lets you run commands against any number of Docker engines, whether they are multiple virtual machines or cloud instances, so your local docker command needs to be wired up so that its commands are directed at the correct engine. This is an important distinction, so I’ll repeat it: your local docker client, the one you access with the docker command, is completely agnostic to the docker engine it is running commands against. The commands could be running against an engine locally or in the cloud; it just needs to be set up to point at the right engine. This makes it really powerful: one API to manage containers across a slew of engines.

Docker Machine makes setting up your docker client easy:
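The command in question is presumably the usual env/eval pairing:

```shell
eval "$(docker-machine env dev)"
```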

(dev refers to the docker engine name that you can get from docker-machine ls). This command sets up environment variables that your local docker client uses; to see exactly what the eval is running behind the scenes, put just docker-machine env dev in your terminal.

With all this set up, you should be able to type docker info and see all the information your local docker client has about its current docker engine. At this point you’re free to use the docker command and have fun containerizing your apps.

Hope this made the Mac OS X Docker setup a little clearer and easier, and provided a homebrew way of setting things up. If anything didn’t make sense, or if I need to fix something, please let me know! Thanks and Happy Dockering!

Getting a handle on Ember’s templating and rendering story

I was at EmberConf 2015 where Tom and Yehuda announced Glimmer in their keynote. They showed off a dbmon implementation in Ember with HTMLBars. While many probably hadn’t noticed the performance issues with basic renders before, this demo quite easily showed the problem and that it could be better; React was proof that a solution was already in hand. Luckily they had been hard at work, and with an Apple-esque flourish (maybe from Tom’s days at Apple) they announced the new rendering engine, Glimmer. Just like that, Ember was back in the rendering game as quickly as I had learned it was behind.

Ember’s templating has seen modest improvements in recent history, from removing the annoying metamorph script tags, to HTMLBars, to removing bindAttr and classBinding, and now a diffing engine whose implementation is arguably even better than React’s. What is neat is that Ember is able to pull out the specific parts of a template that it can mark as “dynamic” and, when those values change, re-render only the parts that changed, recursively if necessary.

This all seemed like magic, and because the way you write your handlebars syntax hadn’t changed (much), it wasn’t obvious what had changed behind the scenes to make Ember so much better. Luckily some recent videos have come out that dive deep into the magic. So deep, in fact, that I might have to watch the “HTMLBars Deep Dive” video again to get a better idea of what is going on architecturally. Take a look at these videos, ordered from least to most granularity:

“Inside Glimmer” is a perfect precursor to the “HTMLBars  Deep Dive”.

It’s obvious that these are not easy problems to solve, and it’s great to see Ember continue to evolve. These ideas aren’t even completely original, and, giving credit to other frameworks, Ember is able to adapt them in a way that makes them even better. Thanks to everyone on the core team for their continued work on connecting all the pieces of Ember, making it a full-featured front-end stack.

Monolithic docker containers for local development environments

This post has a companion github repo using wordpress as an example; feel free to take a look.

In agency work there isn’t the same liberty to deploy our lovely isolated docker containers. Often those environments are the client’s, and they just want the git repo and the mysql database. This does not excuse developers from doing everything possible to match the production environment in their local development environment.

Often what developers end up with is a working version of a stack consisting of apache, mysql, and php on their machine. Trying to add extensions or debug someone else’s stack is usually pretty difficult because everyone ends up doing it a different way. Couple this with the fact that working in one environment, with one set of binaries and configuration, often will not reflect production for every project. Often things work well enough that these shortcomings are ignored. Configurations to support multiple projects running out of sub-directories with one vhost, and .htaccess hacks, are often culprits of easily avoided bugs, too.

What is the solution? I think vagrant comes really close, but it’s a little too heavy and doesn’t do enough to abstract away things like the memory, storage, and networking of the vm. Essentially most people just want a container with exposed ports, mounted volumes, and an isolated environment, and that’s docker. Docker advocates splitting up your services across multiple containers, and that makes a lot of sense. However, I think it might be overkill for the basic php projects that a lot of web agencies get. I think there is a use case for docker running everything in a single, vm-esque container.

This single-container approach gives you a lot of advantages: tracking your Dockerfile (and any future changes to the docker image) in your git repo, being able to run Docker with your mounted project directory, and an overall quick and snappy setup. I have an example repo if you’re curious about how an implementation of this would work. Ideally, you would use composer or some other package manager to track the framework and its dependencies, leaving only a Dockerfile, your manifest file declaring your packages, and your application code in your repo.
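As a sketch of what such a monolithic Dockerfile might look like (the base image, package names, and paths here are illustrative guesses, not taken from the companion repo):

```dockerfile
FROM ubuntu:14.04

# Everything in one container, vm-style: apache, php, and mysql together
RUN apt-get update && apt-get install -y \
    apache2 php5 libapache2-mod-php5 mysql-server php5-mysql

# Mount your project here: docker run -v $(pwd):/var/www/html ...
VOLUME /var/www/html

EXPOSE 80 3306
CMD ["/bin/bash", "-c", "service mysql start && apache2ctl -D FOREGROUND"]
```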

Be careful with those primitives types

This is probably a refresher for most, but I was curious about how js handles typing. After all, we have the String, Number, and Boolean global objects, whose wonderful prototypes give us some really handy functions. So we can do something like:
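My guess at the sort of example meant here (the original isn’t shown):

```javascript
// Prototype methods are available directly on primitive values:
"hello, I'm talking loudly".toUpperCase(); // "HELLO, I'M TALKING LOUDLY"
(10.12345).toFixed(2);                     // "10.12"
(255).toString(16);                        // "ff"
```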

Neat, and because we have these global objects, we can augment their prototypes to give us access to extra methods on every instance. For example:
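The exclamation method referenced later in the post presumably looked something like this (the method name comes from the post; the body is my sketch):

```javascript
// Every string, primitive or object, now has .exclamation()
String.prototype.exclamation = function () {
  return this + '!';
};

"hello, I'm talking loudly".exclamation(); // "hello, I'm talking loudly!"
```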

Ember.js augments prototypes to make things a little easier and to quickly give access to methods without having to pass values into an Ember object. This is something that requires a lot of responsibility, as there is some overhead involved and generally you don’t want to surprise people who share the environment with you.

Augmenting prototypes is also useful to polyfill functionality that might not exist, like Array.map in older versions of IE.
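A minimal sketch of such a polyfill (real shims like es5-shim are more thorough about edge cases):

```javascript
// Only define map when the environment lacks it (e.g. old IE)
if (!Array.prototype.map) {
  Array.prototype.map = function (fn, thisArg) {
    var result = [];
    for (var i = 0; i < this.length; i++) {
      if (i in this) { // skip holes in sparse arrays
        result[i] = fn.call(thisArg, this[i], i, this);
      }
    }
    return result;
  };
}
```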

Prototypes also help with defining inheritance in javascript. We have useful operators like instanceof and typeof to help us make sense of these. Where things get tricky is when you have a value like "hello, I'm talking loudly" being a string primitive, but also having access to the String prototype, like how I added the exclamation method.

We would expect that, since we are using a method on the String prototype and "hello, I'm talking loudly" was able to access it, "hello, I'm talking loudly" instanceof String would equal true, but it doesn't. Oddly enough, typeof "hello, I'm talking loudly" equals "string", and new String("hello, I'm talking loudly") instanceof String equals true.

If all that seems a little confusing, it did to me, too. Here is a quick summary of what we’re looking at:
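The two forms side by side:

```javascript
var prim = "hello, I'm talking loudly";            // string primitive
var obj = new String("hello, I'm talking loudly"); // String object

typeof prim;            // 'string'
typeof obj;             // 'object'
prim instanceof String; // false
obj instanceof String;  // true
```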

What is happening is that when you use "quotes", or call the function without the new operator, you are actually working with the primitive. The primitive isn’t an Object and therefore can’t be an instance. When you aren’t using the new operator, the function simply returns the primitive. As you might recall, using the new operator on a function uses that function as a constructor and creates an instance that inherits from the function’s prototype. Check out the MDN resource for a better explanation, but essentially we’re dealing with an object.

How are we able to access these methods on a prototype then? There is something called “autoboxing”, more commonly known as “primitive wrapper types”, happening behind the scenes that wires up the methods of a literal to its appropriate wrapper object. You can also do things like this transparently, where these “objects” are handled appropriately:
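Presumably examples along these lines, where the wrapper objects are transparently converted back to primitives by the operators:

```javascript
var a = new Number(5) + 5; // 10
var b = new Number(5) * 2; // 10
```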

Interestingly, the typeof for each of these ends up being 'number'.

The way these work also affect comparison operators:
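Likely the kind of comparisons meant here (the originals aren’t shown):

```javascript
"hi" == new String("hi");             // true:  == coerces the object down
"hi" === new String("hi");            // false: different types
new String("hi") == new String("hi"); // false: two distinct objects
```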

In general, everything works the way you would expect, except when you’re dealing with matters of “type”. If what you have is a string, number, or boolean created with the new operator, you need to check instanceof String, Number, or Boolean. If it was created as a primitive, you need to check for typeof value returning 'string', 'number', or 'boolean'. You could also write a helper function that checks whether either returns true.

Check out this stackoverflow post for some good discussion about checking the types of strings, and why new shouldn’t be used considering the confusion and how unnecessary it is. While it might be in bad style, I think it illustrates some of the details behind javascript’s inner workings. Also, I remember reading old javascript examples in books that use the new operator to try to ease people in from OOP backgrounds, so you can’t really escape that this exists. There are some interesting reads on autoboxing/“primitive wrapper types” to check out, too.

Like many parts of javascript, there are always little gotchas that keep things interesting. Luckily, as standards move forward and as people create libraries/frameworks/polyfills to pave the cowpaths, we will end up with an easier way to write javascript. I hope this made sense, and if I made any mistakes or need to clarify anything, please let me know in the comments.

Raspberry Pi wifi sleep issue with Edimax EW-7811Un

Although I have been reasonably happy with my current airplay speaker setup, I ran into some issues where I couldn’t find the airplay speakers listed. I also couldn’t ssh in, and on the pi directly ifconfig wasn’t reporting an IP address either. I found myself having to continually run ifconfig wlan0 up.

Turns out the Edimax EW-7811Un wifi adapter on Raspbian has an issue with very conservative power management. A quick google search turned up this forum post that worked for me.

Essentially, create a text file at /etc/modprobe.d/8192cu.conf with the following contents (writing it may require sudo):
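The fix most commonly cited on the Raspberry Pi forums disables the driver’s power saving; the file contents were presumably along these lines:

```
# /etc/modprobe.d/8192cu.conf
# Disable power management and USB suspend for the 8192cu (Edimax) driver
options 8192cu rtw_power_mgnt=0 rtw_enusbss=0
```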

Save that, restart, and hopefully all is fixed.

Another small quirk, but still very happy with everything. In all this I also discovered that shairport-sync had been updated to 2.4 with some bugfixes, yay.

Finally, AirPlay Speakers

I’ve been toying with getting my airplay speakers set up with the right combination of pieces and parts over the past few years, and I’m happy to report that my tinkering has paid off. My last attempt, which I didn’t write up, involved using Rune Audio, an ambitious attempt to consolidate the OS, audio drivers, and media integrations (including airplay and spotify!), all wrapped in an easy-to-access web interface. Who wouldn’t want a one-stop install that gets everything set up with all the bells and whistles?

Unfortunately, even after pushing through to the latest on their master branch, I wasn’t able to get a solid experience out of Rune Audio. The UI was slightly buggy, and more frustrating was the dropping of airplay, and although it seems they might have fixed some of these issues, I wanted something more barebones. The more I thought about it, the more I realized I just wanted airplay. I didn’t need a web interface for this thing; I would use my phone or mac for controlling it, so much easier. If I want spotify, I’ll airplay spotify.

So, it was back to the drawing board. I started with a fresh install of raspbian, set up the wifi dongle, set the audio configuration to default to my USB DAC, and lastly set up shairport. My previous instructions were for shairport 1.0, but since then there’s a brand new fork in town and it’s awesome. It’s called Shairport Sync, and it supports multiple receivers (ie: a multi-room setup) so that every airplay receiver running shairport sync is kept, well, synchronized. I only have the one device, so this wasn’t something I really needed, but aside from this new feature it was just so much easier to install and set up. My previous instructions for the 1.x required some interpretation, but this setup worked perfectly following the github instructions.

Happy to say that my idea of a stable, custom AirPlay speaker setup is finally complete with a raspberry pi, raspbian, a USB audio DAC, a wifi dongle, and shairport sync.