Two-way data binding has entrenched itself in modern front-end development as a tool that lets you avoid boilerplate code when working with the DOM, concentrate on logic, and isolate that logic from your templates. All of Angular is built around this piece of technology. A pretty big part of Ember relies on it too. And for the Backbone framework, new extensions appear every day. It actually makes a lot of sense: two-way data binding is pretty awesome. However, the technology has its own issues, limitations and, most importantly, implementation details (which differ between frameworks). Why not speculate about it a little?
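To make the idea concrete, here is a framework-free sketch of what two-way binding boils down to. Everything here is illustrative: `twoWayBind` is a made-up helper, and `fakeInput` is a plain object standing in for a real DOM `<input>` so the snippet is self-contained.

```javascript
// Keep a model property and a (fake) input element in sync in both directions.
function twoWayBind(model, key, input) {
  var value = model[key];
  Object.defineProperty(model, key, {
    get: function () { return value; },
    set: function (v) {           // model -> view
      value = v;
      input.value = v;
    }
  });
  input.oninput = function () {   // view -> model (a real <input> fires this itself)
    model[key] = input.value;
  };
}

var model = { name: 'initial' };
var fakeInput = { value: '' };
twoWayBind(model, 'name', fakeInput);

model.name = 'from model';        // model -> view
console.log(fakeInput.value);     // "from model"

fakeInput.value = 'from view';    // simulate the user typing...
fakeInput.oninput();              // ...and the input event firing
console.log(model.name);          // "from view"
```

Frameworks differ mainly in how they detect the model-side change (dirty checking in Angular, observable properties in Knockout and Ember), but the contract above is the common core.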
This article briefly describes a little of the theory behind testing standalone front-end projects, the issues you are likely to meet, and the solution I came up with. Here’s the shortcut (https://github.com/inossidabile/grunt-contrib-testem) if you are already bored ;).
More than a year ago Peter Zotov and I released Heimdallr, a gem for controlling model security with a shiny DSL. It was an extraction from a huge API backend project, where it was used to consolidate access control, ease testing and keep controllers DRY.
Heimdallr was a proof-of-concept release, and while I really like the DSL part, I never had a chance to use it seriously anywhere else. It turned out to be way too paranoid, difficult, slow and sometimes even buggy; it was so stubborn it didn’t feel like the Ruby way at all. What’s even worse, it was incompatible with almost anything else that worked with ActiveRecord beyond really basic interaction, even with things like Kaminari.
Heimdallr, as a proof of concept, could afford to have technical issues. And the good thing about technical issues is that they can usually be solved. So I decided to make a Ruby-way clone of Heimdallr promoting the same idea (with a similar DSL) but with a really different implementation base and ideology.
And the first thing I fixed was the name: meet Protector.
If you have ever tried to unify development environments across a project team, you have probably heard of Vagrant. It integrates into the development process like a charm and works flawlessly. The chances that you stick with it once you have won the epic fight against provisioning are pretty high.
But unfortunately, the chances of winning that fight are not high at all.
There are two feature-rich options for provisioning: Chef and Puppet. From here on I will mean Chef (the most popular option) whenever I say “provisioning”.
Setting up a virtual environment with Chef is NOT an easy task. Chef lacks a centralized repository of recipes, and this results in a huge mess. There are, for example, at least ten Redis recipes with different configurations, and the top five Google results are outdated and will not even start. So while Chef in general is a great piece of technology, you had better be a qualified DevOps engineer with a set of ready, tested recipes to navigate its world comfortably.
And what about the rest of us developers? Recently I had a chance to help develop something that sorts this naughty provisioning out. On behalf of its author, Andrey Deryabin, let me present Rove, the Vagrant configuration service.
For the last six months I have been adapting ActiveAdmin to three projects with pretty different goals, and it was a great success for each of them. However, everything comes at a price: ActiveAdmin has an excellent DSL, but it lacks architectural quality and feature richness (mainly due to its extremely slow development progress).
The main goal of this post is to share my vision of the potential we could expect from administration frameworks. ActiveAdmin, in my opinion, is the first one that has finally found solid ground.
The blog post format is not the best place to gather all the issues (GitHub definitely is), so I’ll keep it short and address the main ones. After the “why I think ActiveAdmin is the true way” introduction, I’ll do a bit of interface nit-picking. That is probably the most interesting part for you, because you can grab all those tiny improvements and add them to your own ActiveAdmin integrations. The second part, on the other hand, describes fundamental architectural gaps and possible alternative implementations.
Turbolinks! This “award-winning” technology has earned an incredible amount of criticism in a very short time, yet it is still on the roadmap for Rails 4. As an evangelist of client-side frameworks I previously had no interest in it, but now life has suddenly brought us together. So let’s see whether it really is THAT bad, and if so, why.
Part 1. Well-known problems
Document ready event
Problems don’t keep you waiting. RailsCast #390 starts the marathon with the most popular issue: Turbolinks does not trigger the document ready event.
Code bound to the ready event runs only during direct page loads; the Turbolinks fetcher replaces the page without firing it again.
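A common workaround is to subscribe the same handler to both the initial load and the `page:load` event that the Turbolinks shipped with Rails 4 fires after each fetch. Here is a minimal, dependency-free sketch; `onEveryPageLoad` and `fakeDocument` are illustrative names, with the fake document standing in for the real one so the snippet can run anywhere.

```javascript
// Run `handler` on the initial full page load AND on every Turbolinks fetch.
function onEveryPageLoad(doc, handler) {
  doc.addEventListener('DOMContentLoaded', handler); // direct page load
  doc.addEventListener('page:load', handler);        // Turbolinks navigation
}

// Tiny stand-in for the real `document` so the sketch is self-contained:
var fakeDocument = {
  listeners: {},
  addEventListener: function (name, fn) {
    (this.listeners[name] = this.listeners[name] || []).push(fn);
  },
  trigger: function (name) {
    (this.listeners[name] || []).forEach(function (fn) { fn(); });
  }
};

var runs = 0;
onEveryPageLoad(fakeDocument, function () { runs += 1; });

fakeDocument.trigger('DOMContentLoaded'); // first, direct page load
fakeDocument.trigger('page:load');        // subsequent Turbolinks fetch
console.log(runs); // 2 -- the handler ran for both kinds of load
```

With only the plain ready binding, `runs` would stay at 1 after a Turbolinks navigation, which is exactly the bug the RailsCast starts with.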
Imagine you have a large Rails application that you are going to distribute. It might be a new world-crushing CMS or an incredibly modern Redmine fork. Every installation a consumer deploys requires different configs, or maybe even some code that adapts your product to particular needs using an amazing internal API.
A clever consumer will also want to store such a “deployment” in his own Git repository. And finally, he will definitely need a nice way to keep upgrading your product within the required version branch.
How do you achieve that?
Let me share my story first. I maintain two banking products, Roundbank and Smartkiosk; both are Rails applications. Every time a bank wants to deploy Roundbank-based internet banking, I need a way to:
Get the application core and create a nice new look matching the bank’s design using the internal API.
Extend the core with the transport methods required to integrate with the bank’s core banking platform.
The first two steps are pretty easy; they could even be handled with a fork on GitHub. And then comes the third one, where release management crashes, especially if the bank has its own team involved. Another downside of forks is that your consumer gets the whole codebase inside his project. You might not think it matters, but... damn, it is so provocative! Remember, he is not supposed to change anything, right?
Most of our banking products share the same architecture: Rails as a REST application server and a Joosy application running in the browser as the client. One of the greatest advantages we get is the ability to cover the whole Ruby implementation with acceptance tests. We use request specs, which are part of RSpec’s Rails integration. However, it’s easier said than done: our remote banking app server, for instance, has nearly 500 routes to test, and the number of active routes grows constantly.
Managing such a great number of routes is a real pain no matter how well you organize your specs. To solve that, my colleague Andrew prepared a small RSpec plugin handling exactly this task: counting what is tested on your behalf. We spent several days playing with it and extending its functionality. Join us and have some fun with the rspec-routes_coverage gem.
The plugin will add the following stats to your basic RSpec output:
Nowadays even a lazy developer and his grandmother are building their own JS MVC frameworks. The reason is simple: we really need them. The problem, on the other hand, is that everyone is just cloning Backbone. Knockout and Ember went a different way, but that is still not enough to satisfy a sophisticated audience. And the complaints differ: some dislike Handlebars, others don’t fit the general API. It’s a matter of taste after all, and options are only good when you choose between genuinely different things.
Development speed and quality depend on a lot of factors: motivation, management style and all that stuff. Those are important indeed, but they are quite common knowledge. I’ve seen a lot of SCRUMified happy teams spend years creating large but straightforward projects. Why does this happen over and over? Wrong points of motivation and incorrect task prioritization are the roots. But besides project organization, these roots also have an inner cause: the conventions problem.
To make a long story short, here are three rules I encourage you to follow:
If you feel the need for a project-level convention, run away: you are doing something wrong.
Everything you cannot solve with existing tools and conventions should be turned into a library and released publicly.
A release should be fair: it should be available on GitHub, and you should tell the community about what you did.
Years of the “keep your code reusable” paradigm make this sound trivial. But there is a great difference between merely organizing your code into reusable blocks (inner-project conventions) and creating open-source libraries with public promotion. The latter is the key to success.