Front-end development with Visual Studio

Update (28/01/2015): My team has just released a new nuget package that combines the latest versions of node, nogit and npm. Don’t hesitate to try it.

The emergence of single page applications introduces a new need for web developers: a front-end build process. JavaScript MV* frameworks now allow web developers to build complex and sophisticated applications with many files (js, css, sass/less, html …). We’re a long way from the three lines of JavaScript that used to add “a kind of magic” to a web site.

In traditional back-end development such as asp.net MVC, a compiler transforms source code written in a (human-readable) programming language (C#/VB for asp.net) into another computer language (MSIL for .net). The produced files are often called binaries or executables. Compilation may involve several operations: code analysis, preprocessing, parsing, language translation, code generation, code optimization… What about JavaScript?

JavaScript was traditionally implemented as an interpreted language, which can be executed directly inside a browser or a developer console. Most of the examples and sample applications you’ll find on the web have a very basic structure and file organization. Some developers may think that this is the “natural way” to build & deploy web applications, but it’s not.

Nowadays, we’re building entire web applications using JavaScript, and working with dozens of files is quite a different exercise. A few frameworks, like RequireJS, help us build modular JavaScript applications thanks to Asynchronous Module Definitions. Again, this is not exactly what we need here, because it focuses only on scripts.

What do we need today to build a web site? Here are common tasks you may need:

  • Validate scripts with JSLint
  • Run tests (unit/integration/e2e) with code coverage
  • Run preprocessors for scripts (coffee, typescript) or styles (LESS, SASS)
  • Follow WPO recommendations (minify, combine, optimize images…)
  • Continuous testing and Continuous deployment
  • Manage front-end components
  • Run X, execute Y

So, what exactly is a front-end build process? It’s an automated way to run one or more of these tasks in YOUR workflow to generate your production package.


Example front-end build process for an AngularJS application

Please note that asp.net Bundling & Minification is a perfect counter-example: it allows you to combine and minify styles and scripts (two important WPO recommendations) without having a build process. The current implementation is very clever, but the scope is limited to these features. By the way, I will still use it in situations where I don’t need “full control”.

I talk mainly about JavaScript here, but it’s pretty much the same for any kind of front-end file. SASS/LESS/TypeScript/… processing is often well integrated into IDEs like Visual Studio, but that’s just another way to avoid using a front-end build process.

About the process

Node.js is pretty cool, but it’s important to understand that this runtime is not only “server-side JavaScript”. In particular, it provides awesome tools and possibilities, limited only by your imagination. You can use it while developing your application without using the node.js runtime for hosting, or even without developing a web site at all. For example, I use redis-commander to manage my redis database. I personally consider that the Web Stack Wars are over (PHP vs Java vs .net vs Ruby vs …) and I embrace one unique way to develop for the web. It doesn’t really matter which language & framework you use: you develop FOR the web, a world of standards.

Experienced & skilled developers know how important automation is. Most VS extensions, like Web Essentials, provide rich features and tools but can rarely be used in an automated fashion. For sure, they can help a lot, but that’s not what I want for many of my projects: I want something easily configurable, automated and fast.

Bower to manage front-end components

The important thing to note here is that Bower is just a package manager, and nothing else. It doesn’t offer the ability to concatenate or minify code, and it doesn’t support a module system like AMD: its sole purpose is to manage packages for the web. Look at the search page; I’m sure you will recognize most of the available packages. So why bower and not nuget for my front-end dependencies?

Nuget packages are commonly used to manage references in a Visual Studio solution. A package generally contains one or more assemblies, but it can also contain any kind of file… For example, the JQuery nuget package contains JavaScript files. I have great respect for this package manager, but I don’t like to use it for file-based dependencies, mainly because:

  • Packages are often duplicates of other package managers/official sources
  • Several packages contain too many files, and you generally don’t have full control
  • Many packages are not up-to-date, and most of the time they are maintained by external contributors
  • When they fail, PowerShell scripts may break your project file.

But quite simply, this is not how the vast majority of web developers work. Bower is a very popular tool and extremely simple to use.
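For reference, front-end dependencies are declared in a bower.json file at the root of the project; a minimal example (package names and versions here are purely illustrative) looks like this:

```json
{
  "name": "my-web-app",
  "version": "0.0.1",
  "dependencies": {
    "angular": "~1.2.0",
    "jquery": "~2.1.0"
  },
  "devDependencies": {
    "angular-mocks": "~1.2.0"
  }
}
```

Running `bower install` then restores everything into the bower_components folder.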

Gulp/Grunt for the build process

Grunt, the self-proclaimed “JavaScript Task Runner”, is the most popular task runner in the Node.js world. Tasks are configured via a declarative JavaScript configuration object (the gruntfile). Unless you want to write your own plugin, you mostly write no code logic. After 2 years, it now provides many plugins (tasks), approx. 3,500, and the community is very active. For sure, you will find everything you need here.

Gulp, “the challenger”, is very similar to Grunt (plugins, cross-platform). Gulp is a code-driven build tool, in contrast with Grunt’s declarative approach to task definition, which makes your task definitions a bit easier to read. Gulp relies heavily on node.js concepts, and non-Noders will have a hard time dealing with streams, pipes, buffers, and asynchronous JavaScript in general (promises, callbacks, whatever).

Finally, Gulp.js or Grunt.js? It doesn’t really matter which tool you use, as long as it allows you to compose your own workflows easily. Here is an interesting post about Gulp vs Grunt.

Microsoft also embraces these new tools with a few extensions (see below) and even a new build process template for TFS released by MsOpenTech a few months ago.

Isn’t it painful to code in JavaScript in Visual Studio? Not at all. Let’s remember that JavaScript is now a first-class language in Visual Studio 2013: this old-fashioned language can be used to build Windows Store, Windows Phone, Web apps, and Multi-Device Hybrid Apps. JavaScript IntelliSense is also pretty cool.

How do I work with that tool chain inside Visual Studio?

Developing with that rich ecosystem often means working with the command line. Here are a few tools, extensions and packages that may help you inside Visual Studio.

SideWaffle Template Pack

A complete pack of Snippets, Project and Item Templates for Visual Studio.


More info on the official web site http://sidewaffle.com/

Grunt Launcher

Originally a plugin made to launch grunt tasks from inside Visual Studio. It has now been extended with new functionality (bower, gulp).


Download this extension here (Repository)

Chutzpah

Chutzpah is an open source JavaScript test runner which enables you to run unit tests using QUnit, Jasmine, Mocha, CoffeeScript and TypeScript. Tests can be run directly from the command line and from inside of Visual Studio (Test Explorer Tab).

By using custom build assemblies, it’s even possible to run all your js tests during the build process (On premises or Visual Studio Online). All the details are explained in this post on the ALM blog.

Project is hosted here on codeplex.

Package Intellisense

Search for packages directly in package.json/bower.json files. Very useful if you don’t like using the command line to manage your packages.


More info on the VS gallery

TRX – Task Runner Explorer

This extension lets you execute any Grunt/Gulp task or target inside Visual Studio by adding a new Task Runner Explorer window. This is an early preview of the Grunt/Gulp support coming in Visual Studio “14”. It’s definitely another good step in the right direction.

Scott Hanselman blogged about this extension a few days ago.

I really like the smooth integration into VS, especially Before/After Build. The main disadvantage (in this version) is that this extension requires the node.js runtime and global grunt/gulp packages. It won’t work on a workstation (or a build agent) without these prerequisites installed. Just for information, it’s not so strange to install node.js on a build agent: it’s already done for VSO agents. http://listofsoftwareontfshostedbuildserver.azurewebsites.net/

To conclude

To illustrate all these concepts and tools, I created a repository on github. Warning: it can be considered a dump of my thoughts. This sample is based on the popular todomvc repository. Mixing an angular application and an asp.net web api in the same application may not be the best choice in terms of architecture, but it shows that it’s possible to combine both in the same solution.

Points of interest

  • Front-end dependencies are managed via Bower (bower.json)
  • Portable nuget package (js, Npm, Bower, Grunt, …) created by whylee, so I can run gulp/grunt tasks before the build thanks to the custom wpp.targets.
  • Works locally as well as on TFS and Visual Studio Online.
  • Allows you to use the command line at the same time (>gulp tdd)

Does it fit your needs? No? That’s not a problem: just adapt it to your context; that’s the real advantage of these new tools. They are very close to developers, and building a project no longer depends on a complex build template or service.


How speedy.js is your web site?

As a performance officer, I recently watched a presentation by Lara Callender Swanson about how Etsy moved towards a culture of performance and mobile web by educating, incentivizing and empowering everyone who works at Etsy.

Inspired by a repo on github and StackExchange‘s Miniprofiler, I’ve created a very simple script to display Navigation Timing stats at the top of a web page.

Navigation Timing is a JavaScript API for accurately measuring performance on the web. The API provides a simple way to get accurate and detailed timing statistics, natively, for page navigation and load events. It has always been a small challenge to measure the time it takes to fully load a page, but the Navigation Timing API now makes this easy for all of us.

It’s important to understand that Navigation Timing data is very similar to network stats in developer tools.
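In a nutshell, reading the API boils down to subtracting timestamps exposed by window.performance.timing. Here is a sketch of that arithmetic (the metric names and breakdown below are my own choices for illustration, not the actual speedy.js code):

```javascript
// Compute a few page-load metrics from a PerformanceTiming object
// (window.performance.timing in a browser). Illustrative breakdown only.
function loadStats(t) {
  return {
    network: t.responseEnd - t.navigationStart, // request + response time
    dom: t.domComplete - t.domLoading,          // DOM processing time
    total: t.loadEventEnd - t.navigationStart   // full page load time
  };
}

// In a browser, guard against unsupported browsers:
// if (window.performance && window.performance.timing) {
//   console.log(loadStats(window.performance.timing));
// } else {
//   console.log('Navigation Timing API not supported');
// }
```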


Can I use…?

The Navigation Timing API is now supported by all major browsers (Can I use…?). Google Analytics and RUM services have been using it for a long time. In case it’s not supported by your browser, an error message will be displayed.


No message at all? Don’t hesitate to create an issue on github.

Mobile ready?

This is maybe the most interesting part. Developer tools are not available on mobile/tablet versions of browsers, so you have no way to evaluate page load time and explain why it may be slow.

On production?

Of course, it’s not recommended to display this kind of data to your users, but you may find several ways to use it in production. There are browser extensions to inject custom JavaScript into any website (Cjs, Greasemonkey); Fiddler also allows you to automatically inject scripts into a page (stackoverflow).

Here is an example on my stackoverflow profile


To conclude, don’t forget that Performance is a feature! Displaying page load time on each page, and to everyone, is a great way to detect performance issues early. Does a page violate your SLA? I think it’s now a little easier to tell with this script.

Introducing Toppler

I recently created a new repository (https://github.com/Cybermaxs/Toppler ) and I would like to share with you the idea behind this project.

It’s been officially one year since I discovered Redis, and like every fan I can see many possibilities here and there. I’m also quite surprised that this DB is so little known in the Microsoft stack, but I’m sure this will change in a few months as Redis Cache becomes the default caching service in Azure. But Redis is not just a cache! This key-value store has unique features and possibilities. Give it a chance.

So what is Toppler? It’s just a very small package built on top of StackExchange.Redis that helps you count hits and use the emitted events to build rankings/leaderboards.

Here are a few use cases where Toppler could help you:

  • You want a counter for various events (an item is viewed, a game is played, … ) and get statistics about emitted events for custom time ranges (today, last week, this month, …)
  • You want to implement a leaderboard with single or incremental updates.
  • You want to track events in custom dimensions and get statistics for one, two, .. or all dimensions
  • You want to provide basic recommendations by combining most emitted events and random items for custom time ranges

How does it work?

One of the most important concepts in Toppler is Granularity. Each hit is stored in a range of sorted sets representing different granularities of time (e.g. seconds, minutes, …).

The Granularity class has 3 properties (Factor, Size, TTL) that allow it to compose a smart key following this pattern: [PREFIX]:[GRAN_NAME]:[TS_ROUNDED_BY_FACTOR_AND_SIZE]:[TS_ROUNDED_BY_FACTOR], where [PREFIX] is the combination of the configured namespace with the current dimension and [TS_ROUNDED_XX] is the unix timestamp rounded for the given granularity.

Here are the values for the 4 default Granularities:

          Factor   TTL        Size
  Second  1        7200       3600
  Minute  60       172800     1440
  Hour    3600     1209600    168
  Day     86400    63113880   365

A TTL is assigned to each key (using the Redis EXPIREAT command) to keep DB space usage reasonable.

So, a hit emitted at 17/07/2014 14:23:18 (UTC) will create/update these keys:

  • [NAMESPACE]:[DIMENSION]:second:1405605600:1405606998
  • [NAMESPACE]:[DIMENSION]:minute:1405555200:1405606980
  • [NAMESPACE]:[DIMENSION]:hour:1405555200:1405605600
  • [NAMESPACE]:[DIMENSION]:day:1387584000:1405555200
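To make the rounding concrete, here is the key-composition logic rewritten as a small JavaScript sketch (Toppler itself is a .NET library; the names here are mine, for illustration only). Feeding it the unix timestamp of 17/07/2014 14:23:18 UTC (1405606998) reproduces the four keys above:

```javascript
// The four default granularities: Factor is the rounding unit in seconds,
// Size is how many units a "bucket" key spans.
var granularities = {
  second: { factor: 1,     size: 3600 },
  minute: { factor: 60,    size: 1440 },
  hour:   { factor: 3600,  size: 168  },
  day:    { factor: 86400, size: 365  }
};

// Compose [PREFIX]:[GRAN_NAME]:[TS_ROUNDED_BY_FACTOR_AND_SIZE]:[TS_ROUNDED_BY_FACTOR]
// for every granularity, for a hit at the given unix timestamp.
function keysFor(prefix, unixTs) {
  return Object.keys(granularities).map(function (name) {
    var g = granularities[name];
    var byFactor = Math.floor(unixTs / g.factor) * g.factor;
    var byFactorAndSize =
      Math.floor(unixTs / (g.factor * g.size)) * g.factor * g.size;
    return prefix + ':' + name + ':' + byFactorAndSize + ':' + byFactor;
  });
}

// keysFor('ns:dim', 1405606998)
// → ['ns:dim:second:1405605600:1405606998',
//    'ns:dim:minute:1405555200:1405606980',
//    'ns:dim:hour:1405555200:1405605600',
//    'ns:dim:day:1387584000:1405555200']
```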

When an event is emitted, the number of hits (often 1) is added to the target Sorted Set via the ZINCRBY command.

The retrieval of results uses the same logic to recompose keys, as the granularity and resolution are parameters of the Ranking method, but it uses the ZUNIONSTORE command to combine all results into a single sorted set. This makes it possible to store the result of the query (like a cache) or to apply a weight function.

Show Me the code!

[Screenshot: a basic Toppler usage example]

It’s just a very basic example, and many additional options are available to emit events (when, in which dimension, how many hits …) or compute statistics (single/multiple dimensions, caching, granularity & resolution, weight function …).

The project is currently in beta, so please be indulgent and patient. Feel free to contact me, create issues, tell me what’s wrong … Thanks.


Xamarin.Forms: One UI to rule them all?

Xamarin 3.0 has recently introduced Xamarin.Forms, a powerful UI abstraction that allows developers to easily create user interfaces that can be shared across Android, iOS, and Windows Phone.

Introducing Xamarin.Forms

Xamarin.Forms apps follow the same architecture as traditional cross-platform applications, except there is an additional (new) project. That project is the main component here, the one that makes everything possible.

Behind the scenes, what is a Xamarin.Forms project? It’s just a Portable Class Library (PCL) or a Shared Project. Shared Projects were introduced with Visual Studio 2013 Update 2 to support Universal Apps. It’s a little like file linking, but with better integration into VS.

This project depends on the Nuget package Xamarin.Forms, which contains more than 40 controls and UI components. It also comes with cross-device concepts such as Binding, Navigation and Dependency Injection.

There are two ways to create shared-UI views: programmatically, using the API provided by Xamarin.Forms, or in Xaml using the same set of controls (is it the beginning of Xaml everywhere?). It’s a new, independent meta-language to define your UI. A UILabel (iOS), TextView (Android) or TextBlock (WP) is simply a Xamarin.Forms.View.Label. It’s also possible to create custom controls and UserControls and define renderers for each platform. Fantastic!

Without Xamarin.Forms, you have to understand each platform’s layout system to build a native UI. Now with Xamarin.Forms, you design only once. These two samples give the same result:

[Screenshot: the same UI built programmatically and in Xaml]

Both techniques have advantages and drawbacks, but from my point of view the biggest problem is that you don’t have any designer to preview the UI, or even IntelliSense: it’s important to understand that it’s a completely new set of controls you have to learn without any help from Visual Studio. I’m sure Xamarin will improve this soon, because it’s quite counter-productive. Feel free to propose an answer to this question on stackoverflow.

Read the official introduction guide here.

What about MvvmCross?

If, like me, you have been following the Xamarin community for a few months, you may already have heard about MvvmCross, a framework that allows developers to share logic between multiple platforms. It’s mainly maintained by Stuart Lodge and is a brilliant cross-platform Mvvm abstraction. Even better, this framework comes with an awesome list of features such as cross-platform binding, dependency injection, a plugin system, many services (Navigation, Location, Camera, …) and a few UI controls to maximize code sharing and avoid breaking the Mvvm pattern. The combo PCL+MvvmCross+Xamarin is often called “The Precious”, aka “One language to rule them all”.

Can you use it with Xamarin.Forms? Yes, because Xamarin.Forms was designed to work with the MVVM design pattern (but it’s not included). However, much of what MvvmCross gives you is already baked in: Xamarin.Forms comes with features like DI and Navigation, whereas MvvmCross comes with ready-to-use implementations like plugins. For example, do you want to use GeoLocation (typically GPS) functionality? Install the MvvmCross plugin in your solution, or write 3 implementations (one per platform) with Xamarin.Forms. I think we will see new packages in the coming weeks/months for common device sensors and services. That’s another disadvantage of this first version of Xamarin.Forms: you have to reinvent the wheel in some cases. In reality, the ambiguity may also come from MvvmCross itself, because it’s not only a simple Mvvm framework; with all the features it includes, it’s much more a cross-platform framework than an Mvvm framework.

So, yes, there is a little overlap between these two frameworks, but this should not prevent you from using them both. From my point of view, Xamarin.Forms is not an MvvmCross killer but a fantastic new path for cross-platform development. You now have the chance to share both code and UIs.

Finally, last week I attended the 3rd Xamarin meetup in Paris with James Montemagno. I really appreciated Xamarin’s presence at this event. I am quite pleased to see the French community growing so fast. Is it only in France?

Do you use {pretty print}?

Steve Souders began describing Web Performance Optimization 10 years ago. WPO is the field of knowledge about increasing the speed at which web pages are downloaded and displayed in the user’s web browser. He wrote and contributed to many books (High Performance Web Sites, Even Faster Web Sites, Web Performance Daybook V2) explaining his best practices for performance, along with the research and real-world results behind them.

One of the most important rules is to Combine & Minify resources. Bundling combines multiple files into a single file, whereas Minification is the process of removing all unnecessary characters from source code without changing its functionality.
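As a tiny, hand-made illustration (not the output of a real minifier), minification strips whitespace and comments and shortens local names while preserving behavior:

```javascript
// Readable source...
function addTax(price, rate) {
  var tax = price * rate;
  return price + tax;
}

// ...and what a minifier typically produces: same logic, one line,
// local identifiers shortened.
function a(b, c) { var d = b * c; return b + d; }

// Both behave identically:
// addTax(100, 0.5) → 150
// a(100, 0.5)      → 150
```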

With the latest HTML5 specification and the emergence of JS front-end frameworks like JQuery and, more recently, AngularJS, JavaScript has never been so widely used and so popular. We can now create scalable, maintainable applications, unified under a single language: JavaScript!

As we’re all good web citizens, all our resources (JS/CSS) are bundled and minified in production. This is sometimes where things start going bad. Have you ever tried to debug JavaScript in production? The primary drawback of this optimization is that it makes debugging your JavaScript code a nightmare, since the browser’s developer tools will show you code that is virtually unreadable.

For example, a production JavaScript file can easily be 14K characters on a single line.


It’s impossible to debug such a script, because browsers can’t set breakpoints at the character level (only at the line level).

Helpfully, some browsers have an option in their developer tools to partially un-minify a JavaScript file. This option is called “pretty print” and its icon looks like {}.

It’s available in Chrome, in Internet Explorer, and in Firefox (since December 2013).

The result after pretty print is far better: you get nearly the same debugging experience as with dev scripts. Sometimes it’s still not enough, because JavaScript minification tools rename local functions & variables; it’s common to see a(), b(), c() in a minified script, and good indentation won’t change that.

Fortunately, Source Maps provide a way of mapping the lines & columns of the production source code (bundled & minified) back to their original locations in the corresponding uncompressed source files. This feature is supported by all modern browsers.

An additional .map file is generated during the minification process and is referenced by a special comment (sourceMappingURL) at the end of the optimized file.


Source Maps are easily generated by grunt-contrib-uglify or the Closure Compiler. Unfortunately, they are still not supported by the Microsoft ASP.NET Web Optimization Framework; for sure, this is something that needs to be done. Web Essentials also offers this feature.

It’s a nice tip for every web developer, but the real question may be: why do you have to debug production code?

Become a Responsible Programmer

Our job is so … special. Isn’t it?

Like me, you may have already been in that delicate situation where one of your friends asks you for help: “I have a problem with program XXX”, “I can’t order an item on this e-commerce web site”, “my computer is slow”… Could you help me? A vast majority of people think that working in computers means you know everything about computers. We, programmers, know that it’s false, but it’s just a consequence of how developers are perceived. The programmer stereotype is a perfect example: a geek/nerd living alone in his world. Nobody can understand a programmer better than … another programmer.

Have you ever tried to discuss computer science with a non-programmer? Sometimes it’s quite funny, but most of the time he will end up with a headache; you too. How can you explain Web 2.0, the N existing programming languages, Agile vs waterfall, scripting vs compiled languages, the MVC pattern …? It’s even worse between developers. In the past, we’ve all seen endless fights & debates between antagonistic technology stacks.

After 10 years of programming, my vision has changed a lot, and I’m sure it will change again in the coming years. Thanks to communities, we’re now all connected. I have the feeling that programmers now share the same vision of their job. asp.net vNext and the .NET Foundation are great examples of Microsoft (“Linux is a cancer”, 2001) embracing Open Source.

So, as a programmer, you have great responsibilities. You have to contribute to this common ideal: you’re responsible for keeping our job a nice (and fun) place to work, for promoting programming concepts, especially in education where they are taught too late, for encouraging team work and collaboration, and for being professional and respectful towards users…

There’s a wide range of ideas I’d like to share here, but I will list only those that come immediately to mind. So here are my 10 commandments of Responsible Programming:

  • Embrace & be active in communities
  • Accept changes and react positively to events
  • Banish license violation, code/content robbery
  • Accept your mistakes, and take the opportunity to become better
  • Write efficient code and don’t hesitate to prove it
  • Write maintainable code with clean and strong architecture, following common quality rules and common security guidelines
  • Don’t deceive your users; user satisfaction is your primary objective
  • Learn continuously and get better every day
  • Share your knowledge and your vision to everyone and especially to less experienced developers
  • Don’t be a lone hero, but embrace team working & collaboration

What we, programmers, have done in the last 10 years has changed the world forever. Let’s continue to change the world and make it a better place to live. This is how I see my job and the values I try to share.