Monday, February 13, 2023
The State of JavaScript and Modern Web Client Development
Monday, November 7, 2022
Overloaded Methods in TypeScript
If you've ever worked with a language like C# or Java, you've probably used overloaded method signatures to give callers a similar outcome when passing a different number or type of parameters. However, TypeScript is just a superset of JavaScript and adheres to all things JS under the covers, so method overloading doesn't work the same way; with identical method names and parameter signatures there are no runtime types to differentiate between. If you define (2) identical methods in JS, the latter method defined on the prototype will be called and the former ignored.
TypeScript has the benefit of type definitions at build time, so method overloading is possible... kind of. If the goal is IntelliSense showing multiple definitions of the same overloaded method, we can certainly achieve that. If the goal is multiple definitions of the same method name with different parameter signatures and separate, different implementations, that is not possible out of the box, and it won't work like it does in traditionally statically typed languages like C#. Regardless, let's see how overloading does work in its vanilla form in TypeScript (I'll hint at conditional types at the end), and you can decide if you can leverage it for your benefit.
The recipe for overloaded methods in TypeScript is that you can create 1...n method signatures, but only have a single implemented method representing any of the possible call combinations. Let's look at a code sample:
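The original code sample didn't survive in this archive; a minimal sketch of the recipe (the engine types, signatures, and messages here are my own illustration, not the original code) could look like this:

```typescript
type EngineType = "gas" | "diesel" | "electric" | "hybrid" | "turbine";

class Engine {
  // (5) overload signatures: these are what callers see at design time
  start(type: "gas"): string;
  start(type: "diesel"): string;
  start(type: "electric", chargeLevel: number): string;
  start(type: "hybrid", chargeLevel: number): string;
  start(type: "turbine"): string;
  // ...but only a single implementation, wide enough to satisfy them all
  start(type: EngineType, chargeLevel?: number): string {
    switch (type) {
      case "electric":
      case "hybrid":
        return `Starting ${type} engine at ${chargeLevel ?? 0}% charge`;
      default:
        return `Starting ${type} engine`;
    }
  }
}

const engine = new Engine();
console.log(engine.start("gas"));
console.log(engine.start("electric", 80));
```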
Above we have the overloaded method named start, that represents the potential to provide the different procedures to start an engine, based on the various engine types. Note the (5) overloaded method signatures, but only a single implemented method. The reason for this is the overloaded behavior and differentiation of methods is a design/build time only feature available due to the fact we are using TypeScript. In fact if you look at the transpiled JavaScript the only method shown is the single implemented method:
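The transpiled output isn't preserved here either, but roughly speaking it looks like the following (class and member names are illustrative): every overload signature is erased at build time and only the lone implementation remains on the prototype.

```javascript
// Approximate transpiled ES5 output of an overloaded TypeScript class:
// the overload signatures are gone; only the implementation survives.
var Engine = /** @class */ (function () {
    function Engine() { }
    Engine.prototype.start = function (type, chargeLevel) {
        // the runtime "sniffing" logic lives here
        return "Starting " + type + " engine";
    };
    return Engine;
}());
```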
If you do try to provide implementations for more than one method with the same name, TypeScript will flag it with a "Duplicate function implementation" error.
Looping back to our original goal using the properly implemented code, as the method caller if we want to see at design time a list of the various signatures, we have indeed accomplished that goal as you may scroll through and see the multiple, overloaded definitions:
However, this comes at a bit of a sloppy cost for that single implemented method. To make this work, the implementation's signature must encapsulate all potential values that could be sent, to satisfy the TypeScript compiler. This usually means using a union type in the method signature to account for all possible types. The next hurdle is that because overloaded methods are really a façade, you must manually pick apart what was sent and reverse engineer what you received at runtime. This usually means type guards, if statements, switch statements, or some combination to sniff out what you received so you can proceed. All of that logic lives inside the single implemented method.
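As a sketch of that runtime sniffing (a hypothetical example, not the post's original code), a single implementation taking a union type has to use type guards to work out what it actually received:

```typescript
// One implementation, a union parameter, and type guards to sort out
// what the caller actually sent at runtime.
function format(input: string | number | Date): string {
  if (typeof input === "string") {
    return `text: ${input}`;
  }
  if (typeof input === "number") {
    return `number: ${input}`;
  }
  // Only Date remains, so TypeScript narrows the type for us here
  return `date: ${input.toISOString()}`;
}
```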
It's even trickier to determine what was sent if you have (2) identical method signatures that are differentiated only by parameter name, like the first two methods below. This is not advisable even though it does work:
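Since the original snippet isn't preserved, here's a hypothetical illustration of the problem: the first two overloads below have identical signatures and differ only by parameter name, so nothing distinguishes them at the call site or at runtime.

```typescript
class Logger {
  // Identical signatures differing only by parameter name; the compiler
  // accepts this, but callers (and the runtime) cannot tell them apart.
  log(message: string): string;
  log(errorCode: string): string;
  log(value: string): string {
    return `logged: ${value}`;
  }
}
```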
The long and the short of method overloading in TypeScript is that it is possible, but with a few caveats that may not make it sensible. I think if you only have (2) different method signatures that are easily discernible at runtime in the implemented code, then this might make sense. However, as the signature list expands, the logic to differentiate the potential values sent can get unwieldy.
Lastly, another potential option may be to use conditional types in the method signatures, which rely on generics to sort out the types based on what the caller is sending. This could reduce the need for the implemented method to contain all the logic to sort out which values it was sent, as that will already be known. However, in this post I wanted to strictly do a 1:1 look at the concept of overloading as it may be known from other languages, and how it can be accomplished in TypeScript.
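To give a flavor of that conditional-types alternative (again a hypothetical sketch, not code from the post), the return type can be computed from the generic argument, so the caller already knows the shape it will get back:

```typescript
type EngineKind = "gas" | "electric";

// The return shape is resolved from the argument's literal type at
// design time, reducing the runtime sorting-out the caller must do.
type StartInfo<T extends EngineKind> =
  T extends "electric" ? { chargeLevel: number } : { rpm: number };

function startEngine<T extends EngineKind>(kind: T): StartInfo<T> {
  // The double assertion is needed because TypeScript cannot resolve a
  // conditional type while T is still generic inside the function body.
  return (kind === "electric"
    ? { chargeLevel: 100 }
    : { rpm: 900 }) as unknown as StartInfo<T>;
}
```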
Friday, December 4, 2020
The Absence of Pattern Thinking for Web Client Code
With 20 years of experience as a software engineer moving through the stack and using a lot of different languages throughout the years, I've seen a lot of good and, conversely, poor development implementations. I've also had the privilege of working with languages (like C#) whose enterprise development communities placed a heavy emphasis on using patterns and practices to make code more readable and reusable, and to communicate repeatable ways of creating applications and services. This has helped me immensely and provided me with a mindset for how to think about and truly engineer a well-built solution.
The priority of pattern thinking is missing
The problem I've seen over the last 5-10 years as a primarily focused JavaScript web client developer is the absence of this type of thinking. So much of the code I see is just jammed into a file as a means to an end to complete a needed goal. I think the explosion in JavaScript development, and the developers rapidly minted to meet demand, is partly to blame. There has also been so much focus on "which framework/library" that developers have to learn a great deal just to get code implemented, so pattern thinking is simply not at the forefront. Plain and simple, there hasn't been time to organize one's thoughts around a sound architectural implementation using known patterns when the web client developer is just trying to keep their head above water learning the framework, an array of JS libraries, state management, CSS, responsive design, offline capabilities, mobile-like features, new work, enhancements, etc. This is in contrast to more stable back-end languages that have had similar implementations going on for decades (albeit with new ways to wrap code or deploy, i.e. the cloud), where the tenured experience has helped provide an environment in which pattern thinking is much more prominent. I know this to be true, because I've been on that side of the fence as well.
Has it always been this way?
This isn't to say there has never been a focus on patterns for front-end code. Industry leaders such as John Papa were advocating for the module and revealing module patterns with ES5 code years ago, and even today, alongside Dan Wahlin, he carries the flag for architecting Angular apps with a mindset for patterns and practices. So a voice from advocates for sound, well-written code does exist, but overall I just don't see the concrete evidence as much as I did when working with server-side code in C#/.NET.
It's time we as a web client community, when building enterprise web applications, push harder to put some forethought into how we implement our code, borrowing concepts from known patterns and practices. It's not enough to cram thousands of lines of code into a monolithic .js/.ts file. As an aside, this is one reason I'm not a huge fan of CSS-in-JS: it erodes the segregation of code and crams everything into a single file. I don't consider myself old school for liking separation of concerns (SoC); it's really about organization of thought and practice. Much like the UI web component code we like to implement today, with its mindset of segregating code into smaller, cohesive pieces of functionality, we must apply that same style of thinking to the imperative code we write in JavaScript/TypeScript.
A high-level use case
Let me cherry-pick a scenario I see more often than not in Angular. Speaking broadly at a high level, most modern Angular code has a thin service file that makes an API call and immediately returns an Observable with the result. That data is then consumed and subscribed to in the component, and all presentation logic, data massaging, and appropriate (or sometimes inappropriate, because it's intellectual property that shouldn't live on the client) business logic is done in said component. The result? A massive, multi-thousand-line big-ball-of-mud that's out of control and really difficult to maintain. It didn't start that way, right? The original MVP implementation was just a simple return and binding of data. However, like any software that evolves, so does the need for more code, and the initial pattern set forth is scaled, good or bad. In this case bad, and the component is out of control.
What if, though, something like the Command Pattern (or ideas from it) had been used from the inception? The component only acts as an air-traffic controller of sorts; it doesn't really know how to do anything except direct the traffic. It assembles all the needed information, then builds up and executes the command on the service where the real work happens (in 1..n services). This pattern also lends itself to creating immutable models in the component that are only ever changed in the service. The service streams the data from an Observable, and all the component does (with minor exceptions) is bind to the new data. This is a much cleaner approach and a highly repeatable pattern for any experience level. Even if this particular approach seems heavy when you're building a smaller application, knowing patterns and their purpose will still lend your design ideas about how to better implement and organize your code.
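A framework-free sketch of that idea (all the names here are illustrative, not from a real codebase): the component only assembles and dispatches a command, while the command owns the actual work.

```typescript
// Command Pattern sketch: the component directs traffic, the command
// encapsulates the real work (in a real app, inside 1..n services).
interface Command<TResult> {
  execute(): TResult;
}

class LoadOrdersCommand implements Command<string[]> {
  constructor(private readonly customerId: string) {}
  execute(): string[] {
    // Stand-in for an API call / Observable stream in a real service
    return [`orders for customer ${this.customerId}`];
  }
}

class OrdersComponent {
  orders: ReadonlyArray<string> = [];
  load(customerId: string): void {
    // Air-traffic controller: build the command, bind the result
    this.orders = new LoadOrdersCommand(customerId).execute();
  }
}
```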
In contrast to OO patterns and practices, but still with critical and planned thinking at the forefront, we could also use a functional programming paradigm (and patterns specific to FP) and leverage things like pure functions to avoid side effects, with a goal of consistency and a more readable, logical implementation. Any of these options is better than the absence of any plan, which results in a poor implementation that's prone to bugs and difficult to maintain.
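For instance, a pure function like the hypothetical one below always returns the same output for the same input and touches no shared state, which makes it trivial to reason about and test:

```typescript
// Pure function: no side effects, no hidden state, fully predictable.
const applyDiscount = (price: number, pct: number): number =>
  Math.round(price * (1 - pct) * 100) / 100;
```

Because nothing outside the function is read or written, a call like `applyDiscount(100, 0.1)` can be verified in complete isolation.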
In the end we didn't complicate the code; we just implemented it differently for the major gains of readability, reusability, and testability. I liken it to the images I used over 10 years ago when talking about patterns and practices in C#: two containers with the same contents, but which one would you rather grab to get what you need? The answer is simple; the one that is organized.
The plan forward
The question is how do we learn about these patterns and when to apply them in our code? The answer is a combination of getting educated on well-known patterns and gaining experience using them. It is, however, an art, and there is a balance to be struck in how to use them most effectively. The cynics hear 'patterns' and sometimes get scared off, saying, "that will overcomplicate the code!" There are times I agree. How can I spot the difference? Experience. If you don't have any, learn from others. One of my favorite words in the industry is 'pragmatic': the ability to know balance in code, and when and how to use powerful patterns to aid, not hinder, it. If the two ends of the spectrum are anarchy and over-architecture, we want to be somewhere close to the middle. The problem, in my experience, is that we're too often close to the anarchy end on the web client. I think the crazy saving grace we back into is that since web client frameworks and libraries have only a 2-4 year tenure on average before some major overhaul, all this bad code is made obsolete before it really begins to stink things up. During that time period, though, it would behoove us to write code implemented with better patterns and practices to help extend its life and make the journey a whole lot easier.
Tuesday, October 22, 2019
State of the Union: Future and Historic Thoughts on WebAssembly, JavaScript, and .NET
Blazor lets you build interactive web UIs using C# instead of JavaScript
Monday, April 1, 2019
Using Quokka.js for Real-Time Feedback in TypeScript
Once installed, all I needed to do was open the Command Palette in VS Code (Ctrl+Shift+P) and select 'Quokka.js: Start on Current File' as shown below:

If there's a value not provided in the scope of the file, Quokka.js obviously can't produce an output and this was expected behavior:

There is quite a bit more functionality offered like code coverage assessment (as it pertains to code execution paths, not unit testing), and a provided value explorer. There is also a Pro edition with additional tools available.
Check out the links below to get started:
Quokka.js
VS Code Extension for Quokka.js
Wednesday, January 30, 2019
I'm excited about WebAssembly, but not as much about my beloved .NET, Blazor's implementation
@page "/myorders"
@inject HttpClient HttpClient
<div class="main">
    @if (ordersWithStatus == null)
    {
        <text>Loading...</text>
    }
    else if (ordersWithStatus.Count == 0)
    {
        <h2>No orders placed</h2>
        <a class="btn btn-success" href="https://www.blogger.com/2404">Order some pizza</a>
    }
    else
    {
        <text>TODO: show orders</text>
    }
</div>
As a bit of humor: to format code blocks on this blog I have to select a 'brush' for the language formatting. Case in point, in the above snippet I wasn't sure if I should have picked HTML, C#, or Blazor.
This is probably because I was never a fan of classic ASP, MVC Razor, or even Blazor-style implementations where view logic is interlaced with the markup (for web client code I'd even throw in JSX, as explained here). This doesn't amount to a hill of beans really, because it's all subjective and has absolutely no bearing on how successful Blazor or any other language targeting WebAssembly will be. I actually love the .NET stack and was carved from that block since the beginning of my professional career; I just don't like that particular style of implementation for the web. Shoot, for all of ASP.NET WebForms' shortcomings, the view overall was relatively clean (keeping logic in code separate from the markup) and pure, aside from the fact that the tags may have been ASP.NET server-side controls; they still took the shape of HTML tags (analogous in form to today's custom HTML elements).
I also do not like the JSInterop style in Blazor. You can invoke JavaScript functions from within C#, and that alone is handy I suppose, but messy. The functions you want to invoke must be available on the global scope of window. Yuck. JavaScript scope and encapsulation were difficult enough to manage through the years; I'd rather not go back to the days of hanging code off the window object, making everything globally available. In 100-level examples this seems fine, but get to tens or hundreds of thousands of lines of code in an enterprise app and this could get ugly.
<script>
window.MyFunction = (someValue) => {
//JS code here...
};
</script>
using Microsoft.JSInterop;

public class JsInteropExample
{
    public static Task<object> MethodName(string someValue)
    {
        // Invokes the global window.MyFunction defined in the script above
        return JSRuntime.Current.InvokeAsync<object>("MyFunction", someValue);
    }
}
I'm probably not alone in my opinions, and that's why I'll be keeping an eye on the ability to use TypeScript as a target for WA. I like JS/TS and am comfortable with it for developing web applications. I feel many client-side developers may have a similar sentiment and not want to learn a server-side language, preferring the front-end languages they've been using for years. TS on its own won't be enough (it's just a language), so it would have to be TS + some web framework (??), and we'll have to wait for the pieces to come together.
To that end I'll keep an eye on things like assemblyscript which is a TypeScript to WebAssembly compiler. It doesn't appear to have the traction some of the other compilers do at the moment, but I'm sure it or something similar will gain momentum as WA picks up and the masses of web developers may not feel like using C#, C++, or Rust.
AssemblyScript: a TypeScript to WebAssembly compiler
AssemblyScript: Status and Roadmap
I'm not counting Blazor out for sure, and it's still going to evolve as it's only an experimental project at the moment. I really enjoy developing in C#, so it seems like a great match, but in its early stages it's rough around the edges, and maybe that's just my opinion shaped as a front-end developer. I'm pragmatic and will not discount the usefulness of developing on a single stack, and this is where Blazor shines for .NET developers.
I'd be interested in feedback or thoughts about using TypeScript to transpile to WebAssembly or anything else along these lines, so please feel free to leave a comment and let me know.
Friday, September 28, 2018
Named vs Fat Arrow Functions in TypeScript
Here is a sample TypeScript class with the two different types of functions.
class MyTsClass {
private myValue: number = 0;
myNamedFunction(): void {
setTimeout(function () {
// 'this' here is not the class instance, so myValue will be undefined
console.log(`Inside myNamedFunction, this.myValue = ${this.myValue}`);
}, 1000);
}
myFatArrowFunction = (someValue: number) => {
this.myValue = someValue;
console.log(`Inside myFatArrowFunction, this.myValue = ${this.myValue}`);
}
}
const myClass = new MyTsClass();
myClass.myFatArrowFunction(1);
myClass.myNamedFunction();
Here is the resulting transpiled ES5 JavaScript.
"use strict";
var MyTsClass = /** @class */ (function () {
function MyTsClass() {
var _this = this;
this.myValue = 0;
this.myFatArrowFunction = function (someValue) {
_this.myValue = someValue;
console.log("Inside myFatArrowFunction, this.myValue = " + _this.myValue);
};
}
MyTsClass.prototype.myNamedFunction = function () {
    setTimeout(function () {
        // 'this' inside this callback is not the class instance
        console.log("Inside myNamedFunction, this.myValue = " + this.myValue);
    }, 1000);
};
return MyTsClass;
}());
var myClass = new MyTsClass();
myClass.myFatArrowFunction(1);
myClass.myNamedFunction();
For the purpose of this post, we'll concentrate on (2) main aspects of the code:
- Where the method is created on the object, and the resulting performance impact
- How the this keyword behaves
The fix for the named function is to use a fat arrow function for the setTimeout callback, so this is captured lexically. (In the transpiled ES5 output, TypeScript accomplishes this by emitting var _this = this; and closing over _this.)
myNamedFunction(): void {
setTimeout(() => {
console.log(`Inside myNamedFunction, this.myValue = ${this.myValue}`);
}, 1000);
}
If we run the code again, we get the console output we expect.
There are other use cases beyond the ones I'm mentioning here today regarding the behavior of this with inheritance and calling the parent class, as well as other differences. However, these are two key areas that should be understood. I see the usage flip-flop without intent, and developers should be aware of the performance and behavior differences and create the correct type where appropriate. The two have separate purposes, so I'm not a fan of using just one or the other. Fat arrow functions as class properties have a performance impact (they are created per instance rather than on the prototype), but they shouldn't be avoided altogether, as they are instrumental in capturing the correct context of this when needed.
Tuesday, September 18, 2018
Visual Studio Code Snippet - Triple Slash Directive
/// <reference path="./src/ts/myClass.ts" />
In Visual Studio Code I was looking for an extension, snippet, or shortcut to type out the above. Interestingly I came up with nothing. Maybe it's inside another extension I hadn't seen, but rather than look for a needle in a haystack, I decided to quickly hand-roll my own snippet. This is trivial to do in Visual Studio Code. Here is the snippet for a triple-slash directive, and it can be used with the following prefix name: "tripSlash"
{
"Triple_Slash_Directive": {
"prefix": "tripSlash",
"scope": "javascript,typescript",
"body": [
"/// <reference path=\"${1:path}\" />",
],
"description": "Triple Slash Directive used for declaring dependencies between files when no module loader is used"
}
}
Monday, August 13, 2018
Running ES6 Modules Native in the Browser using TypeScript
1. Create an ES6 module
export class Person {
getAddress():string{
return '123 Pine St.';
}
}
2. Create another ES6 module importing the module created previously
import { Person } from "./Person.js";
export class ContactInfo{
getContactInfo() {
const person = new Person();
const address = person.getAddress();
console.log(`The address is: ${address}`)
}
}
Make sure to note that "bare" modules are not supported. This restriction allows browsers to scale in the future when using module loaders, and allows "bare" specifiers to carry special meaning or functionality. The syntax below is not supported, and you'll get the following error in the browser even though the application might build correctly:
import { Person } from "Person";

"Uncaught TypeError: Failed to resolve module specifier "Person". Relative references must start with either "/", "./", or "../"."
The correct syntax is to reference the exact file directly.
3. Configure tsconfig.json
{
  "compilerOptions": {
    "module": "es6"
  }
}
4. Leverage the "module" type in index.html
<script src="scripts/typescript/Person.js" type="module"></script>
<script src="scripts/typescript/ContactInfo.js" type="module"></script>
5. Run and check for errors
If you're wondering about the .mjs module file extension support in TypeScript, see the following discussion on GitHub: Support '.mjs' output
Tuesday, February 21, 2017
Using the Azure Mobile Apps Signing Key with JWT Bearer Authentication in ASP.NET Core
Azure Mobile Apps allows users to quickly get up and running with authentication via 3rd-party providers. Once registered with your Azure Mobile Apps instance, you can use the appropriate SDK (i.e. the JavaScript SDK) to authenticate users, and in turn get back a signed JWT representing the authenticated user and their claims. This JWT can then be sent with requests to the server to authenticate users when making WebAPI calls.
1. Get the Signing Key for your Azure Mobile Apps Instance
If you want to use JWT Bearer Authentication on the server, you'll need to configure its settings to contain the signing key used to generate the JWT passed back to the authenticated client from Azure. This way you can validate calls on the server and confirm the user was truly authenticated via your Azure Mobile Apps instance. The nice thing about JSON Web Tokens is that they are structured to contain known information (claims), including the ability to verify that the signing value used matches.
The signing value is unfortunately buried on a page outside of the Azure portal. The wild goose chase I went on trying to find it was a time sink, so hopefully with this information you'll find it quickly. To get the signing key, go to the following URL (best if already signed into Azure):
https://{yoursite}.scm.azurewebsites.net/env
Replace the {yoursite} portion with the name of your Azure instance. A page will load with a ton of configuration about your site. Do a 'find' for the following value:
WEBSITE_AUTH_SIGNING_KEY
The signing value associated with that key is the one used for your Azure Mobile Apps instance. Grab it and save it for use in the next step.
2. Configure JwtBearerAuthentication in ASP.NET Core
- Pull in the NuGet package via project.json in your WebAPI ASP.NET Core project. This will allow us to have the needed bits to configure the ASP.NET Core middleware for JWT Bearer Authentication:
- To keep Startup.cs from becoming too polluted, create a separate partial class (if desired) named something like Startup.auth.cs where we can place the JWT middleware configuration. Then, in the Configure method of the main Startup.cs file, add the following call to the new method we'll create below:
- Add the JWT configuration, and replace the value in the instantiation of SymmetricSecurityKey with the value from your Azure instance we extracted above in Step #1. All of this configuration and setting of the various properties is up to you. At a minimum though I'd recommend making sure that the key was issued from the correct domain, not expired, and signed with the proper key. The key will need to be extracted from its hex value, so the helper is included.
- Add the usual Authorize attribute to the API controllers where authentication needs to be enabled.
"Microsoft.AspNetCore.Authentication.JwtBearer": "1.1.0-preview1-final"
ConfigureAuth(app);
public partial class Startup
{
/// <summary>
/// Sets up JWT middleware configuration for use when authorizing endpoints within this API
/// </summary>
/// <param name="app"></param>
private void ConfigureAuth(IApplicationBuilder app)
{
//Todo: place in configuration
var signingKey = new SymmetricSecurityKey(FromHex("YOUR_WEBSITE_AUTH_SIGNING_KEY_VALUE_HERE"));
var tokenValidationParameters = new TokenValidationParameters
{
RequireSignedTokens = true,
RequireExpirationTime = true,
SaveSigninToken = false,
ValidateActor = false,
// The signing key must match!
ValidateIssuerSigningKey = true,
IssuerSigningKey = signingKey,
// Validate the JWT Issuer (iss) claim
ValidateIssuer = true,
ValidIssuer = "https://your-site.azurewebsites.net/",
// Validate the JWT Audience (aud) claim
ValidateAudience = true,
ValidAudience = "https://your-site.azurewebsites.net/",
// Validate the token expiry
ValidateLifetime = true,
// If you want to allow a certain amount of clock drift, set that here:
ClockSkew = TimeSpan.Zero
};
app.UseJwtBearerAuthentication(new JwtBearerOptions
{
AutomaticAuthenticate = true,
AutomaticChallenge = true,
TokenValidationParameters = tokenValidationParameters
});
}
/// <summary>
/// Decodes a Hex string
/// </summary>
/// <param name="hex"></param>
/// <returns>byte[]</returns>
private static byte[] FromHex(string hex)
{
hex = hex.Replace("-", "");
byte[] raw = new byte[hex.Length / 2];
for (int i = 0; i < raw.Length; i++)
{
raw[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
}
return raw;
}
}
[Authorize]
Saturday, May 7, 2016
Chutzpah and non-ES5 JavaScript for Unit Testing is Problematic
No matter how simple it seemed of a test harness I used in JavaScript to test using Chutzpah, I kept getting errors like the following:
Can't find variable: myVariable in file xyz....
Normally this is an indication that the referenced JavaScript file is not properly referenced and the test cannot use it. The test therefore fails, complaining that a variable can't be resolved, because ultimately the file is not referenced or not referenced properly. Or so it appears on the surface.
This is the red herring...
This led me on a path of vigorous path tracing, trying different path notation for referencing, using chutzpah.json files, running from the command line, debugging, refactoring, and on and on. No matter what I'd do I couldn't apparently get the file referenced for the unit test to run.
Naturally what makes this all worse was that if I open my JS tests in the browser using the default Jasmine Test Runner, they of course pass. So I know it's only a problem with Chutzpah running my tests.
I do the most basic of tests:
expect(true).toBeTruthy();
This passes. So at least I know VS.NET and Chutzpah can actually work together.
Here is the crux of it, and I'm not sure why it dawned on me, but I decided to investigate my JS code. I had begun to sprinkle in some ES6 syntax that by now was compatible with most current browsers: things like 'class' or 'for...of' loops. The problem is Chutzpah uses the PhantomJS headless browser to run the unit tests. It is based on the WebKit layout engine, so it's about like running tests in Safari.
However I was completely wrong in this assumption. From the following (albeit an older package the reference still holds for this topic):
"Most versions of PhantomJS do not support ES5, let alone ES6. This meant that you got all sorts of errors when you tried to test ES6 features, even if you had used the Babel/6to5 transpiler."
PhantomJS today appears strictly ES5 compliant, and even a single line of ES6 syntax in your JS file will cause the referenced file not to be processed, thus serving up the error that it cannot be found. It can't be found because it can't be understood by PhantomJS. Hence the red herring about a variable in my test not being resolved: the JS class could not be used at all.
I had one particular JS class I thought was 100% ES5 and this made the debugging even worse. The way I finally sniffed out some ES6 syntax was to drop it in a TypeScript file and target it to transpile to ES5 JS. When I compared the files side-by-side sure enough I found some ES6 syntax in my JS files. Once I ported the ES5 compliant JS over, Chutzpah and PhantomJS both worked perfectly and my tests passed within VS.NET.
I think the lessons learned here are the following:
- PhantomJS which Chutzpah uses to run unit tests requires ES5 only JS, so make sure all of your JS is ES5 transpiled (Babel, TypeScript, etc.) and target these source files for the unit tests
- If you are able to write and target ES6 features in the browsers you support and don't want to transpile down to ES5-compliant JS, consider using a different JS test runner than Chutzpah.
Thursday, November 12, 2015
Is Aurelia going to be a realistic competitor?
To provide some context, here is a visual from Google trends based on some of the major competing frameworks (note: no matter which combination of 'aurelia' I used the results were all the same). Even if this metric isn't perfect, it still provides some level of comparison for popularity:
This GitHub thread has some interesting comments from Rob Eisenberg over this past year on Aurelia. With all the talk of it being a competitor to JavaScript frameworks like React and Angular, I was curious about its backing and support. With those frameworks you have Facebook and Google respectively behind them. I was curious if Aurelia was just a bunch of devs revolting with a new framework out of angst for what happened with the lack of use for Durandal and the ill advised direction Angular 2.0 was going according to Rob, or in the long run would this be a serious contender.
It's no secret JS frameworks and libraries seem to come and go as do the seasons, and investing heavily in one is an important decision. Durandal seemed to have lost a flame quickly in this JS framework battle, so I'm curious how Aurelia will fare.
Here are some quotes from that link from Rob:
"From a business perspective, Aurelia is backed by Durandal, Inc. Durandal is a company that is dedicated to providing open, free and commercial tools/services for developers and businesses that create software using web technologies."
As a private company, it is tough to see the backing or possible angel investors involved with Durandal. However, for OSS with a passionate community this could be a moot point.
He does go on to mention:
"Durandal is positioned to begin raising Series A venture capital this month. That isn't to support the open source Aurelia project. That project does not need funding. Rather, it is to support Durandal Inc. which intends to offer a much richer set of tooling and services for those who want to leverage Aurelia and the web. We are building out a serious business and our entire platform will be built with Aurelia and for Aurelia. Our potential investors are very excited about our plans and we expect to have some cool stuff to show in the future"
So that could give Durandal Inc. some potential to keep this thing moving forward. He continues on about the horsepower behind its actual creation and continued development:
"Aurelia itself is solid due to the fact that it currently has a 12 person development team distributed throughout the world and a large active community, especially considering it was only announced a couple of months ago"
...a bit later he adds:
"We have 17 members on our core team currently which contribute daily"Well hopefully those 12-17 people remain passionate :D
I think the conservative decision today is to go with ReactJS or AngularJS, with Aurelia being the bold one. I don't think it's going to fade away anytime soon, but with so many competing frameworks, it's important for it to catch some mainstream traction or the OSS community might lose steam working for a lost cause.
I for one hope it does succeed and becomes a bit more mainstream. When comparing the syntax for ReactJS, Angular 2.0, and Aurelia, I believe I'd choose Aurelia. Unfortunately for me I'm in the camp that actually likes Angular 1.x and its implementation, so I don't really have any gripes with it currently that would push me to something different. However its shortcomings in performance and implementation are certainly going to be addressed by the radically different 2.0, which still needs to grow on me a bit.
Time will tell, and the community, not I, will answer this question by adoption (or lack thereof) of this framework and others in the upcoming months and years.
Wednesday, November 12, 2014
Upgrading to Angular 1.3: Global Controllers Not Supported by Default
Error: [ng:areq] Argument 'MyController' is not a function, got undefined
This issue is a result of some breaking changes where AngularJS no longer supports global controllers set on the window object as of version 1.3. In reality, if you have a production application using global controllers, it is not advised and would be a prime target of refactoring regardless. However you might have had a small test app or the like that upon upgrading Angular to v1.3.x stops working unexpectedly. The intention behind this change was to prevent poor coding practices.
The actual breaking change is highlighted on GitHub here: https://github.com/angular/angular.js/blob/g3_v1_3/CHANGELOG.md#breaking-changes-13
I like how the use of global controllers according to the change was for, "examples, demos, and toy apps." I agree with the statements, so I'm OK with this change. It really is code smell to use controller functions in the global scope.
Let's look at code that would have worked in Angular versions prior with a trivial sample:
<body ng-app>
<div ng-controller="MyController">
<input ng-model='dataEntered' type='text' />
<div>You entered: {{dataEntered}}</div>
</div>
<script src='/Scripts/angular.js'></script>
<script type='text/javascript'>
function MyController($scope) {
$scope.dataEntered = null;
};
</script>
</body>
The breaking change requires one to register the Controller with a Module to provide scope and pull it off the global window object. The changes required are shown below:
<body ng-app="SimpleAngularApp">
<div ng-controller="MyController">
<input ng-model='dataEntered' type='text' />
<div>You entered: {{dataEntered}}</div>
</div>
<script src='/Scripts/angular.js'></script>
<script type='text/javascript'>
(function () {
function MyController($scope) {
$scope.dataEntered = null;
};
angular.module("SimpleAngularApp", []).controller("MyController", ["$scope", MyController]);
})();
</script>
</body>
You might find this will have the biggest impact going forward when you are throwing together quick demos or examples using Plunker or a small test harness. Just remember to register the controller with a Module to prevent running into this error.
There technically is a workaround if you must make a fix quickly, but not advised long term. You can choose to set $controllerProvider.allowGlobals(); which will allow the old code to run. You can read about it here: https://docs.angularjs.org/api/ng/provider/$controllerProvider
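As a sketch only (reusing the module name from the sample above), the workaround goes in the module's config phase:

```javascript
// Quick fix only, not advised long term: re-enables controllers
// registered on the global window object in Angular 1.3+.
angular.module("SimpleAngularApp", [])
    .config(["$controllerProvider", function ($controllerProvider) {
        $controllerProvider.allowGlobals();
    }]);
```

With this config block in place, the original global `MyController` function would resolve again without being registered on the module.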
If your apps have previously been constructed using best practices, this should not impact you at all. For additional changes between Angular 1.2 and 1.3, see the following link: https://docs.angularjs.org/guide/migration
Wednesday, December 12, 2012
Visual Studio LIVE! Orlando Day 3
Visual Studio Live! Keynote: Building Applications with Windows Azure
James Conard, Sr. Director of Evangelism, Windows Azure
I have to admit that Azure is a technology that I think has a ton of potential, and the implementation seems to be done well, but it is not something I see using in the near future myself. Why you might ask? Well, professionally it's not an option for the time being. As far as personally, I looked once at messing around with hosting a site in Azure so I could get some experience with the technology. I was drawn in by the 3 or 6 months free hosting, but started looking at how much it would cost in the long run. It turns out the Azure hosting was going to cost much more than any other hosting company and did not make sense for me.
I have seen some sessions over the past year similar to this one done by Scott Guthrie. As I'm watching the demo today I have to say that the creation, deployment, and configuration couldn't be more straightforward. There is no excuse, for anyone needing to do Azure development, to say it's too difficult to get set up. From the close ties in VS.NET and Azure out to the Azure Management Portal, I can say the tooling on both ends appears to be well designed and intuitive.
The real power of Cloud Services is automation and the ability to scale so easily. In the Management Portal it is amazing how much can be configured. The (2) tabs I liked the most are 'Configure' and 'Scale'. It was mentioned that just recently the VMs began supporting Windows Server 2012 and .NET 4.5, including all of the new features like web sockets. On the 'Scale' tab you can use sliders to change the number of cores for both the front end and backend VMs. What they don't tell you (but I assume most here know) is that upping the cores used on the VM for a site that gets heavy traffic will result in a significant cost increase. Since the cloud based pricing model is based on what you use, they make it look so simple but it does come with a cost monetarily.
Managing SQL server in the cloud is just as straight forward with support for many of the things a traditional SQL instance has.
There were multiple demos from web to mobile and again in my opinion the reoccurring theme was the one of ease to create, deploy, and manage any type of project hosted in Azure. I know that if I ever do get into cloud development in the future, I'll feel confident in using the right tools with Windows Azure.
JavaScript and jQuery for .NET
John Papa, Microsoft Regional Director
Ok this room is packed! Actually there are more people for this session than there were for the keynote. However the planners this year seemed to have missed a tad on which sessions would be the 'popular' ones that needed to be moved to the larger Pacifica 6 room. This is the 3rd session I've been in that had to move from a smaller room to this one. Seems consistent that the web and .NET Framework sessions are much more attended than the Windows 8, XAML, and Azure sessions. John's sessions at CodeCamp or Visual Studio Live! seem to always attract the masses and he does a great job presenting.
He delved right into the different data types for JavaScript. The differences between Dynamic (JavaScript, Ruby, SmallTalk) and Static (.NET, Java) languages were highlighted as well. A Dynamic language like JS can have objects or anything change at runtime, where in Static languages everything is already decided at compile time.
He also highlighted a new typed JavaScript language from Microsoft named TypeScript. TypeScript is a superset of JS. Anything you already know in JS can be used in TypeScript. TypeScript will give you a lot more information at compilation time for code issues vs. getting that little yellow icon down in the status bar of the browser at runtime. Ahh, who needs something great like this, let's just type our JS perfectly and there will be no issue. From what John is highlighting, the next version of JS, ES6, should have a lot of cool enhancements in this arena that TypeScript is covering today. To see how the differences look between JS and TypeScript, check out the TypeScript Playground.
Objects in JS are hash tables and can change at runtime. You can actually add properties to change the object on the fly. Arrays are just indexed hashes that can be redimensioned implicitly at runtime just based on the index accessed.
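A quick sketch of what was being demonstrated (the object and property names here are my own, not from John's demo):

```javascript
// Objects are hash tables: properties can be bolted on at runtime.
var person = { name: "John" };
person.topic = "JavaScript";       // added on the fly
console.log(person["topic"]);      // bracket and dot access are equivalent

// Arrays are indexed hashes: assigning past the end grows them implicitly.
var items = [1, 2, 3];
items[5] = 6;
console.log(items.length);         // 6 — indexes 3 and 4 are empty slots
```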
One thing John was doing that I think was an effective way to relay his topics was to make comparisons between how we do something with an object in C# and how we do it in JS. One point to make along these lines is there are no classes in JS and you need to wrap your head around this. However the next version of JS, ES6, will start to contain the class keyword.
He also spoke to the difference between double equals (==) and triple equals (===). Main point here, if you are unsure of a type coming in and need to do a comparison, use the triple equals (===). For example (if 0 === "") will not evaluate (which is good), where (if 0== "") will evaluate (which is bad).
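The two comparisons above boil down to this:

```javascript
console.log(0 == "");    // true  — loose equality coerces "" to 0 before comparing
console.log(0 === "");   // false — strict equality fails on the type mismatch
```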
"Avoid globals, globals are bad" says John. Yeah this doesn't just apply to JS and is just a good message regardless. Any variable created on the fly will be a global variable.
I do like function expressions in JavaScript and have used them before in some of the jQuery I've done. As with any JS variable, make sure to physically define a function expression before calling it or you will run into errors. To avoid hoisting issues, a common JS convention is to declare all of your needed variables at the top of your functions so they will be available for use.
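A sketch of the hoisting difference (function names here are my own):

```javascript
// A function declaration is hoisted in full, so an early call works:
console.log(declared());                 // "ok"
function declared() { return "ok"; }

// A function expression hoists only the variable name (as undefined),
// so it must be assigned before it is called:
var expressed = function () { return "ok"; };
console.log(expressed());                // "ok" once the assignment has run
```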
John spent a good amount of time using the aforementioned TypeScript Playground to show the niceties of the language. It really does bridge the gap for those of us more familiar with OO languages like C#. Who knows how long it could be until the next version of JS, so TypeScript is an attractive option today.
I would have to assume that I'm probably like the majority of people in this packed room. JavaScript is not something I get really excited about, and to me it is a necessary evil, especially now more than ever. I guess this sentiment comes from the fact that I do not use JS day in and day out, so I've never broken through the barrier of being really proficient. I've written a lot for my web applications over the years moving from plain JS to using libraries like jQuery, so it's not new to me. JS is one of those areas where I consider myself dangerous and productive but by no means highly proficient yet. The good news is I like what I hear from John, and all of the tooling and support that has wrapped around JS recently. The stronger OO syntax support is a really nice feature in TypeScript. As well, VS.NET 2012 has a lot better Intellisense to help those like me that need the extra help. I know one thing: JS is here to stay and a major player in the industry, so I expect the sessions on JavaScript in the future will be plentiful.
Reach the Mobile Masses with ASP.NET MVC 4 and jQuery Mobile
Keith Burnell, Senior Software Engineer, Skyline Technologies, Inc.
Developing applications that not only work on mobile devices, but have an optimal mobile experience is key today. If you ever bring up a traditional website on a mobile device that was not designed with a mobile experience in mind, it will probably not get used. Most people aren't developing 'mobile only' sites, so when developing websites used on the desktop, it's good to sprinkle in some functionality to allow sites to be multi-purpose (mobile and desktop).
Interesting, "If you get nothing else out of this talk, make sure to add the viewport meta tag to your markup". It makes sure to set the width of the page to the device. Apple devices will actually go ahead and inject this tag for you. However, don't be fooled, because overall Apple has a small market share worldwide (it's in the US where they are so popular). For Android and other devices this will make the content fit as it should on a mobile device. It looks like the line below:
<meta name="viewport" content="width=device-width">
In a nutshell, if you can't invest in a lot of mobile device functionality in your MVC application, adding at least the tag above will shape the content much better with little effort; there is no reason not to use it. Taking the styling to the next level is to modify sets of CSS for mobile and traditional sites.
It's pretty cool because he was creating MVC apps from project templates in VS.NET 2012, and running them both in the browser and using a simulator. One note on the difference between an emulator and a simulator: an emulator actually emulates the hardware of the device, where a simulator just simulates the user experience of the phone. Keith was using various simulators like Android, Apple, and a mobile device with an Opera browser.
Tangent here, he asked: "Anyone doing JavaScript development for Windows 8?" Not a single hand was raised. Not telling necessarily of this in the future, but as of today there doesn't seem to be a ton of Win8 development going on just yet.
Next he talked about (2) different layout files: _Layout.mobile.cshtml and _Layout.cshtml. The cool thing was that based on the browser type being sniffed out at runtime, the ViewEngine (Razor) looks at the user-agent value and then uses the proper _Layout file. Even though this is great, Keith does admit we have not reached the euphoria of a single UI codebase for mobile and desktop devices. You still have differences in files, but this is to be expected. He has done a ton of mobile sites and this is always the case.
Tangent again, he asked: "How many people own a Windows Phone?" In a room of 150ish, there were like 5 hands raised.
Next he went down a level to have multiple display modes based on device: Android, iPhone, etc. This is available as of MVC 4, and if you are doing mobile development for the masses, this is reason enough to upgrade. The 'display modes' are registered in Application_Start(). He used a slick lambda expression to compare the user agent to the string of the new display mode to override the user agent (context.GetOverriddenUserAgent()). A new display mode is registered with the ViewEngine. If a newly added display mode, say "iPhone", matches the user-agent value, then that display mode will be used. Note: Google user-agent strings if you need a reference to the actual names that are used.
jQuery Mobile is a JS library for touch optimized devices (tablets, phones, etc.). The scripts can be easily downloaded from NuGet (or directly from the web). NuGet by the way can be used within the enterprise (a NuGet server exists internally) to download packages (i.e. custom internal components) to keep everyone on the latest and greatest. It is HTML5 markup driven. It is supported in about 99% of any modern mobile browser so no worries there. Use the data-* ("data dash") attributes to store data across HTML or JS. 'jQuery.Mobile.MVC' (a superset of jQuery Mobile) will add everything the 'jQuery.Mobile' package does, but in addition it adds MVC views to allow switching views between the "Mobile View" and "Desktop View". It also adds the (2) Layout files: _Layout.mobile.cshtml and _Layout.cshtml.
This session had some great information on helping make MVC sites have mobile capabilities with very little work. After all we are all about working less and doing more.
Controlling ASP.NET MVC 4
Phillip Japikse, MVP & Developer, Telerik
With VS.NET 2010 and MVC 4 there have never been more project templates available to help us get started developing MVC sites. In fact enough of the industry complained and they even have a Facebook site template, yikes! For so long people would go 'File -> New Project' and then go, "Now what?" The various templates help get us started in a variety of ways. While the default home page on a MVC site may never be used out of the box, it at least shows how it's used.
So Phillip asked how many people do mobile development, and about 1/4 of the room raised their hands. Then he said how many are web developers, and the whole room did. He said, those that are developing for the web are also developing for mobile. Any web application exposed outside the firewall will be accessed by mobile devices, so it's something we need to embrace.
OAuth is not included in the 'Internet' template. We can leverage, Microsoft, Google, Facebook, etc. for the login and leverage their sign-on for creating a single-sign-on (SSO) scenario. Uncomment a few code blocks and it's done!
He also touched on the "viewport" tag which was discussed in the last session. It comes for free and makes it so we don't have to view desktop versions of a site (with a magnifying glass) on a mobile device. Once again, jQuery.Mobile was touted for View Switching. He demonstrated how it adds a widget to the site to allow users to click on a link to switch between desktop and mobile versions. This is useful in scenarios where a website has not been customized for mobile devices yet. Imagine you have a production site, widely used, and all of a sudden it does not work on the iPad mini. Do you have time to rewrite the CSS and markup? No, and this is where you can add in the View Switching functionality.
Love it! Phillip: "How many people are doing System.Threading.Thread.Start?... you're doing it wrong. It's hard and there's a reason C++ devs became C# developers. There is an easier way to do things." This falls right in line with multiple of my previous posts (and some still in draft form: async in C# 5.0). Async and await in Framework 4.5, or TPL since Framework 4.0. One interesting note, in MVC 3 there was no way to modify the controller without creating a separate class, inheriting from IController and putting all the controller functionality within. In MVC 4, you just subclass all of the controllers to another class that derives from AsyncController and get all of the functionality of async operations.
Next he rolled into a little on Web API. He confirms, as I have in several of my comments, that WCF is a bit of a bear and has a significant learning curve. I think he was trying to show that WCF is too heavy and to just use Web API because of its loads of features, but several in the crowd disagreed. WCF is one of those technologies that if you just dabble in it, it's tough to be fully productive. He does say, and I agree, that the majority of people that like WCF have spent the time to learn how to use it. With the Fall 2012 ASP.NET update there are Web API performance enhancements.
Tangent - "How many people use Web Matrix?" Not one person in the crowd of 100-200.
On the note about the 'Fall 2012' ASP.NET update, it's pretty significant. There are actually breaking changes, like some things removed from Razor (rarely used methods), breaking the MVC RTM. There are NuGet packages that can be downloaded (Tools Update) from Microsoft which will fix these issues. Bottom line, if starting a new project make sure to get the Fall update before building the project.
Tangent - Phillip always cracks me up (I have been to his sessions in the past). He has everyone stand up 'to stretch'. He tells people with even number birthdays to place their hands together (like prayer), and odd number birthdays to open their arms up with palms up. You get the entire crowd looking like they are standing there praising him, and then he takes a photo. Nice!
In a nutshell (yeah a lot of O'Reilly books with that title), MVC 4 has matured greatly and is loaded with features for both desktop and mobile website development.
Creating RESTful Web Services with the Web API
Rob Daigneau, Practice Lead, Slalom Consulting
This is a session I had starred on my agenda and have been looking forward to it all week. Top it off that I think Rob is a great presenter with 20+ years of development experience (loved his 8MHz CPU with 16MB of RAM computer, and the rest is ancient history). The room is packed as I would expect. He touts Web API to be a lot better to use than WCF REST based services, which is a more clear cut opinion than that of Miguel's class on Day 1.
He started it off with a room vote of the following:
- How many people use WCF: Almost 100% of the room
- How many people use WCF RESTful services: About 1/5 of the room (including myself)
- How many using ASP.NET MVC: About 1/2 the room.
The Web API is built atop ASP.NET and the MVC architecture. It is also based on the REST architecture. The REST architecture has constraints like statelessness, requiring a Uniform Interface (HTTP - GET, POST, PUT, DELETE), Unique URIs, and resources manipulated through representations (from client to server back to client to change the state of the client). Bottom line, Web API does not follow the REST architecture to a 'T', but neither does WCF. Just don't tell a RESTafarian that you are creating a REST based service using Web API or you might get scolded (but who really cares, this is a purist thing).
Web API has a project template in VS.NET 2012 under the 'web' heading. The default template shows an example of basic calls which is nice to get started. The cool thing is scaffolding a new controller for a Web API call. Just like scaffolding an MVC controller off an entity or model class, we can do the same for an API controller:
He also highlighted the ability for the client to set in the header the ability to request XML or JSON to be returned. How much work for the developer? None. It's all baked into the Web API project and done for you. Nice!!
For MVC developers, routing is the same using Web API. The default route template will build a route like this: /api/{controller}/{value}, where 'value' is optional. Once again convention is used when calling the controller. If a HttpGet is done, then the action sought out will be one with the name 'Get'. Cool thing is you can add descriptions on the end and it will still work (i.e. GetAllNames()) as long as the 'Get' is still there.
You can use an instance of the 'HttpClient' class to make calls to a RESTful service. Of course any type of client can call your RESTful service (Java, .NET, etc.) but this is the best way to make calls from .NET. Adding the header to request XML or JSON on this HttpClient instance is a single line of code: client.DefaultRequestHeaders.Accept.Add(). There was another method for doing a HttpPut called client.PutAsJsonAsync.
He recommends not only sending back status codes from the server like (200 OK, 201 Created, 404 Not Found, 500 Internal Server Error), but also sending a timestamp. This way multiple clients trying to do say a PUT on the same resource will have the ability to handle concurrency with the time value.
Remember that HttpGet, HttpPut, and HttpDelete are supposed to be idempotent. You can call them over and over and the result will not change. An HttpPost is not idempotent.
He showed a few examples adding additional routes to constrain to HttpPost calls and allowed calling non-Http verb method name calls (i.e. DoSomething()). Obviously this is desired as mentioned before, you are really going to want to do more than just CRUD operations that map to the standard Http verbs. Just make sure to build a new route in Application_Start for this because the default route will not find a non-standard named method on the controller.
Rob also presented some examples on how you can expand beyond the XML/JSON return types to other supported media types over HTTP like 'CSV'. It's based on the client's accept header value, so any of the supported types can technically be returned by the RESTful service. This was cool stuff, but I think the majority of folks getting into REST based services will be fine with JSON and XML. This stems from the fact that the need for a REST based service usually comes with a request to have client/technology/platform agnostic services.
A brief discussion was had on query string vs. URL parameters (between the slashes) vs. building up the body of the request with request parameter values. It's all preference, but there are URI length limits. If a query string or list of URL values gets too long, then one should build up the body of the request. Combine this with MVC model binding and you could have a pre built object from the request once it hits the server.
Lastly he spoke to errors. Returning 500 codes is not the best way. Remember with SOAP services we had rich .NET exception handling between the service and the client. This is not the case with REST based services. He suggested at a minimum to create a HttpResponseMessage(HttpStatusCode.BadRequest) and fill it with a robust description of what error occurred from the request. But the coolest method was to create a .NET exception and add that to the Response message along with the BadRequest value.
This was one of the best sessions I've been to and I can take a lot of what I learned and that Rob provided and apply it in new Web API service applications.
Wrap Up Day 3
Another fantastic and information-packed day here in Orlando! My favorite session was the one on Web API, but I got great information from all of the sessions. I think the most popular session overall was John Papa's on JavaScript as it almost filled the entire keynote hall. JavaScript is not something I have a strong passion for, but I got a lot of information to sharpen my skills if needed. I'm also happy to announce we passed by 12/12/12 12:12:12.12 with no problem at all today. :-P Well it's time to rest up, eat some dessert, and get ready for another great day tomorrow!