Becoming Agile: Introducing the Retrospective

There are many software companies that transition from a waterfall methodology to an agile methodology to more effectively adapt as the customer’s understanding of the requirements evolves and the scope is refined. In contrast, there are companies that move from a “do anything you like” way of organizing work to an agile way of working to gain total team involvement in moving towards a defined set of goals. No matter which end of the spectrum you are approaching agile from, there is invariably a period of transition during which agile practices are being ramped up and prior practices are adapted or discarded.

In the first phase of this transition, I believe there is no more important practice to adopt than the Retrospective. An Agile Retrospective is a meeting held at the end of some predefined period, often referred to as an iteration or sprint. The Agile Retrospective is born from the heart of the Principles behind the Agile Manifesto, which state that,

“At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.”

The retrospective is intended to facilitate continuous improvement of the team’s processes and practices. Fortunately, there is no agile practice that is easier to get started with. While nearly all agile practices require buy-in from the top to the bottom of an organization (which is certainly preferable), the retrospective can be introduced with the buy-in of the team alone. The retrospective is for the team and by the team, to facilitate the team’s own improvement. This implies that there should not be feedback from external sources that the team must address.

The retrospective can take many forms, but the easiest to get started with is Plus/Delta. At the core, the Plus/Delta discussion is about what went well, what could be better, and the formulation of action items in response.

Pluses (+)

  • What did we do well?

Emphasize improvements made since the last iteration. When completing a large increment of work, find creative ways to congratulate the team on a job well done. While I will not discourage recognizing individuals for effort above and beyond, you will want to avoid making anyone feel left out or like they are not a part of the team. Avoid, for example, recognizing four out of five team members.

As the organizer of the meeting, I recommend taking notes during the iteration whenever the team works particularly well together, to ensure that there are a good number of pluses.

Deltas (Δ)

  • What could we improve upon?

In the most positive way possible, address any areas where the team could use improvement. When there is agreement, a “plus one” can be added to existing items. This helps prioritize which deltas deserve action.

A Word of Caution

The retrospective is focused on improving the team as a whole. Avoid providing corrective feedback to individuals during the retrospective as this could work against the cohesiveness the team is trying to build during the meeting. Additionally, this type of feedback, as suggested in “Measure What Matters” by John Doerr, is best given as close as possible to the situation being addressed so that important context is not lost.

Action Items

  • What can we do to improve?

Action items are created in response to deltas, but occasionally may be used to further refine or encourage desirable behavior. There is only so much bandwidth for work towards improvement, so be thoughtful about which action items to take on.

To avoid having team members cite lack of follow-through on action items at the next retrospective, it is important to assign each action item to a member of the team who will take responsibility for it. It is often beneficial to track the progress of these action items the same way other work is tracked in your organization, to ensure the action items are not forgotten. If this is not an option for your organization, a whiteboard or Post-it® note will work just as well. At the following retrospective, any action items from the prior retrospective may be listed as pluses if completed, or continue forward as action items.

The Plus/Delta discussion is not a magic formula, and after a few meetings it can seem monotonous. To prevent losing the interest of the team, I recommend exploring other retrospective formats, some of which address additional goals such as team cohesion or improving the team’s relationships, which is debatably the most important type of improvement to address. When starting the journey towards being agile, there’s no better place to start than the retrospective.

Configure NuGet Package Source at the Repo Level

A long time ago in a galaxy far, far away, we used to configure NuGet package sources at the machine level in Visual Studio. Each time a new PC was dropped on our desk, we would dutifully go through our Visual Studio configuration steps, including setting up a NuGet package source pointing to our ProGet server.

Recently we have been able to reduce the amount of configuration needed at the machine level by instead configuring our package source at the repository level.

NuGet.Config

The package source configuration lives in a nuget.config that gets committed to the repo right beside the .sln.
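For example, a minimal nuget.config pointing at an internal feed looks like this (the key name and URL below are placeholders; substitute your own ProGet feed):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Placeholder feed; point this at your own ProGet server. -->
    <add key="ProGet" value="https://proget.example.com/nuget/internal-feed/" />
  </packageSources>
</configuration>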

You can learn more about nuget.config from the Microsoft Docs.

Fail Fast - Publishing NUnit Tests in Jenkins

The Problem

We’ve run into a couple of cases where Jenkins silently stopped reporting test results, and we did not notice for quite some time because the missing results were not failing the Jenkins build.

The Solution

When configuring Jenkins to Publish an NUnit test result report, ensure that you enable the Fail the build if no test results are present option.

In the Publish an NUnit test result report task of your Jenkins job config, click Advanced, then enable Fail the build if no test results are present.
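
If your jobs are defined as pipeline code rather than through the UI, the same setting is available as a parameter on the NUnit plugin’s nunit step. A sketch (the failIfNoResults parameter name is my reading of the plugin; verify it against your plugin version):

// Fragment of a declarative Jenkinsfile, inside the relevant stage:
post {
    always {
        // Publish NUnit results and fail the build when none are found.
        nunit testResultsPattern: '**/TestResult.xml', failIfNoResults: true
    }
}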

The Why

Why do I want to fail the build?
In 2004, Jim Shore published a paper in IEEE Software titled Fail Fast that explains the reasoning quite well. In short, we would like to know immediately if there is a problem, even if it’s slightly inconvenient. Bugs in the build or in our code are like the confrontations you read about in all those self-help books: the longer you put them off, the more difficult they become to resolve. Better to get it out in the open and resolve it quickly than to let it linger!

So, go resolve some conflicts in your personal life and fail fast in software! It may be difficult at first, but the alternative is much worse.

Updated with ASP.NET Core 2.0: dotnet new angular

Back in February 2017, Jeffrey T. Fritz blogged about Building Single Page Applications on ASP.NET Core with JavaScriptServices, introducing how to get started with an Angular + ASP.NET Core application. With the release of ASP.NET Core 2.0, there are some changes to this workflow.

For one, the Microsoft.AspNetCore.SpaTemplates package ships with ASP.NET Core 2.0 (#948) and does not need to be installed separately. Once you’ve installed the .NET Core 2.0 SDK, you can simply get started with your new project like so:

Create a new project:

mkdir mynewproject
cd mynewproject
dotnet new angular

Restore packages:

dotnet restore
npm install

Set environment to Development:

setx ASPNETCORE_ENVIRONMENT "Development"

Start your dev server:

dotnet run

By default the server starts on port 5000, so browse to http://localhost:5000 to see the output. At this point you have a running ASP.NET Core 2.0 server application with an Angular/Bootstrap frontend. Ready for innovation!

What Else Is New?

Comparing the folders produced by a clean dotnet new angular under ASP.NET Core 2.0 Preview 2 and ASP.NET Core 2.0 RTM reveals that a number of dependencies have received minor updates, including Angular, which has been updated to 4.2.5. Some work has also been done to make dev builds faster and to fix a couple of minor issues.

Happy coding!

Geek Out: Microsoft Teams Desktop App

For anyone else curious about how Microsoft is now developing desktop applications, here’s some info on Microsoft Teams:
The Microsoft Teams app is built with Angular, TypeScript, & HTML. This web-based application is then wrapped with the Electron shell just like VSCode. The desktop client shares the same code as the Microsoft Teams web client.

How Does This Apply to Me?

Worth noting, here are some potential non-functional requirements of the Teams client that may differ from your needs:

  • Cross-platform (Windows, Linux, Mac)
  • Share code between desktop and web
  • Prove out modern web technologies in a greenfield product
  • Cloud backend support

Edit 8/14/2017

The Visual Studio 2017 installer UI is an Electron app as well.

“We’re using Electron for the UI of the setup engine because it gives us the potential of sharing the installer code with other developer tools that ship on multiple platforms.” – Tim Sneath

References:

https://blog.thoughtstuff.co.uk/2017/04/under-the-hood-of-the-microsoft-teams-desktop-application/
https://techcommunity.microsoft.com/t5/Microsoft-Teams-Blog/Ask-Microsoft-Anything-Microsoft-Teams-11-10-16/ba-p/30212

Back to the Basics: Custom Types in C# Dictionary Key

The Usual Profiling, Refactoring, & Reimagining Exercises

I recently bumped into a performance problem while reworking some C# code that I haven’t thought about in quite a while. I switched the code around to use the standard BCL containers instead of a hodgepodge of custom container logic and manual hashing. As part of this work I introduced a struct to hold the Key info for a dictionary.

Below is the general idea of the struct used for the Dictionary key. The domain has been changed to cats to protect the innocent. If you think it strange that I’m using a couple of strings here, you’re probably right, but I haven’t finished profiling or refactoring yet.


public struct CatTrackingId
{
    public CatTrackingId(string breed, string localId)
        : this()
    {
        Breed = breed;
        LocalId = localId;
    }

    public string Breed { get; private set; }

    public string LocalId { get; private set; }
}

This is representative of how it is used:

private readonly Dictionary<CatTrackingId, CatTracking> _catTracking;

I know your ‘spidey sense’ is already tingling, but I didn’t catch my rookie mistake until I re-ran the profiler, realized it was going way too slow, and noticed a lot of reflection time being taken. I had a pretty good idea of what I’d forgotten, but did some Googling anyway, because that’s what we do.

For shame, I had forgotten my Equals() and GetHashCode(). Turns out, after all these years we still need them. When defining a struct containing reference types as I have here (good/bad idea? I’m open to input), the default Equals() comparison uses reflection to compare the fields of the struct, which can be quite costly.

Instead of adding the methods to the struct itself, I opted to let ReSharper generate an IEqualityComparer<CatTrackingId> for me. Feels good.

using System.Collections.Generic;

public struct CatTrackingId
{
    private static readonly IEqualityComparer<CatTrackingId> BreedLocalIdComparerInstance =
        new BreedLocalIdEqualityComparer();

    public CatTrackingId(string breed, string localId)
        : this()
    {
        Breed = breed;
        LocalId = localId;
    }

    public static IEqualityComparer<CatTrackingId> BreedLocalIdComparer
    {
        get { return BreedLocalIdComparerInstance; }
    }

    public string Breed { get; private set; }

    public string LocalId { get; private set; }

    private sealed class BreedLocalIdEqualityComparer : IEqualityComparer<CatTrackingId>
    {
        public bool Equals(CatTrackingId x, CatTrackingId y)
        {
            return string.Equals(x.Breed, y.Breed) && string.Equals(x.LocalId, y.LocalId);
        }

        public int GetHashCode(CatTrackingId obj)
        {
            // Combine the two string hashes; 397 is the odd prime ReSharper favors.
            unchecked
            {
                return ((obj.Breed != null ? obj.Breed.GetHashCode() : 0) * 397)
                    ^ (obj.LocalId != null ? obj.LocalId.GetHashCode() : 0);
            }
        }
    }
}
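
One important detail: the comparer only takes effect if it is handed to the Dictionary constructor; otherwise the default (reflection-based) struct equality is still used. The field declaration from earlier becomes:

private readonly Dictionary<CatTrackingId, CatTracking> _catTracking =
    new Dictionary<CatTrackingId, CatTracking>(CatTrackingId.BreedLocalIdComparer);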

Next may be working out how to use something more efficient than strings to hold the Key, but as usual with performance, let the profiler be your guide.

Adventures in Azure: Adding in Azure Functions


What would it take to publish a simple Web API in Azure? More specifically, what would be involved in building a Web API providing a service that I could securely make available on the internet and bill for usage?

Azure Functions is a simple way to deploy a small piece of code to Azure and have it run and scale without having to think about the hardware it will run on or care about VM provisioning. The code can be triggered by messages in an incoming queue, from a GitHub WebHook, or, what I am currently interested in, by an HTTP request. Code can be written in C#, F#, Node.js, Python, PHP, PowerShell, batch, or bash.

I know that my simple API needs only a small number of endpoints, so Azure Functions seems like an easy way to get things going, as long as all my dependencies can be supported in the sandbox in which Azure Functions runs. The price also seems to be right, with the first 1 million requests included free each month. I am a frugal fellow, so free is one of my favorite words. There are additional constraints on resource consumption, but this all seems reasonable at the moment.

So, I set out to test the solution. A simple Add function will suffice for the test. My API endpoint will be a GET request with parameters a and b that are added to produce a result. The request looks something like this:

https://{my site}.azurewebsites.net/api/CalcAdd?a=4.5&b=37.9

For this example I would expect a JSON response body like so:

{
    "operation": "Add",
    "a": 4.5,
    "b": 37.9,
    "result": 42.4
}

Instead of using the online code editor for Azure Functions, I opted to use the VS2015 preview tooling and set up my code with continuous deployment from GitHub. This experience was much more seamless than I expected.

A single async static method serves as the entry point for the Azure Function.

The inputs are parsed as query parameters, and an anonymous type is constructed to form the response body; the sketch below shows the general shape.
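
The actual code is in the repo linked below; purely as an illustration, a C# script (.csx) HTTP-triggered function of that era might look something like this (the parsing details and messages here are my sketch, not necessarily the repo code):

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("CalcAdd invoked.");

    // Collect the query parameters, e.g. ?a=4.5&b=37.9
    var query = req.GetQueryNameValuePairs()
        .ToDictionary(p => p.Key, p => p.Value, StringComparer.OrdinalIgnoreCase);

    string aText, bText;
    double a, b;
    if (!query.TryGetValue("a", out aText) || !double.TryParse(aText, out a) ||
        !query.TryGetValue("b", out bText) || !double.TryParse(bText, out b))
    {
        return req.CreateResponse(HttpStatusCode.BadRequest,
            "Please pass numeric query parameters 'a' and 'b'.");
    }

    // An anonymous type shapes the JSON response body.
    return req.CreateResponse(HttpStatusCode.OK,
        new { operation = "Add", a = a, b = b, result = a + b });
}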

The only real roadblock I hit that stumped me for a few minutes is that the preview tooling creates a directory structure that is not compatible with deployment to Azure Functions. Fortunately, the documentation nicely states that all Azure Function code must be in a sub-folder directly beneath the root folder.

Developing the code locally in VS2015 gives you the ability to set breakpoints and debug code when necessary.

Feel free to fork my repo and try it for yourself.

Up next: Azure API Management

Update 05/07/2017

It looks like the VS2017 tooling for Azure Functions is being timed to release at the start of the Microsoft Build conference.