Running Karma tests in a DevContainer

I’ve recently been playing around with DevContainers. They’re one of those things that I’ve always desperately wanted without realising it. I want to be able to do a bit of work in Python, or Angular, or C# or Java, or Vue or whatever, without having to pollute my laptop with half-a-dozen tools, libraries and installations.

DevContainers to the rescue. The idea is so simple, and I’m not going to repeat it all here. Suffice to say, DevContainers give you a ready-made, self-contained development environment with VS Code, regardless of the tech stack that you are using. And it guarantees that the next time you clone this repo, the dev environment will be ready to work, out of the box.

With this in mind, I dipped back into some Angular work again. Half-way through playing with the dev container (which came with npm and TypeScript out-of-the-box), I realised I needed to run unit tests for my Angular code. I tried the standard approach but it fell over. This made sense – I’m using a Linux container which doesn’t actually have a UI, so there’s no browser to run the Karma tests against. Of course, I can run up the application, and with the help of port forwarding I can hit the app from my ‘host’ machine, but this doesn’t work for Karma tests.

In order to solve this, I had to use a headless browser to run my Karma tests against. The steps were quite straightforward.

First, I needed to extend the container by using my own Dockerfile. This was required to add headless Chrome, as well as to set the CHROME_BIN environment variable. The Dockerfile is reproduced below:

FROM mcr.microsoft.com/devcontainers/typescript-node:0-20

# Install necessary dependencies for Chrome Headless
RUN apt-get update && apt-get install -y wget gdebi-core curl

# Download and install Chrome Headless
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN gdebi --non-interactive google-chrome-stable_current_amd64.deb

# Clean up unnecessary files
RUN rm google-chrome-stable_current_amd64.deb

# Set Chrome as the default browser for Karma
ENV CHROME_BIN /usr/bin/google-chrome
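
To make the dev container use this Dockerfile, the devcontainer.json needs to point at it rather than at the stock image. A minimal version looks something like this (the name is arbitrary):

{
    "name": "Angular Dev Container",
    "build": {
        "dockerfile": "Dockerfile"
    }
}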

Once that’s done, reopen your repo in the dev container.

Next, run ng generate config karma. This generates our bespoke Karma configuration file, which will allow us to override the default browser to run tests against. Once run, this gives us a “karma.conf.js” file within our Angular application.

Next, I had to add a browsers section (with a matching custom launcher) to this file:

browsers: ["ChromeHeadless"],
customLaunchers: {
    ChromeHeadless: {
        base: "Chrome",
        flags: [
            "--no-sandbox",
            "--disable-gpu",
            "--headless",
            "--remote-debugging-address=0.0.0.0",
            "--remote-debugging-port=9222",
        ],
    },
}

Once complete, I could run ng test, and the Karma tests ran successfully.

One Issue with Agile: Customer vs User

In the words of one of the founders, Agile is “a devastated wasteland. The life has been sucked out of it. It’s a few religious rituals carried out by people who don’t understand the purpose that those rituals were intended to serve in the first place.”

Strong words. But coming from Kent Beck, they hold a lot of weight. The principles and guiding philosophy have been appropriated by a priesthood, essentially cargo-cult programmers following strict procedures in order to attain the next certificate or gold star.

Why has this happened? Agile promised so much. On the first “Agile transformation” I went through, I was fully bought in. It seemed like intelligent people had come together, figured out the best way to deliver value, and driven everything from feedback. But, having gone through several more iterations of the so-called “Agile transformation”, my youthful naivete has been replaced with well-worn cynicism.

But again, why has this happened? Well, in my opinion there are many reasons. Giving teams full autonomy and control means taking it away from someone else. That results in that “someone else” feeling threatened. And usually that “someone else” is high up enough in management to influence things and ensure they maintain a level of control and/or relevance to the ongoing transformation. This often works against the principles of Agile, where teams should have full autonomy and responsibility for their work.

Of course, it doesn’t have to be like that. Those threatened individuals could of course add value in the new world, but change-combined-with-threat is very difficult for people to accept, especially anyone who is comfortable with the status quo. It’s a completely normal human reaction if we’re honest.

And so, the people who enthusiastically engage in the Agile movement within an organization are managed by those who secretly feel threatened by that same wave. As the old saying goes, turkeys don’t vote for Christmas. An “Agile transformation” in such an environment is unfortunately doomed to fail.

But that’s only one part of it. The other is a deficiency in Agile itself. And just to be clear, I’m a huge fan of the Agile manifesto, and all of the principles within it. It makes perfect sense within the domain of creating excellent software that delivers value. The major criticism I have with Agile is that it doesn’t really ‘respect’ the evident truths of business.

What do I mean by this? Well, it’s quite simple. For generations, business has worked essentially on an assembly line. A customer pays for either a mass-produced good or a bespoke, one-of-a-kind good or service. Most transactions, I’d wager, have fallen into the dependable, repeatable and highly reliable territory of the mass-produced good or the well-understood service.

Developing software is different, by its very nature. Every single project I’ve worked on has been a bespoke problem, never solved before, for a unique customer. Every day there are new challenges and tasks that have to be solved, trialled, tested, proven. Hypotheses need to be worked through and investigated. And this is all against the backdrop of an ever-changing, complex world of libraries and frameworks. Oh, and the customer domain also has lots of complexity. Oh, and guess what, security issues pop up that devour time, and holy moly, we need to upgrade this library to the latest version but it doesn’t work with this key dependency we have…

Anyway, you get my point. This is complex stuff. Exactly why Agile was developed. Put something together quickly. Fail fast. Demo. Get feedback. Repeat. Reduce that cycle time down to the lowest possible amount. These are excellent ideals.

However, back to business, literally. The problem here is that there’s essentially a layer around everything we do as software developers. It’s one we don’t like, but it’s essential. The business layer. Without it, we don’t get paid. But it brings its share of evils. Sales people over-promising. Chinese-whisper requirements. Loss of focus on the end user.

And here’s the crux of it. Business cares about its customers: the people who pay for the software. Agile cares about users: the people who actively use the software. It’s why we have “User Stories” and not “Customer Stories”. But there’s a direct conflict between these two groups. The customer wants the product for the best price; the end user wants the product to work properly. More often than not, those two ideals pull in opposite directions.

And then we get the loss of meaning in what we’re doing. A developer sitting at their desk might have four or five layers between themselves and the end user. How can they possibly know whether what they’re doing is delivering actual, real value? They often have to go on the word of product owners, but product owners have their own conflicting commitments, and those often align with the business.

The best Agile experience I ever had was about 15 years ago, long before I ever heard the term. I would visit the customer site every 3 months to help with their installation. I knew the end users by name. I watched them use the product. I answered their questions. I understood the issues they were facing. I socialised with them, and got to know them as colleagues. They knew they could email me or call me and I could very quickly respond and hopefully solve their issues.

Today, we have all those layers. I actually can’t remember the last time I spoke with an honest-to-goodness end user for any software I’ve delivered since. Maybe with all the cargo-cultism around Agile, we might have actually lost something.

My Latest Favourite Terminal

I like having a nice terminal. My previous favourite was one called cmder, which was fantastic but exclusive to Windows. Recently, I’ve come across Tabby, which is both fantastic and OS-agnostic. This is a beautiful terminal with lots of customisation options and nice features. One of the nicest is a built-in SSH client, with the ability to remember connection details, as well as store passwords in an embedded vault. You can even store things like SSH port forwarding details.

Let’s examine some of the features. First off, there’s the customary colour schemes, of which there are many:

The darker ones are the pick of the bunch here (surprise surprise). Lighter schemes seem to all have horrific fonts:

Yikes

There’s also a plugin library, making this terminal feel like the VS Code of terminals. To be honest though, I haven’t had a chance to dive deep into the available plugins. Some that caught my eye though are docker and cloud-settings-sync.

Enabling the vault gives you an always-encrypted place to store sensitive items, such as SSH passwords. There’s also a huge amount of customization available around how the window is displayed, such as where it is docked and how to display tabs.

Another nice feature is that you can restore terminal tabs on app start, even an active SSH connection (as long as the password is stored in the vault). This is incredibly useful and saves a lot of dead keystrokes through a working day if you happen to be SSH’ing into various servers.

Finally, we come to the profiles. These allow us to customize how we open a new tab, including which underlying shell to use (bash, zsh, PowerShell, SSH, etc.). You can also group profiles, add icons (it defaults to Font Awesome, so there should be more than enough in there), add colours to the profile, set the working directory it opens in, and more. For SSH, there are more advanced options, such as storing the password in the vault, or setting up port forwarding (although I haven’t seen that in action yet).

All-in-all, this is a fantastic tool and one that has knocked cmder off its perch as my favourite terminal 🙂

.Net 6 App with JSON settings file as a secret with Docker Compose

I was recently required to take some sensitive configuration settings out of the standard “appsettings.json” file and handle them some other way. The application is built as a docker image, and it’s being run using docker-compose. In the past, I’ve used the “AddKeyPerFile” approach to add secrets, but in this case I didn’t want to have to manage several different files for related config items.

I decided to extract the sensitive section from the appsettings.json file into its own JSON file. Then I used the standard “AddJsonFile” approach to add it to my application. I could then inject this file however I saw fit at run time. To finish out the requirement, I made use of docker-compose secrets, to include the file as a secret from a secure location. For now, that’s simply having the JSON file on my local machine, and not checked into source control.

First up, let’s take a look at the config items I was interested in obfuscating.

In this appsettings.json file, I have the standard logging and allowed hosts settings, my Azure Active Directory settings (which are the ones we need to secure) and then a dummy key for comparison purposes. I left this file exactly as is, since it does not contain the real-life values, and it will help me determine at runtime where exactly my application is taking the settings from.
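
It looked along these lines (the values and the “AzureAd” section name here are just illustrative placeholders):

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "AzureAd": {
    "Instance": "https://instance-from-appsettings/",
    "TenantId": "tenant-id-from-appsettings",
    "ClientId": "client-id-from-appsettings"
  },
  "DummyKey": "dummy value from appsettings"
}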

Next, to test this app, I put together a very simple controller that returned those values on an API call.
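
A rough sketch of that controller (the route and response property names are illustrative):

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("[controller]")]
public class SettingsController : ControllerBase
{
    private readonly IConfiguration _configuration;

    public SettingsController(IConfiguration configuration) => _configuration = configuration;

    // Returns the current values so we can see exactly where they were loaded from
    [HttpGet]
    public IActionResult Get() => Ok(new
    {
        instance = _configuration["AzureAd:Instance"],
        tenantId = _configuration["AzureAd:TenantId"],
        clientId = _configuration["AzureAd:ClientId"],
        dummyValue = _configuration["DummyKey"]
    });
}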

Let’s run this app up from the IDE and see what the API returns:

As we can see, this is simply reading the values from our app settings JSON file; you can even see the typo in the value of the “dummyValue”.

Next, we need to override these settings by injecting them in at runtime. Since this application is run using docker-compose, that’s the mechanism I’ll be using.

I’m going to show a very basic way to override environment variables, namely through the docker-compose “environment” field. This is fine for a lot of configuration, but may not be the best choice for settings that we wish to remain secret.
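
That part of the compose file looks roughly like this (a double underscore in the variable name maps to the ‘:’ separator in .NET configuration keys):

services:
  webapp:
    build: .
    ports:
      - "8080:80"
    environment:
      # overrides the AzureAd:Instance value from appsettings.json
      - AzureAd__Instance=https://instance-overridden-by-compose/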

Here you can see that the setting for the Azure Active Directory instance has been overridden by the value in the docker compose file. Let’s run up the API with docker compose and see the effects of that.

You can see that the instance has been overridden to the value in the docker compose file. Compare this response to the original response.

Again, this isn’t really a suitable approach for sensitive data, so we’re going to use a different approach. Let’s take a look at “Program.cs”:
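
The relevant setup looks roughly like this (the AddKeyPerFile extension comes from the Microsoft.Extensions.Configuration.KeyPerFile NuGet package):

var builder = WebApplication.CreateBuilder(args);

// Add every file under /run/secrets as an individual key/value pair
builder.Configuration.AddKeyPerFile("/run/secrets", optional: true);

// Add the JSON secret file, so a whole 'family' of settings can live together
builder.Configuration.AddJsonFile("/run/secrets/AzureSettings.json", optional: true, reloadOnChange: true);

builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();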

There are two important calls here: “AddKeyPerFile” and “AddJsonFile”. The first takes all of the files within the “/run/secrets” directory and adds them as key/value pairs. The name of each file becomes the key, while its contents become the value. This is fine for simple pairs, but not suitable for our case, where we have a ‘family’ of related config items that I want to keep together; mainly to make life easier for whoever will be configuring or setting up this application.

I create a secrets directory, with a “Dummy.secret” file. This directory is created as a sibling of my WebApp application.

Next, I adjust my docker compose file to accept that secret value:
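
Roughly speaking, the compose file gains a top-level secrets block, plus a reference to it from the service:

services:
  webapp:
    build: .
    ports:
      - "8080:80"
    secrets:
      - DummyKey            # mounted inside the container as /run/secrets/DummyKey

secrets:
  DummyKey:
    file: ./secrets/Dummy.secret   # path to the file on the host machine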

There are a few things to note here. First off, the file path in the top-level secret definition must match the path/filename of the file on your host machine. However, when this is copied over into the container, it will be given the file name “DummyKey”, as per the name the service refers to it by. We can inspect the files in the running container to confirm that:

If we look at this file, we can see the same contents as our source file:

Now let’s hit that endpoint again to see what effect this has had on what’s returned:

The dummy value has been updated to return the contents of the secret file. That’s because the filename matches the key in the app settings.

So this leads us finally to our JSON object and how that is injected. It’s relatively straightforward, once we understand the previous approaches. We’ve already added the section to Program.cs (see above). So now let’s build out our JSON file and add it to the ‘secrets’ directory:
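
Something along these lines (the values are stand-ins for the real ones):

{
  "AzureAd": {
    "Instance": "https://real-instance-from-secret/",
    "TenantId": "real-tenant-id-from-secret",
    "ClientId": "real-client-id-from-secret"
  }
}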

Notice how these values differ from the ‘hard-coded’ values in our appsettings.json file. Next, let’s add this secret to our docker compose file:
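
The addition looks something like this; using the long form with a target lets the file keep its .json name inside the container, matching the path given to AddJsonFile (only the secrets-related parts are shown):

services:
  webapp:
    secrets:
      - DummyKey
      - source: AzureSettings
        target: AzureSettings.json   # appears as /run/secrets/AzureSettings.json

secrets:
  DummyKey:
    file: ./secrets/Dummy.secret
  AzureSettings:
    file: ./secrets/AzureSettings.json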

There’s not much different here, but it is interesting to note that I’ve actually given the JSON file a proper extension inside the container this time. I could obviously do the same with the source “.secret” file if I wanted. Let’s run up this container and have a look at the files inside it.

Sure enough, our AzureSettings.json file has been created and the values are as expected.

Let’s ping the endpoint one last time to see what we get back:

And voilà! The Azure settings are all correctly being pulled from this private file that our source control knows nothing about. Also note how our dummyValue is still being populated from its own respective secret file.

The source code for this example can be found here.

Add WCF Service in .net 5.0

Recently, I was trying to generate WCF client code for an application written in .net 5.0. I went through the standard steps in Visual Studio 2019. To summarize those steps:

  1. Right click on the project and select “Add –> Connected Service –> Microsoft WCF Web Server Reference Provider”
  2. Provide the URL for the WSDL, select “Go”, give a relevant namespace and leave all other defaults as they are.

At this point, I had my standard, generated interface for interacting with the SOAP endpoint. Full disclosure here: a lot of this is absolute black magic to me, but I know it works. It’s another one of those things that Microsoft likes to hide behind generated code, which I usually don’t like, because it hides the details away from me and leaves a section of my application that I don’t fully understand. I don’t like the idea of that, especially if issues pop up in that area in the future. Anyway, for my purposes right now, the generated code is fine.

The last time I had to do this task was in a .net framework application. In that instance, the generated code just worked straight out of the box. However, in the .net 5.0 world, I was seeing errors:

Weird. These are core classes in the System namespace. At this point it dawned on me that perhaps WCF is not fully supported in .net core. A bit of googling proved that to be the case. Further googling solved the issue though, with this answer on Stack Overflow:

Thankfully this worked beautifully. Adding those two nuget packages (they were both up at version 4.8.1 when I used them) removed all errors.

The House Specialities: A rant about DevOps

“K8S allows you to deploy your application easily, with little-to-no orchestration knowledge required”

This is a lie.

I have just spent three days pulling my hair out, trying to deploy what I would consider a simple application to AKS. And I still haven’t been able to complete this task!

There are a number of possibilities as to why I have been unable to achieve this what-should-be-surmountable goal.

  • I’m an idiot.
  • It is not as easy as advertised.
  • Network, port forwarding, NAT, load balancing, and about a billion other concepts do not come easily to a programmer.

I’ve spent a large portion of my professional life thinking about code, architecture and testability. In a similar vein to the much-maligned computer science undergraduate who is tasked with setting up FitBits for their entire family at Christmas, this whole deployment thing is NOT MY SPECIALITY. It’s not the area I’ve read books on or watched tutorials on or spent hours considering alternatives in my mind.

Why, in this current DevOps culture, are the already stretched developers now suddenly expected to be security experts? Or network experts? Or site reliability engineers? These are specialised disciplines. And the weird thing is that there are usually already people around who know this stuff much better than the programmers. Why is the onus on developers to construct Kubernetes yaml files? There are guys who have spent their ENTIRE CAREER dealing with this stuff. Why are they not right in the middle of this revolution?

Someone has created this narrative around “configuration-as-code”. I love that idea. But it fails to address two important points.

  • A lot of operations and network specialists went into their field because they didn’t like writing code.
  • A lot of programmers became programmers because they didn’t like dealing with network issues.

It’s akin to taking plumbers and electricians and suddenly expecting the plumbers to have the skills required to wire a house to an acceptable and professional standard. That would literally be insane. And yet, in the abstract world of IT, we can just throw anything at developers and they’ll muddle through.

Something has to give here. Right now, it’s my sanity, as I again try and do a curl command on an IP address that I’m pretty sure I should be able to hit….

The Toe-Dipper’s Guide to: Apache JMeter

I’ve recently had to play around with Apache JMeter to perform some load testing on an API that is about to go live. This post is just a brain dump of some of the various bits and pieces that I have been using recently. It’s such a feature-heavy application that every time I come back to it, I have to re-learn how to do everything. It’s probably just a little blind spot for myself, so capturing some nuggets of info here. My future self might thank me for it.

I’m not going to waste time describing the tool or how to download and install. There’s lots of info out there on how to do that. Instead, I’m going to focus on the various samplers, listeners and config elements and how I used them to put some useful tests together; no more, no less.

Command Line Arguments

First up, a quick note on command line arguments for running JMeter from the command prompt. I used a number of arguments:

  • -n: run JMeter in non-GUI mode
  • -t: the test file to run (ends in .jmx)
  • -p: the properties file to pass in
  • -l: the file to write the sample results to
  • -j: the file to write the JMeter run log to
Some JMeter command line arguments

There are many more, but these were all I needed for my example. To see info on all command line arguments, type:

jmeter -?
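
Putting those together, a typical run looks something like this (the file names are just examples):

jmeter -n -t loadTest.jmx -p loadTest.properties -l results\aggregateReport.csv -j logs\jmeter.log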

Properties and Variables

Now, I wanted my JMeter test to be parameterized. I essentially wanted to be able to change a few of the main variables from the command prompt, without having to touch the test case. I found a nice little guide on it here. I basically created a properties file alongside my JMeter test and added the required properties:
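
It’s a plain Java-style properties file, along these lines (the property names are purely illustrative):

threads=10
rampUpSeconds=20
loopCount=100
firstSubscriptionKey=
secondSubscriptionKey=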

To access these from within the JMeter script, I had to use the Simplified Property Function. To make life easier, I defined them in a “User Defined Variables” config element. This means I have a single location in my JMeter script that maps the incoming properties to variables within the script.

The “__P” function will take the value from the properties file. If the value is blank, or if no properties file is supplied, it will use the default value (the value after the comma).
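
So the “User Defined Variables” element ends up with mappings along the lines of:

threads               = ${__P(threads,10)}
rampUpSeconds         = ${__P(rampUpSeconds,20)}
loopCount             = ${__P(loopCount,1)}
firstSubscriptionKey  = ${__P(firstSubscriptionKey,)}
secondSubscriptionKey = ${__P(secondSubscriptionKey,)}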

Note that I left the subscription key blank everywhere, as it is not something I wanted to commit into source control. This had the downside of having to remember to set it each time. I may come up with a smarter way for storing this, but it’ll do for now.

Now those variables can be used throughout my script. For example, here’s my “Thread Group” settings:

Ramp-up Period

This was a property in the “Thread Group” that I had previously misunderstood, so I’m documenting it here to try and help me remember it in the future.

The ramp-up time is how long it takes to get to the total number of threads (or users). So for example, if I have defined 10 threads, and a ramp up period of 20 seconds, that means I will have 10 users after 20 seconds. There will be a new user added roughly every two seconds. So after 20 seconds, I will have my full ‘load’ on the system.

Config Elements

I used a number of Config Elements in my JMeter script. I’m going to discuss two of them in a little more detail, namely the “HTTP Request Defaults” and the “HTTP Header Manager”.

The request defaults element allows me to define common defaults for all HTTP requests. For this particular example, all of my requests were hitting the exact same endpoint, over https.

From this point on, any other request in the Thread Group will use the ‘https’ protocol, the specified server name and the specified path (obfuscated for privacy reasons). You can override these settings if required from within an individual HTTP Request sampler.

Next, the HTTP Header Manager. There were two headers I was interested in here, namely the standard Content-Type header and the API-specific “SubscriptionKey” header. I had two different subscription keys, to simulate two different users accessing the API. To achieve this, I set up a top-level HTTP Header Manager at the Thread Group level. Inside the Thread Group, I added two “Simple Controllers”, one for each key.

The top-level header manager simply contained the “Content-Type” header, which would trickle down into each nested HTTP Request.

Inside each “Simple Controller”, I added a second “HTTP Header Manager” with the specific key:

Inside here, I set my “subscriptionKey” header to the specific user defined variable (firstSubscriptionKey or secondSubscriptionKey).

The second “Simple Controller” was set up the same way.

Gaussian Random Timer

This is a random timer that distributes the calls based on a Gaussian function. Essentially, if you’ve ever heard of the bell curve, you’re in the right area. Within this timer, I provide two config settings: “Deviation” and “Constant Delay Offset”.

The constant delay is the midway point, and the deviation is the variance applied to it. In other words, 600 milliseconds would be the top of my bell curve, and the edges would be +400ms and -400ms, so 200 milliseconds and 1000 milliseconds. The randomness of the timer is then distributed along that curve, with the majority of the requests firing around the 600ms range.

This gives a nice bit of randomness to our test, allowing us to make it feel like a real interaction with the API.

Writing to file

The final item that I’m going to cover is using the “__time” function to write logs to timestamped directories. Here I have an aggregate report:

I want each report to be written to a timestamped directory, to make it easy to find each test run’s reports. I achieved this using the time function. You can find a handy cheat sheet on the time function here.

That might be a little difficult to see, so let’s expand it out:

C:\professional\\${__time(yyyyMMdd'T'HHmmss,)}\aggregateReport.csv

It’s important to note that if you want to use a time function in a path (like I’m doing here to define a directory), then you must escape it with an extra back-slash. Otherwise, it treats the “$” as a literal string, and we end up with a horrific-looking folder name.

I think that’s all I have right now on JMeter. Hopefully some of these explanations will be of use to someone, or perhaps even my future self 🙂

Adventures in Azure: Deploying docker images

I was recently required to create a docker image from an Azure DevOps build pipeline, and then publish it to an instance of the Azure Container Registry. A fairly painless task, although there was one little gotcha that is worth noting.

Microsoft is clearly an enormous organisation. Sometimes, products and services that humungous organisations produce do not particularly play nicely with each other. Such was the case when I tried to build a docker image in an Azure DevOps build pipeline.

The problem started with my dockerfile. Visual Studio’s support for docker has hugely improved over the past few years. One such improvement is the ability to automatically add “docker support” to any .net core project.

This results in a generated dockerfile:
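
For a .NET project the generated file follows a familiar multi-stage pattern, roughly like this (the image tags depend on your target framework; the project here is called Producer, as referenced later):

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["Producer/Producer.csproj", "Producer/"]
RUN dotnet restore "Producer/Producer.csproj"
COPY . .
WORKDIR "/src/Producer"
RUN dotnet build "Producer.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "Producer.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Producer.dll"]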

This works beautifully. I can right-click on the dockerfile from the solution explorer and build the docker image. Really nice!

However, it’s always good to be sceptical about anything that is auto-generated, something I learned a long time ago from a really good developer. And the problem manifested itself when I tried to build this dockerfile from anywhere other than Visual Studio.

Meanwhile, on the other side of the house, Azure DevOps provides its own auto-generated code, in the form of yaml files for the build pipeline. Again, it’s a really nice feature:

In this case, I want to build and push an image to Azure Container Registry, so that second option looks perfect. It automatically picks up the docker file in the git repository. All you have to do is save and run.

We end up with a generated yaml file describing the build pipeline.

However, this is where the problem manifests. The build fails, with an error indicating a file could not be found:

The problem here is where VS put the dockerfile. It lives within the project itself (i.e. Producer) on the file system. However, the failing “COPY” command tells us that it actually needs to be run from outside that project, at the root level. Otherwise, the path is incorrect. “Producer/Producer.json” is relative to the root, not the project.

The solution is thankfully simple. We need to define the “BuildContext” from the pipeline yaml file. This essentially tells the pipeline which directory we want to run the dockerfile from. We can use a pre-defined variable provided to us by Azure to get this directory:
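
The relevant part of the pipeline yaml then looks something like this, with buildContext being the important addition (the variable names follow the generated template):

- task: Docker@2
  displayName: Build and push an image to container registry
  inputs:
    command: buildAndPush
    repository: $(imageRepository)
    dockerfile: '$(Build.SourcesDirectory)/Producer/Dockerfile'
    buildContext: '$(Build.SourcesDirectory)'   # run docker build from the repo root
    containerRegistry: $(dockerRegistryServiceConnection)
    tags: |
      $(tag)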

Once this is in place the build pipeline works flawlessly.

Note that you will also need to do the same if you are trying to build the dockerfile from your command prompt. In other words, run the docker build command from the root directory, passing in the path to the dockerfile via the “-f” parameter. Don’t forget the “.” at the end. That tells docker that the context is the current directory.
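
From the repository root, that looks something like this (the tag is just an example):

docker build -f Producer/Dockerfile -t producer:local .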

I go over all of this in more detail in this video. Hopefully, this might help some other developer who is pulling their hair out, as to why these standard, generated entities are not behaving as expected!

Creating Certificates

Just a quick how-to, more as a reminder to myself than anything else. Often, you may need to create self-signed certificates for development testing. Every time I have to do this I need to look up the steps, so this is a very short post to remind myself how to do it.

First, if you’re on a Windows system, you will need OpenSSL installed. You will have this if you are using the Linux subsystem on Windows 10, but if not it can be downloaded from here. This will install the utility at a location such as C:\Program Files\OpenSSL-Win64. Add the “bin” directory to your PATH environment variable.

Next, open a command prompt and try typing “openssl” to confirm the executable is accessible on the path:

Next, we’re going to create the certificate pem and key files. This is an example of a command that generates them (I’m light on details because I don’t fully understand all this stuff myself!).

openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem

This will create a new key using the RSA algorithm, of size 2048 bits (which is a requirement for Azure Key Vault, which is what I happen to be doing here). It will generate a private key file called key.pem and an x509 certificate, certificate.pem, which will expire in one year. The certificate.pem file holds the certificate (and its public key), while key.pem is the private key.

Once complete, you will have both files in your current working directory. Next, we need to create the actual certificate (pfx) file. This can be done with the following command, which uses both the private key and the certificate to build your pfx file.

openssl pkcs12 -inkey key.pem -in certificate.pem -export -out certificate.pfx

You will be asked for a password at this point, and it’s very important to keep note of this, as you will not be able to retrieve it after this point. This certificate can now be used wherever you need to use a certificate.

In my current example, I’m uploading a certificate to Azure Key Vault. For an excellent walkthrough on this, have a look at this YouTube video.

DI for Testing

I was recently reminded of a great little utility for unit testing in the dot net world, called AutoMocker. This library essentially provides dependency injection for your unit tests, automatically injecting mocks anywhere a constructor needs them. Say goodbye to creating multiple mocks (and maintaining them…)

Let’s take a look at some code, which you can find here. First off, I’m going to discuss the console app, as it ties into an earlier post, and then I’m going to get stuck into AutoMocker properly. I’ve created numbered branches to show the phases of implementation, which I’ll cover below.

DI in a Console App

In an earlier post, I detailed how to add DI to a dot net core console application. As I worked through this application, I realised I had missed some key steps, so I’m going to cover those here. Check out the branch “01_ConsoleAppWithDI”.

We start with a few basic interfaces and implementations defined in the “Core” project:
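
The real code is in the repo; a stripped-down sketch of the shape of it is below (the DoThing method name is invented for illustration):

using Microsoft.Extensions.Logging;

public interface IFooService
{
    void DoThing(int number);
}

public interface IBarService
{
    void DoBarThing();
}

public class FooService : IFooService
{
    private readonly ILogger<FooService> _logger;

    public FooService(ILogger<FooService> logger) => _logger = logger;

    public void DoThing(int number) => _logger.LogInformation("Doing the thing {Number}", number);
}

public class BarService : IBarService
{
    private readonly IFooService _fooService;

    public BarService(IFooService fooService) => _fooService = fooService;

    // Calls the foo service ten times; this is what the tests later verify
    public void DoBarThing()
    {
        for (var i = 0; i < 10; i++)
        {
            _fooService.DoThing(i);
        }
    }
}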

Next, I created a console app for running up the application. Within this project, I have the standard “Program” class as the entry point. Let’s step through this code:
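
The gist of that Program class is along these lines (console logging needs the Microsoft.Extensions.Logging.Console package):

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

var serviceProvider = new ServiceCollection()
    .AddLogging(builder => builder.AddConsole())   // one implementation needs an ILogger
    .AddSingleton<IFooService, FooService>()
    .AddSingleton<IBarService, BarService>()
    .BuildServiceProvider();

// Ask the container for an IBarService; the full object graph is resolved for us
var barService = serviceProvider.GetRequiredService<IBarService>();
barService.DoBarThing();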

The ServiceCollection setup at the start is the key piece for DI. You will need to have installed the NuGet dependency Microsoft.Extensions.DependencyInjection.

Firstly, we create a new ServiceCollection. We then add console logging, as one of the implementations needs to have a logger injected. Next, we explicitly register two singletons for the two services in core. At the end of this chain, we then build the service provider, which will allow us to grab implementations as requested.

You can also see how we grab a service: we ask the service provider to give us an IBarService. We don’t know which implementation we will get; the service provider handles that for us.

Inside the implementation for IBarService (simply called BarService), we have a dependency on IFooService, and that implementation has a dependency on a logger. All of these are injected via Constructor Injection and provided via the Service Provider. This is all handled automatically when we use our ServiceProvider to retrieve the initial implementation. (As an aside, also note that the constructor for BarService only has a single injected dependency – more on that later).

BarService implementation
FooService implementation

Going back to our Program class: once we retrieve our IBarService, we call a method on it. The full object graph needs to be resolved before this method can actually run. So when we run this application and see valid output, we can be sure the DI has worked properly. We are getting logging, as well as functionality, both of which could only have been provided via the DI container.

Introducing AutoMocker

DI is great, but what about for unit testing? If we have a class that has a lot of dependencies (which is itself a smell, but that’s for another day…), do I need to create a Mock object for every single one? This is where AutoMocker comes in.

AutoMocker basically does the same thing as DI above, only instead of injecting ‘real’ implementations, it inserts a mocked version automatically. The beauty of this is that if a constructor changes, maybe a new dependency is required or an old one needs to be pulled out, we don’t need to change any of the “Arrange” code in our test!

Let’s look at some simple examples first. Check out the branch “02_SpecificInstanceInAutoMocker”. Have a look at the test project in there, named “Core.Test”.

In this test case, we create a new instance of AutoMocker. We then say to AutoMocker we want a real instance of “BarService”. This is the actual system-under-test. I’ve called it “fixture” here, out of habit, but “unitUnderTest” might be a better name.

Regardless, you can see that we do not actually need to create a Mock for the IFooService. Typically, if I was constructing an instance of “BarService” from a test, I would need to inject in the mocks that I need. Here, it is done implicitly when I call “CreateInstance”. All constructor-injected dependencies are identified, and mocks are automatically injected for them.

Then we perform the action that we want to test. In this case, we call “DoBarThing()”. We know from the code that this will result in a method on IFooService being called ten times. So in the “Assert” section of the test, I do a “GetMock” call to retrieve back out the mocked “IFooService”. I then verify the behavior on that.
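
Put together, the test looks roughly like this (I’m using xUnit here; AutoMocker comes from the Moq.AutoMock package):

using Moq;
using Moq.AutoMock;
using Xunit;

public class BarServiceTests
{
    [Fact]
    public void DoBarThing_CallsFooServiceTenTimes()
    {
        // Arrange: no explicit mocks needed, AutoMocker injects them for us
        var mocker = new AutoMocker();
        var fixture = mocker.CreateInstance<BarService>();

        // Act
        fixture.DoBarThing();

        // Assert: pull the auto-injected mock back out and verify it
        mocker.GetMock<IFooService>()
              .Verify(f => f.DoThing(It.IsAny<int>()), Times.Exactly(10));
    }
}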

The two major benefits that this gives us are:

  • It cleans up our ‘Arrange’ code, because we don’t need to explicitly create lots of mocks.
  • It has a much higher resilience to change, in that constructors can change and typically the test case won’t need to.

Going Deeper

Sometimes we might want to use a specific implementation of an interface, or maybe even a stubbed version. There could be a few reasons for this. We may want to test at a level higher than your typical unit tests; you might call these integration tests, module tests, component tests, whatever. Regardless, you may want to use a ‘real’ thing alongside your system under test sometimes.

Likewise, you might want to use a stubbed implementation sometimes. Granted, a good mocking framework like Moq means you probably won’t ever need to do that, but let’s assume you don’t want to use a mock, for whatever reason.

So in this example, I’ve created a little stubbed version of the FooService, which simply keeps a running sum of all numbers passed in:

I then tell AutoMocker that I want to use this stub if any class requests an instance of IFooService, as shown in the sketch below.

Finally, I assert that the count is correct (the sum of all numbers from 1 to 9, which is 45).
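
A sketch of the stub and the test together (again using the invented DoThing method; same usings as the previous sketch):

public class StubFooService : IFooService
{
    public int RunningTotal { get; private set; }

    public void DoThing(int number) => RunningTotal += number;
}

public class BarServiceStubTests
{
    [Fact]
    public void DoBarThing_SumsTheNumbersPassedToFoo()
    {
        var mocker = new AutoMocker();

        // Any class asking for an IFooService now gets this specific stub instance
        var stub = new StubFooService();
        mocker.Use<IFooService>(stub);

        var fixture = mocker.CreateInstance<BarService>();
        fixture.DoBarThing();

        Assert.Equal(45, stub.RunningTotal);   // 0 + 1 + ... + 9
    }
}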

Adding New Functionality

I’ve spoken in the past about the ability to add new functionality without having to rewrite tests, usually in the context of strict mocks, which are somewhat of a bugbear of mine. The nice thing about AutoMocker is that it nicely supports the flow of adding new functionality WITHOUT having to adjust the existing tests.

Let’s check out the next branch, “03_AdjustBarService_TestsStillPass”. Now, we’ve realised some new piece of functionality is required in our “BarService”, and following the single responsibility principle, we implement this code in a new class. We then inject this into our BarService constructor:
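
Sketching that change, with ISomeOtherService as the new dependency:

public class BarService : IBarService
{
    private readonly IFooService _fooService;
    private readonly ISomeOtherService _someOtherService;

    // The constructor has gained a second dependency
    public BarService(IFooService fooService, ISomeOtherService someOtherService)
    {
        _fooService = fooService;
        _someOtherService = someOtherService;
    }

    // Unchanged from before
    public void DoBarThing()
    {
        for (var i = 0; i < 10; i++)
        {
            _fooService.DoThing(i);
        }
    }
}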

Note that the constructor here has actually changed. If we were “new’ing” up the service in our test class, we’d need to change the test to include this new dependency. Likewise if we were using some sort of factory or builder to construct the class. But AutoMocker removes the need for this entirely. Let’s try re-running our tests from earlier:

Quite amazingly, the tests all still pass, despite a definite change to our constructor! Hopefully at this point, the full potential for a utility like AutoMocker is becoming clear.

Setting up Mocked Behaviour

So what else does AutoMocker give us? Well, let’s check out the next branch (04_SettingUpMockedBehaviour) to find out. Check out our BarService and its newly implemented (and delegated) functionality:

We’ve added a new method, “DoOtherThing(string item)” which exercises our newly injected dependency. We want to verify that we are calling this correctly. This can easily be accomplished with a short-hand in AutoMocker.

Here, we set up AutoMocker to use a dynamically stubbed implementation of ISomeOtherService. We simply instruct it to return the integer 30 when it receives the string “value”. We call the system under test as before. Finally, we assert that the correct value is returned, and also that all invocations set up on the mock were performed correctly, via the standard “VerifyAll()” method.
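
A sketch of the test (the GetValue method on ISomeOtherService is invented for illustration; same usings as the earlier sketches):

public interface ISomeOtherService
{
    int GetValue(string item);
}

public class BarServiceOtherTests
{
    [Fact]
    public void DoOtherThing_DelegatesToSomeOtherService()
    {
        var mocker = new AutoMocker();

        // Return 30 whenever the mocked ISomeOtherService receives "value"
        mocker.Setup<ISomeOtherService, int>(s => s.GetValue("value")).Returns(30);

        // BarService.DoOtherThing(item) simply returns _someOtherService.GetValue(item)
        var fixture = mocker.CreateInstance<BarService>();
        var result = fixture.DoOtherThing("value");

        Assert.Equal(30, result);
        mocker.VerifyAll();   // all setups on the auto-injected mocks must have been used
    }
}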

Conclusion

AutoMocker does for Unit Testing what DI did for production code. It decouples implementation from behaviour. It separates mocked, stubbed or concrete implementations from the actual class under testing, giving the developer an array of tools to help write better, more resilient unit tests. I’d strongly recommend using it for any C# project.