Maarten Balliauw {blog}

ASP.NET MVC, Microsoft Azure, PHP, web development ...


Building future .NET projects is quite pleasant

You may remember my ranty post from a couple of months back. If not, read about how building .NET projects is a world of pain and here’s how we should solve it. With Project K (a.k.a. ASP.NET vNext, now ASP.NET 5) around the corner, I thought I had to look into it again and see if things will actually get better… So here goes!

Setting up a build agent is no longer a world of pain

There, the title says it all. For all .NET development we currently do, that world of pain will still be there. No way around it: you will want to commit random murders if you have to do builds targeting .NET 2.0 – .NET 4.5. A billion SDKs, all packaged in MSIs with weird silent installs so you cannot really script their setup, will still be there. The reason is that our dependencies are all informal: we build against some SDK and assume it will be there. Our application does not define what it needs, so we have to provide the whole world on our build machines…

But if we forget all that and focus just on ASP.NET 5 and the new runtime, this new world is bliss. What do we need on the build agent? A few things, still.

  • An operating system (Windows, Linux and even Mac OS X will do the job)
  • PowerShell, or any other shell like Bash
  • Some form of .NET installed, for example mono

Sounds pretty standard out-of-the-box to me. So that’s all good! What else do we need installed permanently on the machine? Nothing! That’s right: NOTHING! Builds for ASP.NET 5 are self-contained and will make sure they can run anytime, anywhere. Every project specifies its dependencies, and these will all be downloaded when needed so they are available to the build. Let’s see how builds now work…

How ASP.NET 5 projects work…

As an example for this post, I will use the Entity Framework repository on GitHub, which is built against ASP.NET 5. When building a project in Visual Studio 2015, there will be .sln files that represent the solution, as well as new .kproj files that represent our project. For Visual Studio. That’s right: you can ignore these files, they are just so Visual Studio can figure out how it all fits together. “But that .kproj file is like a project file, it’s msbuild-like and I can add custom tasks in there!” – Crack! That was the sound of a whip on your fingers. Yes, you can, and the new project system actually adds some things in there to make building the project in Visual Studio work, but stay away from the .kproj files. Don’t touch them.

The real project files are these: global.json and project.json. The first one, global.json, may look like this:

{ "sources": [ "src" ] }

It defines the structure of our project, where we say that source code is in the folder named src. Multiple folders could be there, for example src and test, so we can distinguish where each type of project is stored. For every project we want to make, we can create a folder under the sources folder and in there, add a project.json file. It could look like this:

{ "version": "7.0.0-*", "description": "Entity Framework is Microsoft's recommended data access technology for new applications.", "compilationOptions": { "warningsAsErrors": true }, "dependencies": { "Ix-Async": "1.2.3-beta", "Microsoft.Framework.Logging": "1.0.0-*", "Microsoft.Framework.OptionsModel": "1.0.0-*", "Remotion.Linq": "1.15.15", "System.Collections.Immutable": "1.1.32-beta" }, "code": [ "**\\*.cs", "..\\Shared\\*.cs" ], "frameworks": { "net45": { "frameworkAssemblies": { "System.Collections": { "version": "", "type": "build" }, "System.Diagnostics.Debug": { "version": "", "type": "build" }, "System.Diagnostics.Tools": { "version": "", "type": "build" }, "System.Globalization": { "version": "", "type": "build" }, "System.Linq": { "version": "", "type": "build" }, "System.Linq.Expressions": { "version": "", "type": "build" }, "System.Linq.Queryable": { "version": "", "type": "build" }, "System.ObjectModel": { "version": "", "type": "build" }, "System.Reflection": { "version": "", "type": "build" }, "System.Reflection.Extensions": { "version": "", "type": "build" }, "System.Resources.ResourceManager": { "version": "", "type": "build" }, "System.Runtime": { "version": "", "type": "build" }, "System.Runtime.Extensions": { "version": "", "type": "build" }, "System.Runtime.InteropServices": { "version": "", "type": "build" }, "System.Threading": { "version": "", "type": "build" } } }, "aspnet50": { "frameworkAssemblies": { "System.Collections": "", "System.Diagnostics.Debug": "", "System.Diagnostics.Tools": "", "System.Globalization": "", "System.Linq": "", "System.Linq.Expressions": "", "System.Linq.Queryable": "", "System.ObjectModel": "", "System.Reflection": "", "System.Reflection.Extensions": "", "System.Resources.ResourceManager": "", "System.Runtime": "", "System.Runtime.Extensions": "", "System.Runtime.InteropServices": "", "System.Threading": "" } }, "aspnetcore50": { "dependencies": { "System.Diagnostics.Contracts": "4.0.0-beta-*", "System.Linq.Queryable": "4.0.0-beta-*", "System.ObjectModel": "4.0.10-beta-*", "System.Reflection.Extensions": "4.0.0-beta-*" } } } }

Whoa! My eyes! Well, it’s not so bad. A couple of things are in here:

  • The version of our project (yes, we have to version properly, woohoo!)
  • A description (as I have been preaching a long time: every project is now a package!)
  • Where is our source code stored? In this case, all .cs files in all folders and some in a shared folder one level up.
  • Dependencies of our project. These are identifiers of other packages that will either be searched for on NuGet or on the filesystem. Since every project is a package, there is no difference between a project and a NuGet package. During development, you can depend on a project. When released, you can depend on a package. Convenient! (A small example follows after this list.)
  • The frameworks supported and the framework components we require.
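
To make that project-versus-package point concrete, here is a tiny, hypothetical dependencies fragment: assume MyCompany.Core is a sibling project under the same sources folder, while Newtonsoft.Json comes from a NuGet feed (names and versions are purely illustrative):

"dependencies": {
    "MyCompany.Core": "1.0.0-*",
    "Newtonsoft.Json": "6.0.6"
}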

That’s the project system. These are not all supported elements; there are more. But generally speaking: our project now defines what it needs. One thing I like is the option to run scripts at various stages of the project’s lifecycle and build lifecycle, such as restoring npm or bower packages. A slight thorn in my side there is that the examples out there all assume npm and bower are on the build machine. Yes, that’s a hidden dependency right there…
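
As a sketch of what such a scripts section could look like (the event names and commands below are assumptions on my part, so double-check the project.json documentation for the exact supported events), something along these lines would restore front-end packages as part of the package restore:

"scripts": {
    "postrestore": [ "npm install" ],
    "prepare": [ "bower install" ]
}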

The good things?

  • Everything is a package
  • Everything specifies its dependencies explicitly (well, almost everything)
  • It’s human readable and machine readable

So let’s see what we would have to do if we want to automate a build of, say, the Entity Framework repository on GitHub.

Automated building of ASP.NET 5 projects

This is going to be so disappointing when you read it: to build Entity Framework, you run build.cmd (or build.sh on a non-Windows OS). That’s it. It will compile everything into assemblies packaged in NuGet packages and run the tests. But what does this build.cmd do, exactly? Let’s dissect it! Here’s the source code that’s in there at the time of writing this blog post:

@echo off
cd %~dp0

SETLOCAL
SET CACHED_NUGET=%LocalAppData%\NuGet\NuGet.exe

IF EXIST %CACHED_NUGET% goto copynuget
echo Downloading latest version of NuGet.exe...
IF NOT EXIST %LocalAppData%\NuGet md %LocalAppData%\NuGet
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest 'https://www.nuget.org/nuget.exe' -OutFile '%CACHED_NUGET%'"

:copynuget
IF EXIST .nuget\nuget.exe goto restore
md .nuget
copy %CACHED_NUGET% .nuget\nuget.exe > nul

:restore
IF EXIST packages\KoreBuild goto run
.nuget\NuGet.exe install KoreBuild -ExcludeVersion -o packages -nocache -pre
.nuget\NuGet.exe install Sake -version 0.2 -o packages -ExcludeVersion
IF "%SKIP_KRE_INSTALL%"=="1" goto run
CALL packages\KoreBuild\build\kvm upgrade -runtime CLR -x86
CALL packages\KoreBuild\build\kvm install default -runtime CoreCLR -x86

:run
CALL packages\KoreBuild\build\kvm use default -runtime CLR -x86
packages\Sake\tools\Sake.exe -I packages\KoreBuild\build -f makefile.shade %*

Did I ever mention my dream was to have fully self-contained builds? This is one. Here’s what happens:

  • A NuGet.exe is required: if a cached one is found it is reused; if not, it’s downloaded on the fly.
  • Using NuGet, 2 packages are installed (currently from the alpha feed the ASP.NET team has on MyGet, but I assume these will end up on NuGet.org someday)
    • KoreBuild
    • Sake
  • The KoreBuild package contains a few things (go on, use NuGet Package Explorer and see, I’ll wait)
    • A kvm.ps1, which is the bootstrapper for the ASP.NET 5 runtime that installs a specific runtime version and kpm, the package manager.
    • A bunch of .shade files
  • Using that kvm.ps1, the latest CoreCLR runtime is installed and activated
  • Sake.exe is run from the Sake package

Disappointment, I can feel it! This file does bootstrap the CoreCLR runtime on the build machine, but how is the actual build performed? The answer lies in the .shade files from that KoreBuild package. A lot of information is in there, but distilling it all, here’s how a build is done using Sake:

  • All bin folders underneath the current directory are removed. Consider this the old-fashioned “clean” target in msbuild.
  • The kpm restore command is run from the folder where the global.json file is. This will ensure that all dependencies for all project files are downloaded and made available on the machine the build is running on.
  • In every folder containing a project.json file, the kpm build command is run, which compiles it all and generates a NuGet package for every project.
  • In every folder containing a project.json file that defines a command named test, the k test command is run to execute the unit tests.

This is a simplified version, as it also cleans and restores npm and bower, but you get the idea. A build is pretty easy now. KoreBuild and Sake do this, but we could also just run all steps in the same order to achieve a fully working build. So that’s what I did…
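
Distilled to the bare essentials, and ignoring error handling, that same flow could be scripted by hand roughly like this (a sketch, assuming the runtime is installed and kvm, kpm and k are available on the PATH):

REM install/activate the latest runtime
CALL kvm upgrade

REM from the folder containing global.json: download all dependencies
CALL kpm restore

REM in every folder containing a project.json: compile and create a NuGet package
CALL kpm build

REM in every test project folder that defines a "test" command: run the tests
CALL k test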

Automated building of ASP.NET 5 projects with TeamCity

To see if it all was true, I decided to try and automate things using TeamCity. Entity Framework would be too easy, as that’s just calling build.cmd. Which is awesome!

I crafted a little project on GitHub that has a website, a library project and a test project. The goal I set out with was to automate a build of all this using TeamCity, and then make sure tests are run and reported. On a clean build agent with no .NET SDKs installed at all. I also decided not to use the Sake approach, to see if my theory about the build process was right.

So… Installing the runtime, running a clean, build and test, right? Here goes:

@echo off
cd %teamcity.build.workingDir%

SETLOCAL

IF EXIST packages\KoreBuild goto run
%teamcity.tool.NuGet.CommandLine.DEFAULT.nupkg%\tools\nuget.exe install KoreBuild -ExcludeVersion -o packages -nocache -pre -Source https://www.myget.org/F/aspnetvnext/api/v2

:run
CALL packages\KoreBuild\build\kvm upgrade -runtime CLR -x86
CALL packages\KoreBuild\build\kvm install default -runtime CoreCLR -x86
CALL packages\KoreBuild\build\kvm use default -runtime CLR -x86

:clean
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% "bin" -Directory -rec -erroraction 'silentlycontinue' | Remove-Item -Recurse; exit $Lastexitcode"

:restore
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% global.json -rec -erroraction 'silentlycontinue' | Select-Object -Expand DirectoryName | Foreach { cmd /C cd $_ `&`& CALL kpm restore }; exit $Lastexitcode"

:buildall
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% project.json -rec -erroraction 'silentlycontinue' | Foreach { kpm build $_.FullName --configuration %mr.Configuration% }; exit $Lastexitcode"
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% *.nupkg -rec -erroraction 'silentlycontinue' | Where-Object {$_.FullName -match 'bin'} | Select-Object -Expand FullName | Foreach { Write-Host `#`#teamcity`[publishArtifacts `'$_`'`] }; exit $Lastexitcode"

:testall
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "Get-ChildItem %mr.SourceFolder% project.json -rec -erroraction 'silentlycontinue' | Where-Object { $_.FullName -like '*test*' } | Select-Object -Expand DirectoryName | Foreach { cmd /C cd $_ `&`& k test -teamcity }; exit $Lastexitcode"

(note: this may not be optimal, it’s as experimental as it gets, but it does the job – feel free to rewrite this in Ant or Maven to make it cross platform on TeamCity agents, too)

TeamCity will now run the build and provide us with the artifacts generated during build (all the NuGet packages), and expose them in the UI after each build:

Project K ASP.NET vNext TeamCity

Even better: since TeamCity has a built-in NuGet server, these packages now show up on that feed as well, allowing me to consume these in other projects:

NuGet feed in TeamCity for ASP.NET vNext

Running tests was unexpected: it seems the ASP.NET 5 xUnit runner still uses TeamCity service messages and exposes results back to the server:

Test results from xUnit vNext in TeamCity

But how to set the build number, you ask? Well, it turns out this comes from the project.json. The version number in there is leading, but we can add a suffix by creating a K_VERSION_NUMBER environment variable. On TeamCity, we could use our build counter for it. Or run GitVersion and use that as the version suffix.
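
Here is a minimal sketch of that idea for a cmd-based build step, assuming a project.json version like "1.0.0-*" and TeamCity’s build counter as the suffix (the exact way the suffix is applied may differ between preview builds):

SET K_VERSION_NUMBER=%build.counter%
CALL kpm build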

TeamCity set ASP.NET 5 version number

Going a step further, running kpm pack even allows us to build our web applications and add the entire generated artifact to our build, ready for deployment:

ASP.NET 5 application build on TeamCity

Very, very nice! I’m liking where ASP.NET 5 is going, and forgetting everything that came before gives me high hopes for this incarnation.

Conclusion

This is really nice, and the way I dreamt it would all work. Everything is a package, and builds are self-contained. It’s all still in beta state, but it gives a great view of what we’ll soon all be doing. I hope a lot of projects will use builds like the Entity Framework one, having one or two build scripts in there that do the entire thing. But even if not and you have a boilerplate VS2015 project, using the steps outlined in this blog post gets the job done. In fact, I created some TeamCity meta runners for you to enjoy (contributions welcome). How about adding one build step to your ASP.NET 5 builds in TeamCity…

TeamCity ASP.NET build by convention

Go grab these meta runners now! I have created quite a few:

  • Install KRE
  • Convention-based build
  • Clean sources
  • Restore packages
  • Build one project
  • Build all projects
  • Test one project
  • Test all projects
  • Package application

PS: Thanks Mike for helping me out with some PowerShell goodness!

There is no good mobile operating system

I’m back on Windows Phone. If you follow me on Twitter, you may have read in the past weeks that I switched from being a Windows Phone user to being an Android user. Having been on the platform since before Windows Phone 7 was RTM, I found the operating system was getting slower and slower and less stable on my Nokia Lumia 620. So when I saw a shiny Android being fast, stable and having all the apps I needed, I was sold. Until today, when I switched back to a Windows Phone device. And maybe I’ll switch back again.

From Windows Phone to Android…

So after raging at my Nokia Lumia 620 for weeks, I played with a relative’s Nexus 5 and it got me hooked after an hour or two. The Lumia crashed twice a day (not an app crash, a full reboot crash!). Live tiles were not updating. Scrolling a screen was laggy. The Nexus 5 was fast, fast, fast. It synced with e-mail and calendar on both Google and Microsoft services. Fast-forward a few days and I had my Nexus 5 in the mail. Byebye Windows Phone!

The first boot started with me entering my Google Account credentials, running an over-the-air update to Android 5 (lollipop) and getting greeted by a refreshing home screen with some Google apps on there. Having most of my stuff in Outlook.com, Office 365 and so on, I decided to try the Google apps to synchronize it all. Which worked, but only for 90%. Syncing contact pictures? Nope. But not to fear, the Play Store, Google’s marketplace, is filled with nice gems, including a bunch of Microsoft apps that, surprisingly, feel better developed and are more advanced than anything I’ve seen on Windows Phone. Odd, but hey, using these apps, everything synced!

Installing these apps was interesting, as it exposed a clever thing: when entering my Microsoft Account details for one app, other apps just ask me if I want to use that same account or not. Unlike Windows Phone, where I have to login with my Facebook credentials for both the Facebook app and the Messenger app, Android just asks me once. Hear that, Windows Phone?

One thing I found weird was using Google Hangouts as an application to write text messages (SMS). Stop pushing it, Google! But hold on… Android supports swapping out the texting app, and the one from Textra seemed simple enough to do just that: texting. Other people may want something fancier with lots of bells and whistles. That’s in the Play Store, too.

Now back to the “better apps from Microsoft on Android than on Windows Phone” statement. If you, like me, use two-factor authentication using an authenticator app, you may hate the fact that you have to type the generated code in your PC’s browser every time. You would say that’s normal, right? Well, Microsoft’s Authenticator app just prompts me on the Android if I want to allow/deny a login, and handles the 2FA behind the scenes. Easy, convenient, and still secure. Hear that, Windows Phone?

Another interesting thing I found was that Android has pluggable keyboards. You can change from English to Dutch, and from Dutch to a keyboard that has only smileys. More interesting, is that apps like Keepass (a password manager), can provide a keyboard to the OS that automatically enters my credentials when I say it has to do that. In any app just switch the keyboard to Keepass and enjoy the app typing credentials. No more 2-times back-and-forth switching to my password manager app on Windows Phone. Just pick the keyboard and be done. Hear that, Windows Phone?

Next. Apps. The Play Store is filled with quality apps. There are “crapps” as well, quite a lot even. But there are quality apps for big brands, mobile banking, … They all work and are fully functional. Not some minimum viable product like a lot of apps on Windows Phone. Am I saying there are no good Windows Phone apps? No. There are some. But I can’t do mobile banking on my Windows Phone. Nor inspect the logs of my Windows Azure machines. Nor see how much credit I still have on my electronic meal vouchers. On Android, it’s all there, polished, and every app I could think of has at least an official version, a fan-made version and then some crap-variants that every marketplace will be polluted with. Hear that, Windows Phone? Apps!

Something else. An electronic assistant. Google Now! The thing is there, and I can say “OK Google, call wife” and it does that. It also learns my commute, where I parked my car and will tell me all this when I need it. Wait, isn’t that what Cortana is meant to be? Why yes! Except, Cortana is not available in Belgium (unless I switch my Windows Phone region and suffer from a lack of local apps that are not supported on the US region and require me to switch back and lose Cortana again). And that trick is half-baked: who in Europe wants to see local US news and the temperature in Fahrenheit… Google Now is brilliant, and it works!

Writing all this makes it feel like Android is superior, right? Well, it is. Mostly.

From Android back to Windows Phone…

It’s not all sunshine and roses. Pretty much every Android app plugs into notifications, and if I did not put my phone on silent every now and then, it would literally make noise (or vibrate) every two seconds. A tweet? BZZZZ! An email? BZZZZZ! The current number of seconds is 32? BZZZZZ! Full moon tomorrow? BZZZZZ! The thing annoys you all the time. And yes, you can customize this, but it’s a bit of work to do right. I liked the more sensible defaults on some of the Windows Phone apps.

Android has many apps, and many good apps. But the brands and big names like Facebook and Twitter take Android seriously. This means that things like moderated timelines and all sorts of promoted tweets and posts show up in these apps. Looking over some tweets, I would expect them to be ordered chronologically. Nope: most of the time they are, but then some strange ordering kicks in at some point. Same with Facebook. On Windows Phone, my timeline was roughly identical to what I would see in my browser. On Android, I had no idea what I was seeing, but definitely not the latest things. A mess.

And speaking of a mess, my personal flavor is to have an organized screen. On Android, all app icons look different, applications are styled differently, behavior in terms of gestures and menus differs from app to app, and so on. While there are a lot of apps that follow the design principles Google uses in their own apps, a lot are just annoying. Since switching apps is always switching context, consistency between apps really helps ease that context switch on your brain!

In retrospect, I think consistency is the main thing that got me back to Windows Phone. Yes, I am insane to value consistency above all the good things Android has to offer. I’m not sure if I’ll stay. Maybe there is no good mobile operating system after all? Or maybe there is, and it’s the iOS one I haven’t tried. Or maybe it’s Windows Phone after all. Or Android. Or a dumbphone. Or a BlackBerry. One thing I’ve learned is that they all have some work to do. And all I can hope is that the product teams on either side carry the phones of the others and learn.

Could not load file or assembly… NuGet Assembly Redirects

When working in larger projects, you will sometimes encounter errors similar to this one: “Could not load file or assembly 'Newtonsoft.Json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed' or one of its dependencies. The system cannot find the file specified.” Or how about this one? “System.IO.FileLoadException : Could not load file or assembly 'Moq, Version=3.1.416.3, Culture=neutral, PublicKeyToken=69f491c39445e920' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)”

Search all you want, most things you find on the Internet are from the pre-NuGet era and don’t really help. What now? In this post, let’s go over why this error sometimes happens. And I’ll end with a beautiful little trick that fixes this issue when you encounter it. Let’s go!

Redirecting Assembly Versions

In 90% of the cases, the errors mentioned earlier are caused by faulty assembly redirects. What are those, you ask? A long answer is on MSDN, a short answer is that assembly redirects let us trick .NET into believing that assembly A is actually assembly B. Or in other words, we can tell .NET to work with Newtonsoft.Json 6.0.0.4 whenever any other reference requires an older version of Newtonsoft.Json.

Assembly redirects are often created by NuGet, to solve versioning issues. Here’s an example which I took from an application’s Web.config:

<?xml version="1.0" encoding="utf-8"?> <configuration> <!-- ... --> <runtime> <legacyHMACWarning enabled="0" /> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" /> <bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="6.0.0.0" /> </dependentAssembly> </assemblyBinding> </runtime> </configuration>

When running an application with this config file, it will trick any assembly that wants to use any version < 6.0.0.0 of Newtonsoft.Json into working with the latest 6.0.0.0 version. Neat, as it solves dependency hell where two assemblies require a different version of a common assembly dependency. But… does it solve that?

NuGet and Assembly Redirects

The cool thing about NuGet is that it auto-detects whenever assembly redirects are needed, and adds them to the Web.config or App.config file of your project. However, this does not always work well. Sometimes, old binding redirects are not removed. Sometimes, none are added at all. Resulting in fine errors like the ones I opened this post with. At compile time. Or worse! When running the application.

One way of solving this is manually checking all binding redirects in all configuration files you have in your project, checking assembly versions and so on. But here comes the trick: we can let NuGet do this for us!

All we have to do is this:

  1. From any .config file, remove the <assemblyBinding> element and its child elements. In other words: strip your app from assembly binding redirects.
  2. Open the Package Manager Console in Visual Studio. This can be done from the View | Other Windows | Package Manager Console menu.
  3. Type this one, magical command that solves it all: Get-Project -All | Add-BindingRedirect. I repeat: Get-Project -All | Add-BindingRedirect

NuGet Add Binding Redirect

NuGet will get all projects and for every project, add the correct assembly binding redirects again. Compile, run, and continue your day without rage. Enjoy!

PS: For the other cases where this trick does not help, check Damir Dobric’s post on troubleshooting NuGet references.

Automatically strong name signing NuGet packages

Some developers prefer to strong name sign their assemblies. Signing them also means that the dependencies that are consumed must be signed. Not all third-party dependencies are signed, though, for example when consuming packages from NuGet. Some are signed, some are unsigned, and the only way to know is at compile time, when we see this:

Referenced assembly does not have a strong name

That’s right: a signed assembly cannot consume an unsigned one. Now what if we really need that dependency but don’t want to lose the fact that we can easily update it using NuGet… Turns out there is a NuGet package for that!

The Assembly Strong Naming Toolkit can be installed into our project, after which we can use the NuGet Package Manager Console to sign referenced assemblies. There is also the .NET Assembly Strong-Name Signer by Werner van Deventer, which provides us with a nice UI as well.

The problem is that the above tools only sign the assemblies once we already consumed the NuGet package. With package restore enabled, that’s pretty annoying as the assemblies will be restored when we don’t have them on our system, thus restoring unsigned assemblies…

NuGet Signature

Based on the .NET Assembly Strong-Name Signer, I decided to create a small utility that can sign all assemblies in a NuGet package and creates a new package out of those. This “signed package” can then be used instead of the original, making sure we can simply consume the package in Visual Studio and be done with it. Here’s some sample code which signs the package “MyPackage” and creates “MyPackage.Signed”:

var signer = new PackageSigner();
signer.SignPackage("MyPackage.1.0.0.nupkg", "MyPackage.Signed.1.0.0.nupkg",
    "SampleKey.pfx", "password", "MyPackage.Signed");
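
Once such a signed package has been pushed to a feed, consuming it from the Package Manager Console is business as usual, using the hypothetical package id from the sample above:

Install-Package MyPackage.Signed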

This is pretty neat, if I may say so, but still requires manual intervention for the packages we consume. Unless the NuGet feed we’re consuming could sign the assemblies in the packages for us?

NuGet Signature meets MyGet Webhooks

Earlier this week, MyGet announced their webhooks feature. After enabling the feature on our feed, we could pipe events, such as “package added”, into software of our own and perform an action based on this event. Such as, signing a package.

MyGet automatically sign NuGet package

Sweet! From the GitHub repository here, download the Web API project and deploy it to Microsoft Azure Websites. I wrote the Web API project with some configuration options, which we can either specify before deploying or through the management dashboard. The application needs these:

  • Signature:KeyFile - path to the PFX file to use when signing (defaults to a sample key file)
  • Signature:KeyFilePassword - private key/password for using the PFX file
  • Signature:PackageIdSuffix - suffix for signed package id's. Can be empty or something like ".Signed"
  • Signature:NuGetFeedUrl - NuGet feed to push signed packages to
  • Signature:NuGetFeedApiKey - API key for pushing packages to the above feed

The configuration in the Microsoft Azure Management Dashboard could look like this:

Azure Websites

Once that runs, we can configure the web hook on the MyGet side. Be sure to add an HTTP POST hook that posts to <url to your deployed app>/api/sign, and only with the package added event.

MyGet Webhook configuration

From now on, all packages that are added to our feed will be signed when the webhook is triggered. Here’s an example where I pushed several packages to the feed and the NuGet Signature application signed the packages themselves.

NuGet list signed packages

The nice thing is in Visual Studio we can now consume the “.Signed” packages and no longer have to worry about strong name signing.

Thanks to Werner for the .NET Assembly Strong-Name Signer I was able to base this on.

Enjoy!

Developing for the Tessel with WebStorm

In a previous post, I mentioned that (finally) my Tessel arrived, “an internet-connected microcontroller programmable in JavaScript”. I like WebStorm a lot as an IDE, and since the Tessel runs on JavaScript code (via node), why not see if WebStorm can be more than just an editor for Tessel development…

Developing JavaScript

The Tessel runs JavaScript, so naturally a JavaScript IDE like WebStorm will be splendid at that part. It provides a project system, code completion, navigation, inspections to check whether my code is as it should be (which from the screenshot below, it is not, yet ;-)) and so on.

WebStorm Tessel Node JavaScript

What I like a lot is that everything related to the device-side of my project (a thermometer thing that posts data to the Internet), is in one place. The project system ensures the IDE can be intelligent about code completion and navigation, I can see the npm modules I have installed, I can use version control and directly push my changes back to a GitHub repository. The Terminal tool window lets me run the Tessel command line to run scripts and so on. No fiddling with additional tools so far!

Tessel Command Line Tools

As I explained in a previous blog post, the Tessel comes with a command line that is used to connect the thing to WiFi, run and deploy scripts and read logs off it (and more). I admit it: I am bad at command line things. After a long time, commands get engraved in my memory and I’m quite fast at using them, but new command line tools, like Tessel’s, are something that I always struggle with at the start.

To help me learn, I thought I’d add the Tessel command line to WebStorm’s Command Line Tools. Through Project Settings | Command Line Tool Support, I added the path to Tessel’s tool (%APPDATA%\npm\tessel.cmd). Note that you may have to install the Command Line Tools Plugin into WebStorm; I’m unsure if it’s bundled.

Tessel Command Line 

This helps in getting the Tessel commands available in the Command Line Tools after pressing Ctrl+Shift+X (or Tools | Run Command…), but it still does not help me in learning this new command’s syntax, right? Copy this gist into C:\Users\<your username>\.WebStorm8\config\commandlinetools\Custom_tessel.xml and behold: completion for these commands!

Tessel command line auto completion

Again, I consider them training wheels until I start memorizing the commands. I can remember tessel run, but it’s all the ones that I’m not using continuously that I tend to forget…

Running Code on the Tessel

Running code on the Tessel can be done using the tessel run <script.js> command. However, I dislike having to always jump into a console or even the command line tools mentioned earlier to just run and see if things work. WebStorm has the concept of Run/Debug Configurations, where using a simple keystroke (Shift+F10) I can run the active configuration without having to switch my brain context to a console.

I created two configurations: one that runs nodejs on my own computer so I can test some things, and one that invokes tessel run. Provide the path to node, set the working directory to the current project folder, specify the Tessel command line script as the file to execute and provide run somescript.js as the parameters.
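
For reference, the “tessel run” configuration boils down to something like this (the path to the tessel script is an assumption based on a default global npm install on Windows, so yours may differ):

Node interpreter:       node
Working directory:      <project folder>
JavaScript file:        %APPDATA%\npm\node_modules\tessel\bin\tessel.js
Application parameters: run somescript.js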

Tessel Run

Quick note here: after a few massive errors coming from Tessel’s command line tool that mentioned the device only supports one connection, it’s best to check the Single instance only box for the run configuration. This ensures the process is killed and restarted whenever the script is run.

Save, Shift+F10 and we’re deploying and running whenever we want to test our code.

Run code on Tessel from WebStorm

Debugging does not work, as the Tessel does not support this. I hope support for it will be added, ideally using the V8 debugger so WebStorm can hook into it, too. Currently I’m doing “poor man’s debugging”: dumping variables using console.log() mostly…

External Tools

When I first added Tessel to WebStorm, I figured it would be nice to have some menu entries to invoke commands like updating the firmware (a weekly task, Tessel is being actively developed it seems!) or showing the device’s WiFi status. So I did!

Tessel External Tools

External Tools can be added under the IDE Settings | External Tools and added to groups and so on. Here’s what I entered for the “Update firmware” command:

Update Tessel from WebStorm

It’s basically just running node, passing it the path to the Tessel command line script and appending the correct parameter.

Now, I don’t use my newly created menu too much, I must say. Using the command line tools directly is more straightforward. But adding these external tools does give an additional advantage: since I have to re-connect to the WiFi every now and then (Tessel’s WiFi chip is a bit flaky when further away from the access point), I added an external tool for connecting it to WiFi and assigned a shortcut to it (IDE Settings | Keymaps, search for whatever label you gave the command and use the context menu to assign a keyboard shortcut). On my machine, Ctrl+Alt+W resets the Tessel’s WiFi now!

Installing npm Packages

This one may be overkill, but I found searching npm for Tessel-related packages quite handy through the IDE. From Project Settings | Node.JS and NPM, searching packages is pretty simple. And installing them, too! Careful, Tessel’s 32 MB of storage may not like too many modules…

NPM webstorm

Fun fact: writing this blog post, I noticed the grunt-tessel package which contains tasks that run or deploy scripts to the device. If you prefer using Grunt for doing that, know WebStorm comes with a Grunt runner, too.

That’s it for now. I do hope to tinker away on the Tessel in the next weeks and finish my thermometer and the app, so I can see the (historical) temperature in my house.

Getting Started with the Tessel

Somewhere last year (I honestly no longer remember when), I saw a few tweets that piqued my interest: a crowdfunding project for the Tessel, “an internet-connected microcontroller programmable in JavaScript”. Since everyone was doing Arduino and Netduino and JavaScript is not the worst language ever, I thought: let’s give these guys a bit of money! A few months later, they reached their goal and it seemed Tessel was going to production. Technical Machine, the company behind the device, sent status e-mails on their production process every couple of weeks and eventually after some delays, there it was!

Plug, install (a little), play!

After unpacking the Tessel, I was happy to see it was delivered with a micro-USB cable to power it, a couple of stickers and the climate module I ordered with it (a temperature and humidity sensor). The one-line manual said “http://tessel.io/start”, so that’s where I went.

The setup is pretty easy: plug it in a USB port so that Windows installs the drivers, install the tessel package using NPM and update the device to the latest firmware.

npm install -g tessel
tessel update

Very straightforward! Next, connecting it to my WiFi:

tessel wifi -n <ssid> -p <password> -s wpa2 -t 120

And as a test, I managed to deploy “blinky”, a simple script that blinks the leds on the Tessel.

tessel blinky

Now how do I develop for this thing…

My first script (with the climate module)

One of the very cool things about Tessel is that all additional modules have something printed on them… The climate module, for example, has the text “climate-si7005” printed on it.

climate-si7005

Now what does that mean? Well, it’s also the name of the npm package to install to work with it! In a new directory, I can now simply initialize my project and install the climate module dependency.

npm init
npm install climate-si7005

All modules have their npm package name printed on them so finding the correct package to work with the Tessel module is quite easy. All it takes is the ability to read. The next thing to do is write some code that can be deployed to the Tessel. Here goes:

The code below uses the climate module and prints the current temperature (in Celsius, metric system for the win!) on the console every second. Here’s the sample, climate.js:

var tessel = require('tessel');
var climatelib = require('climate-si7005');
var climate = climatelib.use(tessel.port['A']);

climate.on('ready', function () {
  setImmediate(function loop () {
    climate.readTemperature('c', function (err, temp) {
      console.log('Degrees:', temp.toFixed(4) + 'C');
      setTimeout(loop, 1000);
    });
  });
});

The Tessel takes two commands that run a script: tessel run climate.js, which copies the script and node modules onto the Tessel and runs it, and tessel push climate.js, which does the same but deploys the script as the startup script so that whenever the Tessel is powered, this script will run.

Here’s what happens when climate.js is run:

tessel run climate.js

The output of the console.log() statement is there. And yes, it’s summer in Belgium!

What’s next?

When I purchased the Tessel, I had the idea of building a thermometer that I can read from my smartphone, complete with history, min/max temperatures and all that. I’ve been coding on it on and off in the past weeks (not there yet). Since I’m a heavy user of PhpStorm and WebStorm for doing non-.NET development, I thought: why not also see what those IDE’s can do for me in terms of developing for the Tessel… I’ll tell you in a next blog post!

What happened to Code Spaces could happen to you. On Amazon, Azure and any host out there.

Earlier this week, a sad thing happened to the version control hosting service Code Spaces. A malicious person gained access to their Amazon control panel and, after demanding a ransom from the owners of Code Spaces, started deleting data and EC2 instances. After a couple of failed attempts from Code Spaces to stop this from happening, the impossible happened: the hacker rendered Code Spaces dead. Everything that was their business is gone. As they state themselves:

Code Spaces will not be able to operate beyond this point, the cost of resolving this issue to date and the expected cost of refunding customers who have been left without the service they paid for will put Code Spaces in a irreversible position both financially and in terms of on going credibility.

That’s sad. Sad for users, sad for employees and sad for the business owners. Some nutcase destroyed a flourishing business over the course of 12 hours. Horrible! But the most horrible thing? It can happen to you! Or as Jeff Atwood stated:

Jeff Atwood - they are everywhere!

The fact that this could happen is bad. But security is what it is: there is always a chance of something happening, whatever we do to mitigate as much of it as possible. Any service out there, whether Amazon, Microsoft Azure or your hosting control panel, is open to everyone with a username and password. Being a Microsoft Azure fan, I’ll use this post to scare everyone using the service and tools about what can happen. Knowing about what can happen is the first step towards mitigating it.

Disclaimer and setting the stage

What I do NOT want to do in this post is go into the technical details of every potential mishap that can happen. We’re all developers, there’s a myriad of search engines out there that can present us with all the details. I also do not want to give people the tools to do these mishaps. I’ll give you some theory on what could happen but I don’t want to be the guy who told people to be evil. Don’t. I deny any responsibility for potential consequences of this post.

Microsoft Account

Every Microsoft Azure subscription is linked to either an organizational account or a Microsoft Account. Earlier this week, I saw someone tweet that they had 32 Microsoft Azure subscriptions linked to their Microsoft Account. If I were looking to do bad things there, I’d try and get access to that account using any of the approaches available. Trying to gain access, some social engineering, anything! 32 subscriptions is a lot of ransom I could ask for. And with potentially 20 cores of CPU available in all of them, it’s also an ideal target to go and host some spam bots or some machines to perform a DDoS.

What can we do with our Microsoft Account to make it all a bit more secure?

  • Enable 2-factor authentication on your Microsoft Account. Do it!
  • Partition. Have one Microsoft Account for every subscription. With a different, complex password.
  • Managing this many subscriptions with this many accounts is hard. Don’t be tempted to make all the accounts “Administrators” on all of the subscriptions. It’s convenient and you will have one single logon to manage it all, but it broadens the potential attack surface again.

Certificates, PowerShell, the Command Line, NuGet and Visual Studio

The Microsoft Azure Management APIs can be used to do virtually anything you can do through the management portal. And more! Access to the management API is secured using a certificate that you have to upload to the portal. Great! Unless that management certificate was generated on your end without any security in mind. Not having a passphrase on it, or storing that passphrase on your system, means that anyone with access to your computer could, in theory, use the management API with that certificate. But this is probably unlikely since as an attacker I’d have to have access to your computer. There are more clever ways!

Those PowerShell and cross-platform tools are great! Using them, we can script against the management API to create storage accounts, provision and deprovision resources, add co-administrators and so forth. What if an attacker got some software on your system? Malware. A piece of sample code. Anything! If you’re using the PowerShell or cross-platform tools, you’ve probably used them before and set the active subscription. All an attacker would have to do is run the command to create a co-admin or delete or provision something. No. Credentials. Needed.

Not possible, you say? You never install any software that is out there? And you’re especially wary when getting something through e-mail? Good for you! “But that NuGet thing is so damn tempting. I installed half of NuGet.org so far!” – sounds familiar? Did you know NuGet packages can run PowerShell code when installed in Visual Studio? What if… an attacker put a package named “jQeury” out there? And other potential spelling mistakes? They could ship the contents of the real jQuery package in them so you don’t see anything unusual. In that package, someone could put some call to the Azure PowerShell CmdLets and a fallback using the cross-platform tools to create a storage account, mirror a couple of TB of illegal content and host it on your account. Or delete all your precious VMs.

Not using any of the PowerShell or cross-platform tools? No worries: attackers could also leverage the $dte object and invoke stuff inside Visual Studio and trigger any of the ample commands available in there. You may notice something in the activity log when this happens, but still.

What can we do to use these tools but make it a bit more secure?

  • Think about good certificate management. Give them a shorter lifetime, replace them every now and then. Don’t store passphrases.
  • Using the PowerShell or cross-platform tools? Make sure that after every use you invalidate the credentials used (see the sketch after this list). Don’t just set the active subscription in these tools to null: there’s a list command with which an attacker could find and set the correct subscription id again.
  • That publish settings file? It contains the management certificate. Don't distribute it.
  • Automate using all the tools! But not on all developer machines, do it on the build server.
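
As an illustration of that “invalidate after use” point, a minimal PowerShell sketch could look like this. It assumes the Azure PowerShell module and a subscription that was imported from a publish settings file earlier; cmdlet names and parameters may vary between module versions, so treat it as a starting point rather than gospel:

# list the subscriptions the tools currently know about
Get-AzureSubscription

# remove the locally stored subscription data (and its management certificate) when done
Remove-AzureSubscription -SubscriptionName "My subscription"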

All these tools are very useful and handy to work with, but use them with some common sense. If you have other tips for locking it all down, leave them in the comments.

Enjoy your night rest.

Microsoft Azure cloud plugin for TeamCity (dabbling in Java code)

NOTE: While the content in this blog post will still work, JetBrains now has a plugin that is the recommended way of working with TeamCity and build agents on Azure. Please check this blog post to learn more about it.


If you follow me on Twitter, you may have seen me in several stages of anger at Java. After two weeks of learning, experimenting, coding and even getting it all to compile, I’m proud to announce an initial, very early preview of my Microsoft Azure cloud plugin for TeamCity.

This plugin provides Microsoft Azure cloud support for TeamCity. By configuring a Microsoft Azure cloud in TeamCity, a set of known virtual build agents can be started and stopped on demand by the TeamCity server so we can benefit from Microsoft Azure’s cost model (a stopped VM is almost free) and scaling model (only start new instances when we need them).

Curious to try it? Make sure you know it is all still very early alpha version software so use with caution. I wanted to get an early preview out to gather some comments on it. Here are the installation steps:

  • Download the plugin ZIP file from the latest GitHub release.
  • Copy it to the TeamCity plugins folder
  • Restart TeamCity server and verify the plugin was installed from Administration | Plugins List

Creating a new cloud profile

From TeamCity’s Administration | Agent Cloud, we can create a new cloud configuration making use of the Microsoft Azure plugin for TeamCity. All we have to do is select “Microsoft Azure” as the cloud type and enter the requested details.

TeamCity agent on Azure VM

Once we enter some preconfigured and pre-provisioned VM names, we’re good to save and profit.

Known issue: only one Microsoft Azure cloud configuration can be created per TeamCity server because the KeyStore being configured by the plugin only stores one management certificate. Contribute a fix if you feel up for it!

What’s up?

From Agents | Cloud, we can now see which VM instances are stopped/running on Microsoft Azure.

Start stop TeamCity agent on Azure

Known issue: status of the VM displayed is not always current. The VM status is read from TeamCity's last known status, not from Microsoft Azure. Again, contribute a fix if you feel up for it.

What is there to come?

That’s pretty much it for now. I told you, it’s early. In my ideal world, there should also be a possibility to launch VM instances from a predefined image and destroy them when no longer needed. I also would love to convert it all to Kotlin, as I still don’t like Java as a language and Kotlin looks really nice. And ideally, the crude UI I did for the plugin should be much nicer too.

Happy building in the cloud!

Optimizing calls to Azure storage using Fiddler

Last week, Xavier and I were really happy for achieving a milestone. After having spent quite some evenings on bringing Visual Studio Online integration to MyGet, we were happy to be mentioned in the TechEd keynote and even pop up in quite some sessions. We also learned ASP.NET vNext was coming and it would leverage NuGet as an important part of it. What we did not know, however, is that the ASP.NET team would host all vNext preview packages from MyGet. But we soon noticed and found our evening hours were going to be very focused for another few days…

On May 12th, we all of a sudden saw usage of our service double in an instant. Ouch! Here’s what Google Analytics told us:

Google Analytics traffic graph

Luckily for us, we are hosted on Azure and could just pull the slider to the right and add more servers. Scale out for the win! Apart from some hiccups when we enabled auto scaling (we thought traffic would go down at some points during the day), MyGet handled the traffic pretty well. But still, we had to double our server capacity to be able to host one high-traffic NuGet feed. And even though we doubled server capacity, response times went up as well.

MyGet response times graph

Time for action! But what…

Some background on our application

When we started MyGet, our idea was to leverage table storage and blob storage, and avoid SQL completely. The reason for that is back then MyGet was a simple proof-of-concept and we wanted to play with new technology. Growing, getting traction and onboarding users we found out that what we had in place to work on this back-end was very nice to work with and we’ve since evolved to a more CQRS-ish and event driven (-ish) architecture.

But with all good things come some bad things as well. Adding features, improving code and implementing quota so we could actually meter what our users were doing and put a price tag on it had added quite some calls to table storage. And while table storage is blazingly fast if you know what you are doing, these are still calls to an external system that have to open up a TCP connection, do an SSL handshake and so on. Not so many milliseconds each, but summed together they do add up. So how do you find out what is happening? Let’s see…

Analyzing Azure storage traffic

There is no profiler out there that currently allows you to easily hook into what traffic is going over the wire to Azure storage. Fortunately for us, the Azure team added a way of hooking a proxy server between your application and storage itself. Using the development storage emulator, we can simply change our storage connection string to the following and hook Fiddler in:

UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://ipv4.fiddler

Great! Now we have Fiddler to analyze our traffic to Azure storage. All requests going to blob, table or queue storage services are now visible: URL, result, timing and so forth.

Fiddler showing requests to Azure storage

The picture above is not coming from MyGet but just to illustrate the idea. You can clear the list of requests, load a specific page or action in your application and see the calls going out to storage. And we had some critical paths where we did over 7 requests. If each of them is 30ms on average, that is 210ms just to grab some data. And then you’ve not even done anything with it… So we decided to tackle that in our code.

Another thing we noticed by looking at URLs here is that we had some of those requests filtering only using the table storage RowKey, resulting in a +/- 2 second roundtrip on some requests. That is bad. And so we also fixed that (on some occasions by adding some caching, on others by restructuring the way data is stored and moving our filter to PartitionKey or a combination of PartitionKey and RowKey as you should).

The result of this? Well, have a look at the picture above where our response times are shown: we managed to drastically reduce response times, making ourselves happy (we could kill some VMs) and our users as well, because everything is faster.

A simple trick with great results!

Speeding up ASP.NET vNext package restore

TL;DR: If you have multiple NuGet feeds configured on your machine, it may be worth doing some tweaking in the NuGet.config file shipping with your project.

Last week, the ASP.NET team released a preview of “ASP.NET vNext”, a first step in the right direction towards solving the pain that building .NET projects is, but more than that, a great step towards an open and cross-platform ASP.NET that is super developer friendly. If you haven’t checked it out yet, do so now.

One of the things ASP.NET vNext leans on heavily is NuGet. In fact, every application comes with a project.json file that describes the application’s dependencies. Only when running kpm restore are these dependencies downloaded, and only then can the application be run. Running this package restore (it’s NuGet after all) is usually pretty fast, but if you, like me, are a heavy NuGet user, chances are the restore is not happening in the most optimal way. Have a look at the output of my kpm restore command right after I installed ASP.NET vNext on my system:

Project K package restore

It’s not easy to capture a screenshot that proves the point I'm about to make, but if you do this yourself and you have multiple NuGet feeds configured on your system, you’ll see that ASP.NET vNext is trying to restore packages from all configured feeds. In my case, I’m using a personal feed on MyGet, a feed hosted on my TeamCity server, a feed on my local filesystem (testing purposes) and then the ASP.NET vNext MyGet feed as well as NuGet.org. That’s 5 feeds being checked over and over again for the dependencies listed in my project.json… Let’s see if we can reduce this a bit.

If we look at the samples shipped in ASP.NET vNext, we can find a NuGet.config file in there. And as we know, NuGet has this thing called configuration file inheritance. This means that the feeds defined in here will be enriched with the feeds configured at the machine level, in my case 5 of them. But that also means we can easily fix this: adding a <clear /> element under the <packageSources> element will do the trick of removing all previously defined feeds and using just the ones defined for the project I’m working on:

<?xml version="1.0" encoding="utf-8"?> <configuration> <packageSources> <clear /> <add key="AspNetVNext" value="https://www.myget.org/F/aspnetvnext/" /> <add key="NuGet.org" value="https://nuget.org/api/v2/" /> </packageSources> <!-- ... --> </configuration>

Use this trick for your own ASP.NET vNext projects as well: specify the feeds you want to use explicitly and everything will be faster for you and other developers working with your code. It ensures that kpm or NuGet for that matter only check the feeds that are relevant to your project and not every feed that is configured on your system.