
CruiseControl.NET

Our development team has finally decided that each of us individually pressing 'F5' in Visual Studio is no longer an effective build system.  Another dev group in the building took up CruiseControl.NET and Octopus at the end of last year.  I've started investigating its potential for our team as well.

So a quick rundown of our current situation:

  • We are actively developing and supporting a web application that lets users log in and complete work from IE 8+ browsers.  We are currently working to support both Firefox and Chrome as well, so there is ongoing testing as we develop new features.
  • Our main application uses a viewer that requires an ActiveX install on IE 8, but uses an HTML5 canvas element for the viewer on IE 9+.
  • One member of the team has developed a central user authentication page driven by customized jQuery grids.  It's complex, and its integration into our main application has been bumpy.  The idea was single sign-on, but it isn't quite there.
  • Data is made ready for user consumption by a Windows Service that runs on various virtual machines and writes to a set of project-specific databases.  We receive data via web service, through a replication process from another set of databases, or even as flat files.  It's a moderately flexible, but aging, service application.
  • We have been using SVN for source control for the last couple of years.  There are still those on the team who struggle to use it within the standards that I developed for our group, but we're getting there.
  • We've just begun to use the NuGet Package Manager for a number of DLLs, and there is some confusion on our team about which package files are appropriate to check into source control.


    We have a few goals in making this change to our build processes.  First, we want a simple, repeatable build process that is scripted.  Individuals on the team have built various pre- and post-build scripts into their local machines, and that creates issues when source files get checked into the repository.

    Second, we have a new hire whose specific charge is developing a suite of automated tests for our application.  He is making progress and we would like to integrate his smoke tests into the build process.  Once a build command is sent, we want to kick off his set of tests and return a pass/fail for each of them.

    Third, we want to approach our development task list with confidence.  If the source is continuously being built and tested, we should be able to refactor existing code and develop new features with the knowledge that we are working off a solid code base.  I'd like to get us to a world where a developer's commit to the SVN trunk automatically kicks off a build process and gives us immediate feedback.  

    Cruise Control seems like a good solution.  The challenges that I foresee based on my research are as follows:
    • Setup cost.  The configuration and scripted build will take time and effort to create.
    • Our current application is built in a way that makes automated tests fairly brittle, which limits how worthwhile they are.
    • We do not currently have a set of scripts that will create a development environment from scratch.  Getting a new hire up to speed is time-consuming.
    • And of course the reluctance of the team to change our current process.  It's slow and painful, but it's known and we make it work.

    I've installed CruiseControl.NET v1.8.3 on my local machine.  There are some permission issues to resolve, but it was relatively painless to set up.  Next steps are to associate it with a local project to test out some of its features.
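
    To give a flavor of what that association will look like, here is a minimal ccnet.config sketch.  The project name, SVN URL, and paths are placeholders, not our real setup:

        <cruisecontrol>
          <project name="OurWebApp">
            <!-- Poll the SVN trunk so a commit kicks off a build within a minute -->
            <sourcecontrol type="svn">
              <trunkUrl>http://svn-server/repo/trunk</trunkUrl>
              <workingDirectory>C:\CCNet\OurWebApp</workingDirectory>
            </sourcecontrol>
            <triggers>
              <intervalTrigger seconds="60" />
            </triggers>
            <tasks>
              <!-- One scripted MSBuild call instead of everyone pressing F5 -->
              <msbuild>
                <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
                <projectFile>OurWebApp.sln</projectFile>
                <buildArgs>/p:Configuration=Release</buildArgs>
              </msbuild>
            </tasks>
          </project>
        </cruisecontrol>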

    Closed-Source? There's An App For That

    I recently read an article on Gamasutra about developing on closed-source frameworks.  The author posts some interesting pieces on the site and I like to follow him, but this write-up rankled me a bit.  He suggests some good reasons why you should only build on open-source platforms.

    I like working with .NET.  It streamlines the details of what is going on under the hood so that I have less to worry about.  Maybe the article annoyed me because I agree with his principles even while I ignore them in favor of not having to dig deeply into the underpinnings of popular .NET frameworks.  Why poke around in the guts of XNA when it lets me get started on a game idea quickly?  Who cares how WCF works its magic when it eases the pain of bringing a handful of standard web technologies together in my distributed applications?

    While there are real benefits to using a framework out of the box, I think a deeper understanding of the frameworks we take for granted in .NET is a good thing.  Sometimes you want to open the hood and see what's going on.  I did a bit of that with .NET Reflector before it became a commercial product, but I've been lazy lately.  There's really no excuse for that when there are easy-to-use open-source alternatives such as ILSpy and the upcoming dotPeek.

    I've started using ILSpy and it's a nice simple tool.  Download the binaries and run the exe to start decompiling.  Enjoy turning over the rocks :)

    Decompiling a C# type in ILSpy

    Windows Communication Foundation - Contracts

    Contracts are a key concept in WCF.  The framework lets you define them at the outset of a solution for "contract-first" style development: a contract is an interface that your service will implement, so you create the contracts first and then build the service to conform to them.  The service contract defines the operations that the WCF service will implement.

    I'll be using the ADO.NET Entity Framework to work with my data model in this project.  I've already used the handy tools available in VS2010 to create a ProductsEntityModel project in my solution that contains all the connections and classes I will need to work with my test database, AdventureWorks.



    I then created a WCF project.  VS automatically creates a couple of classes for you - IService.cs and Service.cs.  I updated the name of the interface to better reflect the contract that it will define.  Next I created the data contract by using the attribute of the same name to define the ProductData class.  Tagging a class with the DataContract attribute identifies it as a class that can be serialized and deserialized by WCF.  The class contains the members that will be passed to client applications, and each of these is marked with a DataMember attribute.


    using System.Runtime.Serialization;

    namespace Products
    {
        // Data contract describing the details of a product passed to client applications
        [DataContract]
        public class ProductData
        {
            [DataMember]
            public string Name;

            [DataMember]
            public string ProductNumber;

            [DataMember]
            public string Color;

            [DataMember]
            public decimal ListPrice;
        }
    }
    
    
    Next I will define the service contract as an interface in the same Products namespace using the ServiceContract attribute.  Every method that should be exposed will be tagged with the OperationContract attribute.


        // Service contract describing the operations provided by the WCF service
        // Requires using System.ServiceModel and System.Collections.Generic
        [ServiceContract]
        public interface IProductsService
        {
            // Get the product number of every product
            [OperationContract]
            List<string> ListProducts();

            // Get the details of a single product
            [OperationContract]
            ProductData GetProduct(string productNumber);

            // Get the current stock level for a product
            [OperationContract]
            int CurrentStockLevel(string productNumber);

            // Change the stock level for a product
            [OperationContract]
            bool ChangeStockLevel(string productNumber, short newStockLevel, string shelf, int bin);
        }
    


    Now it is time to implement the service.  I'll use the Service.cs class that was created when we made the ProductsService project.  Below you can see that I have defined a ProductsServiceImpl class within the Products namespace.  The ProductsServiceImpl class implements the interface that defines our contract - in this case, IProductsService.

    namespace Products
    {
        // WCF service that implements the service contract
        // This implementation provides minimal error checking and exception handling
        public class ProductsServiceImpl : IProductsService
        {
            // Implementations of the contract methods go here
        }
    }
    

    Within this class, I will implement all of the required methods from the interface: ListProducts, GetProduct, CurrentStockLevel, and ChangeStockLevel.
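
    As a sketch of where that goes, here is one way ListProducts might look.  I'm assuming the ProductsEntityModel project generated an AdventureWorksEntities object context with a Products entity set - those names come from the EF designer defaults, so adjust them to match your model:

        // Inside ProductsServiceImpl; requires using System.Collections.Generic and System.Linq
        public List<string> ListProducts()
        {
            using (AdventureWorksEntities database = new AdventureWorksEntities())
            {
                // Return the product number of every product in the model
                return (from product in database.Products
                        select product.ProductNumber).ToList();
            }
        }

    The next step would be to build a WCF client application that will interact with the service.  I'll save that for a future post.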

    Web Services? WCF!

    I'm back to learning more Windows Communication Foundation (WCF) today.  We have a number of distributed applications at work and I feel behind in understanding how the glue between them all functions.  That's not to say that we have used much WCF.  We haven't.  It exists in limited portions of only one application that I support.  However, if I'm building distributed web solutions in .NET, it's the technology to learn right now.  Maybe it will be time to update some things around the shop when I have a firmer grasp on the subject.

    I've worked through the introductions to WCF that Microsoft provides out on MSDN.  That was a good starting point, but I did some shopping on Amazon in order to dig deeper.  I'll be working through some of the exercises in these two books:


    So what is WCF?  John Sharp provides a fairly detailed history in his book.  I'll summarize it quickly: The World Wide Web was popularized while Microsoft was completing their work on the Component Object Model (COM).  At first the web was just static pages.  The second generation provided components that could be downloaded to a local browser, so that developers could create richer sites with a layer of programmability.  The third generation was based on Web Services.  A Web Service is an application that runs on the computer hosting the webpage, not on the user's local machine.  Web Services made it possible to receive information from applications running on the user's machine, do some processing on the host machine, and then return a response to the user's local application.  Developers on various platforms needed their applications to communicate with one another, and this led to XML becoming the common data format.

    Developers still needed an agreed-upon protocol for sending and receiving messages across the web.  SOAP was the result.  SOAP defines a number of things, including:
    • The format of a SOAP message
    • How data should be encoded
    • How to send messages
    • How to handle replies to these messages

    A web service can optionally have a Web Services Description Language (WSDL) document.  This is XML that describes the messages the Web Service can accept and the structure of the responses it sends back.  Applications use the WSDL to determine how to communicate with the Web Service.

    So where does WCF finally come in?  It provides the framework for developers to create services that conform to all of these web standards.  It also supports Microsoft technologies like Enterprise Services.  A developer can use WCF to create solutions that are independent of all the plumbing under the hood that connects these web technologies together.

    Windows Azure is a cloud-based platform that can be used to host applications that will invoke a developer's services, and it is optimized to leverage the features of WCF.  I've been getting more and more email from Microsoft encouraging me to move some of my test applications over to Azure.  I'm going to check it out this weekend.

    Working with XML data in SQL Server

    The XML data type and some associated functions were added back in SQL Server 2005. I'm looking forward to digging into this more deeply as I'm told Measurement Inc uses it extensively in their systems. Today I'll be writing some simple scripts to test out some of the functionality.

    You can retrieve the results of a query directly as XML using the FOR XML clause.  There are a few different modes that format the returned data differently.  I tried a number of them below:
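
    Here's the flavor of what I ran, using the AdventureWorks Production.Product table.  The TOP (3) just keeps the output short:

        -- RAW mode: one generic <row> element per row
        SELECT TOP (3) ProductID, Name, ListPrice
        FROM Production.Product
        FOR XML RAW;

        -- AUTO mode: element names come from the table or alias name
        SELECT TOP (3) ProductID, Name, ListPrice
        FROM Production.Product AS Product
        FOR XML AUTO;

        -- PATH mode: shape the XML explicitly through column aliases
        SELECT TOP (3)
            ProductID AS '@id',
            Name,
            ListPrice AS 'Price'
        FROM Production.Product
        FOR XML PATH('Product'), ROOT('Products');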

    The results window displays links to the full XML that was returned with each query.

    The full XML returned with the PATH mode.

    This is cool stuff, but what if we want to store this XML natively in a table? Store it in an XML column like so:
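
    The script ran along these lines - the table and variable names here are just illustrative, and any two related tables would do:

        -- 1. Create a table with a single XML column
        CREATE TABLE dbo.ProductCatalog (CatalogXml XML);

        -- 2. Declare a variable with the XML data type
        DECLARE @ProductXml XML;

        -- 3. Save information from two tables into the variable
        SET @ProductXml =
            (SELECT p.Name, p.ProductNumber, sc.Name AS Subcategory
             FROM Production.Product AS p
             JOIN Production.ProductSubcategory AS sc
                 ON sc.ProductSubcategoryID = p.ProductSubcategoryID
             FOR XML PATH('Product'), ROOT('Products'));

        -- 4. Insert a row into the table using the variable
        INSERT INTO dbo.ProductCatalog (CatalogXml) VALUES (@ProductXml);

        -- 5. Query the column without FOR XML; the data is already stored as XML
        SELECT CatalogXml FROM dbo.ProductCatalog;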


    Statement 1 creates a table with a single XML column.  A variable is declared with the XML data type in statement 2.  The third statement saves the information from two tables into that variable.  Statement 4 inserts a row into the table using the variable, and the final query returns the column without using the FOR XML clause.  The data is stored in the XML format, so the results look just like those of the SELECT statement from the first script in this post.

    More UDFs and Stored Procedures

    Today I'm working with functions again and examining the differences between User Defined Functions (UDF) and stored procedures (SP).

    • Both take parameters, but stored procedures also accept OUTPUT parameters.  These are used to get modified values back from stored procedures.
    • Return values are optional in stored procedures; a UDF must return a value.
    • Stored procedures can be used to create other database objects.
    • Stored procedures can modify data; UDFs cannot.
    • Stored procedures can call other stored procedures; UDFs cannot.

    I've been told by other developers that anything client-facing should use stored procedures.  Why is that?  The reasoning is pretty simple: processes outside of SQL Server should not have direct access to tables, views, or functions.  This helps to prevent things like SQL injection attacks.

    But there are other benefits as well.  Stored procedures allow for more modular development by separating the data access layer from other programming logic.  A stored procedure can be created once at the database layer, and other developers can then concentrate on their code instead of SQL.  The database logic can be modified independently of the program's source code.

    Stored procedures can also be used to reduce network traffic.  They are generally faster because they are compiled when first run and their query plans are cached by SQL Server.  You can perform an operation that requires many lines of SQL code through a single statement that executes the procedure, rather than by sending hundreds of lines of code over the network.

    Here's a simple refresher on constructing stored procedures in T-SQL:
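
    Mine looked something like this.  The procedure name is my own, built over the Sales.Customer table in AdventureWorks:

        -- Stored procedure with a default value for @CustomerID
        CREATE PROCEDURE dbo.GetCustomer
            @CustomerID INT = NULL
        AS
        BEGIN
            SELECT CustomerID, AccountNumber, TerritoryID
            FROM Sales.Customer
            WHERE CustomerID = @CustomerID
               OR @CustomerID IS NULL;
        END
        GO

        -- A value is specified: the single corresponding record is returned
        EXEC dbo.GetCustomer @CustomerID = 1;

        -- No value is passed in: the default applies and all records are returned
        EXEC dbo.GetCustomer;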


    As usual, I'm working with the Microsoft-produced "AdventureWorks" database in SQL Server 2008 R2.  This script creates a stored procedure with a default value for @CustomerID.  There are two calls to the procedure.  When a parameter value is specified, the SP returns the single corresponding record.  When no value is passed in, all the records are returned.

    Error Handling in SQL Server

    I'm looking into error handling in SQL Server this morning.  As of SQL Server 2005, you can use a TRY...CATCH construct that is similar to exception handling in other .NET languages.  There are a number of built-in functions that you can use within the CATCH block.  The values produced by these functions persist within the CATCH block and can be accessed as many times as needed.  The values revert to NULL outside the CATCH block:

    • ERROR_NUMBER() Returns the error number.
    • ERROR_SEVERITY() Returns the severity of the error.  The severity must exceed 10 in order to be trapped.
    • ERROR_STATE() Returns the state code of the error.  This refers to the cause of the error.
    • ERROR_PROCEDURE() Returns the name of the stored procedure or trigger that caused the error.
    • ERROR_LINE() Returns the line number that caused the error.
    • ERROR_MESSAGE() Returns the actual text message describing the error.

    A simple test of the TRY... CATCH block
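
    A minimal version of that test - force a divide-by-zero, then read the error functions:

        BEGIN TRY
            -- Force an error to trap
            SELECT 1 / 0;
        END TRY
        BEGIN CATCH
            -- Each function can be read as many times as needed inside the CATCH block
            SELECT
                ERROR_NUMBER()   AS ErrorNumber,
                ERROR_SEVERITY() AS ErrorSeverity,
                ERROR_STATE()    AS ErrorState,
                ERROR_LINE()     AS ErrorLine,
                ERROR_MESSAGE()  AS ErrorMessage;
        END CATCH;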

    The TRY...CATCH construct can't trap every error.  If a database is not available or a table name is typed incorrectly, the batch will simply fail.  Where TRY...CATCH is especially valuable is when working with transactions.  If the transaction in the TRY block fails, it can be rolled back in the CATCH.  Here's some code:

    Transaction rolled back within TRY...CATCH block

    Part 2.1.2 of the TRY block triggers an error by trying to update a field with a unique constraint to a value that already exists.  If the entire transaction had been successful, the COMMIT statement would have gone through.  Since there was an error, the CATCH block fires instead, which gives us a chance to send a message to the user and roll back the entire transaction.
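
    The same pattern in miniature, using the unique constraint on Production.Product.Name in AdventureWorks (the product IDs here are arbitrary):

        BEGIN TRY
            BEGIN TRANSACTION;

            -- This update is fine on its own...
            UPDATE Production.Product
            SET Color = 'Black'
            WHERE ProductID = 1;

            -- ...but this one violates the unique constraint on Name
            UPDATE Production.Product
            SET Name = (SELECT Name FROM Production.Product WHERE ProductID = 2)
            WHERE ProductID = 1;

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            -- Fire a message to the user, then undo the whole transaction
            PRINT 'Error ' + CAST(ERROR_NUMBER() AS VARCHAR(10)) + ': ' + ERROR_MESSAGE();
            ROLLBACK TRANSACTION;
        END CATCH;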