Monday, November 8, 2010

Branching strategy for Team Foundation Server 2010

If you are planning to start with TFS 2010, here are a few decks and videos to help you with branching:

http://intovsts.files.wordpress.com/2010/06/techdays2010_branchingandmergingwithtfs2010.pdf
http://channel9.msdn.com/blogs/liese/techdays-2010--branching--merging-strategies-with-team-foundation-server-2010

Wednesday, November 3, 2010

How do you monitor your service?

Many times we may not even know how many hits our service gets, or which methods get the most hits. The logs may tell us what kind of request came in and what happened to it, but details such as how long the request took to process are often not available in the logs. For an enterprise application, it is critical to have monitoring in place.

We can create our own performance counter categories and counters that tell us what is going on in our service, how long calls are taking, and so on. Let's look at how we can adopt performance counters on our WCF service.

If you haven't used Performance Monitor before, take a look here: http://technet.microsoft.com/en-us/library/cc749249.aspx



Let's build a sample WCF service and see how we can include performance counters in it.

First, create a console app that creates the counters we need for our service.

Make sure you have included using System.Diagnostics;

            // define the perf category
            string categoryName = "My Demo Service";

            // counter that we are going to use
            string counterName = "Get Data calls per sec";

            // if the category already exists, delete it first.
            // Note that you need admin rights to create or delete categories/counters.
            if (PerformanceCounterCategory.Exists(categoryName))
                PerformanceCounterCategory.Delete(categoryName);

            // create the counter definitions that we will use in this category
            var counterDataCollection = new CounterCreationDataCollection();

            var callspersecCounter = new CounterCreationData()
            {
                CounterName = counterName,
                CounterHelp = "Get Data calls per sec",
                CounterType = PerformanceCounterType.RateOfCountsPerSecond32
            };
            // you can refer to the documentation for a detailed explanation of the different counter types

            counterDataCollection.Add(callspersecCounter);

            // create the category
            PerformanceCounterCategory.Create(categoryName, "My Demo Service",
                PerformanceCounterCategoryType.SingleInstance, counterDataCollection);


Once you run the above code, open Performance Monitor (perfmon) from Administrative Tools.

When you click on Add, a pop-up opens where you can find the new category we just added.


Select the counter and click “OK”.
Performance Monitor will show something like the below.


Since there is no activity yet, the graph looks blank.

Let's use this counter in our service.

Create a service method as below

        public string GetData(int value)
        {
            // the third argument (readOnly: false) lets us increment the counter
            var counter = new System.Diagnostics.PerformanceCounter("My Demo Service", "Get Data calls per sec", false);
            counter.Increment();

            return "data " + value.ToString();
        }
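Creating a new PerformanceCounter instance on every call adds overhead. One option, as a sketch rather than what the original post does (the interface name IService1 is an assumption), is to cache the counter in a static field so it is created only once:

```csharp
public class Service1 : IService1
{
    // created once and reused across calls; assumes the category and counter
    // from the console app above already exist on this machine
    private static readonly System.Diagnostics.PerformanceCounter GetDataCounter =
        new System.Diagnostics.PerformanceCounter("My Demo Service", "Get Data calls per sec", false);

    public string GetData(int value)
    {
        GetDataCounter.Increment();
        return "data " + value.ToString();
    }
}
```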

This method just increments the counter and returns the same text that was passed in. Now let's add a service reference to this method and call it a number of times.

            for (int i = 0; i < 100; i++)
            {
                ServiceReference2.Service1Client obj = new ServiceReference2.Service1Client();
                string retString = obj.GetData(i);

                if (i % 3 == 0)
                    System.Threading.Thread.Sleep(400);
            }
            MessageBox.Show("Completed");

I am going to call the same method many times with some delays so that we can identify the patterns on the monitor.

The monitor would show something like this now


This shows how many calls we received each second. It also helps us identify whether we need to optimize the service, and where to concentrate on performance-related issues.




Tuesday, October 19, 2010

Intro on Claims Based Authentication

I had a great opportunity to present on a future state of authentication that can be adopted in many enterprise applications. I will try to explain some authentication-related concepts and how they can help enterprise applications.

If you have read about or worked with Windows Identity Foundation or single sign-on products like PING, you might have heard about claims-based authentication. What does that mean? Why is it called claims-based? How can applications benefit from it? These are some of the common questions I will try to answer here.
First of all, let's be clear on some of the key concepts and terms you will come across:

Identity - A set of attributes that describe a principal (e.g. a user), such as name, gender, age, email address, driving license number, or group membership.

If you look at the above image, it shows the different attributes that describe a user/person.

Identity Provider – Looking at a real-life example and the above image, who could be the identity provider? It would be a government office or a trusted agency, e.g. the passport office.

Claims - An attribute about an identity, issued by an authority. I will explain in detail why it is called a claim.

Relying party - An application that makes authorization decisions based on claims.

Token - A token consists of a set of claims about the principal, signed by an authority.

So why are we calling the attributes claims and not just attributes? Let's look at the scenario below.



If you look at the above image, I can have identities with multiple identity providers, and the attributes could be completely different between them. As you can see in the image, the age in government records is 32, while on Facebook it is 18. As a relying party, if I have to make a decision based on these attributes, it comes down to trust, which depends on the scenario, not on technical capability.

So when a user presents attributes from an identity provider, he is claiming that those attributes are associated with him and are valid. It's up to the relying party to decide whether to trust the claims or not.
When the relying party asks the identity provider to authenticate the user, it authenticates him and provides all the user attributes as claims. The token consists of all the claims that belong to the user. A Security Token Service (STS) is responsible for authenticating the user and issuing the claims as a token.
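To make the vocabulary concrete, here is a purely illustrative sketch. This is not WIF's real API; every type and name here is made up for explanation only:

```csharp
using System.Collections.Generic;

// illustrative types only, not a real identity library
public class Claim
{
    public string Type;    // e.g. "age"
    public string Value;   // e.g. "32"
    public string Issuer;  // the authority that asserted it, e.g. "PassportOffice"
}

public class Token
{
    public List<Claim> Claims = new List<Claim>(); // a token is a set of claims...
    public byte[] Signature;                       // ...signed by the issuing authority
}

public class RelyingParty
{
    // the relying party decides which issuers it trusts, depending on the scenario
    public bool Trusts(Claim claim)
    {
        return claim.Issuer == "PassportOffice";
    }
}
```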

Most of the authentication concepts on federated authentication, SAML, openID, Oauth are all based on claims. I will try to explain each of them in coming days and how they are different from each other.


Tuesday, September 28, 2010

Are you storing “strong” passwords real strong?

Confused by the above line? Well, I have seen many applications that ask users to enter a strong password, one that includes special characters, numerics, some combination, etc. But when storing the password, most of the time we convert the password text to a hash, which has long been known as a secure way to store it. That approach may have been secure last century, but not any more. Check out http://en.wikipedia.org/wiki/Rainbow_table: precomputed mappings from hash strings to possible combinations of keyboard characters (alphanumeric, punctuation, etc.) have rendered this password storage/validation method insecure.

Check this post http://www.codinghorror.com/blog/2007/09/rainbow-hash-cracking.html: it says “The multi-platform password cracker Ophcrack is incredibly fast. How fast? It can crack the password "Fgpyyih804423" in 160 seconds.”

How do we strengthen storing password?

Simply provide a random salt while hashing the password. Also iterate through many loops, which adds an extra computing burden when matching a password. A common choice is to keep the iteration count at 1000 or more.

Here is how we can achieve that in C#, using Rfc2898DeriveBytes (PBKDF2):

        public static byte[] HashPassword(string input, byte[] salt, int hashIterations, int desiredHashBitSize)
        {
            using (Rfc2898DeriveBytes derivedBytes = new Rfc2898DeriveBytes(input, salt, hashIterations))
            {
                byte[] hash = derivedBytes.GetBytes(desiredHashBitSize / 8);
                return hash;
            }
        }
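For the salt itself, one option (a sketch; the helper name and the sizes are my choices, not from the original post) is to use RNGCryptoServiceProvider:

```csharp
using System;
using System.Security.Cryptography;

class Program
{
    // generate a cryptographically random salt of the given size
    static byte[] GenerateSalt(int sizeInBytes)
    {
        byte[] salt = new byte[sizeInBytes];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(salt);
        }
        return salt;
    }

    static void Main()
    {
        byte[] salt = GenerateSalt(16);

        // derive a 256-bit hash with 1000 iterations, as suggested above
        using (var derivedBytes = new Rfc2898DeriveBytes("P@ssw0rd!", salt, 1000))
        {
            byte[] hash = derivedBytes.GetBytes(256 / 8);
            Console.WriteLine(Convert.ToBase64String(hash));
        }

        // store both the salt and the hash; to verify a login, re-derive the
        // hash with the stored salt and iteration count and compare the results
    }
}
```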


Friday, September 24, 2010

Bitwise computation with C#

If you have written programs in C or C++, I'm sure you have used bitwise operators very often. We can use them in C# too. I'm not going to explain all the bitwise operators, but I will take the “&” operator as an example and see how it can be used effectively in our programs.

For details on the & operator, you can check http://msdn.microsoft.com/en-us/library/sbf85k1c.aspx

Let us take the example of a logging class that needs to log based on configuration. The easiest way to keep the configuration is to specify the highest level of logging required. That is, if we plan to log Exceptions, Warnings, Info, and Detailed messages, then specifying Detailed in the config indicates that all the categories below this level should be considered for logging. If we decide not to log Detailed messages and are okay with the Info level, then it should log everything except Detailed messages. How do we do this? There are different approaches, but the easiest one in terms of program maintenance and configuration is to keep the highest level of logging required in configuration and write the program accordingly. Let us see how we can do this in C#.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Diagnostics
{
    public enum LogCategory { None = 0, Exception = 1, Error = 3, Warning = 7, Info = 15 }

    class Log
    {
       
         static Log()
        {
            Log.LogLevel = (LogCategory)7; // read from configuration (7 = Warning level)
        }

        public static LogCategory LogLevel { get; set; }

        public static void Info(string message)
        {
            if ((Log.LogLevel & LogCategory.Info) == LogCategory.Info)
                Write(LogCategory.Info, message, null);
        }
        public static void Info(string message, params object[] parameters)
        {
            if((Log.LogLevel & LogCategory.Info) == LogCategory.Info)
                Write(LogCategory.Info, message, parameters);
        }

        public static void Warning(string message)
        {
            if ((Log.LogLevel & LogCategory.Warning) == LogCategory.Warning)
                Write(LogCategory.Warning, message, null);
        }
        public static void Warning(string message, params object[] parameters)
        {
            if ((Log.LogLevel & LogCategory.Warning) == LogCategory.Warning)
                Write(LogCategory.Warning, message, parameters);
        }

        public static void Error(string message)
        {
            if ((Log.LogLevel & LogCategory.Error) == LogCategory.Error)
                Write(LogCategory.Error, message, null);
        }
        public static void Error(string message, params object[] parameters)
        {
            if((Log.LogLevel & LogCategory.Error) == LogCategory.Error)
                Write(LogCategory.Error, message, parameters);
        }
        public static void Exception(Exception err)
        {
            if ((Log.LogLevel & LogCategory.Exception) == LogCategory.Exception)
            {
                string errMessage = err.Message;
                Write(LogCategory.Exception, errMessage, null);
            }
        }
        public static void Exception(Exception err, params object[] parameters)
        {
            if ((Log.LogLevel & LogCategory.Exception) == LogCategory.Exception)
            {
                string errMessage = err.Message;
                Write(LogCategory.Exception, errMessage, parameters);
            }
        }
        private static void Write(LogCategory ctgy, string message, params object[] parameters)
        {
            // write to text or DB
        }
    }
}



If you look at the above class, the log categories are defined as 0, 1, 3, 7, 15 … these are actually bit patterns used to identify whether a category level needs to be considered. The bit representation of these numbers is as below:
0 – 0000 0000
1 – 0000 0001
3 – 0000 0011
7 – 0000 0111
15 – 0000 1111

So if we need to add another level, it would be 31, which is represented as
31 – 0001 1111

Hopefully this explains why we have these numbers in the enum. Now, let's look at the other methods, which are written for logging the messages. All the methods use “&” to check whether the message needs to be logged.
If the configuration specifies that we only need to log up to Warning, this is what happens when we try to log Info:

Config = 7

Log.LogLevel & LogCategory.Info
0000 0111 – 7 (Warning)
0000 1111 – 15 (Info)
0000 0111 – result, which is not equal to Info (15), so Info is not logged

What happens for Exception when the log level is set to Warning?

(Log.LogLevel & LogCategory.Exception) == LogCategory.Exception
0000 0111 – 7 (Warning)
0000 0001 – 1 (Exception)
0000 0001 – result, which equals Exception, so it is matched and logged
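The two worked examples above can be sketched as a quick standalone check:

```csharp
using System;

class BitwiseCheck
{
    static void Main()
    {
        int warning = 7;    // 0000 0111
        int info = 15;      // 0000 1111
        int exception = 1;  // 0000 0001

        // Info is not contained in the Warning level
        Console.WriteLine((warning & info) == info);           // False

        // Exception is contained in the Warning level
        Console.WriteLine((warning & exception) == exception); // True
    }
}
```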

I had never used this before and became very interested when I was discussing it with one of our developers. I really want to thank him for showing me this amazing technique.

Thursday, September 9, 2010

Problem opening CHM file?

I have had this problem many times when downloading CHM files from the internet. Most of the time, when I download the file and open it, it says "Navigation to the webpage was canceled", and for a moment I get annoyed trying to figure out how to fix it. The fix is to go to the file's properties and unblock the file.

Monday, August 30, 2010

Limit Sql data access from Applications using "EXECUTE AS"

When you have to retrieve SQL data from applications, you would create a login and use it to connect to SQL Server. Any user who has permission to execute a stored procedure runs it under the database's dbo user (which means it can do anything in that database, but nothing at the server level or in other databases). If you only allow your logins to execute stored procedures (and not touch the tables directly), then you have effectively limited the logins to code you've written. If you don't write any DELETE statements, then logins can't delete anything.


With SQL 2005 and above, there is a new feature, the EXECUTE AS clause, which is a great way to limit the permissions of a SQL Server login. Let's look at how this can be used effectively to limit access.


This feature allows you to impersonate another user in order to validate the necessary permissions that are required to execute the code without having to grant all of the necessary rights to all the underlying objects and commands.
The EXECUTE AS clause can be added to stored procedures, functions, DML triggers, DDL triggers, queues as well as a standalone clause to change the user’s context. 


Syntax:

CREATE PROCEDURE [dbo].[proc_GetAliasByID]
      @ID bigint = 0
WITH EXECUTE AS 'AppDMLUser'
AS
BEGIN
      -- statements here run under AppDMLUser's permissions
      RETURN
END

Thursday, August 26, 2010

SQL Change Tracking Vs CDC

I posted about SQL change tracking a couple of weeks ago. There is one more option available in SQL Server for capturing data changes that can be used for auditing. It is called CDC – Change Data Capture.


Here is a comparison of these two features.

Change Tracking (CT)

Change Tracking is a synchronous mechanism that modifies change tracking tables as part of ongoing transactions to indicate when a row has changed. It does not record past or intermediate versions of the data itself, only that a change has occurred.

Change Data Capture (CDC)

Change Data Capture is asynchronous and uses the transaction log in a manner similar to replication. Past values of the data are maintained and made available in change tables by a capture process, managed by the SQL Agent, which regularly scans the transaction log. As with replication, this can prevent re-use of parts of the log. CDC tracks when data has changed and includes the values as well. The entire table or a subset of columns can be captured.
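As a sketch of the CDC setup described above (the database and table names are assumptions), CDC is enabled with the sys.sp_cdc stored procedures:

```sql
-- enable CDC at the database level (requires sysadmin)
USE MyDb;
EXEC sys.sp_cdc_enable_db;

-- enable CDC for one table; SQL Agent jobs then harvest the transaction log
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Employees',
    @role_name     = NULL;
```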

Tuesday, August 17, 2010

Universal Data Link (UDL) files

If you have worked with VB or ADODB objects, I am sure you have used this before. Even though we have many ways to keep connection strings in configuration, I still use UDL files a lot even now. Keeping the connection string in configuration is a common requirement, and to build that string we can use a UDL file, which makes it simple to get the connection string right out of Notepad without worrying about the format.

Here is what I normally do to get a connection string or to test a connection:
·         Open Notepad
·         Save the empty file as “x.udl”
·         Close Notepad
·         Now open the UDL file (it opens the Data Link Properties dialog)
·         Configure your connection, test it, and save
·         Open the UDL file in Notepad again; now you have the connection string

It is a simple way to check connectivity to servers, and it helps to debug connection issues.
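For reference, the saved x.udl opened in Notepad looks something like this (the provider, catalog, and server values are just an example):

```
[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=MyDb;Data Source=MYSERVER
```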

Sunday, August 15, 2010

TFS 2010 for agile scrum development

We have started using TFS 2010 for our scrum :-) . I found that TFS 2010 has a lot of features for working with the scrum agile methodology. It has capabilities to create user stories, product backlogs, sprints, etc. It can also be used for daily scrum updates.


The TFS team portal provides access for users to update their tasks, plus many reporting capabilities.

Here is a nice deck I found, “Scrum with TFS 2010”, which has a detailed explanation of how we can make use of TFS for the scrum methodology: http://www.slideshare.net/aaronbjork/scrum-with-tfs-2010

Happy reading!! :-)

Wednesday, July 28, 2010

SQL - Change tracking

Most of the applications I have worked on till now had one requirement or another to handle auditing, to track changes to data. There are different ways to achieve this. Very often we use triggers to update another table whenever there is a change on the primary table, or we handle it in the stored procedure used for inserting or updating data.


Now, with SQL 2008, this is made very easy. The new feature introduced in SQL 2008, “Change Tracking”, enables change tracking without any coding required.

First, to enable change tracking, we need to update the setting on the database.


After that, we need to enable it on each table that needs to be tracked.

Once the table is enabled for tracking, you will be able to track the changes by version.
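In T-SQL, the two enabling steps above look roughly like this (the database name, table name, and retention settings are assumptions):

```sql
-- enable change tracking at the database level
ALTER DATABASE MyDb
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- enable change tracking on the table to be tracked
ALTER TABLE dbo.Employees
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON);
```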


We can get all the changes by using a function called CHANGETABLE. A sample use of the function looks like below.

SELECT * FROM CHANGETABLE (CHANGES dbo.Employees, 0) AS CT


This query will return the SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION, SYS_CHANGE_COLUMNS, SYS_CHANGE_CONTEXT, and ID columns.

Take a look at this article, which talks about this feature in detail: http://www.sql-server-performance.com/articles/audit/change_tracking_2008_p1.aspx

Thursday, July 22, 2010

TDE (Transparent Data Encryption) on Sql 2008

In most database designs, we consider security at the data level: what the critical data is, whether it needs to be stored encrypted or hashed, etc. But some things need more attention than just how the data is stored. What if a backup is stolen by someone? If someone gets the mdf file, they can easily restore the data onto any server and get access to it.


One of the new features introduced with SQL 2008 deals with this security concern. It is called Transparent Data Encryption (TDE). It stores the mdf and ldf files encrypted: data is encrypted while being written to disk and decrypted while being read from disk. The "transparent" aspect of TDE is that the encryption is performed by the database engine, and SQL Server clients are completely unaware of it.
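As a sketch of the setup steps (the database name, certificate name, and password are placeholders), enabling TDE looks roughly like this:

```sql
-- create a master key and a certificate in the master database
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

-- create the database encryption key, protected by the certificate
USE MyDb;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE TdeCert;

-- turn encryption on; back up the certificate, or restores will fail
ALTER DATABASE MyDb SET ENCRYPTION ON;
```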

Here are more details on how we can use it on our databases: http://msdn.microsoft.com/en-us/library/bb934049.aspx

Monday, July 19, 2010

WCF Exceptions and faults

Exception handling is an integral part of any application. The situation and the severity of the exception define what we want to do with it. When working on web services, especially while interacting with third-party applications, I always thought it was a better idea to catch any exception in the method and return it as a string. But this doesn't really work if we have to return some other data type, and we also have to insist that client apps check the error string to get the error message. There is a chance that a client app won't care about the error string returned by the service, so an exception thrown in the service will still be treated as a valid result by the client app.


When I started working with WCF, I could find different ways to handle the exceptions.

• Catch the exception on the service and return it as a string. (I have already explained the drawbacks of this approach.)

• Throw the exception directly. But how does this affect client apps?

• Convert the exceptions to faults that client apps can understand.

Let's see whether throwing exceptions from the service really helps us.

If there is an unhandled exception on the WCF service, the client app will always receive a generic fault: “The server was unable to process the request due to an internal error. For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the <serviceDebug> configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework 3.0 SDK documentation and inspect the server trace logs.” This does not provide any information about the actual exception, and if we enable the detail on the server, it will expose implementation details to the client.

In summary, exposing exceptions to the client has many limitations, so it is always a better idea to map exceptions on the WCF service to faults and let the client apps deal with those.
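A minimal sketch of that mapping (the contract and fault type names here are mine) uses FaultContract on the operation and throws FaultException&lt;T&gt; from the implementation:

```csharp
using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ServiceFault
{
    [DataMember]
    public string Reason { get; set; }
}

[ServiceContract]
public interface IService1
{
    [OperationContract]
    [FaultContract(typeof(ServiceFault))]
    string GetData(int value);
}

public class Service1 : IService1
{
    public string GetData(int value)
    {
        try
        {
            return "data " + value.ToString();
        }
        catch (Exception)
        {
            // map the internal exception to a typed fault; the client can catch
            // FaultException<ServiceFault> without seeing server internals
            throw new FaultException<ServiceFault>(
                new ServiceFault { Reason = "GetData failed" },
                new FaultReason("GetData failed"));
        }
    }
}
```

The fault becomes part of the service contract (WSDL), so client apps can program against it explicitly instead of parsing error strings.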