Command Log for TFS2010

by garyg 16. August 2011 22:03

It would be great if TFS had a nice Web Service for this, or logged it to the event log, etc. There is some additional logging you can turn on, but I've always found that a pain. One thing you can do (especially if you have an idea of which commands you suspect have been run) is query the tbl_Command table in the Collection DB. For instance, if you suspect a witadmin command has been run, you can use something like this SQL query:

   select * from tbl_Command with (nolock)
   where UserAgent LIKE 'Team Foundation (witadmin.exe%'

The usual precautions apply here, like not running direct DB queries during peak production times, but it lets you quickly see a list of what's been run against TFS. The Command column has a record of the command that was run. When I had an issue at a client with someone running unauthorized commands, I was able to use this to track down the person quickly.
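If you also want to see who ran a command and when, something like the query below has worked for me. I'm going from memory on the IdentityName and StartTime column names, so double-check them against the actual schema of tbl_Command in your collection DB before relying on this:

   select StartTime, IdentityName, Command, UserAgent
   from tbl_Command with (nolock)
   where UserAgent LIKE 'Team Foundation (witadmin.exe%'
   order by StartTime desc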

A Cautionary Note for TFS2008 to TFS2010 Upgrades

by garyg 12. July 2011 09:34

I recently responded to a post on one of the MSDN TFS forums from a user trying to upgrade from Team Foundation Server 2008 to Team Foundation Server 2010. His question was about doing a delta DB copy after an initial successful upgrade to try to speed up the process (it can take 8 hours or more for the full upgrade). (see tfs 2010 re-upgrade)

There is no supported way to achieve this, but it also made me recall another very important tip:  Shut down ALL TFS related services and app pools before making your TFS 2008 DB backups to move to your new server (I’m assuming you are migrating to different hardware in this case).  

This ensures everything is in sync for the backup.  If you don't, there is a chance a record in one of the DBs could be updated before you have a chance to complete the backup.  I've seen the results of this first hand, and it was both difficult to diagnose and expensive.  Your upgrade will proceed normally, and you won't know there is a problem until you try to create a new team project or a new work item.

Organizing the Self-Organizing Team

by garyg 18. April 2011 11:33

It would be great if every self-organizing team just jumped right in and did exactly that, but some do need a little help.  Most new groups will traverse the standard forming, storming, and norming stages, and it's mainly during the first two that you'll need to help the most.  The biggest challenge for any of us who have led traditional teams is learning how to facilitate a self-organizing team without dropping into traditional management behaviors.

My first realization of this came one morning at a Daily Scrum, listening to the team members report their status to one another.  It was obvious to everyone in the room that there was an extreme imbalance in the workload and a few members' User Stories were falling behind.  They were struggling.  At the end of the Scrum I expected one or more of the others to offer assistance to the struggling members.  I was wrong.  I don't know why, but I expected these formerly very independent people to suddenly step up and help one another out simply because the rules and principles of Scrum had been laid out for them.  The people hadn't changed just because the process did.

Quickly concluding that they just didn't know how to start helping one another, and fighting the urge to drop into delegating-PM mode, I stayed in the facilitating Scrum Master role: I brought up the Burn Down and Capacity charts instead and asked questions.  Very pointed questions about what they thought the numbers meant.

Leading the team to the correct realization on their own, rather than telling them what to do, helped them take the first leap.  It seems like a simple step, but for this group it was the turning point in coming to grips with a key responsibility of a self-organizing team.

So what specifically can you do to help your Agile team reach the "right" conclusions? Here are a few tips I've put together from my experiences:

  • Highlight issues by bringing them up for discussion.  Encourage team members to vocalize the solution on their own rather than pointing it out yourself.
  • Get impediments out of the way.  Make sure you aren't one of them.  Facilitate communications, but avoid being a go-between if at all possible.
  • You are not the Admin; do not become one.  Insist that all artifacts be created by the Team.  It encourages ownership and responsibility.
  • When everything is right, it will seem like you do nothing at all.  Once the Team gets some experience and success behind them, you should do very little other than truly facilitate.

Beware the “Midas Touch”

by garyg 7. November 2010 23:36

So there we were, in the final go/no-go hour for a product release, and it was not looking good.  My team had been charged with testing the product for the last 3 days.  The development team was busy playing whack-a-mole with several P1 and P2 bugs that continued to regress.  Our QA team had no involvement at the beginning of this cycle, nor visibility into any hard requirements (I know, not a good start).  A particularly nasty P1, which appeared to be in the framework of the product, was on its 4th trip through the regression testing cycle.  This was one the developer was sure he had fixed "this" time.  My confidence, of course, was not there, and I recommended the release be pulled.

The client, of course, was not happy and called an emergency meeting to get a handle on what was happening.  We were now going over the exact contents of what was supposed to be in this latest release, feature by feature.  Seeing the impending revelation creeping up on him, the developer finally admitted there might be something "a little extra" in this code.  Something completely off the requirements list and project plan that, of course, had not gone through the fairly rigorous code review this client normally does.  We were unwitting victims of a classic gold-plating-gone-wrong situation, nearly done in by the Midas Touch of unchecked development without matching requirements.  There was no malice of intent, but the results were the same.

How this situation was resolved isn't important, but how we could have prevented it is.  A simple, solid Requirements Traceability Matrix, combined with some training on early recognition of the signs of gold plating, was on my list of recommendations.  I've often seen the costs of gold plating show up as extra work and schedule overruns; this was the clearest case I've seen to date where it directly caused a quality issue of this magnitude.  A good lesson for us all.

Team Foundation Build Service 2010 Controller / Agents and FQDNs

by GaryG 20. July 2010 05:33

I know I usually write about Project Management related topics, so I ask my regular readers to please bear with me.  This one was a real pain to solve, so I wanted to share it since TFS 2010 is fairly new and the error it gives doesn't really help.  Recently, while working with an enterprise client on a TFS 2008 to TFS 2010 migration (a real pain in itself), we came across an error setting up the Team Build Service.  The topology here put the TFS application tier on one server and the Team Build Service on its own machine (a Windows Server 2008 box), with both the Controller and the Agents on that machine.

The problem we saw was that the Controller and Agents couldn't connect (and of course all the team builds failed).  The error was:

"There was no endpoint listening at http://somemachine.company.com/Build/v3.0/Services/Contoller3 that could accept the message.  This is often caused by an incorrect address or SOAP action.  See InnerException, if present, for more details."

The error was displayed in the properties dialog for both the Controller and the Agents, as in the screenshot below:

[Screenshot: Build Controller and Agent properties dialogs showing the endpoint error]

 

After a lot of head banging, setting up traces, and a reinstall, I realized that for some reason the configuration wizard had put in the FQDN (Fully Qualified Domain Name) rather than the machine name.  Having debugged a similar issue with another product's Web Service, I decided to change it to use just the machine name, and it instantly connected both the Controller and the Agents.

Thankfully, the fix is simple: change the local build service endpoint to NOT use the FQDN but just the machine name, then restart the Build Controller and Agents.

To do this, just get into the TFS Administration Console on the Build Server and click the Build Configuration node. From here, click Properties on the Build Service and you will get the following window:

[Screenshot: the Build Service Properties window]

The “Local Build Service Endpoint (incoming)” field will be grayed out until you click the “stop to make changes” link.  Click the link to stop the service, then click the Change button and replace the FQDN with just the machine name.  From there, click the Start button and your Controller and Agents should be talking fine.  It may take a minute after you restart the Build Service for everything to re-establish communication.  I hope this helps someone on another TFS 2010 deployment.

Preventing multiple plug-in request calls with IsRedirectFollow()

by garyg 10. April 2010 10:01

Figured I'd share something I found valuable.  Did you ever make a call to a request-level plug-in in a Visual Studio 2008 WebTest and get multiple calls to the same plug-in because of a 302 redirect?  Well, I did, and it took me a little while to find a way to prevent it.
When you are making the call in your code, decide if it should run based on the IsRedirectFollow property (http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.webtesting.webtestrequest.isredirectfollow(VS.80).aspx).
As an example:

using Microsoft.VisualStudio.TestTools.WebTesting;

namespace MyAppTests
{
    public class GetSomethingPlease : WebTestRequestPlugin
    {
        public override void PostRequest(object sender, PostRequestEventArgs e)
        {
            if (e.Request.IsRedirectFollow == false) // only run this for the primary request, not a redirect follow
            {
                // do something here
            }
        }
    }
}

Anyway, I hope this helps someone else a little further along.  It works in VS2008 and VS2010 as well.  I'm sure there could be a more efficient way, but this worked in a pinch ;-)

Using Visual Studio 2008 Web Test Request Plug-In to Check results in a SQL DB

by garyg 10. April 2010 06:12

Some of my regular readers said they wanted to see more "technical" content, so here is one that perplexed me for a while.

While assisting a client with setting up an automated testing environment using Visual Studio 2008 Web Tests (among other things), we uncovered a need to check the results of a transaction halfway through the test and then, once it is complete, to verify the results.

Now I know what experienced testers and SQA people are thinking: "why didn't you just use an Extraction rule from a results page and validate that?" Yes, that's the first thing I wanted to do as well, and it works quite well under most circumstances.

Unfortunately, this particular application has no real "confirmation" screen that displays the results of the transaction, just a vague "yeah it did" or "no it didn't" kind of page. Not good enough in our case, where I wanted real transaction results.

Since time was very short and I'm still working with the group to adopt more "test-friendly" designs, we needed a sure way of verifying the results. I thought that this kind of DB check would be a native feature in the VSTS 2008 Web Test, but the only DB connectivity included out of the box was binding to a DB for data-driven testing (which is very useful as well).

So our option was to create a request-level plug-in to go out to the DB, check the transaction results, and write them back to the test results.

My goals here were:

  1. Connect to a SQL DB.
  2. Build a query string using a Context parameter.
  3. Put the value pulled from the DB back into another Context parameter for use in follow-on requests.

Here is how I did it (in a plug-in 101 type format), complete with the code snippet I used to create it:

1. Create the following in a class library in your test project (that part is covered in detail in MSDN), compile, and reference it:

using System;
using Microsoft.VisualStudio.TestTools.WebTesting;
using System.Data.SqlClient;

namespace Test1
{
    public class MyRequestPlugin : WebTestRequestPlugin
    {
        public override void PostRequest(object sender, PostRequestEventArgs e)
        {
            base.PostRequest(sender, e);

            int customerId = 0;

            // this is my connection string
            string connectionString = "Persist Security Info=False;Initial Catalog=dbname;Data Source=machinename;User Id=dbuser;Password=somepassword";

            // select statement getting just the field I need.  Note that if this is
            // messed up it may throw an error saying it can't open the DB.
            // That is misleading; it's probably your select.
            string queryString = "Select CustomerID from Orders where OrderID=" + e.WebTest.Context["OrderID"];

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(queryString, connection))
            {
                connection.Open();

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        customerId = Convert.ToInt32(reader[0]);
                    }
                }
            }

            // put the value back into the test context for follow-on requests
            e.WebTest.Context.Add("CustomerID", customerId);
        }

        public override void PreRequest(object sender, PreRequestEventArgs e)
        {
            base.PreRequest(sender, e);
        }
    }
}

2. Insert the Request Plug-in (if you compiled and referenced it, it will be in the list) AFTER the Context parameter you are using has been set. In a production test you'll also need error control; errors in a plug-in are ugly and will mess up your results (see the sketch below).
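To give a rough idea of that error control, here is a minimal sketch, not the code from this project: the class name and the LookUpCustomerId helper are just illustrative stand-ins for the step 1 code above, and the point is simply to catch SQL failures and write them to the test results (WebTest.AddCommentToResult is what I'd use here) rather than letting the plug-in throw and wreck the run.

using System;
using Microsoft.VisualStudio.TestTools.WebTesting;
using System.Data.SqlClient;

namespace Test1
{
    public class MyRequestPluginWithErrorControl : WebTestRequestPlugin
    {
        public override void PostRequest(object sender, PostRequestEventArgs e)
        {
            base.PostRequest(sender, e);

            try
            {
                // the same DB lookup from step 1 would go here
                int customerId = LookUpCustomerId(e);
                e.WebTest.Context.Add("CustomerID", customerId);
            }
            catch (SqlException ex)
            {
                // record the problem in the test results instead of blowing up the whole run
                e.WebTest.AddCommentToResult("DB check failed: " + ex.Message);
            }
        }

        // illustrative placeholder only; in the real plug-in this is the
        // SqlConnection / SqlDataReader code shown in step 1
        private static int LookUpCustomerId(PostRequestEventArgs e)
        {
            throw new NotImplementedException();
        }
    }
}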

This whole exercise made me think that a "data check" validation rule of sorts really should be part of this product.

Anyway, I hope this helps someone else a little further along some day using web tests. This same method also works in Visual Studio 2010.

About the author

   
Gary Gauvin is a 20+ year Information Technologies industry leader, currently working as the Director of Application Lifecycle Management for CD-Adapco, a leading developer of CFD/CAE solutions. Working in both enterprise environments and small businesses, Gary enjoys bringing ROI to the organizations he works with through strategic management and getting hands-on wherever practical. Among other qualifications, Gary holds a Bachelor of Science in Information Technologies, an MBA, a PMP (Project Management Professional) certification, and a PSM (Professional Scrum Master) certification. Gary has also been recognized as a Microsoft Most Valuable Professional.

LinkedIn Profile: http://www.linkedin.com/in/garypgauvin

(Note: Comments on this blog are moderated for content and relevancy)


 
