Wednesday, December 1, 2010

Creating a multidimensional strongly typed array in PowerShell

After getting stuck for a short while trying to figure out how to create a strongly typed multidimensional array in PowerShell, I tried to find an example on the net, only to find that there simply is none to be found. Maybe it's too easy? The TechNet page on the New-Object cmdlet gave me what I needed.

This is how I ended up creating my two-dimensional array:

$d = New-Object 'Object[,]' 10, 20
This is however created as an Object array while I needed int. I changed the code to this:

$d = New-Object 'Int32[,]' 10, 20
Then I thought that it should be possible to streamline it a bit. A bare $d = Int32[,] 10, 20 is not valid syntax, however; a working alternative is to let the framework create the array:

$d = [System.Array]::CreateInstance([Int32], 10, 20)

Simple enough. When I see it, I wonder why I couldn't figure it out quicker.

Tuesday, November 30, 2010

Recycle application pools using appcmd.exe in IIS 7

While hacking along on some deployment scripts today I encountered an issue when I needed to recycle specific application pools via code.

In IIS 7, appcmd.exe was added to enable a programmatic way of interacting with IIS. On TechNet, the syntax to recycle an app pool is as follows:

appcmd recycle apppool / string
This will however result in the following error message:

ERROR ( message:The attribute "" is not supported in the current command usage. )
This can feel a bit strange. I then noticed that the TechNet article has a typo: the space between the / and the name of the application pool should not be there. Without it, the command executes perfectly.

As an addition, application pools with spaces in their names can be recycled by simply wrapping the name in quotes as so:

appcmd recycle apppool /"My Application Pool Name" 
I added a comment about the issue on the TechNet article for others with the same problem. I guess there is no way to submit corrections to articles other than through comments?

Friday, November 19, 2010

Calling stored procedures from BizTalk (and other applications) and the FMTONLY flag

Most developers will run into the issue of not being able to generate metadata from a stored procedure even though it is perfectly valid and runs without a hitch from SQL Server Management Studio or directly from code. A lot of the time there will be an error message along the lines of

Error while retrieving or generating the WSDL. Adapter message: Retrieval of Operation Metadata has failed while building WSDL at 'TypedProcedure/dbo/FetchTestData'

Microsoft.ServiceModel.Channels.Common.MetadataException: Retrieval of Operation Metadata has failed while building WSDL at 'TypedProcedure/dbo/FetchTestData' ---> System.Data.SqlClient.SqlException: Invalid object name '#temp01'.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)

In some cases there will be no error at all (SSIS is prone to this for example).

The above error message will give us something to go on though. When reviewing the stored procedure we can see that a temporary table is created and filled with data. Later on this table is used in a select in order to return data to the client. Nothing unusual, still it fails. The stored procedure looks like this (never mind the necessity of the temp table, it is just a demo of the issue):

ALTER PROCEDURE [dbo].[FetchTestData]
(@a4 varchar(4))
AS
BEGIN
    SELECT t1, t2, t3 INTO #temp01 FROM Table_1
    SELECT t1, t2, t3 FROM #temp01
END

The next step is then to run profiler during a run of the generate wizard to see what is actually happening in the background. How does the metadata get generated by the wizard and why does it fail?

When running the generation, we can see the following in the trace; note the SET FMTONLY ON issued around the call:

set fmtonly on exec [dbo].[FetchTestData] @a4=NULL set fmtonly off

Now we are getting somewhere. The FMTONLY setting is used in order to not process any rows but only return response metadata to the client. However, our stored procedure uses a temporary table, and under FMTONLY ON that table will never be created, since no data modifications are allowed in that mode. The subsequent select on the temporary table then fails, since the table never was created, causing the error messages mentioned above.

There is a way around this issue though. Since we know what is happening, we can revert the SET FMTONLY ON that the adapter issues before executing the procedure. We should however not just add SET FMTONLY OFF to the beginning of our procedure. With such a solution, the entire procedure would actually execute during metadata generation, which might not be a good choice. If we only select data it is fine, but if the procedure also includes insert, update or delete statements, these would run as well.

Instead, we check for the FMTONLY flag early on, switch it off when needed, and then switch it back on again. Our modified and metadata-generation-safe procedure now looks like this:

ALTER PROCEDURE [dbo].[FetchTestData]
(@a4 varchar(4))
AS
BEGIN
    DECLARE @FmtOnlyIsSet bit = 0
    IF (1=0) BEGIN SET @FmtOnlyIsSet = 1 END

    IF @FmtOnlyIsSet = 1
        SET FMTONLY OFF

    SELECT t1, t2, t3 INTO #temp01 FROM Table_1

    IF @FmtOnlyIsSet = 1
        SET FMTONLY ON

    SELECT t1, t2, t3 FROM #temp01
END


What magic is done here?

First, we declare a variable that can hold the current setting of the FMTONLY flag. We call it @FmtOnlyIsSet and set it to false by default.

The IF (1=0) bit may look a bit off, but is in fact quite clever. When FMTONLY is set to ON, conditional expressions are not evaluated, but the statements inside them are still executed, since every possible return path has to be checked. By checking for an impossible condition (1=0) we can be sure that the statement inside the IF will run only when FMTONLY is ON. Hence we set our flag to true there.

Then we simply check whether our flag is set to true and, if so, switch off the FMTONLY setting before the temporary table is created. Afterwards we do the same check and switch it back on. This part is important due to what I mentioned above: if we don't switch FMTONLY back on, all statements will run just as in a normal execution of the procedure, which might not be wanted.
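As a side note, the guard logic is easy to get wrong, so here is a small Python sketch (purely illustrative; all class and function names are my own inventions) of what the metadata pass and the guard accomplish: a metadata-only mode suppresses data modifications, and the guard temporarily lifts it around the one statement that must really run.

```python
# Illustrative model of the FMTONLY guard pattern, not real adapter code.
class FakeSession:
    def __init__(self):
        self.fmtonly = False    # corresponds to SET FMTONLY ON/OFF
        self.temp_tables = {}

    def create_temp_table(self, name, rows):
        # Only takes effect outside metadata-only mode, mirroring that
        # FMTONLY ON skips data modifications such as SELECT ... INTO.
        if not self.fmtonly:
            self.temp_tables[name] = rows

    def select(self, name):
        if name not in self.temp_tables:
            raise KeyError(f"Invalid object name '{name}'.")
        return self.temp_tables[name]

def fetch_test_data(session):
    was_fmtonly = session.fmtonly    # the IF (1=0) trick: detect the flag
    if was_fmtonly:
        session.fmtonly = False      # SET FMTONLY OFF
    session.create_temp_table("#temp01", [(1, 2, 3)])
    if was_fmtonly:
        session.fmtonly = True       # SET FMTONLY ON again
    if session.fmtonly:
        return []                    # metadata pass: no rows processed
    return session.select("#temp01")

s = FakeSession()
s.fmtonly = True
print(fetch_test_data(s))    # metadata pass now succeeds instead of failing
s.fmtonly = False
print(fetch_test_data(s))    # normal pass returns the data
```

Without the guard (remove the two `was_fmtonly` blocks), the metadata pass raises the same "Invalid object name '#temp01'" style of error that the adapter reports.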

It is worth knowing that the FMTONLY setting is used not only during metadata generation in development, but also when actually calling the procedure from the application. I noticed this when using typed datasets in BizTalk Server with the WCF-SQL adapter. I couldn't alter the stored procedure, which I handled by instead creating a mock procedure to generate schemas from. I then assumed that I could safely call the original procedure from BizTalk, but I still got the invalid object error message. A quick look in Profiler showed that the adapter makes two passes at the stored procedure: first one with FMTONLY set to ON, and then one without it to actually execute the code.

My guess is that the adapter is smart enough to do a check that the signature for the procedure matches the previously generated and deployed metadata before executing code that could change data in the database. If the returned dataset wouldn't match the schema, we would know before any code has been executed.

I have only seen this when using typed datasets though, which makes sense. By using the technique described above, it isn't an issue at all. I rather like the idea that the contract is checked before the procedure executes. Nor does it pose a performance hit: the result of the metadata extraction is cached, so only the first call needs to fetch metadata. I am still not sure for how long the metadata is cached before it is refreshed, but it seems to be held for quite a while.

Thursday, November 11, 2010

BizTalk in the cloud - Integration as a service

I totally missed it, but roughly two weeks ago it was published on the BizTalk Server Team Blog that the future is in the cloud.

When I attended the European BizTalk Conference and the sessions based on the book Applied Architecture Patterns on the Microsoft Platform, I got the feeling that Azure is something I should start working with. Now I'm sure of it, especially since the BizTalk/Azure hybrid will be released as a CTP sometime during the spring/summer of 2011.

As I concluded during the conference, what we have in front of us at this point (and in the near future) is not a replacement, but an addition of tools to build solutions with. Daniel Probert writes the same in his blog post on the subject.

I'm also happy to see that the cloud is not viewed as the "solution to everything". The future integration platform from Microsoft will be offered as an on-premises product based on AppFabric. I'm looking forward to this since I believe it will solve a lot of problems I'm facing today regarding complex low-latency processes that will work extremely well in an AppFabric on-site platform.

So regarding integration in the future, we now have a pretty clear direction to head in, and after reading the announcement from Microsoft I believe it is the right path. Since most of the platform is going to the cloud, so should the integration, while still having an option to keep things off-cloud if security, performance or other requirements dictate so.

Tuesday, November 9, 2010

Error when importing bindings: "Failed to update binding information."

When importing bindings into a BizTalk application, the following error message might appear:

TITLE: Import Bindings
Failed to update binding information. (mscorlib)
Cannot update send port "MoAGSendPort". (Microsoft.BizTalk.Deployment)

Cannot update transport information (address "C:\temp\SHS\ut\1.mb510i1_%SourceFileName%"). (Microsoft.BizTalk.Deployment)

The following items could not be matched up to hosts due to name and/or trust level mismatches:
Item: 'FILE' Host: 'SendHost' Trust level: 'Untrusted'
You must do one of the following:
1) Create hosts with these names and trust levels and try again
2) Re-export the MSI without the binding files and have a post import script apply a suitable binding file. (Microsoft.BizTalk.Deployment)

While the message might be correct regarding the host name or trust level, a more common reason for the failure is that the host doesn't have an adapter handler specified that matches the bindings.

In the Admin Console, browse to BizTalk Server 2009 Administration > BizTalk Group > Platform Settings > Adapters and then look at the adapter mentioned in the message. In my case it says "Item: FILE", so the File adapter is where I'm heading and it is indeed missing a send handler for the SendHost host.

To add a send handler, right-click on the adapter (or in the detail view of the window) and select New > Send Handler...

Then select the host that needed the specific send handler and click Ok.

Restart the host instance in question and then try to import the binding again.

WCF exception Could not establish trust relationship for the SSL/TLS secure channel with authority 'server:port'

I was recently working with some WCF services using the wshttp binding and therefore calling them over SSL. I had a certificate set up, but when trying to browse the wsdl in my test client, I couldn't browse the metadata. In the eventlog, I found the following error messages.

Exception Information Type[SecurityNegotiationException] Source[mscorlib] Message[Could not establish trust relationship for the SSL/TLS secure channel with authority 'server:port'.]

Exception Information Type[WebException] Source[System] Message[The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.]

Exception Information Type[AuthenticationException] Source[System] Message[The remote certificate is invalid according to the validation procedure.]

The next step was to try to browse the wsdl in Internet Explorer, which presented a certificate error. Now it made sense. I was trying to browse the service using an endpoint URL of https://localhost:11001/path, while the certificate I was using was issued to the actual server name, as can be seen both in Internet Explorer when checking the certificate information and in the MMC Certificates snap-in.

In other words, even if localhost and my server's full name can be used interchangeably in most cases, that doesn't hold when we are talking about security certificates, where the server name is quite vital. After switching to the correct endpoint URL, it worked as expected.

It should be noted that there is a way of bypassing the certificate validation in the client by setting the ServerCertificateValidationCallback property as below:

using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

System.Net.ServicePointManager.ServerCertificateValidationCallback +=
    delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors sslError)
    {
        return true;
    };

This is however quite dangerous and should not be used in production code. If code like this is used, a good practice is to wrap it in #if DEBUG directives to keep it out of production (but then you risk having all tests pass without any problems while a hard-to-find error that cannot be replicated in test surfaces in production).
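For what it's worth, the same dangerous shortcut exists in other stacks. A Python sketch of the equivalent "trust everything" client configuration, shown only to make the point that the bypass is a client-side setting and not a fix:

```python
import ssl

# Equivalent of always returning true from ServerCertificateValidationCallback:
# the client stops validating the server certificate and hostname entirely.
ctx = ssl.create_default_context()
ctx.check_hostname = False        # ignore name mismatches (localhost vs. server name)
ctx.verify_mode = ssl.CERT_NONE   # skip certificate chain validation

print(ctx.verify_mode == ssl.CERT_NONE)  # True, and just as unsafe as in .NET
```

Note that check_hostname must be disabled before verify_mode is set to CERT_NONE, or the ssl module refuses the combination.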

Monday, November 8, 2010

WCF exception PlainXmlWriter+MaxSizeExceededException

When logging and tracing are switched on for a WCF service, the following exception might be thrown

A message was not logged.
Exception: System.InvalidOperationException: There was an error generating the XML document. ---> System.ServiceModel.Diagnostics.PlainXmlWriter+MaxSizeExceededException: Exception of type 'System.ServiceModel.Diagnostics.PlainXmlWriter+MaxSizeExceededException' was thrown.
The reason is that the maxSizeOfMessageToLog configuration parameter is set to a value that is lower than the size of the message that was trying to be logged.

<messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog="30000" maxSizeOfMessageToLog="200000" />
And while on the subject, it can be worth checking out the recommended settings for tracing and message logging on MSDN.

Friday, November 5, 2010

T-SQL Select any row, but only one per key value

A colleague of mine asked me for help with a database query. The table in question was of the normal type, with an id column and several columns of data. However, the id column was not holding unique ids; the same id could occur several times, with different data for each occurrence. The task was to select only one row per id, and it could be any one of the available rows.

My example table, Table_1, has three columns. The first column, t1, holds the to-be distinct ids; t2 and t3 hold random data.

A common solution to this problem seems to be to use cursors or temporary tables. I couldn't see why it shouldn't be possible to do such a select without using them, so after some thinking, I came up with the following

SELECT t1, t2, t3
FROM (
    SELECT t1, t2, t3, ROW_NUMBER() OVER (PARTITION BY t1 ORDER BY t1) rowrank
    FROM Table_1
) temp_a
WHERE rowrank <= 1

Basically, we select over the rows, creating a ranking value for each occurrence of the id. Then we select from this data set only the rows with a rank of 1, giving us just the first occurrence per id, and thereby one row per distinct value in t1.

So for every occurrence of an id in the key column t1, only one row will be selected. By modifying the ORDER BY of the inner query, a choice can be made of which one of the rows is deemed more interesting than the others (in this case, it didn't matter).
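The same "any one row per key" selection can be sketched outside SQL as well. A Python illustration (hypothetical data, my own variable names) that keeps the first row seen per key, just as rowrank = 1 does with the ordering above:

```python
# Keep one arbitrary (here: first-seen) row per key, mirroring
# ROW_NUMBER() OVER (PARTITION BY t1 ...) ... WHERE rowrank <= 1.
rows = [
    (1, "a", "x"),
    (1, "b", "y"),
    (2, "c", "z"),
    (2, "d", "w"),
    (3, "e", "v"),
]

picked = {}
for t1, t2, t3 in rows:
    picked.setdefault(t1, (t1, t2, t3))   # first occurrence per t1 wins

result = list(picked.values())
print(result)   # one row per distinct t1: [(1, 'a', 'x'), (2, 'c', 'z'), (3, 'e', 'v')]
```

Changing which row "wins" (the setdefault line) corresponds to changing the ORDER BY of the inner query.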

Friday, October 29, 2010

Find all possible parameters for an MSI package installation using msiexec

While working on a library of PowerShell scripts to do unattended installations of BizTalk applications (and all adjacent files and packages), I needed to find out how to specify the settings for an MSI package in order to do a completely unattended install of it using msiexec.exe.

The MSI I was working with was a setup package for a WCF service. Since this installs to IIS, the website, virtual directory and application pool all need to be specified during the installation. The question is: what are the correct parameter switches for setting these?

Simple enough, these can be found by doing an install of the MSI while logging verbose output to a file. First, run msiexec with logging enabled:

msiexec /I package.msi /L*V installationlog.txt

Then look in the logfile for the text PROPERTY CHANGE. In the following example, the virtual directory is set using the property TARGETVDIR which then also can be used as a parameter to the msiexec command to set the property from outside the GUI:

Action start 15:41:42: WEBCA_TARGETVDIR.
MSI (c) (F4:8C) [15:41:42:943]: Note: 1: 2235 2: 3: ExtendedType 4: SELECT `Action`,`Type`,`Source`,`Target`, NULL, `ExtendedType` FROM `CustomAction` WHERE `Action` = 'WEBCA_TARGETVDIR'
MSI (c) (F4:8C) [15:41:42:943]: PROPERTY CHANGE: Adding TARGETVDIR property. Its value is 'MyWcfServiceLibrary'.
Action ended 15:41:42: WEBCA_TARGETVDIR. Return value 1.
MSI (c) (F4:8C) [15:41:42:943]: Doing action: WEBCA_SetTARGETSITE
Action 15:41:42: WEBCA_SetTARGETSITE.
Action start 15:41:42: WEBCA_SetTARGETSITE.

Note that the custom parameters are not to be set as normal switches with a leading slash (/). In my case, the command looks like this:

msiexec /I package.msi /qb TARGETSITE="/LM/W3SVC/1" TARGETVDIR="MyWCFLibrary" TARGETAPPPOOL="BtsAppPoolC"

This will do a complete unattended install of the WCF service to IIS with basic UI and set the needed properties to my preferred values instead of the defaults.
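Scanning a large verbose log by hand gets tedious, so a small helper that pulls the property names out of the "PROPERTY CHANGE" lines can be handy. A hypothetical sketch (the function name and log excerpt are my own, modeled on the trace above):

```python
import re

def find_properties(log_text):
    """Extract property names from msiexec verbose-log PROPERTY CHANGE lines."""
    pattern = r"PROPERTY CHANGE: (?:Adding|Modifying) (\w+) property"
    return sorted(set(re.findall(pattern, log_text)))

sample = """
MSI (c) (F4:8C) [15:41:42:943]: PROPERTY CHANGE: Adding TARGETVDIR property. Its value is 'MyWcfServiceLibrary'.
MSI (c) (F4:8C) [15:41:43:100]: PROPERTY CHANGE: Adding TARGETAPPPOOL property. Its value is 'BtsAppPoolC'.
"""

print(find_properties(sample))   # ['TARGETAPPPOOL', 'TARGETVDIR']
```

Each name this prints is a candidate PROPERTY=value argument for the msiexec command line.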

Monday, October 25, 2010

Starting instance of host on server failed. / Could not create SSOSQL

Every now and then the BizTalk hosts won't start with the following message in the BizTalk Administration console:

Starting instance of host on server failed.
For help, click:

Failed to start the BizTalk Host instance. Check the event log on the server "" for more details.
Internal error: "The dependency service or group failed to start." (WinMgmt)

The dependency that failed can usually be found by looking at the error message in the event log. Most likely you will find that the Enterprise Single Sign-On Service is unable to start with the following two messages:

Log Name: Application
Source: ENTSSO
Date: 2010-10-26 09:47:25
Event ID: 11047
Task Category: Enterprise Single Sign-On
Level: Error
Keywords: Classic
User: N/A
Could not create SSOSQL. To fix the problem, reinstall SSO or try 'regasm SSOSQL.dll' from a Visual Studio command prompt.
Error Code: 0x80131700

Log Name: Application
Source: ENTSSO
Date: 2010-10-26 09:47:25
Event ID: 10503
Task Category: Enterprise Single Sign-On
Level: Error
Keywords: Classic
User: N/A
The SSO service failed to start.
Error Code: 0x80131700

As the message describes, this can most likely be fixed by registering the SSOSQL binary. What the message doesn't mention is that the version of regasm.exe you have to run differs between 32-bit and 64-bit Windows.

For 32 bit OS:
C:\Windows\Microsoft.NET\Framework\v2.0.50727\RegAsm.exe "C:\Program Files\Common Files\Enterprise Single Sign-On\SSOSQL.dll"

For 64 bit OS:
C:\Windows\Microsoft.NET\Framework64\v2.0.50727\RegAsm.exe "C:\Program Files\Common Files\Enterprise Single Sign-On\SSOSQL.dll"

After registering the binary, the Enterprise SSO service can be started and subsequently also the BizTalk host instances.

Thursday, October 21, 2010

Recorded sessions from Applied Architecture Patterns now online

Today the recorded sessions from the BizTalk 2010 Release Party / Applied Architecture Patterns on the Microsoft Platform event in Stockholm earlier this fall were released onto Channel9.

While they are all interesting, the one I'll check out first will probably be Pattern #4 – Cross Organization Supply Chain with a presentation on AppFabric caching which I feel can be an interesting implementation in many cases.

Saturday, September 25, 2010

BizTalk Server 2010 has shipped!

A few days late, but BizTalk 2010 is finally finished. The European BizTalk Conference I attended was meant to be the release party, but the product just missed the deadline. Now it's here though.

I'm mostly looking forward to the new and improved mapper, but the enhanced granularity of performance settings, the standard SFTP adapter and the integration with AppFabric are nice additions as well. Now I just hope for a project to kick off where this new version is chosen as the platform.

Saturday, September 11, 2010

Flat file schemas, delimiter characters, wrap characters and escape characters explained

A question on the BizTalk Professionals group on LinkedIn caused me to write a short answer, but I thought I'd do a more comprehensive take on it here.

The question was: what is the difference between wrap characters and escape characters?

When parsing a flat file schema, delimiter characters are used to split the incoming data into separate entities. Let's say we have the following data:

alpha,beta,gamma,delta,epsilon

In this case, comma (,) is used as a delimiter, which enables us to split the string into the five separate words we want.

However, if it instead were a list of numbers with decimals, and we use comma as the decimal separator as we do in Europe, using comma as a delimiter would be tricky: we wouldn't know whether to split the string on a comma or treat it as a decimal separator. In this case, we can use wrap characters.

"1,5","2,25","3,75"

In this example, the quote character (") is used as a wrap character, i.e. it wraps the separate entities. These are in turn separated by the delimiter character, the comma (,). This lets us use the delimiter character as part of our data.

The same can be pulled off using escape characters. An escape character is placed before an otherwise reserved character so that it is not parsed as a delimiter but used as part of the data. Most common is to have backslash (\) as the escape character, due to its use as such in many programming languages.

1\,5,2\,25,3\,75

The above line gives a similar result as the one with wrapped entities if backslash (\) is defined as an escape character. It escapes the following comma (,), which then will not be parsed as a delimiter even though it is defined as one, and so it becomes part of the data instead.
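Python's csv module happens to expose the same two mechanisms, which makes for a quick way to experiment with the difference (an illustrative sketch, not BizTalk's flat file engine):

```python
import csv
import io

# Wrap characters: quoted fields may contain the delimiter character.
wrapped = next(csv.reader(io.StringIO('"1,5","2,25","3,75"\n'),
                          delimiter=',', quotechar='"'))
print(wrapped)   # ['1,5', '2,25', '3,75']

# Escape characters: a backslash before the delimiter keeps it in the data.
escaped = next(csv.reader(io.StringIO('1\\,5,2\\,25,3\\,75\n'),
                          delimiter=',', escapechar='\\', quoting=csv.QUOTE_NONE))
print(escaped)   # ['1,5', '2,25', '3,75']
```

Both approaches recover the same three fields; the only difference is whether the reserved character is protected by wrapping the whole field or by escaping each occurrence.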

Friday, September 10, 2010

European BizTalk Conference recap

I'm back at the office after two days at Microsoft in Stockholm and the European BizTalk Conference where I enjoyed myself together with another 150 attendees. A good event as always with the BizTalk User Group Sweden and great sessions that mostly covered the platform around BizTalk for a change.

The three speakers, Richard Seroter, Stephen W. Thomas and Ewan Fairweather, are three of the five authors of the recently published book Applied architecture patterns on the Microsoft platform, which in turn caused the event to follow the basic chapter layout of the book (a book I will review as soon as I have browsed through my copy).

Day one consisted to a large extent of sessions on the different technologies that are available. SQL Server, BizTalk 2010, AppFabric, Azure and WF 4.0 were covered. Day two had sessions covering scenarios where each of the technologies were used in the solution. On this second day, StreamInsight was used in a presentation as well as the last technology presented at the conference. All sessions got taped and so videos should be up on the net within a few weeks I presume.

A very important lesson learned at the conference was a discussion on the anxiety of BizTalk developers considering all this new technology emerging. The speakers made a strong point that BizTalk is not to be replaced by AppFabric, WF 4.0, StreamInsight, SQL Server functionality and whatever there might be. All this new technology can and will instead be used as a complement to BizTalk in order to leverage functionality that previously was hard or impossible to pull off using solely the BizTalk platform. A good example of this was shown in the StreamInsight session with a vast amount of data being streamed and analyzed in realtime.

All in all, a good two days with the crème de la crème of BizTalkers in the region as well as a few from further away in the world.

Thursday, August 12, 2010

"Root element is missing" from a WCF service hosted in BizTalk

When hosting a WCF service in BizTalk, one can come across the error message "Root element is missing" when trying to browse the service metadata.

This error is due to the account running the application pool, which in turn hosts the service in IIS, not having access to the BizTalk databases. Make sure that the account has access to the appropriate BizTalk databases, and the service should be fully browsable.

Thursday, August 5, 2010

Visual Studio hangs after debugging

Ever since I started working in Visual Studio 2008, I have repeatedly been swearing while debugging applications, since the whole IDE stops responding for a while when a debug session is stopped. No updates, service packs or changes to the preferences have been able to stop this behaviour. I've also seen it on numerous other machines and have never been able to figure out why it happens.

Yesterday I started investigating the issue again and stumbled upon a discussion I'd missed before, in which the problem is both diagnosed and solutions are offered.

The culprit can be many things, but the most common one is that the IDE is trying to contact a server in order to check certificate store validity. If the server can't be reached properly (due to proxies and whatnot), the IDE will hang before timing out.

So, if you have a problem with Visual Studio hanging when stopping a debug, try the following:

First, unplug the network cable (or if you are running wireless, kill the wireless NIC). If this solves the problem, you have an issue with the certificate store check. If the IDE still hangs, remove all breakpoints in the code and try again. If there still is a problem, remove the .suo file and retry; it can be corrupt and cause a slowdown.

If you had a snappy IDE with the network cable unplugged, either leave it unplugged, or try one of the following solutions.

Before changing the settings for the certificate store, make sure that all unnecessary protocols are disabled in the network properties. Reports have been made that especially NetWare can interfere. This might solve your problem.

Change the certificate revocation setting in Internet Explorer. Open IE, go to Tools > Internet Options > Advanced > Security > Check for Publisher's Certificate Revocation and uncheck this option. This is a bit unsafe to do though so be aware.

Another solution is to edit the hosts file (\windows\system32\drivers\etc\hosts) to point the certificate server to your own machine, by adding an entry for it that resolves to 127.0.0.1. This is just as unsafe as changing the setting in IE, and it might end up being a change that you forget, which can cause grief in the future.

For me, changing the IE settings solved it and I can finally be happy when debugging in Visual Studio.

Tuesday, June 29, 2010

Build using msbuild instead of devenv

I have lately encountered a lot of build scripts that use devenv. I have a hard time seeing why, now that msbuild is available (and has been for quite a while). Even if you don't want to use the massive number of possibilities that msbuild brings with it, just swapping out the build part of the scripts can be worth it.

For example, the common line I tend to see is this:
DevEnv /rebuild Release %Solution%

A quick replace to this:

msbuild %Solution% /p:Configuration=Release /T:Rebuild

will give a 30% speed increase in my cases.
If the machine is equipped with several CPU cores (most are), the multicore-switch can give an extra boost:
msbuild %Solution% /p:Configuration=Release /T:Rebuild /m:2

Saturday, June 12, 2010

The expression that you have entered is not valid.

A notorious error message in BizTalk is the irritating "The expression that you have entered is not valid.". This message is shown in a seemingly random fashion when using expression shapes in orchestrations. The message will not make sense, since the expression entered is in fact perfectly valid, and it will seem impossible to make the message, with its red exclamation mark, go away.

I have found two solutions to this problem (neither permanent though). One is to simply close and reopen Visual Studio, which will not always work. The other is to comment out the line in question in the expression shape, build the project, and then uncomment the line again.

The reason the message can stick to the project is that it is written to the orchestration file itself. This is one of the few occasions where I've seen error messages saved to a source file, as can be seen in the snippet below:

scope longrunning transaction Scope_Trans
    #error "The expression that you have entered is not valid."
    TSFactory.Framework.Logger.Log(System.Diagnostics.TraceEventType.Information, System.String.Format("{0} Orchestration Start", tracePrefix), logCategory);

By commenting out the line and rebuilding, the error message will also be deleted from the source file and then completely disappear.

Microsoft has released a hotfix for this problem, but I cannot say that it fixed the issue for me, especially in the case of calling external assemblies from an expression shape. Neither did it make any measurable difference for my colleagues. The hotfix is available from Microsoft Support, however.

Thursday, May 20, 2010

BizTalk Performance - a short recap from BizTalk User Group Sweden 2010-05-19

Yesterday I attended the BizTalk User Group Sweden gathering in Stockholm, which this time focused on performance and optimization in BizTalk. A very nice seminar as always.

I didn't take any notes, but will write a short recap of the things that I remember the best from this evening. Some parts that I knew of, and some that I will explore further in some labs. So, here we go: BizTalk performance best practices:

  1. Plan your SQL storage
    This is an entire bookshelf of information on its own and something that most BizTalk developers won't come in touch with. However, it is crucial for the overall performance of the platform, and time and money should be spent here. Arranging LUNs, RAID arrays and similar hardware configurations is the base. It is also quite possible to increase performance by splitting the data files into several files and filegroups.
  2. Optimize SQL Server settings
    As the backbone of BizTalk, SQL Server optimization is a way to gain performance boosts. The most prominent example from yesterday's session was enabling the T1118 trace flag. The only downside seems to be a possible increase in physical data allocation.
  3. Turn off antivirus
    A risky move for some, and it might not play well with company policies. However, antivirus will eat a lot of the available processing power and speed.
  4. Use XslCompiledTransform for maps
    The current implementation of the mapping engine in BizTalk uses the old and deprecated XslTransform class. By wrapping calls to XslCompiledTransform instead, a very noticeable performance increase can be seen. At the session, about 2/3 of the execution time was removed from the first call to a map using a ~35MB XML file. The gain applies to small files as well as larger ones.
  5. Optimize the WCF endpoint bindings
    This was covered in some part, but I'll take a different but short approach on the subject. The default is to fetch data in buffered transfer mode, which enables WCF message security and reliability. However, if performance is needed, look into using streamed transfer mode instead. On larger messages, this will have an impact.
    Changing the message encoding can also give a performance boost. The default is text encoding, which with binary attachments/messages results in base-64 encoding and an increase in message size. MTOM or binary encoding will solve this.
    It is also interesting to look at the actual binding, since NET.TCP can utilize binary encoding and has the best performance. The penalty is severe though, since it is not interoperable and can be useless due to network restrictions.
  6. Watch those persistence points
    If you have more complex orchestrations, look at optimizing the number of persistence points. I have orchestrations that can make up to thirty calls to external services, and each of these calls results in a persistence point. I don't need to keep state and transactions in them, so by wrapping everything in an atomic scope, the number of writes to the database is vastly reduced. Sanket has a good explanation of persistence points on his blog.
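As a sketch of item 5, a basicHttpBinding switched to streamed transfer and MTOM encoding could look like this in the binding configuration (the binding name and size limit are illustrative, not taken from any actual deployment):

```xml
<bindings>
  <basicHttpBinding>
    <!-- Streamed transfer avoids buffering the whole message in memory;
         MTOM keeps binary payloads out of base64 text encoding -->
    <binding name="streamedMtomBinding"
             transferMode="Streamed"
             messageEncoding="Mtom"
             maxReceivedMessageSize="67108864" />
  </basicHttpBinding>
</bindings>
```

Note that streamed transfer mode is incompatible with most message-level security and reliability settings, so it is a trade-off rather than a free win.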
There is more to performance than this. Way more. But these are some of the simpler things to think of when working with BizTalk. The seminar was recorded, so both the slides and the entire presentation should be available in the near future.

Sunday, April 25, 2010

BizTalk configuration error: Failed to connect to the SQL database 'SSODB'

Sometimes when configuring a fresh installation of BizTalk Server, the following error might appear during the configuration of the Enterprise SSO:
Failed to connect to the SQL database 'SSODB' on SQL Server ''. (SSO) Additional information: (0x80131700) (Win32)

Most interesting is that the error complains about a failure to connect at the very point where the configuration is meant to create the database in question.

I found a solution through this blog post which, while written for Windows Vista and a slightly different scenario, applies to the problem above as well. For some reason the Enterprise SSO is not correctly installed, and by registering the SQL module and running the configuration tool again, it will work:

  1. Start a Visual Studio command prompt.
  2. Run regasm "c:\Program Files\Common Files\Enterprise Single Sign-On\SSOSQL.dll"
  3. Run the BizTalk Server configuration tool again

Saturday, April 17, 2010

Error: 'Microsoft.XLANGs.BaseTypes.BPELExportable': inconsistent duplicate module attribute

While working on a solution, we suddenly encountered the following error message when trying to build the code:
'Microsoft.XLANGs.BaseTypes.BPELExportable': inconsistent duplicate module attribute
It can be a nuisance to track down if you don't work with BizTalk's BPEL capabilities that often. It is, however, quite easy to fix. The error is due to one orchestration having a different Module Exportable setting than the other orchestrations in the project. Making sure that the setting matches across all orchestrations will make the error message disappear.

Thursday, March 25, 2010

BizTalk Server 2009 R2 is to be known as BizTalk Server 2010

There was an e-mail discussion today among a number of consultants at work regarding the upcoming BizTalk Server version. It was previously known as BizTalk Server 2009 R2 but was announced this week to be named BizTalk Server 2010.

There were a few concerned voices about what customers who just purchased BizTalk 2009 will think of Microsoft's name change. I myself cannot see a reason to be too concerned, and rather see it as the correct decision for both Microsoft and the customers that just got BizTalk Server 2009.

I believe (without having talked to anyone at Microsoft) that the reason for changing the name is mainly two-fold. Firstly, the name will sync with Visual Studio 2010. Secondly, and probably most importantly, the name change will reset the support cycle for the product. If a 2009 R2 release were made, its support cycle would be based on the release date of the 2009 version. Now it will be reset to the day this year when the 2010 version ships. Since BizTalk Server vNext will bring bigger changes to the platform, partly due to AppFabric, it's vital to have a longer support cycle for the versions released before vNext.

While 2010 will bring a few neat things that I'm looking forward to (a proper SFTP adapter and a new mapping tool, among others), the customers that have 2009 will still not miss out on anything major. It's certainly not worth delaying a roll-out (or upgrade) of the BizTalk platform based on the improvements we get this year.

So all in all, it feels like the correct move by Microsoft, and everybody should be happy.

Friday, February 26, 2010

BAM deployment error: SQL Analysis Services 2008 Enterprise Edition is not configured. Can not create OLAP cubes for RTAs.

If the BAM OLAP databases are not set up in SQL Server when deploying BAM definitions by running bm.exe deploy-all, the following error message might appear.

Deploying View... ERROR: The BAM deployment failed.
SQL Analysis Services 2008 Enterprise Edition is not configured. Can not create OLAP cubes for RTAs.
Run BizTalk Server 2009 Configuration and select BAM Tools. Make sure that Enable Analysis Services for BAM aggregations is selected and the Data stores valid.

Click Apply configuration after making the necessary changes, and then run bm.exe once again to deploy the definition.
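For reference, once the BAM tools are configured, redeploying looks something like this (the definition file name is illustrative):

bm.exe deploy-all -DefinitionFile:MyBAMDefinition.xml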

Thursday, February 11, 2010

Adding an xml-stylesheet reference (or comment) to your XML file in BizTalk

A colleague of mine asked for help with adding a stylesheet reference declaration to an outgoing XML file in BizTalk. The XML looked something like this:

<?xml version="1.0" encoding="utf-16"?>
<ns0:Report xmlns:ns0="…">
<ns0:Identification> …

But the desired output had an additional processing instruction declaring an xml-stylesheet:

<?xml version="1.0" encoding="utf-16"?>
<?xml-stylesheet type="text/xsl" href="System_ReportSchema_1.1.xslt"?>
<ns0:Report xmlns:ns0="…">
<ns0:Identification> …

The solution was to add the processing instruction to the XmlAsmProcessingInstructions property in the XML assembler pipeline properties on the send port. This property can also be used to add other data to your message, such as comments.
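For example, to emit both the stylesheet reference and a comment, the property value can hold several nodes in a row (a sketch; the comment text is illustrative):

```xml
<?xml-stylesheet type="text/xsl" href="System_ReportSchema_1.1.xslt"?><!-- Generated by BizTalk -->
```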


Saturday, January 16, 2010

Create child node only if input elements have appropriate data

I had a map where I needed to create the child element nodes only if some of the input nodes had a value. The rule was as follows:

For group messages, each person has to have a first name, surname, e-mail address and company. For sendlist messages, each person has to have a first name and/or surname, as well as a company.

The sending system would transmit messages that didn't conform to these rules, so I had to filter out the children that didn't adhere to them. Easy enough to do in the maps.

By using the Logical String functoid, I can check whether a node contains a string value, i.e. whether a first name or company is set for the person. The logical functoids (AND and OR) combine the results according to my rules, and I then use the output to decide whether to create the entire child in my output message.
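Conceptually, the XSLT that the mapper generates for the sendlist rule resembles the following (element names are illustrative, not taken from the actual schemas):

```xml
<!-- Create the Person child only when (FirstName or Surname) and Company have values -->
<xsl:if test="(string-length(FirstName/text()) &gt; 0 or string-length(Surname/text()) &gt; 0)
              and string-length(Company/text()) &gt; 0">
  <xsl:copy-of select="." />
</xsl:if>
```

The Logical String functoids supply the string-length tests, and the AND/OR functoids supply the boolean combination feeding the conditional.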