Monday, July 1, 2013

Unit testing Azure Table Storage Queries

I was thinking about how to unit test queries to Azure Table Storage. My first thought was to create a shim on the TableServiceContext object that would intercept queries and return what I want instead. In some cases this is fine, but there are cases where I'd really like to test whether the LINQ query I wrote is correct or not. For this, what I really wanted was the ability to create an IEnumerable object that contained the "contents" of my "table" for testing, then write a query that would filter my IEnumerable the same way the Azure Table Storage API would filter my real entities. Most of the solutions available online assume that you will create the query's return value before calling into the shim. This is fine for testing the run-up to the query, and for testing the code that happens after, but it does little to ensure the query itself is correct.

I set out to create a shim that would take an IEnumerable and filter it as though it were the results in Azure Tables. This is not a perfect reproduction of the Azure environment, but it works for many purposes. The main problem is that the CreateQuery<T> method in TableServiceContext returns a DataServiceQuery<T> object, which is not easily shimmed. However, if you are using a CloudTableQuery<T> object in your queries, you can shim the Execute method to get the outcome you want.

The really tricky part was how to run the query against your IEnumerable instead of the actual table. It turns out that one of the properties exposed by the IQueryable interface is Expression, which returns the expression tree being used to build the query. In a CloudTableQuery or DataServiceQuery object, the Expression is a MethodCallExpression. A little digging in the tree (by checking the Arguments property of the expression) turns up an Expression whose concrete type is UnaryExpression. This is the actual expression that will be used to filter the results. Against real Azure tables, it would be converted into the filter string included in the REST query, but there's no reason we can't apply it to our own IEnumerable instead.

How to do this? Easy. First, convert the IEnumerable to an IQueryable. Then cast the UnaryExpression's Operand to a LambdaExpression, pass it to the Where method on the IQueryable, and you're done.
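
In miniature, the trick looks like this (using names from the full code below; ue is the UnaryExpression dug out of the expression tree):

    var queryable = fakeTableEntries.AsQueryable();
    var lambda = (Expression<Func<MyEntityType, bool>>)ue.Operand;
    var filtered = queryable.Where(lambda);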

Additionally, if you don't call AsTableServiceQuery() on your queries, you're not entirely out of luck. You can pull a similar trick by putting a shim on DataServiceQuery<T>.GetEnumerator.

So after figuring out these two things, with a little bit of extra plumbing to set which objects are being used, here is the code I came up with.



[TestMethod]
public void here_is_my_test()
{
    // Build some in-memory entities to act as the "table" contents.
    IEnumerable<MyEntityType> fakeTableEntries = GenerateFakeTableEntries();

    using (ShimsContext.Create())
    {
        // The spy installs the shims and holds the fake table.
        TableContextSpy<MyEntityType> spy = new TableContextSpy<MyEntityType>();
        spy.AddRange(fakeTableEntries);

        // Run the code under test, which queries the "table"...
        DoQuery();

        // ...then assert on the outcome.
        AssertStuff();
    }
}

public class TableContextSpy<T> where T : TableServiceEntity
{
    SortedSet<T> FakeTable = null;

    public TableContextSpy()
    {
        IComparer<T> comparer = new EntityComparer<T>();
        FakeTable = new SortedSet<T>(comparer);

        ShimCloudTableQuery<T>.AllInstances.Execute = (instance) =>
        {
            // Get the expression evaluator.
            MethodCallExpression ex = (MethodCallExpression)instance.Expression;

            // Depending on how I called CreateQuery, sometimes the objects
            // I need are nested one level deep.
            if (ex.Arguments[0] is MethodCallExpression)
            {
                ex = (MethodCallExpression)ex.Arguments[0];
            }

            UnaryExpression ue = ex.Arguments[1] as UnaryExpression;

            // Get the lambda expression
            Expression<Func<T, bool>> le = ue.Operand as Expression<Func<T, bool>>;

            var query = FakeTable.AsQueryable();
            query = query.Where(le);
            return query;
        };

        ShimDataServiceQuery<T>.AllInstances.GetEnumerator = (instance) =>
        {
            // Get the expression evaluator.
            MethodCallExpression ex = (MethodCallExpression)instance.Expression;

            // Depending on how I called CreateQuery, sometimes the objects
            // I need are nested one level deep.
            if (ex.Arguments[0] is MethodCallExpression)
            {
                ex = (MethodCallExpression)ex.Arguments[0];
            }

            UnaryExpression ue = ex.Arguments[1] as UnaryExpression;

            // Get the lambda expression
            Expression<Func<T, bool>> le = ue.Operand as Expression<Func<T, bool>>;

            var query = FakeTable.AsQueryable();
            query = query.Where(le);
            return query.GetEnumerator();
        };
    }

    public void Add(T entity)
    {
        FakeTable.Add(entity);
    }

    public void AddRange(IEnumerable<T> items)
    {
        FakeTable.UnionWith(items);
    }
}

There are a couple of issues with this. First, it only handles queries; it does not handle adding, updating, or deleting. There are fairly simple ways to do that, however, by putting shims onto AddObject, UpdateObject, DeleteObject and SaveChanges, along the lines of the sketch below.
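
Here is a minimal sketch of that idea, which would go in the TableContextSpy constructor alongside the other shims. It assumes your Fakes assembly also generates ShimDataServiceContext, and the generated member names (like AddObjectStringObject) are an assumption based on how Fakes names shims for methods with parameters; check your generated Fakes assembly:

        ShimDataServiceContext.AllInstances.AddObjectStringObject = (ctx, tableName, entity) =>
        {
            // Adds go straight into the fake table.
            FakeTable.Add((T)entity);
        };

        ShimDataServiceContext.AllInstances.DeleteObjectObject = (ctx, entity) =>
        {
            FakeTable.Remove((T)entity);
        };

        // UpdateObject can be a no-op: the entity instance in the fake table
        // was already mutated in place by the code under test.
        ShimDataServiceContext.AllInstances.UpdateObjectObject = (ctx, entity) => { };

        // SaveChanges becomes a no-op too, since the fake table is updated immediately.
        ShimDataServiceContext.AllInstances.SaveChanges = (ctx) => null;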

Second, I have not tested this with queries that are built up from multiple Where clauses instead of a single expression. For example, if I were to do something like:

var query = from obj in CreateQuery<MyEntityType>(tableName)
            where obj.RowKey.CompareTo("foo") > 0
            select obj;
query = query.Where(obj => obj.PartitionKey == "pk");
query = query.Where(obj => obj.SomeOtherProperty == "someProp");

This might work with the code I posted above, but it may also fail terribly. In any case, this at least works for some simple queries.

Friday, April 19, 2013

ASP.NET Mixed-mode Authentication using Forms and Azure ACS in .NET 4.5

I recently found myself needing to upgrade an ASP.NET 4.5 application so that it could use Forms authentication and regular old Membership providers for one class of users, and Federated Identity with Azure ACS for another class of users. I assumed this would be fairly simple, because most web sites do something like this, but most of the information I found pointed to the idea that I'd need to write my own STS and do away with Membership entirely.

It's odd that this was complicated, because when you create a new ASP.NET application, the default login page says "Enter your username/password or log in with one of these providers". I know... that is ASP.NET's OAuth authentication, which is somehow different from using Azure ACS and Federated Authentication... The point is, it's not that complicated, but it took me a long time to find all the needed info, so I will compile it here.

Prerequisites

I will start by saying that there was one website in particular that really got me past the point where I was stuck, and it was this one: http://netpl.blogspot.com/2011/08/quest-for-customizing-adfs-sign-ing-web.html A lot of what is discussed in that post is the same kind of thing I did. The only problem is that it refers to .NET 3.5, and in .NET 4.5 Microsoft integrated WIF into the .NET Framework. This doesn't just mean the assembly names changed from Microsoft.IdentityModel to System.IdentityModel. It also turns out that the FederatedPassiveSignIn control that is the key to everything Wiktor sets up in his article no longer exists.

So here are the steps I took to get my system where I wanted it.
  1. Add references to System.IdentityModel and System.IdentityModel.Services
  2. Configure Azure ACS for your STS or other needs. This is a large topic in and of itself, but lots of documentation is available. Try starting here.
  3. Configure your STS using the Identity and Access Tool.
    1. Run the Identity and Access Tool by right-clicking your web project and selecting Identity and Access.
    2. Select "Use the Windows Azure Access Control Service" (or configure a different STS, I don't believe much of this is specific to Azure STS).
    3. Click the link that says "(Configure...)"
    4. Enter the ACS Namespace and the symmetric key for ACS Management:
      1. Go to https://yournamespace.accesscontrol.windows.net/
      2. Click Management Service
      3. Under "Credentials", select Symmetric Key
      4. There is a hidden key, with a button next to it that says "Show Key". Click it to get the key.
      5. Enter the Key in Visual Studio.
    5. Enter the Realm for your application.
  4. The Identity and Access Tool changed several things in web.config that we will need to undo (see the sketch after this list item).
    1. system.web/authentication with mode="Forms" was commented out and replaced with one where mode="None". Switch back to the one with "Forms".
    2. In system.webServer/modules an element was added: <remove name="FormsAuthentication" />. Comment out or remove this line. We actually still need the FormsAuthentication module, because we're still using Forms authentication.
    3. Everything else that the Identity and Access Tool changed can be left alone.
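    For reference, here is a minimal sketch of what the restored sections might look like (the loginUrl is whatever your application already used; the WIF sections the tool added are omitted):
        <system.web>
          <authentication mode="Forms">
            <forms loginUrl="~/Login.aspx" />
          </authentication>
        </system.web>
        <system.webServer>
          <modules>
            <!-- The <remove name="FormsAuthentication" /> element is gone: we keep the module. -->
          </modules>
        </system.webServer>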
  5. You need to add a way for your users to log in to the STS. Here is the simplest way to do it. On your Login.aspx page, add a button:
    <asp:Button runat="server" ID="btnSTSLogin" Text="Click here to log in via the STS" OnClick="btnSTSLogin_Click" />
    
    And in the code behind, add this:
            protected void btnSTSLogin_Click(object sender, EventArgs e)
            {
                System.IdentityModel.Services.FederatedAuthentication.WSFederationAuthenticationModule.SignIn("CxPortal");
            }
    
    What you put in that argument does not matter. Literally. Read the Intellisense help.
  6. Now here's the really tricky part. When you sign in with the WSFederationAuthenticationModule, it ONLY logs you in to the STS. Because we've gone back to using Forms authentication, you still need to set the Forms Authentication cookie. We do this by handling WSFederationAuthenticationModule's SignedIn event. The MSDN documentation suggests the best place to register this is in Global.asax under Application_Start. So here's the code I used:
            protected void Application_Start(object sender, EventArgs e)
            {
                // Set up Federated Authentication handlers.
                FederatedAuthentication.WSFederationAuthenticationModule.SignedIn += WSFederationAuthenticationModule_SignedIn;
            }
    
            void WSFederationAuthenticationModule_SignedIn(object sender, EventArgs e)
            {
                var principal = (ClaimsPrincipal)Thread.CurrentPrincipal;
                FormsAuthentication.SetAuthCookie(principal.Identity.Name, true);
            }
    
  7. If you handle the OnLoggedIn event in your Login control, you may need to do additional work to handle that. One consideration is that the HTTP Session is not available in the context of the Application_Start function, so you may have to get creative. I ended up checking on each request whether the user is initialized, and initializing them if not; a rough sketch follows. Your mileage may vary.
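    For example, something like this in Global.asax (UserIsInitialized and InitializeUser are hypothetical helpers standing in for whatever your application needs; by the PreRequestHandlerExecute stage, Session is available for page requests, unlike in Application_Start):
            protected void Application_PreRequestHandlerExecute(object sender, EventArgs e)
            {
                // Hypothetical sketch: lazily initialize authenticated users on
                // their first request, since Session is unavailable in Application_Start.
                if (Context.Request.IsAuthenticated && Context.Session != null
                    && !UserIsInitialized(Context.User.Identity.Name))
                {
                    InitializeUser(Context.User.Identity.Name);
                }
            }
    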
  8. Now you must handle logging out. There are two ways to log out: via your web application and via the STS. The basic idea is that you need to log the user out from the Forms authentication module AND from the Federated authentication module. How that is done depends on how the log out is taking place.
    1. Via the Application: When a user clicks the Logout control in the application, it logs them out from the Forms Authentication module automatically. If they were a Forms-only user to begin with, then you are done. But if they were a federated user, you have to log them out of the STS as well so that they are logged out of any other applications they are signed into. To do this, handle the OnLoggedOut event of the Logout control (probably on the Master Page) with the following code:
              protected void LoginStatus_LoggedOut(object sender, EventArgs e)
              {
                  // If the user is signed on using Azure ACS, do a Federated Log Out.
                  if (Page.User.Identity.AuthenticationType == "Federation")
                  {
                      var fam = FederatedAuthentication.WSFederationAuthenticationModule;
      
                      // initiate a federated sign out request to the sts.            
                      SignOutRequestMessage signOutRequest = new SignOutRequestMessage(new Uri(fam.Issuer), fam.Realm + "Default.aspx");
      
                      // ACS requires the "wtrealm" parameter for Federated Sign Out, so add it.
                      signOutRequest.SetParameter("wtrealm", fam.Realm);
      
                      // Get the actual signout URL.
                      string signOutUrl = signOutRequest.WriteQueryString();
      
                      Response.Redirect(signOutUrl);
                  }
              }
      
      There is a function in WIF called WSFederationAuthenticationModule.FederatedSignOut() that, in theory, should be usable for this. However, when I call it I get an error that the wtrealm parameter was not specified. The above code DOES work, so I use that instead.
    2. Via the STS: If the user logs out of the STS via some other application that is federated with your STS, they will need to be logged out of your application as well. The STS handles this by sending browser redirects to the various relying parties saying "Hey, you should sign out this user". When the redirected request is received by the application, it will automatically sign the user out from the Federated module, but not the Forms module. So we need to handle the WSFederationAuthenticationModule.SignedOut event, in the same way we handled the SignedIn event. Once again we go to Global.asax and modify Application_Start to register the event handler:
              protected void Application_Start(object sender, EventArgs e)
              {
                  // Set up Federated Authentication handlers.
                  FederatedAuthentication.WSFederationAuthenticationModule.SignedIn += WSFederationAuthenticationModule_SignedIn;
                  FederatedAuthentication.WSFederationAuthenticationModule.SignedOut += WSFederationAuthenticationModule_SignedOut;
      
                  ...
              }
      
              void WSFederationAuthenticationModule_SignedOut(object sender, EventArgs e)
              {
                  FederatedAuthentication.SessionAuthenticationModule.SignOut();
                  FormsAuthentication.SignOut();
              }
      
And that's basically it. There is more that you may need to add, depending on the needs of your application. You may want to add code that automatically creates user accounts when someone logs in from the STS for the first time. You will probably have to reproduce any initialization that happens in the OnLoggedIn event, as mentioned in step 7. But the basic flow works.

Tuesday, April 9, 2013

How to encrypt and decrypt Azure Autoscaling Block Rules Store and Service Information Store in code

The Azure Autoscaling Block (WASABi) offers a lot of configurability, but one common approach is to store your Rules Store and Service Information Store as XML files and put them up in blob storage. You can also encrypt these XML files and provide the Autoscaling Block with the thumbprint of the certificate that was used to encrypt them. This is all described in more detail here.

The thing is, I wanted to write a web frontend that would allow authorized users to modify the rules or the service information store as needed. That means, at the very least, the ability to encrypt and decrypt those files from code. It took me a while to figure this out, but you can directly access the encryption provider that the Autoscaling Block uses and call its Encrypt and Decrypt methods. If you write your own provider, or use one other than the Pkcs12ProtectedXmlProvider included in the Autoscaling Block, this won't work, but here is the idea:

        private string EncryptXml(string thumbprint, string xml)
        {
            Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Security.Pkcs12ProtectedXmlProvider provider = 
                new Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Security.Pkcs12ProtectedXmlProvider(
                    System.Security.Cryptography.X509Certificates.StoreName.My, System.Security.Cryptography.X509Certificates.StoreLocation.LocalMachine,
                    thumbprint, false);
            XmlDocument doc = new XmlDocument();
            doc.PreserveWhitespace = true;
            doc.Load(new StringReader(xml));

            XmlNode encrypted = provider.Encrypt(doc.DocumentElement);
            return encrypted.OuterXml;
        }

        private string DecryptXml(string thumbprint, string xml)
        {
            Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Security.Pkcs12ProtectedXmlProvider provider = 
                new Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Security.Pkcs12ProtectedXmlProvider(
                    System.Security.Cryptography.X509Certificates.StoreName.My, System.Security.Cryptography.X509Certificates.StoreLocation.LocalMachine,
                    thumbprint, false);
            XmlDocument xmlDoc = new XmlDocument();
            xmlDoc.PreserveWhitespace = true;
            xmlDoc.Load(new StringReader(xml));

            XmlNode decryptedNode = provider.Decrypt(xmlDoc.DocumentElement);
            return decryptedNode.OuterXml;
        }
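
As a quick usage sketch (the file paths and thumbprint below are placeholders; in practice the XML comes from and goes back to blob storage):

        string thumbprint = "YOUR-CERT-THUMBPRINT";
        string rulesXml = File.ReadAllText("rules.xml");

        // Encrypt before uploading to blob storage...
        string encrypted = EncryptXml(thumbprint, rulesXml);
        File.WriteAllText("rules.encrypted.xml", encrypted);

        // ...and decrypt after downloading, so the frontend can edit it.
        string decrypted = DecryptXml(thumbprint, encrypted);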

Tuesday, March 19, 2013

Load Performance Counters from an XML file

I have been playing recently with the Azure Autoscaling Block and Diagnostics (primarily using this article as a reference). One of the issues is creating performance counters. It seemed like it would be frustrating to add similar but ever-so-slightly different PowerShell scripts to every single role, and a bit easier to write a single script that adds counters based on an XML file. So I thought I'd share a little script that installs performance counters from a simple XML file. It will also automatically generate the Base counters you need to go along with certain counter types.

Here is the PowerShell script:
Param(
  [string]$counterFile
)

Function ParseCounterXMLFile
{

    [xml]$counters = Get-Content  $counterFile
    foreach ($category in $counters.counters.category)
    {
        $categoryName = $category.Name;

        # Delete the category if it already exists.
        $exists = [System.Diagnostics.PerformanceCounterCategory]::Exists($categoryName)
        if ($exists)
        {
            [System.Diagnostics.PerformanceCounterCategory]::Delete($categoryName)
        }
 
        $counterData = new-object System.Diagnostics.CounterCreationDataCollection

        # For each counter in the category, add it to the new category collection.
        foreach ($counterElement in $category.counter)
        {
            $name = $counterElement.Attributes.ItemOf("name").Value
            $type = $counterElement.Attributes.ItemOf("type").Value
            AddCounter $counterData $name $type

            $baseType = ""

            switch ($type)
            {
                "AverageTimer32" { $baseType = "AverageBase" }
                "AverageCount64" { $baseType = "AverageBase" }
                "CounterMultiTimer" { $baseType = "CounterMultiBase" }
                "CounterMultiTimerInverse" { $baseType = "CounterMultiBase" }
                "CounterMultiTimer100Ns" { $baseType = "CounterMultiBase" }
                "CounterMultiTimer100NsInverse" { $baseType = "CounterMultiBase" }
                "RawFraction" { $baseType = "RawBase" }
                "SampleFraction" { $baseType = "SampleBase" }
                default { }
            }

            if ($baseType)
            {
                AddCounter $counterData "$name Base" $baseType
            }
        }

        # Create the counters in this category.
        [System.Diagnostics.PerformanceCounterCategory]::Create($categoryName, $categoryName, [System.Diagnostics.PerformanceCounterCategoryType]::SingleInstance, $counterData)
    }
}

Function AddCounter($counterData, $name, $type)
{
    $counter = new-object System.Diagnostics.CounterCreationData
    $counter.CounterType = [System.Diagnostics.PerformanceCounterType] $type
    $counter.CounterName = $name

    # Pipe to Out-Null so the collection index returned by Add doesn't
    # leak into the function's output stream.
    $counterData.Add($counter) | Out-Null

    # Echo what was added, for logging.
    write $name $type
}

ParseCounterXMLFile

And here is an example XML file:
<counters>
    <category name="My Test Performance Counters">
        <counter type="AverageTimer32" name="Avg Timer Test" />
        <counter type="NumberOfItems32" name="Counter Test 1" />
    </category>
    <category name="My Test Performance Counters Cat 2">
        <counter type="NumberOfItems32" name="Counter Test 2" />
    </category>
</counters>

You can use this in Azure by adding a StartupTask that calls a .cmd file that issues this command:
powershell -ExecutionPolicy Unrestricted -File .\loadperformancecounters.ps1 .\PerfCounters.xml
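
The corresponding ServiceDefinition.csdef entry might look something like this (the .cmd file name is an assumption; elevated execution is needed to create performance counter categories):

<Startup>
  <!-- Runs once, elevated, before the role starts. -->
  <Task commandLine="loadperformancecounters.cmd" executionContext="elevated" taskType="simple" />
</Startup>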

Update: This was working in the Compute Emulator, but failed on Azure. Upon troubleshooting I discovered something very strange... Using the bracket operator on an XmlAttributeCollection does not work in whatever version of .NET or PowerShell the Azure instances use. In the end, all I had to do was replace two lines of code:

            $name = $counterElement.Attributes["name"].Value
            $type = $counterElement.Attributes["type"].Value

was changed to:

            $name = $counterElement.Attributes.ItemOf("name").Value
            $type = $counterElement.Attributes.ItemOf("type").Value

This change is reflected in the code up above. I have absolutely no idea why this makes a difference.

Thursday, February 14, 2013

Simple Cascading DropDown User Control


Sometimes you need two drop downs that work together, where the value you select in the first determines which values you see in the second. ASP.NET AJAX has the CascadingDropDown control to handle this, but it requires calls to the server. If the total number of possible options is small, it may be worth it to preload all the values and use javascript to determine which to display.

I have done this in a somewhat convoluted manner, but it seemed to be the best way from the research I did. The overall gist is that I create a drop down that has ALL the possible options, and hide it (I call this the "Source"). Then I create a second drop down that is empty, but visible (I call this the "Target"). Then, when the drop down which controls the filtering is changed, javascript code loops over all the options in the Source and copies the ones that match whatever criteria were specified on the server.


When the Target value is changed, the value selected is set as the selected value for PostBack, and to the calling code it looks like the user directly selected from the Source drop-down.

Here's how this works. First, the user control is very simple: just a select element. Originally I tried to use an asp:DropDownList control, but I got an error that effectively said controls can't be modified on the client without turning off some security features (ASP.NET's event validation), so I just made it a client-only control:

<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="DropDownFilterExtender.ascx.cs" Inherits="DropDownFilterExtender.DropDownFilterExtender" %>
<select id="<%= ClientID+"_targetDropDown" %>" onchange="<%= OnTargetChangeFunction %>" ></select>

The next step is to map the values from the Source drop down to their corresponding filter values. We do this in one of two ways. The first is to check whether the Source is data bound. If it is, you can try to read a property from each data item: I expose a "FilterProperty" property, which uses reflection to pull the filter value from that property when the Source is data bound. Otherwise you'll have to use an event to set the filter value yourself on your page. Here is the function that does that:

protected void AddFilterAttribute()
{
    if (!Page.IsPostBack)
    {
        IEnumerable<object> dataSource = null;
        if (SourceDropDown.DataSource != null && typeof(IEnumerable<object>).IsAssignableFrom(SourceDropDown.DataSource.GetType()))
            dataSource = SourceDropDown.DataSource as IEnumerable<object>;

        List<string> filterValues = new List<string>();

        for (int i = 0; i < SourceDropDown.Items.Count; i++)
        {
            DropDownFilterExtenderGetFilterValueEventArgs e = new DropDownFilterExtenderGetFilterValueEventArgs();
            ListItem listItem = SourceDropDown.Items[i];
            e.Item = listItem;
            if (dataSource != null)
            {
                // If we can get the actual data item out of the filter source, try it.
                e.DataItem = dataSource.ElementAt(i);
                if (!string.IsNullOrEmpty(FilterProperty) && e.DataItem != null)
                {
                    // GetPropertyOrIndexValueSafe is a small reflection helper
                    // (not shown here) that reads a property value without throwing.
                    object filterValue = e.DataItem.GetPropertyOrIndexValueSafe(FilterProperty);
                    if (filterValue != null)
                    {
                        e.FilterValue = filterValue.ToString();
                    }
                }
            }

            if (GetFilterValue != null)
            {
                GetFilterValue(this, e);
            }

            if (!string.IsNullOrEmpty(e.FilterValue))
            {
                filterValues.Add(e.FilterValue);
            }
            else
            {
                filterValues.Add("");
            }
        }

        string filterValuesJoined = string.Join(",", filterValues);
        Page.ClientScript.RegisterHiddenField(ClientID + "_filterValues", filterValuesJoined);
        ViewState[ClientID + "_filterValues"] = filterValuesJoined;
    }
    else
    {
        Page.ClientScript.RegisterHiddenField(ClientID + "_filterValues", (string)ViewState[ClientID + "_filterValues"]);
    }
}

Notice that I use a registered hidden field to accomplish this. On the client side, the way this works is that I loop over all the options in the source drop down in step with the elements in the filterValues hidden field. If the selected value from the filter drop down is the same as an element in the filterValues hidden field, the corresponding option is copied to the target drop down.

Note also that I save the filter values in ViewState. This is because, at least the way my code was set up, there was no guarantee that data binding would have happened on PostBack. If I store them in ViewState, then on PostBack I am guaranteed to use the same values for the filters.

So now, here is the javascript code for managing this on the browser:

/*
Called when the filter select drop down changes. Get the new type, then get the corresponding customer
type and filter the list by that.
*/
function onFilterSelectChange(sourceDropDownID, targetDropDownID, filterSelectDropDownID, includeNulls, thisClientID) {
    var filterSelectDropDown = document.getElementById(filterSelectDropDownID);
    var sourceDropDown = document.getElementById(sourceDropDownID);
    var targetDropDown = document.getElementById(targetDropDownID);

    var filterValue = filterSelectDropDown.value;

    if (filterValue)
        populateDynamicDropDown(sourceDropDown, targetDropDown, filterValue, includeNulls, thisClientID);

    sourceDropDown.disabled = false;
    targetDropDown.disabled = false;

}

/*
There is a hidden drop-down with all the customers in it. When this function is called,
take all the options where the "customertype" attribute is equal to the value in customerType
and copy them into the dynamicCustomerDropDown that is actually displayed.
*/
function populateDynamicDropDown(sourceDropDown, targetDropDown, type, includeNulls, thisClientID) {
    // First clear the dynamic drop down
    while (targetDropDown.hasChildNodes()) {
        targetDropDown.removeChild(targetDropDown.firstChild);
    }

    var showDefaultRow = document.getElementById(thisClientID + "_showDefaultRow").value;
    if (showDefaultRow == "True") {
        var defaultRowText = document.getElementById(thisClientID + "_defaultRowText").value;
        var defaultRowValue = document.getElementById(thisClientID + "_defaultRowValue").value;
        // Add the default option to the dynamic.
        targetDropDown.options[0] = new Option(defaultRowText, defaultRowValue);
    }

    var filterValuesString = document.getElementById(thisClientID + "_filterValues").value;
    var filterValues = filterValuesString.split(",");

    for (var i = 0; i < sourceDropDown.options.length; i++) {
        var copyOption = false;
        var opt = sourceDropDown.options[i];
        if (opt.value != null && opt.value != "") {
            var filterValue = filterValues[i];
            if (filterValue != null && filterValue.length > 0) {
                if (filterValue == type)
                    copyOption = true;
            } else if (includeNulls) {
                // If includeNulls is set, include an option that doesn't have the attribute at all.
                copyOption = true;
            }

            if (copyOption) {
                var newOpt = new Option(opt.text, opt.value);
                newOpt.selected = opt.selected;
                targetDropDown.appendChild(newOpt);
            }
        }
    }
}

/*
When the selection changes on the dynamic drop down with the subset of customers, set
the selected value on the original customer drop down so that the selection is reflected server-side.
*/
function onDynamicDropDownSelect(sourceDropDownID, targetDropDownID) {
    var sourceDropDown = document.getElementById(sourceDropDownID);
    var targetDropDown = document.getElementById(targetDropDownID);

    var selectedValue = targetDropDown.value;
    sourceDropDown.value = selectedValue;
}

So now the main thing that's left to do is put this all together in a control and register the various client functions with the page. I will mostly leave that as an exercise for a zip file: download a complete example. A rough sketch of the registration follows below. Sorry my write-up was so hasty. I am in somewhat of a hurry and probably shouldn't be wasting time blogging. Hopefully the example makes up for it.
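
For instance, the server-side wiring might look roughly like this (SourceDropDown, FilterDropDown, and IncludeNulls are hypothetical properties of the user control; the real wiring is in the downloadable example):

protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);

    // Emit the filter-value hidden fields described above.
    AddFilterAttribute();

    // Hypothetical wiring: when the filter drop down changes, repopulate the target.
    FilterDropDown.Attributes["onchange"] = string.Format(
        "onFilterSelectChange('{0}', '{1}', '{2}', {3}, '{4}');",
        SourceDropDown.ClientID,
        ClientID + "_targetDropDown",
        FilterDropDown.ClientID,
        IncludeNulls ? "true" : "false",
        ClientID);
}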

UPDATE: Playing around a bit more with this today, I found that this could be done better (and more browser-compatibly) with jQuery:


/*
There is a hidden drop-down with all the customers in it. When this function is called,
take all the options where the "customertype" attribute is equal to the value in customerType
and copy them into the dynamicCustomerDropDown that is actually displayed.
*/
function onFilterSelectChange(sourceDropDownID, targetDropDownID, filterSelectDropDownID, includeNulls, thisClientID) {
    // Clear the target drop down.
    $("#" + targetDropDownID).empty();

    var filter = $("#" + filterSelectDropDownID).val();

    var showDefaultRow = getHiddenFieldValue(thisClientID, "_showDefaultRow") == "True";
    if (showDefaultRow) {
        var defaultRowText = getHiddenFieldValue(thisClientID, "_defaultRowText");
        var defaultRowValue = getHiddenFieldValue(thisClientID, "_defaultRowValue");

        $("#" + targetDropDownID).append($("<option></option>")
                                 .val(defaultRowValue)
                                 .text(defaultRowText));
    }

    var filterValuesString = getHiddenFieldValue(thisClientID, "_filterValues");
    var filterValues = filterValuesString.split(",");

    var sourceDropDown = $("#" + sourceDropDownID + " option");

    sourceDropDown.each(function (i) {
        // $(this) is the current option selector
        var copyOption = false;
        if ($(this).val()) {
            var filterValue = filterValues[i];
            if (filterValue) {
                if (filterValue == filter)
                    copyOption = true;
            } else if (includeNulls) {
                // If includeNulls is set, include an option that doesn't have the attribute at all.
                copyOption = true;
            }
        }

        if (copyOption) {
            $("#" + targetDropDownID).append($(this).clone());
        }
    });

}

function getHiddenFieldValue(clientID, fieldID) {
    return $("#" + clientID + fieldID).val();
}


/*
When the selection changes on the dynamic drop down with the subset of customers, set
the selected value on the original customer drop down so that the selection is reflected server-side.
*/
function onDynamicDropDownSelect(sourceDropDownID, targetDropDownID) {
    var selectedValue = $("#" + targetDropDownID).val();
    $("#" + sourceDropDownID).val(selectedValue);

}