You’ll still write a good deal of code in ASP.NET 2.0.

Don’t completely trust those who say that ASP.NET 2.0 cuts 70% of the amount of code you’re called upon to write. You’ll end up writing more or less the same quantity of code, but you’ll write code of different quality. You’ll have more components and less boilerplate code to tie together pages and controls. Features like the provider model, data source controls, and master pages make coding easier and equally effective. But since there’s no magic behind them, you have to learn the implications of each feature you employ. In the end, ASP.NET 2.0 comes with code behind, not magic behind.

Compared to classic ASP, ASP.NET 1.x was no more and no less than a revolution. It changed everything-the component model, the syntax, the languages used to author pages, the runtime environment, and the tools to design and create applications. And like icing on a cake, ASP.NET 1.x endeavored, with good success, to remain as close as possible to-if not compatible with-the old code and old way of devising Web applications.

Today, most evangelists and influencers tend to present ASP.NET 2.0 as a major upgrade to ASP.NET 1.x. I mostly agree with this point of view. Overall, ASP.NET 2.0 introduces new controls, enhances the runtime environment, is richer in functionality and more customizable, but is fully backward compatible and doesn’t appear to developers as a brand new thing. Although extended, refined, and refactored, it’s the same old dog that has been taught a lot of new tricks. So far so good.

While this vision is logically correct, the amount of changes and improvements (read, the number of new tricks learned by the old dog) is astonishing. Let the numbers talk.

The core functions of ASP.NET are implemented in the system.web assembly. In ASP.NET 1.x, the system.web assembly contains 14 namespaces and 321 exported types. In ASP.NET 2.0, the number of namespaces grows to 22, while the number of exported types almost quadruples, to 1,121. Do you still want to call it a major upgrade?

In the end, ASP.NET 2.0 doesn’t change the programming model and doesn’t modify the pillars on which developers build Web applications with Microsoft .NET technologies. Everything else is different because it is a new feature or because it is the reworked and enhanced version of an existing feature.

The goal of this article is to provide an annotated overview of ten selected ASP.NET 2.0 features. You’ll find dos and don’ts, whys and wherefores, and the rationale behind them. For obvious space constraints, I cannot provide a detailed description of the required syntax nor a step-by-step guide to the feature and its usage. If you need this kind of reference, either check out the online MSDN documentation or get a copy of Programming Microsoft ASP.NET 2.0 Applications Core Reference (Microsoft Press, 2005), which should be hot off the press by the time you read this.

#1-DataGrid vs. GridView

It’s easy to guess that a couple of quite versatile data-bound controls will be a fixed presence in most real-world ASP.NET 2.0 applications-the DataGrid and GridView controls.

Both controls render a multi-column, templated grid and provide a largely customizable user interface with read/write options. In spite of the rather advanced programming interface and the extremely rich set of attributes, DataGrid and GridView controls simply generate an HTML table with interspersed hyperlinks to provide interactive functionalities such as sorting, paging, selection, and in-place editing.

Although customizable at will, grid controls feature a relatively rigid graphical model. The data bound to a DataGrid or GridView is always rendered like a table, that is, in terms of rows and columns. Nevertheless, the contents of the cells in a column can be customized to some extent using system-provided as well as user-defined templates.

The DataGrid is the principal control of most data-driven ASP.NET 1.x applications. Like all ASP.NET 1.x controls, the DataGrid is fully supported in ASP.NET 2.0 but is partnered with a newer control that is meant to replace it in the long run. The new grid control, GridView, is complemented by other view controls, including DetailsView and FormView. The GridView is a major upgrade of the ASP.NET 1.x DataGrid control. It provides the same basic set of capabilities plus a long list of extensions and improvements. When should you use one vs. the other?

For brand new ASP.NET 2.0 applications, choosing the GridView over the DataGrid is a no-brainer. For ASP.NET 1.x applications being upgraded and maintained, moving to the GridView doesn’t pose any significant issues and, more importantly, it positions you well for future enhancements. In the end, the DataGrid control’s time is over; it is supported in ASP.NET 2.0 only for backward compatibility, so that existing applications can continue to work when simply ported and recompiled in ASP.NET 2.0.

Compared to the DataGrid control, the syntax of the GridView is largely similar but certainly not identical. The classes you use for binding columns have different names, for example BoundField and HyperLinkField instead of BoundColumn and HyperLinkColumn. In addition, the GridView supports more field types including ImageField and CheckBoxField for images and Boolean values.

The GridView control simplifies the implementation of some features, such as the definition of edit and select buttons. To enable editing and selection, you simply set a couple of Boolean properties on the GridView, whereas the DataGrid requires you to add specific columns and handle related events.
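As a quick sketch of the difference (control IDs, data source, and handler names are purely illustrative), the GridView needs only two attributes where the DataGrid needs a dedicated column plus event handlers:

```aspx
<!-- GridView: two Boolean properties do the job -->
<asp:GridView ID="GridView1" runat="server"
     DataSourceID="SqlDataSource1"
     DataKeyNames="productid"
     AutoGenerateEditButton="true"
     AutoGenerateSelectButton="true" />

<!-- DataGrid: a specific column plus code-behind handlers -->
<asp:DataGrid ID="DataGrid1" runat="server"
     OnEditCommand="DataGrid1_Edit"
     OnCancelCommand="DataGrid1_Cancel"
     OnUpdateCommand="DataGrid1_Update">
   <Columns>
      <asp:EditCommandColumn EditText="Edit"
           UpdateText="Update" CancelText="Cancel" />
   </Columns>
</asp:DataGrid>
```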

Helper events that signal the creation of a new item, or the item’s binding to row data, have also been renamed and underwent some minor syntax changes. For example, the ItemCreated event of the DataGrid corresponds to the RowCreated event of the GridView. The event’s delegate is different too. In particular, the RowCreated event is a simple EventHandler delegate and doesn’t carry any additional information about what happened. The event is a mere notification that a grid’s element has been created. You have to figure out yourself which element was created. On the other hand, new properties return an object reference to each grid’s element-header, footer, data rows, top and bottom pagers-so that you can easily check which one was created and decide how to proceed.

protected void GridView1_RowCreated(
    object sender,
    EventArgs e)
{
    // Check if the footer row was created
    if (GridView1.FooterRow != null)
    {
       // Do something here
    }
}

The GridView’s RowCommand event matches the DataGrid’s ItemCommand event and indicates when a custom column button is pressed. The DataGrid’s event passes the index of the clicked button through the ItemIndex property. The GridView’s event uses a different approach based on the CommandArgument property, as shown below:

protected void GridView1_RowCommand(
    object sender, GridViewCommandEventArgs e)
{
    // Check the command name associated with
    // the button
    if (e.CommandName.Equals("Add"))
    {
       // Get the index of the clicked button row
       int index;
       index = Convert.ToInt32(e.CommandArgument);
    
       // Add the item to the shopping cart
       AddToShoppingCart(index);
    }
}

The GridView allows you to cache the value of multiple fields for each displayed row; with the DataGrid, you can cache only one field-the primary key, if any. In the GridView, the DataKeyNames property (an array of strings) supersedes the DataKeyField property of the DataGrid (a string). Both controls support the DataKeys property, which is a collection of DataKey objects for the GridView and a collection of key values for the DataGrid. Imagine you declare a GridView using this syntax:

<asp:GridView ID="GridView1" runat="server"
     DataSourceID="SqlDataSource1"
     DataKeyNames="productid,productname" />

In the page’s code file, you retrieve the productname field of the clicked row with the following code:

protected void GridView1_RowCommand(
    object sender, GridViewCommandEventArgs e)
{
  if (e.CommandName.Equals("Add"))
  {
     // Get the index of the clicked button row
     int index;
     index = Convert.ToInt32(e.CommandArgument);
    
     DataKey data = GridView1.DataKeys[index];
     ShoppingItem s = new ShoppingItem();
     s.ProductID = data.Values["productid"];
     s.ProductName = data.Values["productname"];
     :
  }
}

So far I’ve just taken for granted that ASP.NET 2.0 comes with two distinct and different grid controls, outlined the main differences, and basically suggested when to use which. Summarizing, you should always use the GridView in new ASP.NET 2.0 code and as much as possible in migrated code. The DataGrid remains available in ASP.NET 2.0 mostly to ensure compatibility.

Could the changes and enhancements slated for the GridView have been implemented directly in the DataGrid?

The key difference between DataGrid and GridView is elsewhere. In ASP.NET 2.0, a new family of server controls makes its debut-data source controls. These controls have no user interface but supply data to data-bound controls. Data source controls reduce the quantity of code you’re called upon to write in data-driven pages. Data source controls intelligently communicate with made-to-measure data-bound controls to provide data to display and save changes back to the physical data source in a two-way data binding mechanism. Simply put, the DataGrid and all ASP.NET 1.x controls were not designed to fully support data source controls. The DataGrid, in particular, supports data source controls only for display; it is not designed for two-way data binding. That’s why Microsoft created a new grid control. Data source controls are perhaps the most powerful new feature in ASP.NET 2.0 and I’ll cover them in more detail later.

What about paging and sorting? GridView and DataGrid work in a nearly identical way if both are bound to enumerable data sources like in ASP.NET 1.x. If bound to a data source control, the GridView control relies on the capabilities of the underlying data source control for paging and sorting. For this reason, the GridView control has no property to enable custom paging. If you need a custom algorithm to have the grid page through data, you have to code it in a stored procedure or a custom business class bound to a data source control.

#2-Cookied vs. Cookieless

In ASP.NET 1.x, the browser and the ASP.NET server-side environment can exchange the session ID in either of two ways-using a cookie or mangling the requested URL to incorporate the session ID string. Neither approach is free of issues. Not all browsers support cookies and, even when the browser does support cookies, the user might have disabled their use. If you opt for a cookieless approach, the session ID shows up in the address bar, making it potentially easy for attackers to hijack a session for as long as the session ID remains valid.
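In ASP.NET 1.x, the choice is an all-or-nothing Boolean in web.config; ASP.NET 2.0 extends the cookieless attribute to accept richer values, such as AutoDetect (a sketch; the mode and timeout values are illustrative):

```xml
<configuration>
  <system.web>
    <!-- ASP.NET 1.x: cookieless is a plain Boolean -->
    <!-- <sessionState cookieless="true" /> -->

    <!-- ASP.NET 2.0: AutoDetect probes the browser for
         cookie support and mangles the URL only if needed -->
    <sessionState mode="InProc"
                  cookieless="AutoDetect"
                  timeout="20" />
  </system.web>
</configuration>
```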

Session state management is not the only ASP.NET subsystem where cookies are employed. ASP.NET 1.x uses cookies to persist the authentication ticket for authenticated users. After checking the user’s credentials, the login page generates the ticket and attaches the cookie to the response. If the browser doesn’t support cookies, the user can hardly be authenticated.

In ASP.NET 2.0, the cookieless approach is also possible for forms-based authentication. Needless to say, using cookieless authentication is recommended only in very particular cases. The canonical example is an international security-sensitive application (i.e., an international Internet banking application) running in a country where cookies are prohibited by law. When you opt for cookieless authentication, make sure you also work on a secure channel (HTTPS) and with sliding expiration to mitigate the risk of replay attacks.
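A configuration sketch putting these recommendations together (the login URL and timeout are illustrative):

```xml
<configuration>
  <system.web>
    <authentication mode="Forms">
      <!-- Cookieless tickets travel on the URL; pair them
           with SSL and sliding expiration -->
      <forms loginUrl="login.aspx"
             cookieless="UseUri"
             requireSSL="true"
             slidingExpiration="true"
             timeout="10" />
    </authentication>
  </system.web>
</configuration>
```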

As mentioned, with cookieless sessions the risk is a session hijacking attack. The attack is not necessarily harmful per se; its effects mostly depend on what the attacker finds stored in the session state. If no sensitive data is cached in the session state, the risk of real damage is low.

The bottom line is that you should opt for cookied clients unless strict and precise requirements force you to set up a cookieless solution.

As a further note, consider that in ASP.NET 2.0 cookies are also used to cache role information for performance gains. When role management is enabled and a user connects, role information is retrieved, placed in a (configurable) cookie and attached to the response for next time. If cookies are disabled, role information is read from the data source on each and every request.
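In configuration terms, role caching is controlled through the roleManager section (a sketch; the timeout value is illustrative, while .ASPXROLES is the default cookie name):

```xml
<configuration>
  <system.web>
    <!-- Cache role information in a protected cookie to
         avoid hitting the role store on every request -->
    <roleManager enabled="true"
                 cacheRolesInCookie="true"
                 cookieName=".ASPXROLES"
                 cookieTimeout="30"
                 cookieProtection="All" />
  </system.web>
</configuration>
```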

#3-Cross-Browser Rendering

The Holy Grail of Web development is finding a magic combination of technologies to build applications that render and work the same whatever browser or device is used.

Some developers prefer to create two distinct applications for uplevel and downlevel browsers. Other developers tend to use CSS styles, to the extent that it is possible, to differentiate the rendered markup on a per-browser basis. Yet another group of Web developers has placed its hopes in a new ASP.NET 2.0 technology named adaptive rendering.

Adaptive rendering is the process by means of which server controls generate markup specifically crafted for a particular browser. An external component-the control adapter-takes care of generating the markup in lieu of the control itself. When it is about to render, each control figures out its own adapter and delegates the task to it.

The adapter for a control is resolved by looking at the browser capabilities, as configured in the .browser text files that accompany the ASP.NET 2.0 installation. You look for these files under the following path:

%WINDOWS%
  \Microsoft.NET
    \Framework
      \[version]
        \CONFIG
          \Browsers

If the browser record specifies an adapter class for the control, that class is instantiated and used. Otherwise, the adapter for the control is an instance of a generic adapter that simply generates the markup for a control by calling the rendering methods on the control itself.
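The association between browsers and adapters is declarative. A sketch of a custom .browser file that you could drop in the application’s App_Browsers folder (the adapter class name is hypothetical):

```xml
<!-- App_Browsers\MyAdapters.browser -->
<browsers>
  <!-- Extend the built-in Mozilla browser record with a
       custom adapter for the Menu control -->
  <browser refID="Mozilla">
    <controlAdapters>
      <adapter controlType="System.Web.UI.WebControls.Menu"
               adapterType="MyCompany.Adapters.CssMenuAdapter" />
    </controlAdapters>
  </browser>
</browsers>
```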

Writing an adapter component is not a trivial task because it requires that you understand the events in the lifecycle of both pages and controls to know where to apply your device-specific adaptation. In ASP.NET 2.0, a simpler form of adaptive rendering consists of declaratively assigning a browser-specific value to some control properties. Clearly, adaptive rendering and browser-sensitive rendering are two substantially different things. The former lets you adapt the behavior of the control to the browser. The latter is simply a declarative way of deciding which value to assign to individual properties based on the browser’s name. Here’s an example.

<asp:TextBox ID="TextBox1" runat="server"
     MaxLength="10"
     ie:MaxLength="10"
     netscape6to9:MaxLength="8"
     mozilla:MaxLength="7" />

The MaxLength property of the TextBox control will limit the buffer of the text box to ten characters if the page is viewed through Internet Explorer. If Firefox (or any Mozilla-powered browser) is used, the maximum length is 7; it is 8, instead, if a Netscape browser is used. For any other browser, the value of the unprefixed MaxLength attribute is used. All properties you can insert in a tag declaration can be flagged with a browser ID, and each supported browser has a unique ID. As in the preceding code, ie is for Internet Explorer, whereas mozilla is for Firefox and netscape6to9 is for versions of Netscape browsers ranging from 6.x to future versions. Check out the documentation for browser capabilities to find out the ID of a particular browser or device.

Browser-specific filtering is also supported for master pages. Much like a PowerPoint master slide, a master page is the ASP.NET page your displayed page is based upon. If you’re employing master pages to build pages in your site, you can have the ASP.NET runtime switch the master on the fly for you as the user connects via a different browser.

You design the master to address the specific features of the browser. In doing so, you create multiple versions of the same master, each targeting a different type of browser. How do you associate masters and browsers?

In the content page-that is, the page your users actually view-you define multiple bindings using the MasterPageFile attribute. Each binding is prefixed with the identifier of the browser. For example, suppose you want to provide ad hoc support for Internet Explorer and use a generic master for any other browser that users employ to visit the site. You use the following syntax.

<%@ Page MasterPageFile="Base.master"
    ie:MasterPageFile="ieBase.master"
    netscape6to9:MasterPageFile="nsBase.master" %>

ASP.NET will use the ieBase.master file for Internet Explorer and nsBase.master for Netscape browsers from version 6.x on. In any other case, ASP.NET will use a device-independent master (Base.master). When the page runs, the ASP.NET runtime automatically determines which browser or device the user is using and selects the corresponding master page, as shown in Figure 1.

Figure 1: The page may change its structure and logic entirely to reflect the capabilities of the underlying device.

Table 1 shows the ID of the most common desktop browsers and mobile devices such as cellular phones and personal digital assistants (PDAs). It is important to note that if you use device-specific masters, you must also indicate a device-independent master to stay on the safe side and avoid anomalies in case a user uses a non-supported browser.

#4-Post-Cache Substitution

When you enable output caching on a given page, the page’s response will be cached for a specified time so that successive requests can be served without executing the code behind the page. This simple trick transforms the original ASP.NET page into something close to a static HTML page as far as performance is concerned.

Output caching is flexible enough to let you store distinct versions of the page for each combination of HTTP headers or query string parameters. The output of the page can be cached on the Web server, the client, or any proxy server downstream, such as Microsoft ISA Server. The caching location is just one of the parameters you can set when you configure output caching for a page.
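For example, the directive below (duration and parameter name are illustrative) caches the page on the server for 60 seconds, with a distinct copy per value of the id query string parameter:

```aspx
<%-- Keep one cached copy per value of "id", server side only --%>
<%@ OutputCache Duration="60"
    VaryByParam="id"
    Location="Server" %>
```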

To cache the whole page, you simply add the @OutputCache directive on top of the page. This is not the only option you have, though. If needed, you can move the cacheable regions of your page into a user control and configure the user control to support output caching. These are the two options you have as of ASP.NET 1.x. What happens if you want to cache all of the page except for a few regions? In ASP.NET 2.0 you can do this as well, thanks to post-cache substitution. For example, using this mechanism, an AdRotator control can serve a different advertisement on each request even if the host page is cached.

To use post-cache substitution, you place the new <asp:substitution> control at the page location where content should be substituted and set the MethodName property of the control to a callback method. Here’s a quick example.

<form id="form1" runat="server">
   <h3>The output you see has been generated at:
       <%=DateTime.Now.ToString() %>
       and is valid for 30 seconds</h3>
   <hr />
   This content is updated regularly
   <h2><asp:Substitution ID="Substitution1"
            runat="server"
            MethodName="WriteTimeStamp" /></h2>
   <hr />
   This is more static and cached content
   <asp:Button runat="server" Text="Refresh" />
</form>

You must set the MethodName property to the name of a static method that can be encapsulated in an HttpResponseSubstitutionCallback delegate:

public static string WriteTimeStamp(
    HttpContext context)
{
    return DateTime.Now.ToString();
}

Whatever string the method returns will be rendered out and becomes the output of the Substitution control. Note also that the callback method must be thread-safe and static.

Using post-cache substitution automatically disables client caching and forces the page to use server caching. This is reasonable as the markup replacement can only happen through server-side code. The page can’t rely on IIS 6.0 kernel mode caching and its benefits. Again, this is reasonable as some server-side work is required to serve the page. In light of this, the page can’t be served by IIS without hitting ASP.NET. Finally, note that the Substitution control works even if the page doesn’t use page output caching. In the end, Substitution is a mere server-side control and is processed as usual. In a non-cached page, your callback will be called at rendering time to contribute to the response.
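Incidentally, the Substitution control is a thin wrapper around a new method of the Response object, WriteSubstitution, which you can also invoke directly from inline code (a sketch, reusing the WriteTimeStamp callback shown above):

```aspx
<%-- Equivalent to the <asp:Substitution> tag: the callback
     runs on every request, cached page or not --%>
<% Response.WriteSubstitution(
       new HttpResponseSubstitutionCallback(WriteTimeStamp)); %>
```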

More important than ever, you should avoid referencing the Substitution control from within the callback. If you do so, the callback will hold on to the control and to the page containing it. In the end, the page instance won’t be garbage-collected until the cached content expires.

#5-The Provider Model

The provider model is one of the most important and critical aspects of ASP.NET 2.0. A comprehensive understanding of the provider model is critical for an effective design and implementation of cutting-edge applications. The provider model is formalized in ASP.NET 2.0 but, at the end of the day, it is an implementation of the strategy design pattern-a pattern not strictly related to .NET and ASP.NET. Once you get hold of the basic idea, you can start using the provider model in any application. In particular, you can (and to some extent, you should) use it to architect your own ASP.NET 2.0 applications.

Briefly defined, the strategy pattern separates a behavior from the strategy you intend to use to implement it. As an example, consider the behavior of sorting data. You can sort data through any of an array of algorithms such as Quicksort, Heapsort, or Mergesort. Each sort algorithm represents a different, but equally valid, strategy. Whatever strategy you choose, the observable behavior of the code remains intact-that is, the code still sorts its data.
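In C# terms (all class names here are purely illustrative), the pattern boils down to coding against an interface and leaving the concrete algorithm pluggable:

```csharp
using System;
using System.Collections.Generic;

// The behavior: sorting. Each class below is a different,
// equally valid strategy for implementing it.
public interface ISortStrategy
{
    void Sort(List<int> data);
}

public class QuickSortStrategy : ISortStrategy
{
    public void Sort(List<int> data)
    {
        // Delegate to the framework's built-in sort
        data.Sort();
    }
}

public class InsertionSortStrategy : ISortStrategy
{
    public void Sort(List<int> data)
    {
        // A naive insertion sort: same observable behavior,
        // different strategy
        for (int i = 1; i < data.Count; i++)
        {
            int value = data[i], j = i - 1;
            while (j >= 0 && data[j] > value)
            {
                data[j + 1] = data[j];
                j--;
            }
            data[j + 1] = value;
        }
    }
}

// The client codes against the interface and never cares
// which strategy is plugged in
public class DataSorter
{
    private ISortStrategy _strategy;
    public DataSorter(ISortStrategy strategy)
    {
        _strategy = strategy;
    }
    public void Sort(List<int> data)
    {
        _strategy.Sort(data);
    }
}
```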

The most notable feature of the strategy pattern is that it provides a way for a subsystem to expose its internal plumbing so that a client can unplug the default implementation and plug its own in.

This is exactly what happens in ASP.NET 2.0 for a number of services, including membership, roles, state management, personalization, and site maps. The ASP.NET provider model is nothing more than the ASP.NET implementation of the strategy pattern.

The best thing you can do in your own applications is devise each feature that needs to read data from, or write data to, a data source according to the strategy pattern. You define a public and immutable programming interface and make your top-level code interact only with that. Such a programming interface is well-represented with a static class exposing a bunch of public methods. Under the hood, each method will locate the current strategy class providing the requested behavior. How the strategy class is located and the details of the programming interface are entirely up to you.

If you’re doing this in ASP.NET 2.0, you might want to derive strategy classes (i.e., providers) from the official base provider class-ProviderBase. All provider classes in ASP.NET inherit from ProviderBase. This is not a functionally rich class so you might want to create an intermediate class named MyFeatureProviderBase or something like that, and make this class extend ProviderBase with the programming interface you need for the specified feature. Trust me, this is easier done than explained and will give you unprecedented geek pleasure.
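A minimal sketch of that layering, built around a hypothetical "quote of the day" feature (all names below are illustrative; in a real application you would resolve the provider from a configuration section rather than hard-code it):

```csharp
using System.Configuration.Provider;

// Intermediate base class: extends the thin ProviderBase
// with the contract of your feature
public abstract class QuoteProviderBase : ProviderBase
{
    public abstract string GetQuoteOfTheDay();
}

// A concrete strategy; where the quote comes from (database,
// file, Web service) is an implementation detail
public class XmlFileQuoteProvider : QuoteProviderBase
{
    public override string GetQuoteOfTheDay()
    {
        // Read from an XML file in a real implementation;
        // hard-coded here to keep the sketch self-contained
        return "So far so good.";
    }
}

// The public, immutable facade your pages talk to
public static class QuoteManager
{
    // Resolve this from web.config in a real application
    private static QuoteProviderBase _provider =
        new XmlFileQuoteProvider();

    public static string GetQuoteOfTheDay()
    {
        return _provider.GetQuoteOfTheDay();
    }
}
```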

#6-Session Data Stores

In ASP.NET, you can store session state in a variety of places that you can configure offline through the web.config file. By default, the session state is held in the Web server’s memory, precisely in the Cache object. Alternatively, you can ask the ASP.NET runtime to store session state in the memory of a remote server or in a made-to-measure SQL Server database.

Most developers tend to forget the implications of storing session data out-of-process. There are two types of possible implications: data marshaling and configuration.

If session data is managed by a process different from the ASP.NET worker process, the application has to pay the extra price of marshaling data from the external process to the worker process. The ASP.NET infrastructure has to read, marshal, and process data.

The cost of reading data is merely the cost of I/O. This cost is higher if a system file, like a database file, is involved. The cost of data marshaling is the cost of serializing data using the .NET Framework formatter classes. The cost of processing data is any cost involved with copying the session state from the external source to the HTTP context of the ongoing request.

In ASP.NET 1.x, you can store data outside the worker process in either of two ways: state server and SQL Server. In the former case, data is managed by a Windows service (aspnet_state.exe) that requires manual start and runs under the ASP.NET process account. The communication between ASP.NET and this process occurs through a fixed port over TCP. The data is stored in the memory of the state process and is subject to any failure of the service. The cost of I/O is minimal just because data lives in memory. If SQL Server is used, data is stored in a persistent database with a higher I/O cost.

Data is serialized back and forth using type-specific algorithms. Simple data types such as numbers, strings, dates, characters, bytes, and arrays of these types are serialized using a tailor-made serializer that optimizes the time required to save data of those types. Other types are first checked for a type converter class-a class that can convert the contents of the class into a string. If no type converter is found, the class is serialized using the .NET Framework binary formatter class. If the class is not serializable, an exception is thrown. The binary formatter is the most efficient formatter available, but is not all that fast compared to other techniques that can be used to serialize the contents of a class. The reason lies in the fact that the binary formatter is designed to service any managed class handling complex scenarios such as circular references. A formatter specifically designed for a limited group of classes can easily outperform the binary formatter. What’s the lesson?

When remote session state is enabled, you should avoid using complex data types such as DataSet or custom classes. The more you can resort to strings and numbers, the better performance you’ll see. To put some numbers on the discussion, consider a 15% overhead if you connect to a state server and a 25% overhead if you use a database table to store the session state. If binary serialization is required, expect to see these numbers rise a little more.

The trade-off here is clear: speed versus robustness. If you’re looking for extreme robustness-for example, the capability of surviving IIS crashes-consider the following, additional issues.

You need a machine (not necessarily the local Web server machine) that contains an installation of SQL Server 7.0 or any newer version. You have to buy a SQL Server license if you don’t already have one. This might be good for Microsoft, but not necessarily for you or your client. If your application, say, is based on Oracle or DB2, couldn’t you simply store your session state to a database in the DBMS of choice? The answer is a disappointing no in ASP.NET 1.x, but a resounding yes in ASP.NET 2.0!

In ASP.NET 2.0, you can create a custom session data store that, according to the provider model, plugs into a common infrastructure and API and implements data storage and retrieval on top of the file system, a custom DBMS, or a customized table in SQL Server. In ASP.NET 2.0, the session state subsystem has been refactored to allow developers to replace most of the functionality. You have the following options to customize session state management.

  • You can stay with the default session state module but write a custom state provider to change the storage medium (e.g., a non-SQL Server database or a different table layout). In doing so, you can also override some of the collection classes used to bring data from the store to the Session object and back.
  • You can stay with the default session state module but replace the session ID generator.
  • You can unplug the default session state HTTP module and install your own. Technically this was possible in ASP.NET 1.x but this option is the last you should consider. Obviously, it provides the maximum flexibility but it is also extremely complicated and thus not recommended unless it proves strictly necessary.
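The first option, a custom state provider, requires only configuration once the provider class is written (the provider name and type below are hypothetical):

```xml
<configuration>
  <system.web>
    <!-- Plug a custom state provider into the default
         session state module -->
    <sessionState mode="Custom"
                  customProvider="OracleSessionProvider">
      <providers>
        <add name="OracleSessionProvider"
             type="MyCompany.Web.OracleSessionStateStore" />
      </providers>
    </sessionState>
  </system.web>
</configuration>
```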

Using a database as the session state data store also poses configuration issues, especially in ISP scenarios. The SQL Server environment needs to be extended to accommodate a new database and its tables, stored procedures, and related jobs. For this task, administrative privileges are required or the collaboration of the DBA.

In ASP.NET 1.x, credentials used to access session state stored in SQL Server depend on the connection string. If explicitly provided, user name and password will be used to access the database. Otherwise, if integrated security is requested, the account of the currently logged in client is used. This approach poses some administrative issues for intranet sites using client impersonation. In these cases, in fact, you have to grant access to the database to every client account that might be making calls. In ASP.NET 2.0, you can configure the session state so that the identity used to access SQL Server corresponds to the account running the ASP.NET worker process. This new optional behavior for SQL Server-based session state is consistent with the behavior of the other remote state server-the one using the Windows service.

#7-Cross-Page Posting

Almost every Web developer moving to ASP.NET from classic ASP found, in the beginning, that it was quite weird to be limited to a single form. More exactly, ASP.NET requires that only one server-side form be enabled at a time. You can place multiple server forms on an ASP.NET page so long as only one is active. You can also place as many client HTML forms as needed, with no limitation at all. A client HTML form is a <form> tag devoid of the runat=server attribute.

Multiple forms are useful to implement common features such as a search or a login box. On the other hand, the single form model was chosen as one of the pillars of ASP.NET because it forces page self-posting and makes the implementation of cross-request state dramatically easier. In ASP.NET 1.x, posting to another page is possible only using client HTML forms or redirecting the browser to the desired page after storing parameters to the session state. If you post using a client HTML form you can’t use viewstate and the type-safe programming model of ASP.NET; you are forced to use the Request’s Form or QueryString collections as in classic ASP.

In ASP.NET 2.0, you’ll find a more structured approach to cross-page posting that, although different from classic form posting, allows you to pass values to another page through the ASP.NET postback mechanism.

In ASP.NET 2.0, you can instruct certain button controls to post to a different target page. Button controls that support cross-page posting are those that implement the IButtonControl interface-Button, ImageButton, and LinkButton controls.

Authoring a Web page that can post data to another page requires little effort: you simply identify the controls that can post back and set their PostBackUrl property to the desired target URL. Here’s how to proceed:

<form id="form1" runat="server">
   <asp:TextBox runat="server" ID="Data" />
   <asp:Button runat="server" ID="buttonPost"
        Text="Click"
        PostBackUrl="target.aspx" />
</form>

When the PostBackUrl property is set, the ASP.NET runtime binds the corresponding HTML element of the button control to a new JavaScript function. Instead of using our old acquaintance __doPostBack, it uses the new WebForm_DoPostBackWithOptions function. As a result, when the user clicks the button, the form posts its content to the specified target page.

What about the view state? When the page is enabled for cross-page posting, a new hidden field named __PREVIOUSPAGE is also created. The field contains the view state information to be used to serve the request to the new page. This view state information is transparently used in lieu of the original view state of the page being posted to.

You use the new PreviousPage property to reference the posting page and all of its controls.

void Page_Load(object sender, EventArgs e)
{
  // Retrieves posted data
  TextBox t;
  t = (TextBox) PreviousPage.FindControl("Data");
  :
}

Access to input controls is weakly typed and occurs indirectly through the FindControl method because the target page doesn’t know anything about the type of the posting page.

If you know exactly who will be calling the page, you can flag the target page with the @PreviousPageType directive. This causes the PreviousPage property to be typed to the right source page class.

<%@ PreviousPageType
    VirtualPath="crosspostpage.aspx" %>
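
With the directive in place, PreviousPage is typed to the source page class, so you can trade FindControl for direct, compile-time-checked access. A sketch, assuming the source page exposes its text box through a hypothetical public property:

// In the code file of crosspostpage.aspx: expose the control
public TextBox DataBox
{
    get { return Data; }
}

// In the target page: strongly typed access, no casting
void Page_Load(object sender, EventArgs e)
{
    string text = PreviousPage.DataBox.Text;
}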

Passing values from one page to another is a task that can be accomplished in a variety of ways-using cross-page posting, server transfer, HTML forms, or query strings. Which one is the most effective for developers?

Note that in ASP.NET 2.0, cross-page posting as well as server transfers (the Server.Transfer method) offer a familiar programming model but potentially move a significant chunk of data through the __PREVIOUSPAGE field. In other words, these approaches send values for all the input fields in the source page. In many cases, the target page just needs to receive a few parameters to start working. If this is the case, HTML client forms might be more effective in terms of data being moved. HTML forms, though, require an ASP-like programming model. In my opinion, all things considered, this is the most effective approach: less data moved, same effect.
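
A minimal sketch of this lighter approach (target.aspx and the field name are hypothetical)-the client HTML form sends only the parameters the target actually needs:

<!-- No runat=server: a plain client HTML form -->
<form method="get" action="target.aspx">
   <input type="text" name="q" />
   <input type="submit" value="Search" />
</form>

// In target.aspx: pick up the lone parameter
void Page_Load(object sender, EventArgs e)
{
    string query = Request.QueryString["q"];
}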

#8-Script Callbacks

All Web developers dream of an environment that allows them to execute code on the server and update the page without a full refresh. This kind of feature is not natively part of the HTTP world, so to get it working you must be ready to pay a price. In the first place, you need a browser that can issue HTTP calls out of band. Internet Explorer 5.0 and newer versions come equipped with an ActiveX component named XMLHTTP. In spite of a name that evokes XML, this object is little more than a thin object model built around the HTTP protocol. There’s nothing related to XML in what it does, except perhaps the fact that it can carry XML data. Inspired by this object, released in the spring of 1999, the Mozilla team created a clone named XMLHttpRequest and baked it into the DOM of Mozilla browsers. More recently, Opera did the same, so that today virtually all browsers have the capability of issuing HTTP commands out of band.

This enables server-side platforms like ASP.NET to generate script code to execute server tasks without leaving the current page. When the operation completes, results are carried back as plain strings to the client and a JavaScript callback takes care of refreshing the user interface of the page (or a control) using the Dynamic HTML object model supported by the browser.

ASP.NET 2.0 provides this feature through script callbacks-an interface (ICallbackEventHandler) that pages and/or controls implement to enable out-of-band calls. When enabled, the feature makes the ASP.NET runtime inject tailor-made script code to trigger the remote call.
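
In its simplest form, a page implements the ICallbackEventHandler interface and registers the trigger script itself. The sketch below returns the server time as a plain string; the client-side function names (DoCall, ReceiveResult) are hypothetical.

// A page enabled for script callbacks (ASP.NET 2.0)
public partial class SamplePage : Page, ICallbackEventHandler
{
    private string _result;

    protected void Page_Load(object sender, EventArgs e)
    {
        // Get the script that fires the out-of-band call;
        // ReceiveResult is a JavaScript function you define in the page
        string callRef = ClientScript.GetCallbackEventReference(
            this, "arg", "ReceiveResult", "null");
        ClientScript.RegisterClientScriptBlock(GetType(), "DoCall",
            "function DoCall(arg) { " + callRef + "; }", true);
    }

    // Runs on the server when the out-of-band call arrives
    public void RaiseCallbackEvent(string eventArgument)
    {
        _result = DateTime.Now.ToString();
    }

    // The plain string handed back to the JavaScript callback
    public string GetCallbackResult()
    {
        return _result;
    }
}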

Script callbacks work nicely in ASP.NET 2.0 and are not that hard to implement in ASP.NET 1.x. Check out the “Cutting Edge” column in the August 2004 issue of MSDN Magazine for an example. The rub lies in the fact that in their native form, script callbacks add yet another layer of code to your application. It’s not object-oriented code; it’s not modular code; it’s simply a tangle of function calls and methods that span the client and the server. But it works on a good number of browsers-Pocket Internet Explorer is the most popular browser for which it poses some issues. NetFront is an alternate Pocket PC browser that overcomes the limitation of Pocket Internet Explorer.

AJAX.NET is an open-source library that is gaining popularity these days because it lets you do server-side programming from the client in a more structured way. AJAX.NET provides two key benefits over the ASP.NET callbacks. First, it allows you to pass real objects to the JavaScript callback instead of plain strings. Second, it automatically generates any script code that’s needed.

By the time you read this, Microsoft should also have disclosed a lot of details regarding Atlas-the next (and more structured) version of script callbacks for ASP.NET 2.0.

Regarding the use of script callbacks-whatever library you use-there are two considerations to bear in mind, one about security and one about page creation. Users navigating with Internet Explorer have to lower the security level at least enough to allow calls to ActiveX controls marked safe for scripting. This is not required on other browsers. In all cases, you must be able to merge the return values of server calls with the DOM of the current browser. This means that you should stay close to the methods and properties in the W3C HTML 4.0 standard.

#9-Master Pages

Probably the most-hyped new feature in ASP.NET 2.0, master pages are a rather effective way to build similar-looking pages. A master page is a distinct page with a .master extension that standard .aspx pages can reference. A master page contains the layout of the page: static parts shared by all derived pages, plus dynamic regions that each derived page can customize at will.

A page based on a master is radically different from any other ASP.NET page. It looks like a collection of blocks that the ASP.NET runtime will use to fill the holes in the master. As Listing 1 shows, the master page contains one or more <asp:contentplaceholder> tags for which derived pages (called content pages) will provide content. Listing 2 shows a sample content page.

A content page contains <asp:content> tags, each matching one of the placeholders in the master. The contents of the <asp:content> tag are used to replace the master’s placeholder when assembling the final markup for the user. This is the essence of master pages in ASP.NET 2.0. Visual Studio .NET 2005 also does a good job of providing page authors with a preview of the final page merging master and content pages. Some considerations apply to master pages.

True visual inheritance à la Windows Forms is not a goal of ASP.NET 2.0 master pages. The contents of a master page are merged into the content page, and they dynamically produce a new page class that is served to the user upon request. The merge process takes place at compile time and only once. In no way do the contents of the master serve as a base class for the content page. There’s no way for the page code file to incorporate functions defined on the master. Likewise, trace and debug options, imported namespaces, languages, and whatever settings you set in the @Master directive are not copied at the page level.

You can set the binding between the master and the content at the page level, and also at the application or folder level. Application-level binding means that you link all the pages of an application to the same master. You configure this behavior by setting the MasterPageFile attribute in the <pages> element of the principal web.config file.

<configuration>
    <system.web>
        <pages masterPageFile="MyApp.master" />
    </system.web>
</configuration>

If the same setting is expressed in a child web.config file-a web.config file stored in a site subdirectory-all ASP.NET pages in that folder are bound to the specified master page.

Having masters automatically associated with pages is tempting, but consider the following before you do it. A folder configured to use a given master can’t contain standard ASP.NET pages that are not bound to the master. At no time can you add a new page to the folder that is not designed to support that particular master.

Personally, I consider master pages to be a purely visual feature. If you need to derive pages that incorporate and inherit predefined behaviors, master pages are not the right feature. In this case, you’re better off building your own hierarchy of page classes through the classic OOP mechanism and code file classes.
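
A minimal sketch of such a hierarchy, with hypothetical class names: pages inherit behavior from a common base class rather than merging markup from a master.

// A base class that centralizes shared page behavior
public class MyBasePage : System.Web.UI.Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        // Behavior inherited by every derived page,
        // such as logging or security checks
    }
}

// Any page's code file can now derive from MyBasePage
public partial class HomePage : MyBasePage
{
}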

#10-Data Source Controls

In ASP.NET 2.0, data-bound controls can receive their data in two ways: through an enumerable data source object, as in ASP.NET 1.x, or through a data source control. A data source control is a server control designed to interact with data-bound controls and hide the complexity of the manual data-binding pattern. A data source control saves the developer from writing binding code explicitly, as ASP.NET 1.x too often requires.

Data source components not only provide data to controls, they also support data-bound controls in the execution of other common operations such as insertions, deletions, sorting, and updates. Each data source component wraps a particular data provider-relational databases, XML documents, or custom classes. The support for custom classes means that you can now directly bind your controls to existing classes in your business or data access layer.

Existing data-bound controls such as DataGrid and Repeater don’t take full advantage of data source controls. Only ASP.NET 2.0-specific controls such as GridView, FormView, and DetailsView benefit from the true power of data source controls. This is because new controls have a different internal structure specifically designed to deal with data source controls and share with them the complexity of the data-binding pattern.

A data source control represents one or more named views of data, each of which manages a collection of data. The data associated with a data source control is managed through SQL-like operations such as SELECT, INSERT, DELETE, and COUNT, and through capabilities such as sorting and paging. Data source controls come in two flavors-tabular and hierarchical, as in Table 2.

Two data source controls are of key importance for ASP.NET 2.0 applications because of their frequent use-SqlDataSource and ObjectDataSource.

Note that the SqlDataSource class is not specific to SQL Server. It can connect to any ADO.NET provider that manages relational data. The connection string and the provider name are specified via properties.

SqlDataSource works through SQL commands and stored procedures. It returns data through ADO.NET classes-DataSet or data reader. It doesn’t provide paging capabilities and relies on the stored procedure or the SQL command text for sorting. The SqlDataSource control supplies built-in capabilities for caching and conflict detection. Caching, in particular, means that the result of the query can be stored in the ASP.NET cache for the specified duration and retrieved from there instead of re-running the query.
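A quick sketch of a cacheable SqlDataSource bound to a GridView, where the connection string name and the query are hypothetical:

<asp:SqlDataSource runat="server" ID="GridSource"
     ConnectionString="<%$ ConnectionStrings:MyDb %>"
     SelectCommand="SELECT employeeid, lastname FROM employees"
     EnableCaching="true"
     CacheDuration="60" />
<asp:GridView runat="server" ID="EmpGrid"
     DataSourceID="GridSource" />

EnableCaching and CacheDuration keep the query results in the ASP.NET cache for 60 seconds; successive requests within that window don’t hit the database.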

ObjectDataSource works through method calls and returns data either through ADO.NET objects or custom collection classes. Paging and sorting are supported only if the underlying classes provide paging and sorting mechanisms. In other words, paging and sorting work if the methods provide parameters to extract only a subset of records sorted in a certain way. Caching is automatically supported only if data is returned via ADO.NET classes.
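A sketch of an ObjectDataSource bound to a hypothetical DAL class-EmployeeGateway, with a LoadAll method that returns a collection of employees:

<asp:ObjectDataSource runat="server" ID="EmpSource"
     TypeName="MyApp.Dal.EmployeeGateway"
     SelectMethod="LoadAll" />
<asp:DetailsView runat="server" ID="EmpView"
     DataSourceID="EmpSource" />

The control instantiates EmployeeGateway via reflection and invokes LoadAll to retrieve data; update, insert, and delete operations map to methods named in the UpdateMethod, InsertMethod, and DeleteMethod properties.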

What’s the difference between SqlDataSource and ObjectDataSource and when should you use which?

Using SqlDataSource is inherently simpler and requires much less code. It doesn’t lend itself, though, to advanced scenarios where a business layer is required. ObjectDataSource allows data retrieval and update while keeping data access and business logic separate from the user interface. On the other hand, using the ObjectDataSource class doesn’t automatically transform your system into a well-designed, effective, n-tier system.

Data source controls are mostly a counterpart to data-bound controls so that the latter can work more intelligently. To really benefit from ObjectDataSource, you must have your DAL already in place. ObjectDataSource doesn’t break n-tier systems, nor does it transform existing systems into truly n-tier systems. It greatly benefits, instead, from existing business and data layers. Reading between the lines, adopting ObjectDataSource requires the implementation of common enterprise design patterns for a DAL, such as the Table Data Gateway pattern.

Figure 2 measures the impact of SqlDataSource and ObjectDataSource controls on your code. It’s my pleasure to credit Fritz Onion for the idea of the diagram. You can read Fritz’s original comment here: http://pluralsight.com/blogs/fritz/archive/2005/08/09/13940.aspx.

Figure 2: Measuring the impact of SqlDataSource and ObjectDataSource on your code.

Overall, I believe that switching to ObjectDataSource increases the complexity of your code and the amount of code you have to write quite significantly. Using ADO.NET classes gives you free caching and some sorting and paging facilities. All of this has to be manually coded if you opt for custom collections. It’s my opinion that the complexity gap is larger between SqlDataSource and ObjectDataSource than between two ObjectDataSource instances using ADO.NET and custom classes. On the other hand, the more you take control of the plumbing, the more you gain in flexibility and maintenance.

Conclusion

ASP.NET 2.0 comes with a full bag of new controls and features. All of these features have been designed and implemented to ease your approach to the new platform and to diminish the amount of code you write. Don’t be fooled by jaw-dropping demos carefully crafted to leave you enthusiastic about a given feature.

ASP.NET 2.0 doesn’t lie about its capabilities; some demos, though, tend to show the best case, whereas most developers working on real-world applications should perhaps be more interested in average or worst cases. My goal in this article was to take ten features and discuss them in enough detail that you’ll feel comfortable delving deeper beyond the surface of documentation and demos.