This question gains significance when you consider that the most popular websites receiving very high traffic, such as Google, Facebook, Twitter, Amazon and the like, were built using programming languages such as PHP, Java, Python, Ruby and Perl rather than ASP.NET or similar technology, leading one to question whether ASP.NET is indeed capable of handling such volumes. However, it should be borne in mind that these websites started small, used free and open source technologies for obvious reasons, and have stuck with them ever since. Of course these languages are very capable of handling large traffic, but that does not imply that ASP.NET is not! There are indeed many sites, such as those of financial services, healthcare services and government portals, that are built on ASP.NET and handle high traffic efficiently.

A website built using ASP.NET can handle a very large volume of traffic if it is optimized and scaled properly during ASP.NET development, eliminating the performance bottlenecks and resource limitations that affect the speed of the application. Here are a few techniques to ensure that your ASP.NET website remains scalable.

During the design process, set objective goals that are measurable and verifiable. Determine the appropriate response time, the number of users expected, the peak load and the number of transactions per second. Validate and prototype your design at the beginning rather than waiting for a later stage.

Remove the unnecessary parts that could hinder performance. Default modules which play no role in the application only congest the pipeline. For instance, if your application doesn’t use Windows Authentication then there is no need to have the WindowsAuthentication module intercepting each and every request made to the server.
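As a minimal sketch for the classic ASP.NET pipeline (integrated-mode IIS 7+ uses the modules element under system.webServer instead), an unused default module can be removed in web.config:

<system.web>
  <httpModules>
    <!-- Windows Authentication is not used by this application, so drop the module -->
    <remove name="WindowsAuthentication" />
  </httpModules>
</system.web>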

Configuration Optimization

The ASP.NET process model exposes, by default, a number of process-level settings, such as how many threads ASP.NET uses, the timeout period after which a thread is considered blocked, and how many requests may keep waiting for I/O operations to finish. Since hardware resources are not much of a constraint these days, this configuration can be tweaked according to your requirements. For instance, in the default configuration the “maxconnection” value is 2, which means that only 2 simultaneous connections can be made from your application to a given IP address. You can change the value to support as many simultaneous connections as your system can handle.

To tweak this, first modify the machine.config file to set autoConfig to false within the processModel element.

<processModel autoConfig="false" />

Then, in the connectionManagement element, change the maxconnection value:

<connectionManagement>
  <add address="*" maxconnection="2" />
  <add address="http://xx.xx.xx.xxx" maxconnection="12" />
</connectionManagement>

While 12 is the recommended value per CPU, you can increase this value according to your requirements. You can also raise maxWorkerThreads and maxIoThreads to 100 each and increase the memory allocated to your application beyond the default, as sketched below.
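As an illustrative sketch (the exact numbers should be validated against your own load tests), these thread and memory settings live on the same processModel element in machine.config; memoryLimit is the percentage of physical memory the worker process may use before it is recycled:

<processModel
  autoConfig="false"
  maxWorkerThreads="100"
  maxIoThreads="100"
  memoryLimit="60" />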

Content Delivery Network

Each HTTP request and response generally has to traverse a wide geographical area, if not the globe itself, which introduces latency into data transfer. So if your budget allows, you can choose a Content Delivery Network, which can optimize and deliver content at a much faster pace for a fee.

Browser Cache

Browsers cache content on the local machine based on the URL, so dynamically generated URLs or query string parameters normally cause new content to be returned from the server; even these responses can be cached, however, if the proper caching headers are returned. In the case of static pages or image files, take care that the URLs used are uniform so that no fresh content is fetched from the server unnecessarily. Also, reuse common graphics by storing them in one location and accessing them from there with relative URLs, instead of keeping copies in different sub-folders.
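As a minimal sketch of returning caching headers from a dynamically generated page (the one-hour lifetime and the “category” parameter name are illustrative assumptions), the cache policy can be set in a Web Forms code-behind:

// In the code-behind of a page whose output can safely be cached
protected void Page_Load(object sender, EventArgs e)
{
    Response.Cache.SetCacheability(HttpCacheability.Public);   // cacheable by browsers and proxies
    Response.Cache.SetExpires(DateTime.UtcNow.AddHours(1));    // absolute expiry
    Response.Cache.SetMaxAge(TimeSpan.FromHours(1));           // Cache-Control: max-age
    Response.Cache.VaryByParams["category"] = true;            // separate cache entry per query string value
}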

Caching In Business Layer

When caching data in the business layer, a caching mechanism can be developed using hash tables or other data structures in the application’s business logic. However, data caching in the business layer should only be considered when it is not possible to retrieve the data efficiently from the database. Further, data that changes frequently should not be cached in this layer.
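As a simple sketch, assuming a slowly changing product list and a caller-supplied data-access delegate (both hypothetical), a business-layer cache built on MemoryCache from System.Runtime.Caching might look like this:

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public static class ProductCatalogCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns the cached product list, loading it from the database only on a cache miss.
    public static IList<string> GetProducts(Func<IList<string>> loadFromDatabase)
    {
        var products = Cache.Get("Products") as IList<string>;
        if (products == null)
        {
            products = loadFromDatabase();                      // expensive database call
            Cache.Set("Products", products,
                      DateTimeOffset.UtcNow.AddMinutes(10));    // cache only slowly changing data
        }
        return products;
    }
}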

Caching In Database

Data caching in the database should only be considered when the data is being stored for a long period of time, and it should be saved in chunks or as a whole depending on the requirements. Since this data is stored in temporary tables that consume additional RAM and can lead to bottlenecks, periodic measurements should be carried out to determine whether the caching is having an adverse effect on the application or is contributing to its performance.

Deployment

Consider your deployment architecture, evaluating your constraints and assumptions at an early stage of design. Both distributed and non-distributed architectures are compatible with .NET, and both have their merits and demerits in the overall development and deployment of the application.

Non-Distributed Architecture

In a non-distributed architecture the presentation, business and data layers all run within a single process on one web server, even if they are separated logically, so the architecture is less complex and calls between layers are made locally. On the flip side, sharing business logic with other applications is difficult. More importantly, server resources are shared across the presentation, business and data layers, which may be acceptable in small applications but can create issues in larger ones.

Distributed Architecture

In a distributed architecture the business logic resides on a middle-tier application server between the presentation layer on the web server and the database server. This architecture is far more flexible, enabling the business layer to be load balanced independently and effectively, and giving all three tiers their own resources. A distributed architecture is definitely more complex and expensive, and its bigger demerit is that remote calls introduce more serialization and network latency. However, it is definitely more suitable for supporting websites with high traffic.

These are a few points out of many which suggest that ASP.NET can be a great platform for creating websites that handle very high traffic and continue performing well.
