Deploying Your Web Site in a Web Farm

A web farm is a cluster of multiple web servers running the same copy of code, serving the same web site, and distributing traffic among them in a load-balanced environment.

Generally, you use a hardware load balancer or implement Network Load Balancing (NLB) for Windows to make multiple web servers respond to a fixed IP. The outside world sees only one IP; when traffic comes to that IP, it is distributed among the web servers in the web farm.

The figure below shows a web farm configuration where a load balancer serves the public IP 69.15.89.1.

Web farm with a load balancer

Let's say this IP is mapped to the domain www.dropthings.com. When users go to www.dropthings.com, traffic is sent to 69.15.89.1. The load balancer gets the incoming requests and then, based on its load table and load balancing algorithm, it decides which of the web servers to send the traffic to. Traffic never goes directly to the web servers from the Internet.
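Purely as an illustration of the kind of decision the load balancer makes on each request (the server names and the 60/40 weights below are made up for this sketch, and a real load balancer also tracks server health and current load), a weighted pick could be sketched like this:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch only: a weighted-random server picker that mimics the
// per-request decision a load balancer makes. Server names and weights are
// hypothetical.
class WeightedPickerSketch
{
    static readonly Random Rng = new Random();

    // (server, weight) pairs; a real load balancer would use health checks
    // and live load data instead of fixed weights.
    static readonly KeyValuePair<string, int>[] Servers =
    {
        new KeyValuePair<string, int>("WebServer1", 60),
        new KeyValuePair<string, int>("WebServer2", 40),
    };

    static string PickServer()
    {
        int total = 0;
        foreach (KeyValuePair<string, int> s in Servers)
            total += s.Value;

        int roll = Rng.Next(total);              // 0 .. total - 1
        foreach (KeyValuePair<string, int> s in Servers)
        {
            if (roll < s.Value)
                return s.Key;                    // falls inside this server's share
            roll -= s.Value;
        }
        return Servers[Servers.Length - 1].Key;  // not reached; keeps the compiler happy
    }

    static void Main()
    {
        // Roughly 60 percent of picks should land on WebServer1.
        for (int i = 0; i < 10; i++)
            Console.WriteLine(PickServer());
    }
}
```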

Web Farm Pros and Cons

A web farm environment is critical for successful web site operations, but there are some things that you must keep in mind.

Pros:

Easy to load balance

If a web server reaches its limit on CPU or disk I/O, just add another server to balance the load. There's no need to change code as long as your code can support the web farm scenario. Unless there's some really bad code, you can always add more servers to a web farm and support the higher load.

Easy to replace a malfunctioning server

If one web server is malfunctioning, take it out of the web farm, fix it, and put it back in. Users will notice nothing. There'll only be a temporary increase in load on the other servers in the web farm.

Directs traffic away from a nonresponsive server

If one web server crashes, your site still runs fine. The load balancer can detect nonresponsive servers and automatically divert traffic to responsive servers.

Removes slow servers from the web farm

If one server becomes too slow for some reason, the load balancer can automatically remove it from the web farm.

Avoids a single point of failure

There's really no way you can run a production web site on one web server and ensure 99 percent uptime. That web server becomes a single point of failure, and when it goes down, your site goes down as well.

Cons:

Session cannot be used unless it is stored in a centralized database

An ASP.NET session won't work in the in-process or out-of-process modes because each server would maintain its own copy of the session. So, Session will only work in SQL Server mode, where one SQL Server stores the sessions and all of the web servers participating in the web farm have access to that centralized SQL Server store. However, one good thing is that the ASP.NET Profile provider acts almost like Session because you can store a user's properties in it, so it can be used instead of an ASP.NET session in a web farm.
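As a minimal sketch of what that looks like (the server name, connection string, and profile property below are placeholders, not the actual configuration), every web server in the farm would carry the same web.config entries pointing session state and the Profile provider at the shared SQL Server:

```xml
<!-- web.config (identical on every web server in the farm) -->
<system.web>

  <!-- Central session store. "DBSERVER" is a placeholder; the session
       database is created on it beforehand with the aspnet_regsql.exe tool. -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=DBSERVER;Integrated Security=SSPI"
                timeout="20" />

  <!-- Profile properties are persisted by the SqlProfileProvider, so they are
       visible to every server as long as the provider's connection string also
       points at the shared database. -->
  <profile enabled="true">
    <properties>
      <add name="TimeZoneOffset" type="System.Int32" defaultValue="0" />
    </properties>
  </profile>

</system.web>
```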

Not all requests from a particular user will go to the same web server

It is possible that a user's first hit to Default.aspx will go to Web Server 1, the JavaScript files will be downloaded from Web Server 2, and subsequent web service calls or asynchronous postbacks will go to Web Server 3. So, you need to make completely stateless web applications when you deploy to a web farm. There are some very expensive load balancers that can look at a cookie, identify the user, and send all requests carrying the same cookie to the same web server every time.

Web application logs will be distributed across the web servers

If you want to analyze traffic logs or generate reports from logs, you will have to combine the logs from all web servers and then do a log analysis. Looking at one web server's log will not reveal any meaningful data.
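A minimal sketch of the "combine first, analyze later" step (the UNC paths and folder names are hypothetical, not from any particular setup):

```csharp
using System;
using System.IO;

// Copies IIS log files from each web server's log share into one folder,
// prefixing each file with the server it came from, so a single log analysis
// run can see traffic for the whole farm. All paths are placeholders.
class LogCollectorSketch
{
    static void Main()
    {
        string[] logShares = { @"\\WEB1\IISLogs", @"\\WEB2\IISLogs" };
        string combinedFolder = @"D:\CombinedLogs";
        Directory.CreateDirectory(combinedFolder);

        foreach (string share in logShares)
        {
            // "\\WEB1\IISLogs" -> "WEB1"
            string serverName = share.TrimStart('\\').Split('\\')[0];

            foreach (string logFile in Directory.GetFiles(share, "*.log"))
            {
                string target = Path.Combine(
                    combinedFolder, serverName + "_" + Path.GetFileName(logFile));
                File.Copy(logFile, target, true);   // overwrite if the job is re-run
            }
        }

        Console.WriteLine("Logs combined under " + combinedFolder);
    }
}
```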

The ASP.NET cache is not always available or up-to-date

One request could store something in the ASP.NET cache on Web Server 1, and the following request might try to get it from Web Server 2. So, you can only cache static data that does not change frequently, and only when it does not matter if slightly old data is served from the cache. Such data includes cached configuration, a cache of images from a database, or content from external sources that does not change frequently. You cannot store entities like a User object in the ASP.NET cache.
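As a sketch under those constraints (the SiteSettings type and the database helper are hypothetical, not a real API), caching configuration-style data per server might look like this:

```csharp
using System;
using System.Web;
using System.Web.Caching;

// Sketch of caching slowly changing data in the per-server ASP.NET cache.
// SiteSettings and LoadSiteSettingsFromDatabase are hypothetical placeholders.
public static class ConfigCache
{
    private const string CacheKey = "SiteSettings";

    public static SiteSettings GetSiteSettings()
    {
        Cache cache = HttpRuntime.Cache;

        // Each server in the farm builds and holds its own copy; slightly stale
        // data is acceptable for this kind of item.
        SiteSettings settings = cache[CacheKey] as SiteSettings;
        if (settings == null)
        {
            settings = LoadSiteSettingsFromDatabase();    // placeholder DB call
            cache.Insert(CacheKey, settings,
                         null,                            // no cache dependency
                         DateTime.Now.AddMinutes(30),     // rebuilt every 30 minutes
                         Cache.NoSlidingExpiration);
        }

        // Do NOT cache per-user entities (e.g., a User object) this way: the
        // user's next request may land on a server that has no such cache entry.
        return settings;
    }

    private static SiteSettings LoadSiteSettingsFromDatabase()
    {
        // Stand-in for a real database lookup.
        return new SiteSettings { SiteName = "Dropthings" };
    }
}

public class SiteSettings
{
    public string SiteName { get; set; }
}
```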

Problem: Startups often don't have enough money to buy expensive servers.

Solution: Load balance your servers to ensure some redundancy.

When we first went live with Pageflakes in 2005, most of us did not have any experience running a high-volume, mass-consumer web application on the Internet. We went through all types of problems, but have grown from a thousand users to a million users. We have discovered some of ASP.NET 2.0's under-the-hood secrets that solve many scalability and maintainability problems, and gained enough experience in choosing the right hardware and Internet infrastructure that can make or break a high-volume web application.

When Pageflakes started, we had to save every penny on hosting. When you have a small hosting requirement, Windows is quite expensive compared to PHP hosting. So, we came up with a solution that used only two servers and ran the site in a load-balanced mode. This ensured redundancy for both the web application and SQL Server, and there was no single point of failure. If one server went down completely, the other server could serve the whole web site. The configuration is shown in the figure below.

We had two Windows servers, both with IIS 6.0 and SQL Server 2005, so for this example, let's call them Web Server and DB Server.

Web Server got 60 percent of the web traffic, configured via NLB. We used Windows NLB to avoid buying a separate load balancer, and Windows Firewall instead of an external firewall. SQL Server 2005 on this server was used as a log shipping standby database, so we didn't have to pay a licensing fee for the standby server.

DB Server got 40 percent of the web traffic and hosted the database in its SQL Server 2005. We started with SQL Server 2005 Workgroup Edition because it was the only version we could afford ($99 per month). However, we couldn't use the new database mirroring feature; instead, we had to use good old transaction log shipping.

A two-server web farm where both servers act as web servers, but one is the primary database server and the other is a standby database server

Both servers were directly connected to each other via a network adapter using a crossover cable. Because we had only two servers, we didn't have to buy a separate switch. An important lesson here is that you don't have to pay for a SQL Server license if the server is only hosting standby databases.

So, we had two servers running the web site behind NLB, and the web servers were properly load balanced and failsafe. If the DB Server went down, we could divert all traffic to the Web Server, bring up its standby database, and run the site solely from there. When the DB Server came back online, we configured log shipping in the opposite direction and diverted most of the traffic back to the DB Server. Thus, if needed, the database server could become the web server and the web server could become the database server. It required some manual work and was not fully automated, but it was the cheapest solution ($600 to $1,000 a month) for a reliable configuration, and it ensured 90 percent uptime.

Transaction Log Shipping

SQL Server has a built-in transaction log shipping capability: it records every change made to a database and ships the changes periodically (say, every five minutes) to another standby server. The standby server maintains a copy of the production database; it applies the changes (transaction logs) and keeps the database in sync with the main database. If the main database server fails or the database becomes unavailable for some reason, you can immediately bring the standby database online and run it as the production database.

Problem: When running a production server with a large database, you will soon run into storage issues.

Solution: Add another server as a backup store.

We ran a daily full database backup and needed a lot of space to store seven days' worth of backups. So, we added another server that acted as a backup store; it had a very poor hardware configuration but enormous hard drives.

We also had to generate weekly reports from the IIS logs. Every day we generated 3 to 5 GB of web logs on each server, and they had to be moved off the web servers to a reporting server so we could analyze them and generate weekly reports. Such analysis takes a lot of CPU and time and is not suitable for running directly on the web servers. Moreover, we needed to combine the logs from both web servers in one place. We had no choice but to go for a separate reporting server. After adding a backup storage server and a reporting server, the configuration looked like the figure below.

Cheap hosting configuration with storage and reporting servers

The web and database servers had 15,000 RPM SCSI drives, but the storage and reporting servers had cheap SATA drives because those servers didn't need faster disks.

The web and database servers had an F: drive dedicated to storing the SQL Server 2005 database's large MDF file. This F: drive was a physically separate disk, but the other physical disk had two logical partitions, C: and E:. The E: drive contained the LDF file and the web application.

If you put the MDF and LDF files on the same physical drive, database transactions will become slow. So, you must put the MDF and LDF files on two separate physical disks, preferably under two separate disk controllers. If both physical disks are on the same disk controller, you will still suffer from a disk I/O bottleneck when the database performs large jobs like a full database backup.

Designing a Reasonable Hosting Configuration

A reasonable web-hosting configuration should include two web servers, two database servers, a load balancer, and a firewall. This is the minimum needed to guarantee 95 percent uptime. The figure below shows a reasonable configuration for a medium-scale web application.

A standard web farm with redundant web and database servers

In this configuration, there's redundancy in both the web and database servers. You might wonder whether the cheapest configuration has the same level of redundancy. This configuration gives you a dedicated box for running the web application and a dedicated database server, whereas the cheaper configuration had the database and the web application running on the same box. We learned that IIS 6.0 and SQL Server 2005 do not run well on the same box, and sometimes SQL Server 2005 hangs until the service is restarted. This was the main reason why we separated the web and database servers.

However, adding more than two servers requires a switch. So, we added a gigabit switch that was connected to each server's gigabit Ethernet card via gigabit Ethernet cable. You could use optical fiber cables for faster and more reliable connectivity, but gigabit copper cables are also quite good. Both web servers have an additional 100 Mbps Ethernet card that is connected to the firewall and load balancer. We used the hosting provider's shared firewall and had to buy two ports on the firewall. Luckily, that firewall had load-balancing capability built in. If your hosting provider does not have a firewall like this, you will have to lease a load balancer in addition to the firewall. This configuration will cost about $4,000 to $6,000 per month.

