Application Acceleration Technology – A Boon

Many corporations require combined voice, video and Internet access with a two-way Internet bandwidth of at least 100 Mbps. This is a forward-looking composite requirement that recognizes that a typical corporation with 250+ employees will be watching videos, talking on the telephone, and accessing the Internet all at the same time.

About 300 million people in the world are telecommuting to work today. Better, faster, and cheaper communication infrastructure would mean a phenomenal increase in productivity and a better quality of life.

Given the impact of the Internet on mankind, and despite hundreds of terabits per second of Internet capacity laid across the world, what is stopping us from using that bandwidth to its full extent? Why are we still talking of speed in terms of kilobits when capacities of hundreds of terabits per second have already been laid and tested?

The fiber glut

A vast international bandwidth capacity spans all continents and countries, connecting their cities and towns and terminating at locations called Points of Presence (PoPs). More than a billion Internet users exist throughout the world, and the challenge lies in connecting these users to the nearest PoP. The connectivity between client sites and PoPs, called last mile connectivity, is the bottleneck.

Internet Service Providers (ISPs) spent billions building the long haul and backbone networks over the past five years, increasing long-haul broadband capacity 250-fold; yet capacity in the metro area increased only 16-fold. Over the same period, last mile access has remained the same, with the result that data moves very slowly in the last mile. Upgrading to higher bandwidths is either not possible or prohibitively expensive. The growth of the Internet seems to have reached a dead end, with possible adverse effects on the quality and quantity of the Internet bandwidth available for the growing needs of enterprises and consumers. Compounding this are the technical limitations of Transmission Control Protocol / Internet Protocol (TCP/IP).

TCP/IP limitations

The Internet works on a protocol suite called TCP/IP. TCP/IP performs well over short-distance Local Area Network (LAN) environments but poorly over Wide Area Networks (WANs), because it was not designed for them.

TCP as a transport layer has several limitations that cause many applications to perform poorly, especially over distance. These include window size limits on data transmission, slow start of data transmission, inefficient error recovery mechanisms, packet loss, and disruption of data transmission. The net result of these issues is poor bandwidth utilization: the typical utilization for large data transfers over long-haul networks is usually less than 30 percent, and more often less than 10 percent. Even if the last mile can be upgraded at very high cost, the effective throughput may be only about 10 percent of the upgraded bandwidth. Hence, upgrading networks is a very expensive proposition.
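
As a back-of-the-envelope illustration of the window-size limitation, the Python sketch below uses assumed figures (a 100 Mbps link, a classic 64 KB TCP window without window scaling, and an 80 ms round trip) to show how the window alone caps throughput on a long-haul link:

```python
# Back-of-the-envelope: how the TCP receive window caps throughput over a WAN.
# Illustrative numbers only; real links and TCP stacks (window scaling, etc.) vary.

def max_tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """At most one full window can be in flight per round trip."""
    return window_bytes * 8 / rtt_seconds

link_bps = 100e6          # assumed 100 Mbps WAN link
window = 64 * 1024        # classic 64 KB TCP window without window scaling
rtt = 0.080               # assumed 80 ms round-trip time across the WAN

ceiling = max_tcp_throughput_bps(window, rtt)
print(f"Throughput ceiling: {ceiling / 1e6:.1f} Mbps")
print(f"Link utilization:   {ceiling / link_bps:.0%}")
# With these numbers the ceiling is about 6.6 Mbps, i.e. under 10% of the link.
```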

A new technology called ‘application acceleration’ has emerged. It accelerates Internet applications over WANs using the same Internet infrastructure, circumventing to some extent the problems caused by the lack of bandwidth.

Application accelerators, as the name suggests, are appliances that accelerate applications by re-engineering the way data, video, and voice are transmitted over networks. Application acceleration addresses non-bandwidth congestion problems caused by TCP and application-layer protocols, significantly reducing the size of the data being sent and the number of packets it takes to complete a transaction, and performing other actions to speed up the entire process.

Application accelerators can also monitor traffic and help with security. Some appliances mitigate performance issues simply by caching and/or compressing data before transfer; others are designed to address several TCP-level issues directly.

These appliances can mitigate latency issues, compress data, and shield the application from network disruptions. Further, they are transparent to operations and offer the IP application the same transparency as TCP/IP. Application accelerators provide the following features using Layer 4-7 switching.

Transport protocol conversion

Some data center appliances provide alternative transport delivery mechanisms between appliances. In doing so, they receive the optimized buffers from the local application and deliver them to the destination appliance for subsequent delivery to the remote application process. Alternative transport technologies are responsible for maintaining acknowledgements of data buffers and resending buffers when required.

They maintain a flow control mechanism on each connection, tuning each connection's performance to match the available bandwidth and network capacity. Some appliances provide a complete transport mechanism for managing data delivery, using User Datagram Protocol (UDP) socket calls as an efficient, low-overhead data streaming protocol for reading from and writing to the network.
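
The Python sketch below is a highly simplified, hypothetical illustration of that idea: a "local appliance" streams numbered buffers over UDP, keeps its own acknowledgement bookkeeping, and resends only the buffers that were never acknowledged. The addresses, buffer format, and ACK scheme are invented for the example and do not reflect any vendor's actual protocol.

```python
# Minimal sketch of an appliance-style UDP transport with its own ACK bookkeeping.
import socket
import struct
import threading
import time

ADDR = ("127.0.0.1", 9999)   # hypothetical appliance-to-appliance channel
NUM_BUFFERS = 10

def remote_appliance() -> None:
    """Receives numbered buffers and acknowledges each sequence number."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(ADDR)
    seen = set()
    while len(seen) < NUM_BUFFERS:
        data, peer = sock.recvfrom(65535)
        (seq,) = struct.unpack("!I", data[:4])
        seen.add(seq)
        sock.sendto(struct.pack("!I", seq), peer)     # ACK this buffer
    sock.close()

def local_appliance() -> None:
    """Sends buffers and retransmits only those that were never acknowledged."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)
    unacked = {seq: struct.pack("!I", seq) + b"x" * 1024 for seq in range(NUM_BUFFERS)}
    while unacked:
        for buf in list(unacked.values()):            # (re)send outstanding buffers
            sock.sendto(buf, ADDR)
        try:
            while unacked:
                ack, _ = sock.recvfrom(4)
                unacked.pop(struct.unpack("!I", ack)[0], None)
        except socket.timeout:
            pass                                       # whatever is left gets resent
    sock.close()

receiver = threading.Thread(target=remote_appliance, daemon=True)
receiver.start()
time.sleep(0.2)        # crude way to make sure the receiver is bound first
local_appliance()
receiver.join(timeout=2)
print("all buffers delivered and acknowledged")
```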

Compression engine

A compression engine within the data center appliance compresses the packets aggregated in the IP accelerator appliance's buffers. This provides an even greater level of compression efficiency, since a large block of data is compressed at once rather than many small packets being compressed individually. Performing compression in the LAN-connected appliance also frees up significant CPU cycles on the server where the application resides.
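
The following Python sketch, using made-up request data, illustrates why compressing one aggregated buffer beats compressing many small packets individually:

```python
# Compare per-packet compression with compression of one aggregated buffer.
import zlib

packets = [(b"GET /catalog/item?id=%d HTTP/1.1\r\nHost: example.com\r\n\r\n" % i)
           for i in range(200)]   # 200 small, similar packets (made-up data)

per_packet = sum(len(zlib.compress(p)) for p in packets)    # each packet alone
aggregated = len(zlib.compress(b"".join(packets)))           # one large block

original = sum(len(p) for p in packets)
print(f"original:    {original} bytes")
print(f"per-packet:  {per_packet} bytes")
print(f"aggregated:  {aggregated} bytes")
# The aggregated block compresses far better because redundancy is shared
# across the whole buffer instead of being rediscovered for every small packet.
```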

Overcoming packet loss

The largest challenge in improving TCP/IP performance is packet loss. Packet loss is caused by network errors or changes, better known as network exceptions. Most networks have some packet loss, typically 0.01 to 0.5 percent on optical WANs and 0.01 to 1 percent on copper-based Time Division Multiplexing (TDM) networks. Either way, losing one or more packets in every 100 causes TCP to retransmit packets, slow the transmission rate from a given source, and re-enter slow-start mode each time a packet is lost. This error recovery process can drop the effective throughput of a WAN to as low as 10 percent of the available bandwidth between two sites.
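
The widely used Mathis approximation (throughput ≈ MSS / (RTT × √loss)) gives a feel for these numbers. The Python sketch below plugs in assumed link, RTT, and loss figures; it is a rule of thumb, not a measurement:

```python
# Rough estimate of how packet loss caps TCP throughput (Mathis approximation).
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    return (mss_bytes * 8 / rtt_s) / math.sqrt(loss)

link_bps = 100e6    # assumed 100 Mbps WAN link
mss = 1460          # typical Ethernet-sized TCP segment
rtt = 0.080         # assumed 80 ms round trip

for loss in (0.0001, 0.005, 0.01):       # 0.01%, 0.5%, 1% loss
    bps = min(mathis_throughput_bps(mss, rtt, loss), link_bps)
    print(f"loss {loss:.2%}: ~{bps / 1e6:5.1f} Mbps ({bps / link_bps:.0%} of the link)")
# At 1% loss the estimate is roughly 1.5 Mbps, a few percent of the link,
# which is why lossy WANs see such poor effective throughput.
```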

IP application accelerators optimize blocks of data traversing the WAN by maintaining acknowledgements of the data buffers and resending only the buffers that did not make it, not the whole frame. This allows the use of a better transport protocol that does not back off aggressively or drop into slow-start mode. The more efficient transport protocol has lower overhead and streams the data on read and write cycles from source to destination, completely transparently to the process running a given server application.

Caching

Web documents that are retrieved may be stored (cached) for a time so that they can be served quickly if they are requested again. The entire document does not need to move across the network again; only update requests are sent, thereby conserving network bandwidth.
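
A minimal sketch of the idea, using Python's standard library and a placeholder URL, is an HTTP conditional GET: the cached copy is revalidated, and when nothing has changed only headers cross the network.

```python
# Tiny conditional-GET cache: revalidate instead of re-downloading.
import urllib.request
import urllib.error

cache = {}   # url -> (etag, body); a toy in-memory cache

def fetch(url: str) -> bytes:
    request = urllib.request.Request(url)
    if url in cache:
        etag, _ = cache[url]
        request.add_header("If-None-Match", etag)   # "only send it if it changed"
    try:
        with urllib.request.urlopen(request) as response:
            body = response.read()
            cache[url] = (response.headers.get("ETag", ""), body)
            return body
    except urllib.error.HTTPError as err:
        if err.code == 304:            # Not Modified: serve the cached copy;
            return cache[url][1]       # only headers crossed the network
        raise

page = fetch("https://example.com/")   # first fetch fills the cache
page = fetch("https://example.com/")   # revalidation, possibly answered with 304
print(len(page), "bytes served")
```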

Server load balancing

Server load balancers distribute processing and communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. Busy Web sites with heavy traffic typically employ two or more Web servers in a load-balancing scheme.
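
A toy round-robin selector in Python shows the basic idea; the server names are placeholders, and real balancers add health checks, weighting, and policies such as least-connections:

```python
# Round-robin request distribution over a pool of Web servers.
import itertools

servers = ["web1.example.com", "web2.example.com", "web3.example.com"]
next_server = itertools.cycle(servers).__next__   # round-robin selector

for request_id in range(7):
    print(f"request {request_id} -> {next_server()}")
```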

SSL acceleration

Secure Sockets Layer (SSL) is a popular method for encrypting data that is transferred over the Internet. SSL acceleration offloads the processor-intensive public key encryption algorithms involved in SSL transactions to a hardware accelerator. Typically, this is a separate card in an appliance containing a co-processor that handles most of the SSL processing.

Despite using faster symmetric encryption for confidentiality, SSL still causes a performance slowdown, because there is more to SSL than data encryption. The “handshake” process, whereby the server (and sometimes the client) is authenticated, uses digital certificates based on asymmetric or public key encryption technology. Public key encryption is very secure, but also very processor-intensive, and thus has a significant negative impact on performance. The usual remedy is the hardware accelerator: an intelligent card that plugs into a PCI slot or SCSI port to do the SSL processing, relieving the load on the Web server’s main processor.
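
The rough Python benchmark below (using the third-party cryptography package, with arbitrary operation counts) compares the per-handshake cost of RSA signing with the per-byte cost of AES bulk encryption; this imbalance is what SSL offload hardware exploits. Treat it as a sketch, not a rigorous measurement.

```python
# Compare handshake-style RSA operations with AES bulk encryption.
import os
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"handshake-sized payload"

start = time.perf_counter()
for _ in range(100):                       # 100 handshake-style signatures
    key.sign(message, padding.PKCS1v15(), hashes.SHA256())
rsa_seconds = time.perf_counter() - start

aes = Cipher(algorithms.AES(os.urandom(32)), modes.CTR(os.urandom(16))).encryptor()
payload = os.urandom(1024 * 1024)          # 1 MB of application data
start = time.perf_counter()
for _ in range(100):                       # encrypt 100 MB of bulk traffic
    aes.update(payload)
aes_seconds = time.perf_counter() - start

print(f"100 RSA signatures:    {rsa_seconds:.3f} s")
print(f"100 MB of AES traffic: {aes_seconds:.3f} s")
```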

Connection multiplexing

Connection multiplexing works by taking advantage of a feature in HTTP/1.1 that allows multiple HTTP requests to be made over the same TCP connection. So instead of passing each HTTP connection from the client to the server in a one-to-one manner, the appliance combines many separate HTTP requests from clients into relatively few HTTP connections to the server. This keeps the connections to the server open across multiple requests, eliminating the high connection turnover typically encountered on high-volume Web sites. The ultimate result is higher performance from the same servers without any changes or improvements to the server infrastructure.
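
The client-side half of this behaviour can be seen with Python's standard library: several requests reuse one TCP connection. The host and paths below are placeholders; an accelerator applies the same principle between itself and the Web server.

```python
# Several HTTP/1.1 requests over a single persistent TCP connection.
import http.client

conn = http.client.HTTPSConnection("example.com")    # one TCP connection
for path in ("/", "/", "/"):                          # several requests over it
    conn.request("GET", path, headers={"Connection": "keep-alive"})
    response = conn.getresponse()
    body = response.read()                            # must drain before reuse
    print(path, response.status, len(body), "bytes")
conn.close()
# An accelerator funnels many short-lived client connections into a few
# persistent server connections, cutting connection setup and teardown.
```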

Clustering

A cluster is a group of application servers that transparently run applications as if they were a single entity. Clusters can comprise redundant, failover-capable machines. A typical cluster integrates Layer 4-7 load balancers, gateway routers at the edge of the network on each side, and various switches that tie the application and Web servers into the whole network. Firewalls filter port-level access to all network resources and data storage devices (which can use any media, such as tape drives, magneto-optical drives, or simple hard drives). The cluster manages the writing of data to the main storage devices as well as the redundant ones, and manages switchover to redundant storage media if the primary data storage devices fail.
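
A tiny Python sketch of the failover behaviour, with made-up node addresses and a deliberately simple health check, might look like this:

```python
# Pick the primary cluster member if it is reachable, otherwise a standby.
import socket

CLUSTER = [("primary.example.com", 8080), ("standby.example.com", 8080)]  # made up

def healthy(host: str, port: int, timeout: float = 1.0) -> bool:
    """A node counts as healthy if it accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_node():
    for host, port in CLUSTER:
        if healthy(host, port):
            return f"{host}:{port}"
    return None          # every member failed its health check

node = pick_node()
print("serving from", node if node else "no reachable cluster member")
```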

Network security (Firewalls)

Network security protects networks and their services from unauthorized modification, destruction, or disclosure, and provides assurance that the network performs its critical functions correctly without harmful side effects. It also includes providing for data integrity. A gateway that limits access between networks in accordance with local security policy is called a firewall, and firewalls can be implemented in Layer 4-7 switching.

Firewalls are used to prevent unauthorized Internet users from accessing private networks. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria. A firewall is usually considered the first line of defense in protecting private information.
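
A minimal Python sketch of port-level filtering, with made-up rules and addresses, looks like this: each connection attempt is checked against an ordered rule list, and the first match decides.

```python
# First-match firewall rule evaluation on source network and destination port.
import ipaddress

RULES = [   # (source network, destination port, action); first match wins
    (ipaddress.ip_network("10.0.0.0/8"),    22,  "allow"),   # internal SSH
    (ipaddress.ip_network("0.0.0.0/0"),    443,  "allow"),   # public HTTPS
    (ipaddress.ip_network("0.0.0.0/0"),   None,  "deny"),    # default deny
]

def decide(src_ip: str, dst_port: int) -> str:
    src = ipaddress.ip_address(src_ip)
    for network, port, action in RULES:
        if src in network and (port is None or port == dst_port):
            return action
    return "deny"

print(decide("10.1.2.3", 22))      # allow (internal SSH)
print(decide("203.0.113.9", 22))   # deny  (external SSH)
print(decide("203.0.113.9", 443))  # allow (public HTTPS)
```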

Bandwidth management, QoS, monitoring and reporting

Bandwidth management appliances allocate bandwidth to mission-critical applications, slow down non-critical applications, and stop bandwidth abuse in order to deliver networked applications efficiently to the branch office. The primary goal of Quality of Service (QoS) is to give priority, including dedicated bandwidth, controlled jitter, and controlled latency (required by some real-time and interactive traffic), to applications traveling on the network.

End-to-end performance monitoring and reporting provides the WAN visibility required to analyze the traffic; Layer 7 QoS allocates bandwidth according to rules and policies. Traffic is automatically categorized into application classes.

Easy-to-understand shaping policies such as “real-time” or “block” govern the flow of traffic. Packet fragmentation ensures that large data packets do not violate VoIP/video latency budgets, while packet aggregation increases effective WAN capacity and stabilizes jitter. This guarantees that delay-sensitive traffic such as VoIP can be allocated a minimum amount of bandwidth to ensure optimal voice quality even when WAN links are congested or oversubscribed.
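
A compact token-bucket sketch in Python, with illustrative rates and packet sizes, shows how a per-class shaper decides whether to send or queue a packet:

```python
# Per-class token-bucket shaping: a packet is sent only if its class has tokens.
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8          # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False                       # hold the packet: class is over its rate

# e.g. guarantee 512 kbps to VoIP and cap bulk transfers at 2 Mbps (assumed rates)
shapers = {"voip": TokenBucket(512_000, 4_000), "bulk": TokenBucket(2_000_000, 30_000)}

for cls, size in [("voip", 200), ("bulk", 1500), ("bulk", 1500), ("voip", 200)]:
    verdict = "send" if shapers[cls].allow(size) else "queue"
    print(f"{cls:5s} packet of {size} bytes -> {verdict}")
```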

The net result of these features is that very high data transfer speeds, sometimes as much as 10X, are achieved. This technology has come as a boon to the bandwidth-starved industry: the achievement of higher effective speeds means that organizations can now look forward to explosive growth in their Internet business.

The demand for Internet bandwidth is bound to increase by the day. Application acceleration technology is expected to give much-needed respite to ISPs and governments, allowing them to plan and implement the last mile on the best possible media and resolve last-mile bottlenecks for good.

Source by Vijay Kaul
