Infrastructure as a Competitive Advantage
For most Web sites, security consists of keeping applications current, "patching" vulnerable ones, maintaining proper system configurations, and following good practices on basic issues. These are all necessary, but they are only subsets of what really defines security: best practices and policies. Security is the institutionalization of a set of best practices and policies that applies not only to applications and infrastructure, but to business practices and partners as well.
In an increasingly outsourced and interrelated technological marketplace, where we are bound, figuratively and literally, by a "Web of Trust," we all have to agree on common benchmarks for what constitutes security. Any single weak point affects everyone involved. Because the security landscape is constantly changing, good practices and policies serve as an anchor for managing relationships. Just as we would not trust a company that didn't audit its financial practices, we should not trust a company that doesn't audit its security policies. Only through the impartial use of outside domain experts can we be sure that our policies and practices are current and uniformly applied. The audit then becomes a tool for representing our security accurately to partners and customers.
Good policies and practices not only improve security; they also improve the operational availability and overall health of an IT organization. Both are essential to the quality of service that makes selling your Web Service easy and sustainable. An audit confirming these practices makes it easier still to sell your services and to bring partners on board. The same logic works in reverse: any partner you will depend on for the delivery of your own Web Service must meet the threshold of good security and IT practices that your business requires.
The Business Importance of Infrastructure
Every organization must choose where to invest its often-scarce resources, and any company providing Internet services has additional choices to make beyond its nominal IT needs. While any infrastructure has to be proportional to the business requirement it meets, careful attention must be given early on to making good technology choices. Using low-cost PC hardware and "scaling out" incrementally offers enormous benefits, allowing a business to grow step by step rather than purchasing large capital assets. An IT organization needs to choose its platforms and technologies early; once those choices are made, it should either stick to them or change them out wholesale.
Heterogeneity in an IT organization is one of the greatest causes of increased workload, security problems, and the "single point of failure" scenarios that have crippled numerous companies. By choosing an inexpensive, easily scaled, homogeneous infrastructure that is load balanced, you get the best of both worlds: large economies of scale plus security and enhanced redundancy, providing greater availability and a uniform user experience. As part of your due diligence, even if you choose to outsource, look for these same attributes in whatever partner you choose.
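The idea of a homogeneous, load-balanced pool can be sketched in a few lines. This is a hypothetical illustration, not any vendor's product: because every server is identical, a simple round-robin rotation can hand requests to any machine, and losing one server just shrinks the pool rather than taking the service down.

```python
import itertools

# Hypothetical sketch: a homogeneous server pool behind a round-robin
# balancer. Any identical server can answer any request, so no single
# machine is a single point of failure.
class Pool:
    def __init__(self, servers):
        self.servers = list(servers)
        self._cycle = itertools.cycle(self.servers)

    def next_server(self):
        """Hand the next request to the next server in rotation."""
        return next(self._cycle)

    def remove(self, server):
        """Drop a failed server; the remaining pool keeps serving."""
        self.servers.remove(server)
        self._cycle = itertools.cycle(self.servers)
```

In practice the rotation would live in a hardware or software load balancer, but the principle is the same: uniformity is what makes every machine interchangeable.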
The concept of a Web Service creates flexibility not only in how you build applications but also in how those services are delivered. "Scale out" lets us grow capacity one PC or server at a time instead of upgrading a small number of monolithic servers. This approach provides several key advantages. The first is incremental, low-cost scalability to meet increasing demand. The second is the innate ability to re-purpose servers as requirements change. The third is redundancy and high availability: no single machine becomes a single point of failure. Finally, there is the inherent cost saving of smaller incremental server investments. Offsetting these gains is the increased cost of maintaining multiple servers. The way around this is to enforce strict homogeneity and to build servers from scripts or preformatted images that can then be rolled out to multiple machines. With these principles, building and scaling becomes much easier.
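The "build from scripts or preformatted images" idea can be made concrete with a small sketch. Everything here is hypothetical (the build spec, the `pkg_install` command, and the file paths are invented for illustration): the point is that a single build definition is rendered into one repeatable provisioning script, so every server in the pool is configured identically.

```python
# Hypothetical sketch: one build definition applied to every server,
# so each machine is provisioned identically from the same "image".
BUILD_SPEC = {
    "os_packages": ["httpd", "openssl"],
    "config_files": {"/etc/httpd/httpd.conf": "KeepAlive On\n"},
}

def render_build_script(spec):
    """Turn a single build spec into a repeatable shell provisioning script."""
    lines = ["#!/bin/sh", "set -e"]  # stop on the first error
    for pkg in spec["os_packages"]:
        lines.append(f"pkg_install {pkg}")  # placeholder install command
    for path, content in spec["config_files"].items():
        lines.append(f"cat > {path} <<'EOF'\n{content}EOF")
    return "\n".join(lines)

script = render_build_script(BUILD_SPEC)
print(script)
```

Because the script, not a human, is the source of truth, adding the tenth server costs the same as adding the second, which is what makes incremental scale-out manageable.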
You can now combine this concept with the use of outsourced partnerships that deliver these services over multiple backbones or on the edge of the network. This combination allows even a small company to build a highly available and scalable Internet presence or Web Service.
Many companies choose not to host their own infrastructure because of limited needs or resources. Outsourcing is a sound strategy, but only after conducting thorough due diligence with your intended partners. The old colocation model of simply renting network and rack space has proven problematic for both customers and vendors, as recent high-profile business failures have shown. Instead, hosting options should be viewed as a spectrum between two extremes. At one end is the "bare bones" data center that provides racks, power, air, physical security, and basic network cross-connects. This type of facility lets a company avoid a large capital outlay while preserving maximum flexibility. At the other end is a managed service where network connectivity is provided along with the servers, base operating system, and in some cases the application and database servers as well. This total outsourcing model allows a company to focus on its core competency of developing content.
In a Web Services world where the colocation center provides a number of Web Services as a value add, a managed services option becomes a very attractive way to gain most of the benefits without the cost or infrastructure. Remember that in any of these scenarios, significant value can be added by using a content delivery network or a network aggregator.
Content Delivery Networks (CDNs) are large networks of geographically dispersed servers that take content from an origin Web server and redistribute it to end users' computers or networks. CDNs typically work by caching data using sophisticated algorithms, then matching each end user to the geographically closest server, allowing low latency when serving Web pages and applications. This offers a number of benefits for any Internet service operator. The first is a better end-user experience, since long transit and Internet infrastructure delays are reduced. The second is built-in scalability: the CDN maintains many machines and network connections that can be used "on demand" to provide end-user services.
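The two mechanics described above, nearest-server matching and edge caching, can be sketched together. This is a deliberately simplified illustration, not how any real CDN routes traffic (real systems use DNS, network measurements, and far more sophisticated algorithms); the edge locations and the distance metric are invented for the example.

```python
import math

# Hypothetical sketch: route each user to the nearest edge server and
# serve from that edge's cache, falling back to the origin on a miss.
EDGE_SERVERS = {  # invented name -> (latitude, longitude)
    "us-east": (40.7, -74.0),
    "eu-west": (51.5, -0.1),
    "ap-east": (35.7, 139.7),
}

def nearest_edge(user_lat, user_lon):
    """Pick the edge server closest to the user (crude planar distance)."""
    def dist(loc):
        lat, lon = loc
        return math.hypot(lat - user_lat, lon - user_lon)
    return min(EDGE_SERVERS, key=lambda name: dist(EDGE_SERVERS[name]))

cache = {}  # (edge, url) -> cached content

def serve(edge, url, fetch_origin):
    """Serve from the edge cache; on a miss, pull once from the origin."""
    key = (edge, url)
    if key not in cache:
        cache[key] = fetch_origin(url)
    return cache[key]
```

A user near Paris would be matched to the "eu-west" edge, and only the first request for a page travels all the way back to the origin server; every subsequent request is answered from the nearby cache, which is the source of both the latency and the scalability benefits.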
Security is also enhanced: a denial-of-service attack (a type of hacker attack that floods a Web server with more requests than it can handle, effectively shutting it down) is largely mitigated by the distributed nature of the network. CDNs are now starting to move up the value chain by assembling Web content on the edge of the Internet rather than just serving up pre-built Web page elements.
This means a company could potentially run just one "origin" server and use a large CDN network as its data center, eliminating the need to maintain any infrastructure beyond the origin server. As Web Services move to the fore, a new opportunity arises: not only could you assemble your content on the edge, you could cache the Web Services themselves. That would let your entire application run across a large cluster of geographically dispersed servers on multiple networks, always physically close to the end user, with enormous advantages in both end-user experience and scalability.
Copyright 2001, McAfee.com. Reprinted by permission.