We have heard a lot about improving software quality, most of it centered
on issues such as bugs and security vulnerabilities. Unfortunately, little
of that software-quality discussion has focused on network performance
and cost.
This oversight is both surprising and unsurprising. It's surprising because
we've been wrestling with networked applications for well over a decade
now, and we know all too well how large a share of the overall IT budget
the network represents and how much poor performance hurts our operations.
On the other hand, it isn't very surprising, because application development
teams and network technicians are known for not interacting as much as they
should.
Application developers are taught to focus on business-use models, code
quality, and project management disciplines. Efficient use of network resources
remains a relatively low priority in most of the IT world, especially with
network technologies evolving to make more and more bandwidth available
for the asking. In general, the network team accepts the applications' demand
for bandwidth as a fait accompli, without questioning it. The predominant
mentality is "we start from there" or "our job is to transport whatever
the applications demand." The usual situation is that no one on the
network team has, or wants to have, any say during the application development
process.
Considering the share that telecom/network costs represent within the
typical IT budget, it would be reasonable to expect thorough evaluations
of the impact that each application's traffic will have on the network
(LAN/WAN), even in the early stages of application development. The impact
on the network's cost and performance can even be a major factor in deciding
whether a new application is feasible at all.
Instead, what we usually see is that applications may be
tested in advance for stability, user acceptance, and CPU utilization, but
their behavior on the network remains an afterthought. No one really knows
how they will perform across the diverse LAN and WAN connections they must
traverse, how they will affect the numerous other applications traversing
those same connections, how efficient they really are (or could be) in terms
of bandwidth utilization, or how much they will add to the cost of the
network, until they go live.
These problems occur not only when implementing brand new applications.
Sometimes adding a new feature to an existing application can wind up wreaking
havoc on the network. Other times, it's the addition of a new site or new
users. Developers often fail to fully comprehend the impact that adding
Web-based access to a legacy application can have on both hosted Internet
infrastructure and enterprise connectivity.
This situation cannot be attributed exclusively to the lack of interaction
between application development and network operations (although that is an
important part of the problem). The difficulties of modeling the impact of
a new application on a pre-existing network go further. Even with a very
well-oiled team, it is difficult to put together all the pieces involved in
order to model the problem properly. Many people imagine that it is simply
a matter of measuring the traffic generated by the new application and
extrapolating that traffic across the whole structure. Unfortunately,
reality is a bit more complex. Even if the new application's traffic is
measurable and its usage patterns can be reasonably identified, we would
still need a clear view of the entire current structure, including
things such as current traffic (from other applications), points of presence,
interconnection possibilities (including service providers and technologies),
and possible aggregation scenarios, in order to identify the most
cost-effective way to handle the new traffic. A new application, with its new
traffic volume and pattern, may make the deployment of different technologies,
service providers, or even a different topology economically feasible.
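To make this concrete, here is a minimal sketch of the kind of per-site aggregation and design comparison described above. All of the sites, traffic figures, and tariffs are invented assumptions for illustration; they do not come from any real network or from Ariete®.

```python
# Toy comparison of candidate WAN designs once a new application's traffic
# is added to each site's existing load. All numbers are hypothetical.

SITES = {          # traffic in Mbps
    "HQ":      {"existing": 60.0, "new_app": 25.0},
    "Branch1": {"existing": 8.0,  "new_app": 6.0},
    "Branch2": {"existing": 5.0,  "new_app": 6.0},
}

# Candidate designs: a fixed monthly fee per site plus a per-Mbps tariff.
DESIGNS = {
    "MPLS mesh":        {"fixed_per_site": 900.0, "per_mbps": 35.0},
    "Internet VPN":     {"fixed_per_site": 300.0, "per_mbps": 18.0},
    "Hybrid (HQ MPLS)": {"fixed_per_site": 450.0, "per_mbps": 24.0},
}

def monthly_cost(design: dict) -> float:
    """Total monthly cost of one design for the aggregated traffic of all sites."""
    total = 0.0
    for traffic in SITES.values():
        demand = traffic["existing"] + traffic["new_app"]  # aggregate per site
        total += design["fixed_per_site"] + design["per_mbps"] * demand
    return total

for name, design in DESIGNS.items():
    print(f"{name:18s} -> ${monthly_cost(design):>10,.2f}/month")
```

Even in a toy model like this, changing the new application's per-site demand can change which design comes out cheapest, which is exactly why the evaluation is worth doing before go-live rather than after.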
To help companies address this problem we developed Ariete®. Ariete® is
a software tool that identifies the ideal network structure to support a given
traffic volume; it can simulate several traffic volumes and patterns and
establish the correlation between traffic volume and network cost. The model
provides fast evaluation of multiple traffic volume/pattern scenarios,
showing how the infrastructure cost changes in each scenario.
Through Ariete® it becomes possible to identify the correlation between traffic
volumes, infrastructure cost, and revenues as they relate to the services
offered by the applications. For example, assume the business is considering
offering a new service. Ariete® enables the modeling of business cases,
identifying the correlation between each new application and the cost
involved in implementing it effectively.
Since in most cases a very quick and accurate analysis is possible,
the automation of this process allows the simulations and results to
be compiled within very short time frames. This enables the business to
consider several scenarios, with clear definitions and documentation of the
costs and benefits of each service, before implementing it. By providing
such detailed and accurate information to the business, this process becomes
a powerful decision support tool. It makes it possible to:
- Generate simulations showing how different traffic volumes and patterns
  influence the overall cost of the network structure;
- Establish the correlation between traffic and telecommunications expenditures;
- Simulate future needs and verify how the network's cost will behave as
  traffic increases (assisting in strategic planning and anticipating needs
  and problems);
- Negotiate telecommunications budgets by establishing, with a high level of
  accuracy, a clear correlation between traffic, the services provided, and
  the costs incurred.
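As a rough illustration of this kind of scenario sweep (a hedged sketch with invented tariffs and capacities, not Ariete®'s actual model), the snippet below scales a baseline traffic profile and records how the lowest-cost feasible design, and therefore the telecom budget, responds to growth.

```python
# Hypothetical scenario sweep: scale today's aggregate traffic and see how the
# cheapest feasible design (and its monthly cost) changes. All figures invented.

BASELINE_MBPS = 110.0                      # assumed aggregate traffic today
GROWTH_SCENARIOS = [1.0, 1.25, 1.5, 2.0]   # +0%, +25%, +50%, +100%

# Designs: (fixed monthly cost, cost per Mbps, capacity ceiling in Mbps)
DESIGNS = {
    "current contract": (4_000.0, 30.0, 200.0),
    "upgraded links":   (6_500.0, 22.0, 400.0),
}

def cheapest_feasible(traffic_mbps: float) -> tuple[str, float]:
    """Return the lowest-cost design that can still carry the projected traffic."""
    options = [
        (fixed + per_mbps * traffic_mbps, name)
        for name, (fixed, per_mbps, capacity) in DESIGNS.items()
        if traffic_mbps <= capacity
    ]
    cost, name = min(options)
    return name, cost

for factor in GROWTH_SCENARIOS:
    traffic = BASELINE_MBPS * factor
    design, cost = cheapest_feasible(traffic)
    print(f"traffic {traffic:6.1f} Mbps -> {design:16s} ${cost:>10,.2f}/month")
```

A table of points like this, one per scenario, is the traffic-versus-cost correlation that can then be put in front of the business, and of the service providers, during budget negotiations.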