Network Performance Metrics Defined

Latency

  • Refers to the amount of time (usually measured in milliseconds) it takes for data to travel from one location to another across a network (or across the Internet, which is itself a network).
  • Is sometimes referred to as delay, because your software is often waiting to execute some function while data travels back and forth across the network. For example, Internet Explorer can’t display a story from CNN until CNN’s Web servers respond to your request for that page.
  • Is often less than 100 milliseconds on today’s high-speed networks, which has very little impact on Web surfing.
If you’d like a more thorough explanation, see It’s the Latency, Stupid and It’s Still the Latency, Stupid.
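To make the round-trip idea concrete, here is a minimal sketch in Python that times a few small probes against a local echo server. All of the function names here are our own, and a real measurement would probe a remote host, where travel time across the network dominates; against localhost the numbers mostly show measurement overhead.

```python
import socket
import threading
import time

def start_echo_server(host="127.0.0.1"):
    """Start a tiny TCP echo server on an ephemeral port; return its port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(1024):
                conn.sendall(data)  # echo everything back
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

def measure_rtt(host, port, probes=5):
    """Return round-trip times, in milliseconds, for a few small probes."""
    rtts = []
    with socket.create_connection((host, port)) as sock:
        for _ in range(probes):
            start = time.perf_counter()
            sock.sendall(b"ping")
            sock.recv(1024)  # wait for the echo to come back
            rtts.append((time.perf_counter() - start) * 1000.0)
    return rtts

if __name__ == "__main__":
    port = start_echo_server()
    samples = measure_rtt("127.0.0.1", port)
    print(f"min/avg/max RTT: {min(samples):.3f}/"
          f"{sum(samples)/len(samples):.3f}/{max(samples):.3f} ms")
```

This is essentially what the familiar `ping` utility reports, except that `ping` uses ICMP packets rather than a TCP connection.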

Generally, you only need to be concerned about latency in two situations:

  • First, when your staff and patrons complain about a slow connection, high latency could be part of the problem, though you might not be able to do anything about it other than contact your ISP and ask them to address the issue.
  • Second, if you’re planning to install Voice over IP (VoIP) or any other application that relies on live, real-time transmission of video or audio, you need to ask your service provider about their latency. Real-time voice and video applications are sensitive to network delays. For instance, with VoIP on a high-latency connection, you’ll notice that the audio is choppy, with lots of pauses and dropped syllables. Jitter refers to variation in the amount of latency, and it has a similar negative impact on real-time communication.
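Jitter is easy to compute once you have a series of latency samples. Here is a small sketch (the function name is ours, and we use the mean absolute difference between consecutive samples, which is one common definition; RFC 3550 describes a smoothed variant used by VoIP software):

```python
def mean_latency_and_jitter(rtts_ms):
    """Given round-trip times in ms, return (mean latency, jitter).

    Jitter here is the mean absolute difference between consecutive
    samples -- one common definition among several.
    """
    mean = sum(rtts_ms) / len(rtts_ms)
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return mean, jitter

# A steady 50 ms link vs. a link that averages 50 ms but varies wildly:
steady = [50, 50, 51, 50, 49]
choppy = [20, 80, 25, 95, 30]
print(mean_latency_and_jitter(steady))  # → (50.0, 0.75)
print(mean_latency_and_jitter(choppy))  # → (50.0, 62.5)
```

Note that both links have identical average latency; it is the second link’s high jitter that would make a VoIP call sound choppy.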


Bandwidth and throughput

These two terms are sometimes used interchangeably, and though they are related, they’re not quite the same. They both refer to the amount of data transferred between two points on a network in a given period of time. In other words, how many bits per second can you send across your network or over your Internet connection?

On a day-to-day basis, you’ll usually see them measured in Kbps (kilobits per second), Mbps (megabits per second) or Gbps (gigabits per second). Bandwidth generally refers to a theoretical maximum, while throughput is a real-world, practical measurement. The distinction is relevant because ISPs will usually advertise their bandwidth, which is often higher than the throughput that you’ll actually receive. In other contexts, you’ll see the terms bandwidth, throughput and speed used interchangeably.
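The gap between advertised bandwidth and measured throughput is simple arithmetic: count the bits that actually arrived and divide by the time it took. A quick sketch with made-up numbers for illustration:

```python
def throughput_mbps(bytes_transferred, seconds):
    """Real-world throughput: bits actually moved per second, in Mbps.

    Network speeds use decimal megabits, so 1 Mbps = 1,000,000 bits/s.
    """
    return bytes_transferred * 8 / seconds / 1_000_000

# Hypothetical download: a 75 MB file that took 60 seconds on a line
# advertised as 20 Mbps.
advertised_mbps = 20.0
actual = throughput_mbps(75_000_000, 60)
print(f"throughput: {actual:.1f} Mbps "
      f"({actual / advertised_mbps:.0%} of advertised bandwidth)")
# → throughput: 10.0 Mbps (50% of advertised bandwidth)
```

Also note the factor of 8: file sizes are quoted in bytes, while line speeds are quoted in bits, which is a frequent source of confusion when comparing the two.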

Bandwidth vs. latency

If you’re still having trouble grasping the difference between latency and bandwidth (or throughput), this analogy from the Gentoo Linux wiki might help: “Latency is a measure of the time a packet needs to get from point A to point B. Bandwidth measures the amount of data that got from A to B in a certain time. So, if you were to take a dictionary to your friend on the other side of town, your bandwidth would be good, but the latency would be bad (the time spent driving, to be exact). However, if you were to phone your friend and start reading the dictionary to him, the latency would be lower, but the bandwidth would be substantially less than in the first example.”

Uptime or responsiveness

Uptime, sometimes referred to as availability or responsiveness, refers to the amount of time that a computer or a network connection is functioning and usable.

If you’re buying a leased line, the ISP’s uptime guarantee should be written into the Service Level Agreement. You also want to measure the uptime of your own hardware and software to see if a device has a recurring problem.
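An uptime guarantee translates directly into allowed downtime, which is often the easier number to reason about when reading a Service Level Agreement. A quick sketch (the function name is ours, and it assumes a 30-day month):

```python
def allowed_downtime_minutes(uptime_pct, days=30):
    """Minutes of downtime per period permitted by an uptime guarantee."""
    return days * 24 * 60 * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):.1f} min/month")
```

The jump from 99% to 99.9% shrinks permissible downtime from roughly seven hours a month to about 43 minutes, which is why each extra "nine" in an SLA tends to cost significantly more.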

Hardware and software

Your network relies on switches, servers, routers and firewalls, and network monitors can usually track metrics such as CPU utilization, remaining hard drive space and memory use on those devices. Also, by sending messages to your Web site, your OPAC and other key applications, your network monitor can track the responsiveness of mission-critical services and software.
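As one example of the kind of check a monitor polls repeatedly, this sketch (the function name and threshold are our own) watches remaining disk space using Python’s standard library:

```python
import shutil

def disk_space_alert(path="/", min_free_gb=5.0):
    """Return a warning string if free space on `path` falls below the
    threshold, else None -- the kind of check a network monitor runs
    on a schedule, alerting staff when it fails."""
    usage = shutil.disk_usage(path)
    free_gb = usage.free / 1e9
    if free_gb < min_free_gb:
        return f"LOW DISK: {free_gb:.1f} GB free on {path}"
    return None

print(disk_space_alert("/") or "disk OK")
```

Commercial and open source monitors wrap dozens of checks like this one, add scheduling and alerting, and use standard protocols such as SNMP to collect the same numbers from switches and routers.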


There are hundreds of data points you could track on your network, so you’ll have to spend some time talking to your vendor or wading through the documentation.