Performance report

This report summarizes the validity of the run and the data most significant to it, shows the response trend of the 10 slowest pages in the test, and graphs the response trend of each page for a specified interval.

Overall page

The Overall page provides this information:

Summary page

The Summary page summarizes the most important data about the test run, so that you can analyze the final or intermediate results of a test at a glance.

The Summary page displays the following Run Summary information:

The Summary page displays the following Page Summary information:

The Summary page displays the following Page Element Summary information:

If you have set transactions in your test, the Summary page displays the following Transaction information:

Page Performance page

The Page Performance page shows the average response time of the 10 slowest pages in the test as the test progresses. With this information, you can evaluate system response during and after the test.

The bar chart shows the average response time of the 10 slowest pages. Each bar represents a page that you visited during recording. As you run the test, the bar chart changes, because the 10 slowest pages are updated dynamically during the run. For example, the Logon page might be one of the 10 slowest pages at the start of the run, but then, as the test progresses, the Shopping Cart page might replace it as one of the 10 slowest. After the run, the page shows the 10 slowest pages for the entire run.
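Conceptually, the ranking works by keeping a running average response time for each page and re-sorting as new samples arrive. The following sketch illustrates that idea only; it is not the product's implementation, and the class and page names are invented for the example.

  from collections import defaultdict

  class SlowestPagesTracker:
      """Keeps a running average response time per page and reports the N slowest pages."""

      def __init__(self, top_n=10):
          self.top_n = top_n
          self.totals = defaultdict(float)  # page title -> sum of response times (ms)
          self.counts = defaultdict(int)    # page title -> number of samples

      def record(self, page, response_time_ms):
          # Accumulate one response-time sample for a page.
          self.totals[page] += response_time_ms
          self.counts[page] += 1

      def slowest(self):
          # Rank pages by running average response time, slowest first.
          averages = {p: self.totals[p] / self.counts[p] for p in self.counts}
          return sorted(averages.items(), key=lambda item: item[1], reverse=True)[:self.top_n]

  tracker = SlowestPagesTracker()
  tracker.record("Logon", 850.0)
  tracker.record("Shopping Cart", 1220.0)
  tracker.record("Logon", 400.0)   # the Logon average drops as faster samples arrive
  print(tracker.slowest())         # [('Shopping Cart', 1220.0), ('Logon', 625.0)]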

The table under the bar chart provides the following additional information:
  • The minimum response time for each page in the run. Response time is the time between the first request character sent and the last response character received. Response time counters omit page response times for pages that contain requests with status codes in the range of 4XX (client errors) to 5XX (server errors). The only exception is when the failure (for example, a 404) is recorded and returned, and the request is not the primary request for the page. Page response times are always discarded for pages that contain requests that time out. (These rules are illustrated in the sketch after this list.)
  • The average response time for each page in the run. This matches the information in the bar chart.
  • The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms, but this does not mean that their response times are similar. System A might have response times of 11, 12, 13, and 12 ms, while System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater, and its response times are more varied. (The sketch after this list also works through this example.)
  • The maximum response time for each page in the run.
  • The number of attempts per second to access each page. An attempt means that a primary request was sent; it does not include requests within the page.
  • The total number of attempts to access the page.
To display the 10 slowest page element response times, right-click a page and click Display Page Element Responses.
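The response-time rules and the standard-deviation comparison above can be illustrated with a short sketch. The function below is not the product's code; the record layout and field names are assumptions made for the example, and the error-code handling is a simplification of the rules described in the list.

  from statistics import mean, pstdev

  def page_response_time(requests):
      """Return the page response time in ms, or None if the page is discarded.

      'requests' is an assumed list of dicts with 'status', 'primary', 'timed_out',
      'first_char_sent', and 'last_char_received' keys (times in ms).
      """
      # Pages that contain a request that timed out are always discarded.
      if any(r["timed_out"] for r in requests):
          return None
      # 4XX/5XX responses disqualify the page; the exception is a failure that is
      # recorded and returned on a request other than the primary request.
      if any(400 <= r["status"] <= 599 and r["primary"] for r in requests):
          return None
      # Response time: first request character sent to last response character received.
      start = min(r["first_char_sent"] for r in requests)
      end = max(r["last_char_received"] for r in requests)
      return end - start

  # Worked standard-deviation example: both systems average 12 ms, but System B's
  # response times are far more spread out.
  system_a = [11, 12, 13, 12]
  system_b = [1, 20, 25, 2]
  print(mean(system_a), round(pstdev(system_a), 2))  # 12 0.71
  print(mean(system_b), round(pstdev(system_b), 2))  # 12 10.65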

Response vs. Time Summary page

The Response vs. Time Summary page shows the average response trend as graphed for a specified interval. It contains two line graphs with corresponding summary tables. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages.

Response vs. Time Detail page

The Response vs. Time Detail page shows the response trend as graphed for the sample intervals. Each page is represented by a separate line.
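Conceptually, each line is produced by grouping a page's response-time samples into sample intervals and averaging within each interval. A minimal sketch, assuming an interval length and a simple record layout:

  from collections import defaultdict

  INTERVAL_SECONDS = 10  # assumed sample-interval length; set by the schedule in practice

  def response_trend(samples):
      """samples: iterable of (page, timestamp_seconds, response_time_ms) tuples.
      Returns {page: {interval_index: average response time in ms}}."""
      buckets = defaultdict(lambda: defaultdict(list))
      for page, timestamp, response_ms in samples:
          buckets[page][int(timestamp // INTERVAL_SECONDS)].append(response_ms)
      return {page: {i: sum(v) / len(v) for i, v in intervals.items()}
              for page, intervals in buckets.items()}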

The Average Page Response Time graph shows the average response of each page for each sample interval. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages. The table after the graph provides the following additional information:

Page Throughput page

The Page Throughput page provides an overview of the rate at which requests are transferred per sample interval. If the number of requests and the number of hits are not close, the server might be having trouble keeping up with the workload.

If you add virtual users during a run and watch these two graphs in tandem, you can monitor the ability of your system to keep up with the workload. If the page hit rate stabilizes while the active user count continues to climb, and the system is well tuned, the average response time naturally increases. Response times lengthen because the system is running at its maximum effective throughput level and is effectively throttling the rate of page hits by responding to requests more slowly.
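One way to see why this happens is through Little's Law for a closed workload: page hit rate ≈ active users / (average response time + think time). Once the server is saturated, the hit rate is pinned at its maximum, so adding users can only lengthen the average response time. The numbers below are invented purely to illustrate the relationship:

  # Little's Law for a closed workload, once the server is saturated:
  #   max_throughput = users / (response_time + think_time)
  # Solving for response_time shows that it must grow as users are added.

  think_time = 1.0        # seconds a user waits between pages (assumed)
  max_throughput = 100.0  # pages per second the server can sustain (assumed)

  for users in (200, 400, 800):
      response_time = users / max_throughput - think_time
      print(f"{users} users -> average response time of about {response_time:.1f} s")
  # 200 users -> average response time of about 1.0 s
  # 400 users -> average response time of about 3.0 s
  # 800 users -> average response time of about 7.0 s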

Server Throughput page

The Server Throughput page lists the rate and number of bytes that are transferred per interval and for the entire run. The page also lists the status of the virtual users for each interval and for the entire run. The bytes sent and bytes received throughput rate, which is computed from the client perspective, shows how much data Rational® Performance Tester is pushing through your server. Typically, you analyze this data with other metrics, such as the page throughput and resource monitoring data, to understand how network throughput demand affects server performance.
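For reference, the throughput rates follow directly from the per-interval byte counts; a minimal sketch, assuming a simple tuple layout for the intervals:

  def throughput_rates(intervals):
      """intervals: list of (interval_seconds, bytes_sent, bytes_received) tuples.
      Returns per-interval (sent_rate, received_rate) pairs and the overall pair,
      in bytes per second, measured from the client perspective."""
      per_interval = [(sent / secs, received / secs) for secs, sent, received in intervals]
      total_secs = sum(secs for secs, _, _ in intervals)
      overall = (sum(sent for _, sent, _ in intervals) / total_secs,
                 sum(received for _, _, received in intervals) / total_secs)
      return per_interval, overall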

Server Health Summary page

The Server Health Summary page gives an overall indication of how well the server is responding to the load.

Server Health Detail page

The Server Health Detail page provides specific details for the 10 pages with the lowest success rate.

Caching Details page

The Caching Details page provides specific details on caching behavior during a test run.

Resources page

The Resources page shows all resource counters that were monitored during the schedule run.

Page Element Responses

The Page Element page shows the 10 slowest page element responses for the selected page.

Page Response Time Contributions

The Page Response Time Contributions page shows how much time each page element contributes to the overall page response time, as well as the client delay time and connection time.
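The contribution of each element is essentially its share of the total page response time. A minimal sketch of that breakdown, with invented element names and durations, and with client delay and connection time treated as contributors for the example:

  def contributions(durations_ms):
      """durations_ms: {element name: time in ms}.
      Returns {element name: percentage of the overall page response time}."""
      total = sum(durations_ms.values())
      return {name: 100.0 * ms / total for name, ms in durations_ms.items()}

  breakdown = contributions({
      "primary request": 620.0,
      "style.css": 95.0,
      "logo.gif": 40.0,
      "connection time": 120.0,
      "client delay": 125.0,
  })
  # e.g. {'primary request': 62.0, 'style.css': 9.5, 'logo.gif': 4.0,
  #       'connection time': 12.0, 'client delay': 12.5}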

Page Size

This page lists the size of each page of the application under test. Page size contributes to the response time calculation. If part or all of a page is cached, the requests that are served from the cache do not contribute to the total page size. For a schedule, you can compare the number of hits to each page against the number of virtual users.
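In other words, the total page size is the sum of the response sizes of only those requests that actually went to the server. A minimal sketch, assuming a simple request record:

  def page_size_bytes(requests):
      """requests: list of (response_size_bytes, served_from_cache) tuples for one page.
      Requests that are satisfied from the cache do not add to the total page size."""
      return sum(size for size, from_cache in requests if not from_cache)

  # A page with an 18 KB primary response, a 6 KB image, and one cached stylesheet:
  print(page_size_bytes([(18432, False), (6144, False), (2048, True)]))  # 24576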

Errors

This page lists the number of errors that occurred in the test or schedule and the corresponding actions that were taken. The Error Conditions section displays the number of times each error condition was met. The Error Behavior section displays how each error condition was handled. You define how errors are handled on the Advanced tab of the test editor, schedule editor, or compound test editor.
