
    Recommendations for Performance Benchmarking

    Shikhar Puri

    Infosys Technologies Ltd

    44 Electronics City, Hosur Road

    Bangalore 560100, India

    +91 9342842635

    [email protected]

ABSTRACT

Performance benchmarking of an application is becoming essential before production deployment. This paper covers a few recommendations and best practices for the performance benchmarking exercise.

1. Introduction

The necessity of proof of performance for an application, alongside functional proof, is gaining wide acceptance in the industry. Performance testing and benchmarking of an application can be performed before the product is deployed in the production environment or after the production deployment. If the business is expected to grow in the future, the client needs to ascertain whether the application can support the extended business volume on the existing infrastructure and what the potential areas of concern or bottlenecks are in terms of performance. The areas where enhancement, tuning, and upgrades are required need to be identified. Performance testing and benchmarking also provide proof of the performance of the product and set a baseline and benchmark for further enhancements to the application.

    2. Objective for Performance Benchmarking

The objective of performance benchmarking is to eliminate performance bottlenecks in the application and infrastructure components by identifying and tuning them, and to verify that the performance delivered by the system is as expected. This might involve conducting performance tests iteratively on the application against a representative production workload and the data volumes anticipated in production after a significant period of time.

    3. Performance Benchmarking Methodology

The performance testing life cycle runs in parallel to the software development life cycle and starts with requirement gathering for the software application. The performance testing methodology consists of the following phases.

3.1 Planning Phase

The objective of the planning phase is to evolve the test plan and strategy, which involves the following activities and details:

Study and review of the application architecture

Understand the system functionality and flow

Review of the hardware components

Assess the usability of the performance testing tool

Performance expectations for transactions and system behavior

Performance metrics, including server and network utilization

Test approach

Test validation strategy

Collection of the workload profile:

Peak and off-peak time periods

Total and concurrent number of users

Key transactions, their frequency and complexity

Transaction mix with respective user profiles

Usage patterns and think time between transactions

Data access intensity and usage of data
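As a concrete illustration, the collected workload profile can be recorded as structured data that later drives script configuration. The sketch below is a minimal example; all transaction names, rates, and think times are hypothetical.

```python
# Minimal sketch of a recorded workload profile (illustrative values only).
from dataclasses import dataclass

@dataclass
class TransactionProfile:
    name: str                  # key business transaction
    peak_rate_per_hour: int    # frequency at peak load
    mix_percent: float         # share of the overall transaction mix
    think_time_sec: tuple      # (min, max) think time between steps

WORKLOAD = [
    TransactionProfile("login",        3600, 20.0, (2, 5)),
    TransactionProfile("search_order", 7200, 45.0, (5, 15)),
    TransactionProfile("create_order", 1800, 25.0, (10, 30)),
    TransactionProfile("run_report",    360, 10.0, (20, 60)),
]

PEAK_CONCURRENT_USERS = 500    # total concurrent users during peak hours
```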

3.2 Test Suite Development

This phase involves the following:

    Set up test environment

    Creation and validation of test scripts

    Creation of data population scripts

Creation of transaction data pools to be used during the test

    Setup for performance monitoring
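For instance, a virtual user script created in this phase typically reads test data from a data pool and validates each step. The sketch below assumes a plain HTTP application; the URL, form fields, and data file are illustrative only.

```python
# Sketch of a virtual user script driven by a pre-created data pool.
import csv
import random
import time

import requests

with open("order_data_pool.csv") as f:       # data population output
    DATA_POOL = list(csv.DictReader(f))

def virtual_user(base_url: str, iterations: int) -> None:
    session = requests.Session()
    for _ in range(iterations):
        row = random.choice(DATA_POOL)       # fresh test data per iteration
        resp = session.post(f"{base_url}/orders", data=row, timeout=30)
        resp.raise_for_status()              # validate the script step
        time.sleep(random.uniform(2, 5))     # think time from the profile
```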

3.3 Execution of Performance Tests

This phase consists of a number of benchmarking runs, each run with an increased number of concurrent virtual users. This continues until the performance testing objectives are met or the breakpoint of the system is reached. If the system breaks down because of the test environment infrastructure, the test results need to be extrapolated. A typical benchmarking cycle consists of:

    Start performance test and monitoring scripts

Collect test and monitoring results

    Validate results
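The run-until-breakpoint logic can be expressed as a simple control loop. In the sketch below, run_benchmark and objectives_met are hypothetical hooks standing in for the load tool and the monitoring analysis.

```python
# Sketch of the iterative benchmarking loop (hypothetical hooks).

def run_benchmark(concurrent_users: int) -> dict:
    """Hypothetical: start the test and monitors, return collected metrics."""
    raise NotImplementedError("integrate with the load testing tool")

def objectives_met(results: dict) -> bool:
    """Hypothetical: check response time and throughput goals."""
    raise NotImplementedError

def benchmark_campaign(start_users: int = 50, step: int = 50,
                       max_users: int = 1000) -> None:
    users = start_users
    while users <= max_users:
        results = run_benchmark(users)           # one benchmarking run
        if results.get("system_broke_down"):     # breakpoint of the system
            print(f"Breakpoint reached at {users} users; extrapolate results")
            break
        if objectives_met(results):
            print(f"Performance objectives met at {users} users")
            break
        users += step                            # ramp up for the next run
```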

3.4 Analysis and Review

Data collected from the performance tests will be consolidated and analyzed. This includes:

    Analyze monitoring data


…scenario, as multiple transactions are performed during the day and the data for a specific transaction is created gradually on a daily basis. The data is distributed automatically across various discs, and the probability of a disc becoming a bottleneck is reduced. To avoid this issue, the placement of data on discs should be made as close to production as possible. The ideal scenario would be to clone the production data in the test environment, keeping the disc structure similar to that in the production environment.

3.7 Load Injector Machine Setup

The virtual user scripts contain a single transaction or a group of similar transactions which are generally executed sequentially. These scripts are configured to be executed on the load injector machine with the number of iterations dictated by the system workload profile, and they generate the transaction load on the system. A few points which can be considered while configuring the load injector machine and the virtual user scripts are:

Data Caching: Usually the virtual user script for a single transaction, with its assigned number of users, is configured on a single load injector machine. The data pool or prerequisite data for the multiple iterations of the transaction can vary (as configured in the data pool), but the requests from all the users assigned to the candidate script are generated from a single load injector box. If the load injector box caches some of the transaction-specific static data, the response times of further iterations of the candidate transaction would be reduced and would not portray the real-time scenario. The machine settings should be configured properly, or the virtual user scripts and transactions should be distributed appropriately, to avoid the data caching issue.
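One simple guard, sketched below, is to send cache-defeating headers or a unique query parameter from the client side; the session setup and URL are illustrative.

```python
# Sketch: defeating client-side/static-data caching on the injector.
import uuid

import requests

session = requests.Session()
session.headers.update({
    "Cache-Control": "no-cache, no-store",   # discourage caching of responses
    "Pragma": "no-cache",                    # HTTP/1.0 intermediaries
})

# "Cache busting": a unique query parameter forces a fresh fetch of
# otherwise-static resources on every iteration.
resp = session.get(
    "http://app.example.com/catalog",        # illustrative URL
    params={"nocache": uuid.uuid4().hex},
)
```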

Distribution of IPs on the Load Injector: Multiple transaction scripts and virtual users configured on a load injector machine use the same IP address for execution. This in turn means that all the virtual users would be coming from the same subnet, whereas in a real-time scenario only a few of the users would share a subnet, so the test results might differ from the real-time results. Tools are available which can assign multiple IP addresses to the different virtual users on the same load injector box. This generates a picture closer to the real-time scenario.
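At the socket level, IP distribution of this kind amounts to binding each virtual user's client socket to a different aliased address, as in this sketch (the addresses are assumed to be configured as interface aliases on the injector):

```python
# Sketch: distinct source IPs per virtual user via socket binding.
import socket

SOURCE_IPS = ["192.168.1.101", "192.168.1.102", "192.168.1.103"]  # aliases

def connect_from(source_ip: str, server: str, port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((source_ip, 0))        # 0 lets the OS choose the source port
    sock.connect((server, port))
    return sock

# Round-robin assignment of source IPs across virtual users, e.g.:
# sock = connect_from(SOURCE_IPS[user_id % len(SOURCE_IPS)], "app.example.com", 80)
```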

Bandwidth Simulation: The performance benchmarking exercise is normally performed on the server side. The load controller and injector machines are connected to the servers over high-speed LANs, so network congestion and delays in the transactions are not accounted for. Tools are available to define the network bandwidth used by the virtual users on the load injector machines. If the system is locally deployed or decentralized, the benchmark tests should be performed with the real-time network bandwidth. If the system is rolled out globally, with multiple sites injecting load on the system at the same time, the transaction load from the various sites should be allocated to different load injector machines, and these site-specific machines should be simulated with the network bandwidth available at the respective sites. The prerequisite for this scenario is the availability of the network topology of the various production sites. This portrays near real-time network congestion and delays for the system.
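Commercial load tools provide bandwidth emulation natively; the sketch below only illustrates the underlying idea of pacing socket reads to a site's assumed link speed (256 kbit/s here, an arbitrary figure).

```python
# Sketch: crude bandwidth throttling by pacing socket reads.
import socket
import time

def read_throttled(sock: socket.socket, total_bytes: int,
                   kbits_per_sec: int = 256) -> bytes:
    bytes_per_sec = kbits_per_sec * 1000 // 8
    data = bytearray()
    while len(data) < total_bytes:
        part = sock.recv(min(4096, total_bytes - len(data)))
        if not part:                           # connection closed early
            break
        data.extend(part)
        time.sleep(len(part) / bytes_per_sec)  # pace to the target bandwidth
    return bytes(data)
```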

Memory Requirement of Virtual Users: All the users assigned to a load injector box require memory to execute the transaction. The maximum memory consumption of a single virtual user should be derived in advance, and virtual users should be assigned and distributed accordingly, avoiding congestion on the client machine. No more than 70-80% of the available memory on the load injector should be consumed.
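The sizing arithmetic is straightforward, as this sketch with illustrative figures shows:

```python
# Sketch: sizing virtual users per injector from the memory budget.
injector_ram_mb = 8192    # total RAM on the load injector (illustrative)
per_user_mb = 12          # measured peak footprint of one virtual user
ceiling = 0.75            # stay within the 70-80% guideline

max_users = int(injector_ram_mb * ceiling / per_user_mb)
print(f"Assign at most {max_users} virtual users to this injector")  # 512
```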

3.8 Execution and Analysis

A few points which can be considered for test execution and the analysis of test results:

Validation of Transaction Response: Most of the time, the data used during the test is different from the data used while recording or creating the transaction script, so the responses at various steps in the transaction script can differ from what was expected at recording or creation time. To overcome this issue, the virtual user scripts should be devised in such a way that each response is validated for success before further progress, and the next steps in the transaction are executed depending on the response received from the servers.
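A sketch of this validate-before-proceeding pattern, with an illustrative URL and success marker:

```python
# Sketch: abort the transaction when a step's response is not valid.
import requests

def place_order(session: requests.Session, base_url: str, order: dict) -> bool:
    resp = session.post(f"{base_url}/orders", data=order, timeout=30)
    # Check both the status code and an application-level success marker.
    if resp.status_code != 200 or "Order Confirmation" not in resp.text:
        return False                  # stop; skip the remaining steps
    confirm = session.get(f"{base_url}/orders/confirm", timeout=30)
    return confirm.status_code == 200
```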

Profiling of Transaction Steps: The transactions can be further divided into multiple logical steps, and these steps should also be monitored for response or elapsed time. If the transaction response or elapsed time is above the expected response time, the logical steps provide a better picture for understanding the bottleneck within the transaction, and the problematic steps can be looked into further for the removal of performance issues.

Validating Execution of Transactions: After the benchmark test, the simulation tool provides a summary of the test results, including transaction response times, transaction throughput, and so on. The throughput in the test result summary is based on the execution of the steps mentioned in the virtual user script; there can be scenarios where the transaction failed yet the virtual user step was executed, and this would still be counted in the summary. A better way of verifying the transaction throughput is to verify the occurrence of the transactions in the database or the other affected areas. This is easier if the transaction performs create, update, or delete operations. For read-type operations, the opening of a screen or some other logical event can be logged somewhere so that the transaction occurrence or success can be verified.
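For create, update, or delete transactions, the cross-check can be a simple count over the test window. The table, column, and driver below are illustrative (sqlite3 stands in for the production database's driver):

```python
# Sketch: verifying tool-reported throughput against the database.
import sqlite3   # stand-in; use the production database's driver

def verify_throughput(db_path: str, test_start: str, test_end: str) -> int:
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE created_at BETWEEN ? AND ?",
        (test_start, test_end),
    ).fetchone()
    conn.close()
    return row[0]   # compare with the simulation tool's reported count
```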

Transaction Response Time: Generally the transaction response times are presented as the 90th percentile transaction response time, meaning that 90% of the response time occurrences for the candidate transaction fall below the figure marked as the 90th percentile. This is better than presenting the average response time, which averages out the peaks and variance in response times. Sometimes even the 90th percentile figure cannot portray the correct picture, and further analysis should be done before presenting the result summary. Say a transaction has a very low number of iterations (e.g. 2 per test duration) and one of the iterations happened during the initial stage, or while the system was not performing well; the 90th percentile response time would then be the response time of that iteration and would be very high. In this scenario, we can ignore that iteration, with proper justification, and consider the response time of the second iteration. Similarly, if the frequency of the transaction is high and the 90th percentile response time is well above the expected response time, the difference between the 85th percentile and the 90th percentile response times should also be checked. If the difference is small, the transaction has some genuine performance issues; but if the difference is very large, the iteration occurrences should be examined and the reasons for the difference investigated. Most of the time, it is observed that the transaction is performing well but there seem to be some issues…
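The 85th/90th percentile comparison described above is easy to compute; this sketch uses the nearest-rank method and illustrative sample data:

```python
# Sketch: comparing the 85th and 90th percentile response times.

def percentile(samples: list, pct: float) -> float:
    ordered = sorted(samples)
    idx = max(0, int(round(pct / 100 * len(ordered))) - 1)  # nearest rank
    return ordered[idx]

response_times = [1.1, 1.2, 1.2, 1.3, 1.4, 1.4, 1.5, 1.6, 5.9, 6.2]  # seconds
p85 = percentile(response_times, 85)
p90 = percentile(response_times, 90)
expected = 2.0                          # assumed response time goal
if p90 > expected and (p90 - p85) > 1.0:
    print(f"90th pct {p90}s far above 85th pct {p85}s: inspect the outliers")
```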
