5 Years Experience - Performance Tester with JMeter in Cognizant
#jmeter #performance #medium #5years #cognizant
Posted by TechElliptica on 24 Oct 2025

What is the difference between smoke and sanity testing?

Smoke Testing

Smoke testing is performed after receiving a new build to verify that the critical functionalities of the application are working and that the build is stable enough for further testing. It takes a broad and shallow approach, covering essential features end-to-end without going into details. The main objective is to ensure the build’s stability before proceeding to more rigorous testing phases.


Sanity Testing

Sanity testing is carried out after receiving a build with minor changes or bug fixes to confirm that the specific functionality or defects have been addressed correctly. It focuses on a narrow and deep scope, testing only the affected modules or functionalities. The purpose of sanity testing is to validate the correctness of the changes made and ensure they work as expected without performing a full regression test.



Suppose an e-commerce application releases a new build. Testers first perform smoke testing to check whether critical functionalities such as user login, product search, adding items to the cart, and payment processing are working — ensuring the build is stable for further testing. Later, if a bug is fixed in the payment gateway or a small change is made to the discount calculation logic, testers perform sanity testing to validate that the fix or change works correctly without affecting other related functionalities.

What is volume testing in performance testing?

Volume Testing is a type of non-functional performance testing that checks how the application behaves when subjected to a large volume of data (not just users).


It helps determine:

  1. Whether the system can handle huge amounts of data in the database or files.
  2. How the system performs, processes, and retrieves large datasets.
  3. Whether data volume impacts response time, memory, or performance.


Volume testing is especially useful in scenarios such as the following (a data-seeding sketch follows the list):


Banking App - Test system behaviour when 10 million transaction records exist in the DB
E-commerce Site - Load catalog with 1 million products and check search/filter speed
Payment Gateway - Simulate bulk transactions (e.g., 1 lakh in a batch job)
Messaging App - Validate performance when the message table grows to 5 GB+
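
One common prerequisite for volume testing is seeding the database with a realistic data volume. A minimal sketch in Java/JDBC is shown below; the connection URL, table, columns, and row counts are assumptions for illustration, not details from any specific project.

// Hypothetical data-seeding sketch for volume testing (batch inserts keep it fast)
import java.sql.*;

public class SeedTransactions {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/bank_db", "user", "password")) {
            con.setAutoCommit(false);
            String sql = "INSERT INTO transactions (account_id, amount) VALUES (?, ?)";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (int i = 1; i <= 10_000_000; i++) {                  // 10 million rows
                    ps.setInt(1, i % 50_000);                            // spread rows across accounts
                    ps.setBigDecimal(2, java.math.BigDecimal.valueOf(100.00));
                    ps.addBatch();
                    if (i % 10_000 == 0) { ps.executeBatch(); con.commit(); }
                }
                ps.executeBatch();
                con.commit();                                            // persist the final partial batch
            }
        }
    }
}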
What is the basic difference between load and volume testing?

In performance testing, load testing and volume testing often appear similar because both evaluate system behavior under stress. However, they differ in focus and perspective.


Load testing is conducted to evaluate how a system performs when subjected to a specific, expected number of concurrent users or transactions. The primary goal is to assess how the system behaves under normal and peak load conditions by measuring critical performance metrics such as response time, throughput, error rate, and resource utilisation. For example, in a banking application, load testing would validate how efficiently the fund transfer module handles 1,000 concurrent transactions initiated by different users. This helps identify bottlenecks related to server scalability, thread pool configuration, connection handling, and overall system responsiveness under user pressure.

It essentially answers the question:

Can the system handle the expected number of users efficiently without degradation?



Volume testing, on the other hand, focuses on evaluating the system’s performance when processing or managing large volumes of data, rather than a large number of simultaneous users. The objective is to determine whether the application maintains stability, performance, and accuracy as the data size, database volume, or file input size increases significantly. In the same banking context, volume testing would examine how the system performs when processing one million fund transfer records or interacting with a database containing several gigabytes of transaction history. This type of testing highlights potential issues such as slow database queries, inefficient indexing, memory utilization problems, or storage limitations that may emerge when data grows over time.


In simpler terms, load testing measures system performance under user load, while volume testing measures performance under data load.
Load testing ensures the system remains scalable and responsive as user traffic increases, whereas volume testing ensures it remains stable, efficient, and consistent as data volume expands.
Together, both testing types provide a comprehensive view of system reliability and endurance, ensuring that applications can perform optimally under both high user concurrency and high data growth scenarios.



What do you mean by throughput in performance testing?

In performance testing, throughput refers to the amount of work done by the system in a given period of time. It measures how many requests, transactions, or bytes the system can process per second (or minute) under a specific load.


In other words, it indicates how much load your system can handle efficiently per unit of time.


Throughput Metrics


Throughput can be measured in several ways depending on what you’re testing:

Type | Description | Example
Requests per second (RPS) | Number of HTTP requests handled per second | 250 requests/sec
Transactions per second (TPS) | Number of business transactions completed per second | 50 fund transfers/sec
Bytes per second (BPS) | Amount of data processed per second | 1.2 MB/sec sent, 2.5 MB/sec received

In JMeter, throughput is typically displayed as requests per second (req/sec) in the Summary Report, Aggregate Report, or HTML Dashboard.



Example


Let’s say you are performance testing a fund transfer service in a banking application.

  1. You run a load test with 1,000 concurrent users.
  2. The system successfully processes 60 fund transfers per second.
  3. This means your throughput = 60 transactions per second (TPS).

If the same system is expected to handle up to 100 TPS as per the business SLA, then current performance indicates a potential bottleneck or scaling issue that needs optimization.
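
As a hedged illustration of how that 60 TPS figure would be derived from raw run data (the totals below are made up to match the example), the calculation is simply completed work divided by elapsed time:

// Illustrative throughput calculation (numbers chosen to match the 60 TPS example)
public class ThroughputCalc {
    public static void main(String[] args) {
        long completedTransfers = 216_000;   // fund transfers completed in the run
        long windowSeconds = 3_600;          // one-hour steady-state window
        double tps = completedTransfers / (double) windowSeconds;
        System.out.println("Throughput = " + tps + " TPS");   // prints 60.0
    }
}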



Throughput vs Response Time


These two metrics are related but not the same:

Metric | Definition | Relationship
Response Time | Time taken to complete one request | Lower response time → higher throughput
Throughput | Number of requests completed per second | Inversely affected by response time



As load increases, if response time goes up too much, throughput will eventually drop, indicating the system has reached its maximum capacity.
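
A rough way to reason about this relationship is Little's Law, which links concurrent users, response time, and throughput in a closed workload. The sketch below assumes an average think time; the figures are illustrative only.

// Approximate throughput ceiling from Little's Law: users / (response time + think time)
public class LittlesLawSketch {
    public static void main(String[] args) {
        int concurrentUsers = 1_000;
        double responseTimeSec = 0.8;    // average response time per request
        double thinkTimeSec = 4.2;       // pause between user actions
        double maxThroughput = concurrentUsers / (responseTimeSec + thinkTimeSec);
        System.out.println("Approx. max throughput = " + maxThroughput + " req/sec"); // 200.0
    }
}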



Advantages of Measuring Throughput

Measures system capacity: Determines how much load your system can process efficiently.
Indicates scalability: Helps find the breaking point where performance starts degrading.
Validates SLAs: Ensures your system meets required TPS or RPS targets.
Compares builds or environments: Identify improvements or regressions after optimizations.
Assists capacity planning: Helps infrastructure teams size servers, threads, and DB connections.



Throughput in performance testing is the measure of how many requests, transactions, or bytes the system can process per unit of time under a defined load.
It reflects the system’s capacity, stability, and scalability, and is one of the key indicators used to assess overall performance.


Let me talk about a flow.

Suppose I am sending an international payment from one account to another account,

and this data is updated in three tables: the Inward table, the Outward table, and the Balance-Check table.


How will you check this scenario in your project?


You are sending an international payment from Account A to Account B. The transaction affects three tables:

  1. Outward Table: Records debit from the sender account
  2. Inward Table: Records credit to the receiver account
  3. Balance-Check Table: Verifies final balances of both accounts


Understand the Flow


Before testing, you need to clearly understand how the system behaves:

  1. User initiates international payment
  2. System validates account details, currency conversion, limits, and sanctions
  3. Outward table is updated → debit from sender
  4. Inward table is updated → credit to receiver
  5. Balance-check table is updated → reflect final balances


Transactions should be atomic. If any step fails, the transaction should rollback.



Functional Testing Approach


Create Test Data: Accounts with sufficient balance for transfer
Execute Transaction: Initiate fund transfer via UI or API
Validate Outward Table: Check debit entry is correct
Validate Inward Table: Check credit entry is correct
Validate Balance Table: Ensure balances reflect transaction accurately
Negative Scenarios: Insufficient funds, Invalid beneficiary account, Currency mismatch




Performance Testing Approach


Thread Groups in JMeter: Simulate multiple concurrent international payments

Example: 100–500 users initiating payments simultaneously

Samplers:

HTTP Sampler → Fund transfer API

JDBC Sampler → Verify outward, inward, balance-check tables

Assertions:

Response code = 200

Transaction status = SUCCESS

Database entries exist in all 3 tables

Correlation: Capture dynamic values like transaction ID or session token for validation in subsequent steps

Listeners:

Aggregate Report → Throughput, Response Time, Error %

Backend Listener → Monitor server metrics
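
Putting these elements together, a minimal test-plan outline for this flow might look like the sketch below; the endpoint, table names, and column names are assumptions used only for illustration.

Test Plan (sketch)
  Thread Group: 500 users, 120 s ramp-up
    HTTP Request: POST /payments/international (fund transfer API)
    JSON Extractor: capture transactionId from the response
    Response Assertion: response code 200, body contains "SUCCESS"
    JDBC Request: SELECT status FROM outward_table WHERE txn_id = '${transactionId}'
    JDBC Request: SELECT status FROM inward_table WHERE txn_id = '${transactionId}'
    JDBC Request: SELECT balance FROM balance_check_table WHERE txn_id = '${transactionId}'
  Aggregate Report / Backend Listener: throughput, response time, error %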


Validation Strategy


  1. Data Consistency: Ensure that all three tables reflect the same transaction ID
  2. Atomicity Check: If debit fails, credit should not happen
  3. Concurrency Check: No race conditions when multiple payments are processed simultaneously
  4. Performance Metrics:
     - Throughput: How many international transfers/sec
     - Response Time: Avg, 90th, 95th percentile
     - Error %: Transaction failures


Reporting

  1. Functional Report: SQL queries to validate correctness
  2. Performance Report: JMeter HTML Dashboard or CSV → showing response time, throughput, and error %
  3. Observation: Highlight bottlenecks — e.g., slow DB update on outward table, locking issues, etc.


Edge Cases

  1. Maximum transaction amount per day
  2. Multiple currencies
  3. Network failure or DB timeout
  4. Duplicate transaction prevention


In my project, for international fund transfers, I verify that each transaction correctly updates the outward, inward, and balance-check tables. Functionally, I check that debits, credits, and final balances are accurate and atomic. From a performance perspective, I simulate multiple concurrent transactions using JMeter, validating response time, throughput, error rate, and ensuring data consistency across all three tables. Edge cases such as insufficient funds, invalid accounts, and network failures are also tested to ensure robustness.

What is ACID Properties?

How will you maintain data integrity in a scenario where you are testing multiple databases as well as validating multiple APIs?

In the context of databases, ACID properties refer to a set of principles that ensure reliable processing of database transactions.

ACID stands for Atomicity, Consistency, Isolation, and Durability, which are crucial for maintaining data integrity. ACID properties ensure that database transactions are processed reliably and help maintain the integrity of the database.


The four ACID properties are defined as follows:

Atomicity: This ensures that a transaction is treated as a single unit, which either completely succeeds or fails. If any part of the transaction fails, the entire transaction fails, and the database remains unchanged.
Consistency: This ensures that a transaction takes the database from one valid state to another, maintaining all predefined rules, including constraints, cascades, and triggers.
Isolation: This ensures that concurrently executed transactions do not affect each other's execution. Each transaction should appear to be executed in isolation.
Durability: This ensures that once a transaction has been committed, it will survive permanently, even in the event of a system failure.


Example: In a banking system, transferring money from one account to another must adhere to ACID properties, ensuring that if the debit fails, the credit does not occur.
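
To make the atomicity requirement concrete, here is a minimal Java/JDBC sketch of such a transfer; the accounts table and column names are assumptions, not the schema of any real system.

// Both updates commit together, or the whole transfer rolls back (Atomicity)
import java.sql.*;

public class AtomicTransfer {
    public static void transfer(Connection con, long fromId, long toId, double amount) throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement debit = con.prepareStatement(
                 "UPDATE accounts SET balance = balance - ? WHERE id = ?");
             PreparedStatement credit = con.prepareStatement(
                 "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            debit.setDouble(1, amount);  debit.setLong(2, fromId);  debit.executeUpdate();
            credit.setDouble(1, amount); credit.setLong(2, toId);   credit.executeUpdate();
            con.commit();                 // Durability: the transfer persists once committed
        } catch (SQLException e) {
            con.rollback();               // Atomicity: no debit without the matching credit
            throw e;
        }
    }
}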


When conducting performance testing with multiple databases and APIs, maintaining integrity testing can be challenging. Here are some strategies to ensure that ACID properties are upheld:


  1. Transaction Management: Use transaction management techniques to ensure atomic operations across multiple databases. Implement distributed transaction protocols, such as the Two-Phase Commit (2PC), to manage transactions involving multiple databases. (Many times, multiple components are involved, so databases for all components should be updated properly.)
  2. Consistency Checks: Design test cases that include data validation checks before and after transactions to verify that consistency is maintained across all databases and APIs.
  3. Testing Isolation: Use isolation levels to control the visibility of data changes made by one transaction to other transactions. Test different isolation levels to identify the best configuration for your use case.
  4. Durability Testing: Simulate system failures during transactions to ensure that committed transactions are durable. Verify that once a transaction is confirmed, the data persists correctly across databases.


Example: When testing an e-commerce application, ensure that inventory updates are consistent across the product database and the order database after a purchase transaction.

Suppose you are in a healthcare project and you are doing performance testing.

There are many tables we are interacting with.

How will you design the JMeter script to test the databases, and which samplers and assertions will you use?

When designing a JMeter script for performance testing in a healthcare project, particularly when interacting with multiple database tables, it is crucial to ensure that the script mimics real-world usage scenarios and accurately measures the performance of the database under load.


A well-structured JMeter script can help in identifying performance bottlenecks and ensuring database efficiency.


To design an effective JMeter script, follow these steps:

  1. Identify the key use cases that involve database interactions.
  2. Gather the necessary database connection details such as JDBC driver, database URL, username, and password.
  3. Use the JDBC Connection Configuration element to set up the database connection.


Example: The JDBC Connection Configuration might look like this (the pool name and driver class shown are illustrative):
Variable Name for created pool: healthcare_pool
Database URL: jdbc:mysql://localhost:3306/healthcare_db
JDBC Driver class: com.mysql.cj.jdbc.Driver
Username: username
Password: password


Next, add JDBC Request samplers for each database operation you want to test (a parameterised sketch follows the list). For instance:

  1. For a select operation, use a SQL query like: SELECT * FROM patients WHERE age > 30;
  2. For an insert operation: INSERT INTO patients (name, age) VALUES ('John Doe', 45);
  3. For an update operation: UPDATE patients SET age = 46 WHERE name = 'John Doe';
  4. For a delete operation: DELETE FROM patients WHERE name = 'John Doe';
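
To avoid every thread inserting or querying the same row, the JDBC Requests can be parameterised. A hedged sketch is shown below; the CSV file name and column names are assumptions.

CSV Data Set Config
  Filename: patients.csv
  Variable Names: patient_name,patient_age
JDBC Request
  Query Type: Prepared Update Statement
  SQL Query: INSERT INTO patients (name, age) VALUES (?, ?)
  Parameter values: ${patient_name},${patient_age}
  Parameter types: VARCHAR,INTEGER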


For each JDBC Request, you should also include assertions to validate the responses:

Use Response Assertion to check response codes and ensure they match expected values (e.g., 200 for success).
Utilize Duration Assertion to ensure that each query executes within acceptable time limits.



Finally, it’s important to configure a thread group to simulate concurrent users:

Set the number of threads (users) to simulate load.
Adjust the ramp-up period to control the speed at which users are simulated.
Define loop counts to determine how many times the tests will be executed.


By following these steps, you can design a robust JMeter script that effectively tests database performance in a healthcare project, ensuring that the system can handle the expected load while meeting performance criteria.

Suppose you are doing performance testing for a healthcare application in JMeter. How do you maintain correlation in your performance scripts?

How will your next call have data from the first call?

Correlation in performance testing, especially when using JMeter for a healthcare application, is crucial for ensuring that dynamic data from one request can be used in subsequent requests.


This is particularly important in applications where unique identifiers, session tokens, or other data must be passed between requests to simulate real user behavior accurately.


Correlation allows JMeter to dynamically capture values from responses and use them in subsequent requests, ensuring that the performance tests reflect realistic scenarios.
For instance, in a healthcare application, you might have a login request that returns a session token. This token would need to be used in subsequent requests to access protected resources or perform actions on behalf of the user.


To implement correlation in JMeter, follow these steps:

Identify dynamic values: Examine the server responses to find dynamic data that needs to be captured, such as session IDs or tokens.
Use Regular Expression Extractor: Add a Regular Expression Extractor to the request that contains the dynamic data. Configure it to capture the relevant value from the response.
Reference the captured value: Use the captured variable in subsequent requests by referencing it with the syntax ${variable_name} where variable_name is the name given in the Regular Expression Extractor.


Example: If the login response contains a token like this: Bearer abc123xyz, you would set up a Regular Expression Extractor to capture abc123xyz and then reference it in your next API call with ${token_variable}.


// Example Regular Expression Extractor configuration (attached to the login request)
// Reference Name: token_variable
// Regular Expression: Bearer (.+?)
// Template: $1$
// Match No.: 1
// Subsequent requests then send the header: Authorization: Bearer ${token_variable}


Proper correlation in JMeter enables accurate performance testing by ensuring that subsequent requests utilize dynamic data, thus simulating real-world interactions in healthcare applications effectively.

While doing your performance testing for healthcare, have you ever done malfunction testing?

In the context of performance testing within the healthcare sector, malfunction testing is a critical aspect that focuses on evaluating how a system behaves under erroneous conditions or when certain components fail.


This type of testing is essential to ensure that healthcare applications can gracefully handle unexpected issues without compromising patient safety or data integrity.

Malfunction testing helps identify potential weaknesses in the system and ensures that the application can recover from errors effectively.


Example: In a healthcare application, malfunction testing might involve simulating database failures, network outages, or incorrect user inputs to observe the system's response.


During the performance testing phase, we typically perform malfunction testing by following these steps:

Identify critical components: Determine which parts of the system are essential for core functionalities, such as patient data retrieval, appointment scheduling, and billing.
Define failure scenarios: Create scenarios that simulate potential malfunctions, such as a server crash, API failure, or unresponsive third-party services.
Execute tests: Run performance tests by intentionally causing failures in the identified components to observe the system's behaviour and stability.
Monitor system responses: Use monitoring tools to track system performance metrics during malfunction scenarios, including response times, error rates, and recovery times.
Analyse results: Review the data collected to identify weaknesses and areas for improvement, ensuring that the system can maintain performance and reliability even during failures.


Conducting malfunction testing is vital in healthcare performance testing to ensure that systems can withstand and recover from unexpected issues, ultimately safeguarding patient care and maintaining regulatory compliance.

How do you perform endurance and spike testing in JMeter?

Endurance and spike testing are essential performance testing techniques used to assess how a system behaves under varying loads over time.

Apache JMeter is a popular tool for conducting these types of tests due to its versatility and ease of use.


Endurance Testing: This involves testing the system under a significant load for an extended period to identify memory leaks, resource utilization, and performance degradation.
Spike Testing: This tests the system's reaction to sudden increases in load, determining how well it can handle unexpected traffic surges.


Here’s how to perform both endurance and spike testing using JMeter:


Endurance Testing Steps:

Set Up a Thread Group: add a Thread Group that represents the sustained load.
Configure the Thread Properties: set a realistic user count and enable the Scheduler so the test runs for a long duration (for example, 8–12 hours).
Add Samplers: include the HTTP (or other) samplers for the key business flows.
Configure Listeners: prefer lightweight listeners such as the Summary/Aggregate Report or a Backend Listener.
Run the Test: Set the duration for several hours to monitor performance.
Thread Group Configuration:
- Number of Threads: 100
- Ramp-Up Period: 60 seconds
- Loop Count: Forever (for endurance testing)
- Scheduler Duration: several hours (e.g., 8 hours)


Spike Testing Steps:

Create a New Thread Group: dedicate a separate Thread Group to the spike scenario.
Define Sudden Load Increases: keep the ramp-up very short so the full load arrives almost at once.
Add Samplers: reuse the same business-flow samplers as the baseline test.
Configure Listeners: capture response times and error rates during and after the spike.
Run the Test: compare behaviour during the spike and after the load drops back to normal.
Thread Group Configuration:
- Number of Threads: 50 (sudden spike)
- Ramp-Up Period: 10 seconds
- Loop Count: 1 (for spike testing)


Ensure your system monitoring tools are in place to observe memory usage, CPU load, and response times during tests. Analyze results to identify performance bottlenecks and scalability issues.
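
For long endurance runs it is usually better to execute the plan in non-GUI mode; a typical invocation (the file and folder names are illustrative) is:

jmeter -n -t endurance_test.jmx -l endurance_results.jtl -e -o endurance_report

Here -n runs JMeter in non-GUI mode, -t points to the test plan, -l writes the results file, and -e/-o generate the HTML dashboard into the given folder at the end of the run.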


Both endurance and spike testing are critical to ensuring that applications can handle sustained and unexpected loads, leading to improved user satisfaction and system reliability.

Which reports and metrics do you create in your JMeter framework? Which reports do you send to your stakeholders?

In a JMeter framework, various reports and metrics are generated to analyze the performance and behavior of the application under test.

These reports serve different purposes, such as identifying bottlenecks, assessing load capacity, and ensuring that the application meets performance requirements.


Effective reporting is crucial for stakeholders to understand the application's performance and make informed decisions.


Here are some of the key reports and metrics generated in a JMeter framework:

Summary Report: This report provides a high-level overview of the test execution, including total samples, average response time, error percentage, and throughput.
Aggregate Report: Similar to the summary report but includes more detailed statistics, such as min, max, and standard deviation of response times.
Response Time Graph: Visual representation of the response times over the test duration, helping to identify trends and spikes.
Error Rate Report: Highlights the number and percentage of errors encountered during the tests, which is critical for diagnosing issues.
Throughput Report: Displays the number of requests processed per second, which helps in understanding the load handling capacity of the application.
Latency Report: Focuses on the time taken for the server to respond after receiving a request, which is vital for assessing user experience.
Transaction Report: Measures the performance of specific transactions or user journeys, which is useful for validating critical business processes.



For stakeholders, the following reports are typically sent:

  1. Summary Report
  2. Aggregate Report
  3. Error Rate Report
  4. Throughput Report


These reports provide a comprehensive view of the application's performance, highlighting critical metrics that stakeholders can use to assess readiness for production.


Providing detailed and relevant reports to stakeholders ensures that they have the necessary information to make informed decisions about the application’s performance and stability.
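
If stakeholders prefer the HTML Dashboard, it can also be generated after the fact from an existing results file (file and folder names are illustrative):

jmeter -g results.jtl -o html_dashboard

The -g option builds the dashboard from the saved .jtl results, and -o specifies the output folder for the generated report.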

Why do we need the 90th, 95th, and 99th percentile values in performance testing?

What is the importance of this during execution?

In performance testing, the use of percentiles such as 90%, 95%, and 99% plays a critical role in understanding the response time and overall user experience of an application under load.

These percentiles help to summarize the distribution of response times, allowing teams to identify and address performance bottlenecks effectively.


Percentiles provide insight into the distribution of response times rather than just average values.


Example: An average response time of 2 seconds may seem acceptable, but if 20% of requests take longer than 5 seconds, user experience could be negatively impacted.
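
To make this concrete, the sketch below shows how a percentile is read from raw response times using the simple nearest-rank method; the sample values are invented for illustration.

// Reading the 90th percentile from a sorted list of response times (nearest-rank method)
import java.util.Arrays;

public class PercentileDemo {
    public static void main(String[] args) {
        double[] responseTimesMs = {120, 150, 180, 200, 210, 250, 300, 450, 2500, 5200};
        Arrays.sort(responseTimesMs);
        int index = (int) Math.ceil(0.90 * responseTimesMs.length) - 1;
        System.out.println("90th percentile = " + responseTimesMs[index] + " ms"); // 2500.0 ms
        // The average here is about 956 ms, which hides the slow outliers the 90th percentile exposes.
    }
}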


The importance of these percentiles in execution can be categorized into several areas:

Understanding User Experience: The 90th percentile indicates that 90% of users experience response times below this value. This is crucial for ensuring that the majority of users have a satisfactory experience.
Identifying Performance Issues: By analyzing the 95th and 99th percentiles, teams can pinpoint outliers and performance issues that may not be visible in average response times, thus enabling more focused optimization efforts.
Setting Performance Goals: Organizations can set realistic performance targets based on these percentiles, ensuring that the application meets the expectations of its users.
Capacity Planning: Understanding the distribution of response times helps in making informed decisions about server resources and scaling requirements, which is essential for maintaining performance under peak loads.


Utilising 90%, 95%, and 99% percentiles in performance testing is essential for gaining a comprehensive understanding of user experience, identifying performance bottlenecks, and ensuring that applications can handle expected loads efficiently.