6 Years Experience - Full Stack QA with BDD and Database Experience
#6years #tcs #qa #automation #bdd #sql #screening #medium #quick-calls #tataconsultancy
Posted by TechElliptica on 21 Oct 2025

Could you please introduce yourself?

I am an Automation Test Lead with over 6 years of experience in Software Testing, specializing in both manual and automation testing. My core expertise lies in Selenium Automation using Java, where I have successfully implemented robust automation frameworks that have significantly reduced test execution time by up to 40%. I also bring strong experience with tools like UFT, Cucumber, Jenkins, GitHub, and test management platforms such as Jira and Confluence.


Throughout my career, I’ve worked extensively in the banking and insurance domains, with hands-on experience in functional testing, API testing, and DevOps integration. I have led a team of 6 members, showing strong leadership and collaboration skills, while consistently delivering high-quality testing solutions in Agile environments.


Currently, I’m working with TechElliptica on a project for Merv, where I’ve contributed to automation and regression testing, helping reduce manual efforts and accelerate release cycles. I’ve also taken on SRE responsibilities, supporting deployments and ensuring system reliability.


With a Master’s degree in Computer Applications and a passion for continuous learning, I bring a proactive attitude, adaptability, and a strong team spirit to every project I work on. I’m now looking to leverage my experience to further enhance software quality and drive efficiency in testing processes.

You said you are working on both manual and automation testing in your project.

How do you strike a balance between manual and automation?

In my project, I’ve always been involved in both manual and automation testing — and honestly, striking the right balance between the two has been key to maintaining quality, speed, and flexibility throughout the software development lifecycle. It’s not about automating everything just because we can; it’s about being smart and strategic with what we choose to automate.


Automation works best for repetitive and predictable tasks — like smoke, sanity, and regression testing — or for testing critical, high-risk areas such as login, payment processing, or checkout flows. It’s also ideal when the requirements are stable or when tests need to be run across multiple data sets or browsers. In these cases, automation saves a lot of time and effort.


On the other hand, there are scenarios where manual testing still plays a crucial role. Features that are frequently changing, one-off test cases, or parts of the system that are still going through UI/UX changes usually aren't worth automating right away. These areas benefit more from human judgment, flexibility, and creative thinking — and that’s where manual testing shines. It allows us to explore new features, catch usability issues, and think from the user’s perspective in ways automation simply can’t replicate.


I usually follow a risk-based approach to decide what should be automated and what’s better tested manually. I ask questions like: "What’s the risk if this breaks?" or "Is this feature stable and frequently used?" If something is high-risk or business-critical, automation usually makes sense. But if it’s a low-impact feature or still under active development, I stick to manual testing for the time being.


For example, in a banking domain project I worked on, we automated the login and registration processes because they were used frequently and had stable requirements. The payment flow was even more critical, so automation was essential there to catch issues quickly and ensure reliability. However, when it came to UI/UX feedback or design validation, we handled that manually, since it needed human input and visual assessment. As for a new product page — once the functionality was stable — we automated those flows to save time during future regression cycles.


So overall, balancing manual and automation testing isn’t just a technical decision — it’s a thoughtful process of evaluating risk, stability, and value to the team. It's about making testing smarter, not just faster.

Why did you choose to work in QA? With your strong Java experience, you could have opted for the development side as well.

To be honest, when I first started my career, I actually wanted to go into development. That was my initial goal during my college days. But after I got placed through campus and started working in QA for the first six months, my perspective completely changed.


I realised that QA has huge potential and plays a much bigger role than I initially thought. It’s not just about checking if something works — it’s about making sure the entire system is reliable, secure, and meets user expectations. That broader impact really drew me in.

One of the things I enjoy most about QA is that it lets me use both technical skills and analytical thinking. With automation testing, for example, I still get to write code, build frameworks, and solve real-world problems — just like a developer — but I do it with a focus on quality, performance, and user experience.


Another big reason I stuck with QA is the cross-functional collaboration. As a QA, I get to work closely with developers, product owners, business analysts — pretty much everyone involved in the product. I'm part of the entire journey, from understanding requirements to deployment and monitoring, and that makes the work very fulfilling.


Over time, I’ve also seen how much the QA role has evolved. With DevOps, CI/CD, cloud platforms, and even SRE practices, the line between development and QA has become thinner. My Java background has really helped me here — especially in building reliable automation solutions and integrating them into pipelines.


And honestly, what I love most is that QA gives the team confidence. We’re the ones who ensure that what goes out to users actually works in the real world. That’s a big responsibility — and it's the kind of accountable, meaningful role I’ve always wanted to be in.


Could you please explain your automation framework?

In my current project, we use a Hybrid Automation Framework built on Selenium WebDriver with Java, integrated with TestNG for test execution and Maven for build management.


The framework supports both data-driven and modular approaches. We use Apache POI for reading test data from Excel files, and our test scripts are written using Page Object Model (POM) to maintain scalability and reusability.
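
To illustrate the POM structure, here is a minimal sketch of what one of our page classes might look like; the page and locators are hypothetical, for illustration only:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page Object for a login page: locators and actions live in one class,
// so tests stay readable and a locator change touches only this file.
public class LoginPage {
    private final WebDriver driver;

    // Hypothetical locators; real ones depend on the application under test
    private final By usernameField = By.id("username");
    private final By passwordField = By.id("password");
    private final By loginButton = By.id("loginBtn");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void login(String username, String password) {
        driver.findElement(usernameField).sendKeys(username);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(loginButton).click();
    }
}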


For reporting, we have integrated Extent Reports, and for CI/CD, our framework runs on Jenkins with triggers on every code commit.
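
Wiring ExtentReports in is fairly light. A minimal sketch, assuming ExtentReports 5.x; the report path and test names here are arbitrary:

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;

// Minimal ExtentReports wiring: attach an HTML reporter, log a test, flush.
public class ReportDemo {
    public static void main(String[] args) {
        ExtentReports extent = new ExtentReports();
        extent.attachReporter(new ExtentSparkReporter("target/extent-report.html"));

        ExtentTest test = extent.createTest("Login regression");
        test.pass("Dashboard displayed for valid credentials");

        extent.flush(); // writes the report to disk
    }
}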

In terms of my role, I was responsible for enhancing the framework by adding reusable utilities, writing new test scripts, maintaining the object repository, and setting up parallel execution using TestNG XML.


We also used Git for version control, and Allure reports in some modules to better visualize test results.

How many test cases do you have in your current project, and how many have you developed?

In our current project, we follow the BDD approach using the Cucumber framework, which helps ensure better collaboration between QA, developers, and business stakeholders. At present, our automation suite consists of around 300 feature files, covering nearly 1800 scenarios and scenario outlines across different modules.


As part of my individual contribution, I've personally developed around 80 feature files so far. On a daily basis, I typically work on creating 5 to 6 new scenarios, depending on the complexity of the functionality and the scope of the user story.


When creating scenarios, I make sure each one aligns closely with the acceptance criteria and business requirements. I focus on writing clean, reusable steps in the Gherkin format, which not only improves readability but also helps with easier maintenance and scalability of the framework. I also collaborate with the team to review and update existing scenarios to keep them aligned with evolving requirements.

What is the difference between Scenario and Scenario Outline?

In Cucumber-based BDD (Behavior-Driven Development):


A Scenario is used when you want to execute a test case with a specific set of inputs and expected outcomes. It is ideal for validating one functional flow with fixed test data; essentially, when you have only one user input or one particular test condition to verify. For example, if you're testing a login feature for a specific user with predefined credentials, a simple Scenario is appropriate because it focuses on that single, fixed condition.

Scenario: Verify successful login
  Given the user opens the application
  When the user enters "username" and "password"
  And the user clicks the Login button
  Then the user should see the dashboard



Scenario Outline is used for data-driven testing, where the same scenario logic needs to be executed multiple times with different input data. It is accompanied by an Examples table that provides various data combinations. So, if you want to test a login feature for five different sets of usernames and passwords, a Scenario Outline is the better approach. It saves repetition by allowing the same test logic to iterate over multiple inputs, ensuring broader coverage with less code duplication.


Scenario Outline: Verify error message for wrong credentials
  Given the user opens the application
  When the user enters "<username>" and "<password>"
  And the user clicks the Login button
  Then the user should see the error message "<errorMsg>"

Examples:
  | username | password | errorMsg                     |
  |          | invalid  | Empty Username               |
  | invalid  |          | Empty Password               |
  |          |          | Empty Username and password  |
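
For context, each Gherkin step above binds to a Java step definition. Here is a minimal sketch of how those bindings might look with Cucumber-Java; the URL and element locators are hypothetical:

import io.cucumber.java.en.And;
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;

public class LoginSteps {
    private WebDriver driver;

    @Given("the user opens the application")
    public void openApplication() {
        driver = new ChromeDriver(); // in a real suite the driver comes from a shared hook
        driver.get("https://example.com/login"); // hypothetical URL
    }

    // {string} captures both fixed quoted values and Examples placeholders
    @When("the user enters {string} and {string}")
    public void enterCredentials(String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username); // hypothetical locators
        driver.findElement(By.id("password")).sendKeys(password);
    }

    @And("the user clicks the Login button")
    public void clickLogin() {
        driver.findElement(By.id("loginBtn")).click();
    }

    @Then("the user should see the dashboard")
    public void validateDashboard() {
        Assert.assertTrue(driver.findElement(By.id("dashboard")).isDisplayed());
    }

    @Then("the user should see the error message {string}")
    public void validateErrorMessage(String expectedMessage) {
        Assert.assertEquals(driver.findElement(By.cssSelector(".error-message")).getText(), expectedMessage);
    }
}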



What common challenges have you faced in your Java Selenium UI automation?

Despite having a robust test automation framework, we occasionally face specific issues, especially during CI executions. Below are the main challenges and how we address them:



1. Test Flakiness During CI Execution (Jenkins - Headless Mode)


In our CI pipeline (executed via Jenkins), we primarily run tests in headless mode, which sometimes results in flaky test behavior—tests fail in CI but pass locally. To mitigate this:

  1. We have implemented a retry mechanism using the IRetryAnalyzer interface in TestNG to automatically re-run failed tests (a minimal sketch follows this list).
  2. We periodically review and optimize our explicit waits to ensure elements are properly synchronized and UI timing issues are minimized.
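
A minimal sketch of such a retry analyzer; the retry count is an arbitrary choice:

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

// Re-runs a failed test up to MAX_RETRIES times before reporting it as failed.
public class RetryAnalyzer implements IRetryAnalyzer {
    private static final int MAX_RETRIES = 2; // arbitrary; tune per suite
    private int attempt = 0;

    @Override
    public boolean retry(ITestResult result) {
        if (attempt < MAX_RETRIES) {
            attempt++;
            return true; // tell TestNG to re-run this test
        }
        return false; // retries exhausted; report the failure
    }
}

Tests opt in via @Test(retryAnalyzer = RetryAnalyzer.class), or the analyzer can be attached suite-wide through a TestNG listener.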



2. Handling Environment-Specific Data and Test Failures


To ensure reliability across different environments (QA, UAT, etc.) and avoid failures due to hardcoded data:

  1. All environment-specific configurations (e.g., URLs, credentials, DB connections) are externalized in property files such as qa.properties, uat.properties, etc.
  2. These properties are loaded dynamically based on the runtime environment parameter passed via -Denv=qa.
  3. Instead of relying on static test data, we:
     - Generate dynamic data at runtime (e.g., unique email IDs, phone numbers) using utility classes.
     - Leverage backend APIs or direct DB queries to set up required test data when needed.
  4. A centralized ConfigReader class handles configuration loading, while TestDataUtil manages dynamic data generation.
  5. We use a TestContext class to store and retrieve data across test methods, improving test consistency and modularity.

This approach makes our test suite environment-agnostic, reliable, and highly CI/CD compatible.
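
A minimal sketch of how such a ConfigReader might look; the file names, the default environment, and the app.url key are assumptions:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Loads <env>.properties (e.g. qa.properties) based on the -Denv flag.
public final class ConfigReader {
    private static final Properties PROPS = new Properties();

    static {
        String env = System.getProperty("env", "qa"); // default to QA if not passed
        try (InputStream in = ConfigReader.class.getClassLoader()
                .getResourceAsStream(env + ".properties")) {
            if (in == null) {
                throw new IllegalStateException("No properties file for env: " + env);
            }
            PROPS.load(in);
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private ConfigReader() {}

    public static String get(String key) {
        return PROPS.getProperty(key);
    }
}

With this in place, a call like ConfigReader.get("app.url") resolves against whichever properties file -Denv selected at runtime.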



3. Parallel Execution Challenges (Data Conflicts and Session Collisions)


Running tests in parallel can cause issues when:

  1. Tests share the same test data, sessions, or resources.
  2. There’s a lack of test isolation, leading to interference or race conditions.

To address this:

  1. All tests are designed to be independent and stateless, ensuring they don’t depend on or interfere with one another.
  2. We use ThreadLocal<WebDriver> to ensure thread-safe browser sessions, which is crucial for stable parallel execution.
  3. Test data is generated uniquely per thread at runtime to avoid collisions and maintain test integrity.

This ensures our suite scales reliably even during high-concurrency executions.
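
A minimal sketch of the ThreadLocal<WebDriver> pattern; the browser is hardcoded to Chrome here for brevity:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// One WebDriver per thread, so parallel TestNG tests never share a session.
public final class DriverManager {
    private static final ThreadLocal<WebDriver> DRIVER = new ThreadLocal<>();

    private DriverManager() {}

    public static WebDriver getDriver() {
        if (DRIVER.get() == null) {
            DRIVER.set(new ChromeDriver()); // lazily create per thread
        }
        return DRIVER.get();
    }

    public static void quitDriver() {
        WebDriver driver = DRIVER.get();
        if (driver != null) {
            driver.quit();
            DRIVER.remove(); // avoid leaking the thread-local slot
        }
    }
}

Each TestNG thread gets its own driver instance; calling DriverManager.quitDriver() from an @AfterMethod hook keeps sessions from leaking between tests.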


Write Java code to reverse a string while preserving the whitespace.

Yes. The idea is to reverse only the non-space characters while keeping all whitespace at its original indices.


Approach:

  1. We first create a result array with spaces fixed at their original indices.
  2. Then, we place the non-space characters from the original string into the result array in reverse order, skipping over the fixed space positions.


Here's the Java code to do that:

public class ReverseWithWhitespacePreserved {
    public static void main(String[] args) {
        String input = "a b c d e";
        System.out.println("Original: " + input);
        System.out.println("Reversed: " + reversePreservingSpaces(input)); // prints "e d c b a"
    }

    public static String reversePreservingSpaces(String str) {
        char[] inputArray = str.toCharArray();
        char[] result = new char[inputArray.length];

        // Mark spaces in the result array at their original indices
        for (int i = 0; i < inputArray.length; i++) {
            if (inputArray[i] == ' ') {
                result[i] = ' ';
            }
        }

        // Fill non-space characters in reverse order, skipping the fixed space positions
        int j = inputArray.length - 1;
        for (int i = 0; i < inputArray.length; i++) {
            if (inputArray[i] != ' ') {
                while (result[j] == ' ') {
                    j--;
                }
                result[j] = inputArray[i];
                j--;
            }
        }

        return new String(result);
    }
}

What ceremonies do you follow in Agile methodology?

We follow the standard Scrum ceremonies:

Sprint Planning – Define what will be delivered in the sprint.

Daily Stand-up – 15-minute meeting to sync progress and highlight blockers.

Sprint Review/Demo – Demonstrate what was built.

Sprint Retrospective – Reflect on the process and improve.

What is the difference between DELETE, TRUNCATE, and DROP?

All three commands are used to remove data, but they differ in terms of scope, speed, and rollback capabilities:

1) DELETE

Purpose: Deletes specific rows from a table based on a condition.

Syntax:

DELETE FROM table_name WHERE condition;

Transaction Control: It's DML (Data Manipulation Language), so it can be rolled back using ROLLBACK.

Triggers: Fires triggers if defined.

Performance: Slower than TRUNCATE because it logs each row deletion individually.

Example:

DELETE FROM Employees WHERE Department = 'Sales';



2) TRUNCATE

Purpose: Removes all rows from a table quickly.

Syntax:

TRUNCATE TABLE table_name;

Transaction Control: It's considered DDL (Data Definition Language), so cannot be rolled back in most RDBMS (unless inside a transaction block in some systems).

Triggers: Does not fire triggers.

Performance: Much faster than DELETE because it deallocates data pages instead of logging each row.

Example:

TRUNCATE TABLE Employees;


3) DROP

Purpose: Completely removes the table structure and data from the database.

Syntax:

DROP TABLE table_name;


Transaction Control: Also DDL, so it cannot be rolled back.

Triggers: Not applicable because the table itself is removed.

Performance: Very fast, as it removes metadata and all associated data.

Example:

DROP TABLE Employees;


Well, as this was a basic screening round, I am done with my questions.

Do you have any questions for me?

Well, I only asked: "What will be the next step?"



But you can ask the questions below (pick only 1 or 2):


What are the next steps after this round?
Will there be any hands-on technical rounds or coding assessments in the next stages?
Is there a timeline you’re aiming for to close this position?


What does a typical day or week look like for someone in this QA automation role?
Which testing tools and frameworks are most commonly used by your QA team?
How is the QA team structured? Will this role involve more individual work or team collaboration?
Is there a balance between manual and automation testing, or is the focus mostly on automation?
How large is the current QA automation suite, and how often is it maintained or updated?
How closely does the QA team work with developers and product owners?
Are there regular grooming or sprint planning sessions that QA participates in?
Is there a dedicated DevOps or SRE team, or does QA also participate in deployments and monitoring?
Does the company support learning or certification opportunities in QA, cloud, or DevOps?
What kind of growth path is there for QA professionals in this organisation?
Are there opportunities to move into roles like QA lead, SDET, or even DevOps/SRE from this position?
What is the team's vision for automation testing in the future with AI/ML tools?