
Suppose you join a project and the team asks you to write test cases for the login screen.
What would be your first 10 scenarios for the login screen?
Positive Scenarios (Valid Login)
Valid username and valid password - Expected Result: User logs in successfully and is redirected to the dashboard/homepage.
Login with email instead of username (if supported) - Expected Result: User is authenticated and logged in.
Remember Me checkbox functionality - Expected Result: User stays logged in even after closing and reopening the browser.
Login with different valid roles (Admin, User, etc.) - Expected Result: User logs in and is directed to role-specific pages.
Negative Scenarios (Invalid Login)
Invalid username and valid password - Expected Result: Error message like “Invalid username or password.”
Valid username and invalid password - Expected Result: Error message shown.
Invalid username and invalid password - Expected Result: Error message shown.
Empty username and password fields - Expected Result: Form validation messages, e.g., “Username is required.”
SQL injection attempt in username or password fields - Expected Result: Input is sanitized; no backend error or login.
Cross-site scripting (XSS) input - Expected Result: Input is not executed as code; handled securely.
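The negative scenarios above translate naturally into a data-driven Cucumber Scenario Outline. The sketch below is illustrative only; the step wording, error text, and test data are assumptions, not a fixed implementation:

```gherkin
Feature: Login - negative scenarios

  Scenario Outline: Login fails with invalid or malicious input
    Given the user is on the login page
    When the user enters username "<username>" and password "<password>"
    And the user clicks the Login button
    Then an error message "<error>" is displayed

    Examples:
      | username    | password  | error                         |
      | wrong_user  | Valid@123 | Invalid username or password. |
      | valid_user  | wrongpass | Invalid username or password. |
      | wrong_user  | wrongpass | Invalid username or password. |
      |             |           | Username is required.         |
      | ' OR 1=1 -- | anything  | Invalid username or password. |
```

The last row doubles as a basic SQL injection check: the input should be treated as a literal string and rejected, never executed.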
Suppose you're joining a new project.
What is your approach to understanding a new project?
When I join a new project, my first approach is to gain a clear understanding of the product from both a business and technical perspective.
- Start with Documentation: I begin by reviewing any available product documentation, user manuals, requirement specs (like BRD/FRD), and test cases to get an overview of the product features and workflows.
- Attend Knowledge Transfer (KT) Sessions: I actively participate in KT sessions with the development team, QA team, and business analysts to understand the architecture, business logic, and technical stack.
- Explore the Application: Hands-on exploration is key. I go through the application myself — like a user — to get a practical understanding of workflows, UI elements, and behavior.
- Ask Questions: I don't hesitate to ask questions when something is unclear. I interact with team members across QA, Dev, and Product to fill in any gaps in understanding.
- Understand Testing Scope: From a QA perspective, I try to identify what’s already tested, what’s automated, and what areas are high-risk. This helps in planning my testing strategy.
- Review Backlog & Sprint Items: I check Jira or whatever task management tool is being used to understand current priorities and how the product is evolving.
Suppose after your sprint planning call the requirement is not clear. How will you handle this situation?
If I come across an unclear requirement, the first thing I do is seek clarification as early as possible. I reach out directly to the Business Analyst, Product Owner, or relevant stakeholder to understand the intent behind the requirement. I prepare specific questions in advance to make the discussion productive and to avoid assumptions.
While waiting for clarification, I focus on areas of the project where requirements are clear, so the overall testing timeline is not impacted. If necessary, I also refer to past similar features or documentation to get some context.
Additionally, I document all clarifications and updates to ensure transparency and traceability. This helps avoid confusion later in the sprint and ensures the entire team is aligned.
Overall, I believe that open communication and early engagement are key to handling unclear requirements effectively.
What challenges do you face during project testing?
In my experience, one of the main challenges during project testing is managing incomplete or changing requirements. Often, requirements evolve during the development cycle, which can lead to gaps in test coverage or the need to constantly update test cases.
Another common challenge is ensuring adequate test data. In some cases, realistic and varied test data isn't readily available, which can impact the accuracy and effectiveness of testing—especially for edge cases.
Time constraints are also a significant factor. With tight deadlines, there's always pressure to deliver quickly, which can limit the time available for thorough testing or regression cycles. Balancing speed with quality is a constant challenge.
Additionally, coordination between development, QA, and business teams can be difficult. Miscommunication or delays in clarifications can slow down testing and lead to defects slipping through.
Lastly, in automation testing, maintaining scripts when the UI frequently changes or when the application is still unstable can be challenging. It requires a robust strategy to ensure the automation suite remains reliable and doesn’t generate false positives or negatives.
What is the difference between Scenario and Scenario Outline?
In Cucumber-based BDD (Behavior Driven Development),
Scenario is used when you want to execute a test case with a specific set of inputs and expected outcomes. It is ideal for validating one functional flow with fixed test data—essentially, when you have only one user input or one particular test condition to verify. For example, if you're testing a login feature for a specific user with predefined credentials, using a simple Scenario is appropriate because it focuses on that single, fixed condition.
Scenario Outline is used for data-driven testing, where the same scenario logic needs to be executed multiple times with different input data. It is accompanied by an Examples table that provides various data combinations. So, if you want to test a login feature for five different sets of usernames and passwords, a Scenario Outline is the better approach. It saves repetition by allowing the same test logic to iterate over multiple inputs, ensuring broader coverage with less code duplication.
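The difference is easiest to see side by side. In this minimal sketch, the feature name, step wording, and data are hypothetical:

```gherkin
Feature: Login

  # Scenario: one fixed set of inputs, one execution
  Scenario: Successful login for a known user
    Given the user is on the login page
    When the user logs in with username "john_doe" and password "Pass@123"
    Then the user is redirected to the dashboard

  # Scenario Outline: same steps, executed once per Examples row
  Scenario Outline: Login attempts with multiple credential sets
    Given the user is on the login page
    When the user logs in with username "<username>" and password "<password>"
    Then the login result is "<result>"

    Examples:
      | username | password  | result  |
      | john_doe | Pass@123  | success |
      | jane_doe | wrongpass | failure |
```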
Suppose you are testing login functionality.
Write a feature file for this functionality.
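A sample feature file for the login functionality is shown below. The step definitions, credentials, and page names are assumptions for illustration; in a real project they would match the application's actual behavior and glue code:

```gherkin
Feature: Login functionality
  As a registered user
  I want to log in with my credentials
  So that I can access my account

  Background:
    Given the user is on the login page

  Scenario: Successful login with valid credentials
    When the user enters username "valid_user" and password "Valid@123"
    And the user clicks the Login button
    Then the user is redirected to the dashboard

  Scenario: Login fails with an invalid password
    When the user enters username "valid_user" and password "wrongpass"
    And the user clicks the Login button
    Then an error message "Invalid username or password." is displayed

  Scenario: Remember Me keeps the user logged in
    When the user enters username "valid_user" and password "Valid@123"
    And the user selects the "Remember Me" checkbox
    And the user clicks the Login button
    And the user closes and reopens the browser
    Then the user is still logged in
```

The `Background` section factors out the common precondition so each scenario stays focused on its own behavior.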
Which tool do you use for test case management in your project?
In my current project, we use JIRA as our primary project management tool to manage user stories, epics, and sprints.
For test case management, we have integrated the Xray plugin, which allows us to create, organize, and execute manual and automated test cases directly within JIRA.
We also use JIRA for defect management, where defects are logged, prioritized, and linked to their corresponding test cases and user stories for complete traceability.
Which metrics do you provide in your test case reports?
In our test case reports, we track key metrics such as total, executed, passed, failed, and blocked test cases.
We also include defect-related metrics like defect density, defect leakage, severity distribution, and reopen rate.
Additionally, we maintain traceability metrics to ensure all requirements are covered by test cases.
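As a concrete illustration (the numbers below are hypothetical), two of these metrics are commonly computed as:

```latex
\text{Defect Density} = \frac{\text{Total defects found}}{\text{Size of the release (e.g., KLOC)}}
  \qquad \text{e.g., } \frac{30 \text{ defects}}{10 \text{ KLOC}} = 3 \text{ defects/KLOC}

\text{Defect Leakage} = \frac{\text{Defects found after QA (UAT/production)}}{\text{Total defects found}} \times 100\%
  \qquad \text{e.g., } \frac{5}{50} \times 100\% = 10\%
```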
These reports are generated from Xray and shared with the stakeholders through JIRA dashboards and weekly summary reports.

