PMI's Process Owner vs. Process Manager

PMI (the Project Management Institute) has recently introduced new content into its curriculum: the Process Owner and the Process Manager. The distinction between these two roles appears to originate from ServiceNow's influence.

In small organizations the same person wears both hats, but in larger organizations these two roles may be split between two people. In short, the Process Owner is a senior person responsible for "working on" the process to improve it, while the Process Manager "works in" the process to execute it efficiently. A description of each role follows:

Job Description for Process Owner:

Process Owners are responsible for the end-to-end oversight of a particular business process within an organization. Their main responsibilities include designing, implementing, monitoring, and continuously improving the process to ensure it meets the organization's goals and objectives. Process owners also ensure that the process is compliant with regulatory requirements and industry standards. They work closely with cross-functional teams to identify areas for improvement and implement changes that increase efficiency, reduce costs, and enhance quality. Other key responsibilities of a process owner include:

  • Developing and maintaining process documentation, including standard operating procedures (SOPs), process flowcharts, and process metrics.
  • Monitoring process performance using key performance indicators (KPIs) and other metrics, and identifying areas for improvement. (Note: this bullet appears under both roles, as it is an ongoing, shared point of review and discussion.)
  • Leading process improvement initiatives, including process re-engineering and process automation.
  • Collaborating with other process owners to ensure that processes are integrated and aligned across the organization.
  • Communicating process changes to stakeholders, including senior management, process users, and customers.
  • Providing training and support to process users to ensure that they understand and follow the process.

Job Description for Process Manager:

Process managers are responsible for the day-to-day management of a particular business process within an organization. They ensure that the process is executed efficiently and effectively, and that process users comply with the process requirements. Process managers work closely with process owners and cross-functional teams to identify areas for improvement and implement changes that increase efficiency, reduce costs, and enhance quality. Other key responsibilities of a process manager include:

  • Ensuring that the process is executed in compliance with regulatory requirements and industry standards.
  • Monitoring process performance using KPIs and other metrics, and identifying areas for improvement. (Note: this bullet appears under both roles, as it is an ongoing, shared point of review and discussion.)
  • Providing training and support to process users to ensure that they understand and follow the process.
  • Identifying and addressing process issues and bottlenecks that impact process performance.
  • Collaborating with other process managers and process owners to ensure that processes are integrated and aligned across the organization.
  • Communicating process changes to stakeholders, including senior management, process users, and customers.
  • Developing and maintaining process documentation, including SOPs, process flowcharts, and process metrics.

Overall, while both roles are involved in managing and improving business processes, the process owner has a more strategic and high-level focus, while the process manager has a more operational and hands-on focus. The process owner is responsible for setting the direction of the process, while the process manager is responsible for executing the process according to the owner's direction. The process owner is more involved in initiating and leading process improvement initiatives, while the process manager is more involved in implementing and monitoring process changes on a day-to-day basis.

What is Model-based Testing in Software?

Model-based testing (MBT) is a software testing technique that uses models of the software system under test to automatically generate and execute test cases. The models used in MBT can be of different types, such as state diagrams, decision tables, flowcharts, or UML models. These models describe the expected behavior and interactions of the software components and can be used to generate test cases that cover all possible scenarios.

The main goal of model-based testing is to improve test coverage and reduce the time and effort required for manual test case design and execution. It also helps detect defects earlier in the development cycle, which can reduce the cost of fixing them. MBT can be applied to different software testing levels, such as unit testing, integration testing, and system testing.

The process of model-based testing typically involves the following steps:

  1. Model creation: Create a model of the software system under test that describes its behavior and interactions. This model is typically created using a modeling language or tool.

  2. Test case generation: Use the model to generate test cases automatically that cover all possible scenarios. The test cases are designed to validate the behavior of the software components and ensure that they meet the specified requirements.

  3. Test execution: Execute the generated test cases and record the results. The test execution can be performed manually or automated using a testing tool.

  4. Analysis and reporting: Analyze the results of the test cases and generate reports that provide insight into the software's performance and behavior. The reports can be used to identify defects and performance issues that need to be addressed.
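The four steps above can be sketched in a few lines of code. The toy state-machine model of a login flow below is purely illustrative: the states, events, and the system under test are all assumptions, not part of any real MBT tool.

```python
# Model-based testing sketch: derive test cases from a toy state-machine
# model of a hypothetical login flow, then run them against an
# implementation. Everything here is illustrative.

# 1. Model creation: each (state, event) pair maps to an expected new state.
MODEL = {
    ("logged_out", "valid_login"): "logged_in",
    ("logged_out", "invalid_login"): "logged_out",
    ("logged_in", "log_out"): "logged_out",
}

def system_under_test(state, event):
    """Hypothetical implementation the model describes."""
    if state == "logged_out" and event == "valid_login":
        return "logged_in"
    if state == "logged_in" and event == "log_out":
        return "logged_out"
    return state  # all other events leave the state unchanged

# 2. Test case generation: one case per modeled transition.
cases = [{"start": s, "event": e, "expected": end}
         for (s, e), end in MODEL.items()]

# 3. Test execution: compare implementation behavior to the model.
results = [system_under_test(c["start"], c["event"]) == c["expected"]
           for c in cases]

# 4. Analysis and reporting: tally pass/fail.
print(f"{sum(results)}/{len(results)} model-derived tests passed")
```

Real MBT tools generate far richer test suites (path coverage, guards, data), but the shape is the same: the model is the single source of truth, and test cases fall out of it mechanically.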

MBT is a powerful software testing technique that can help organizations improve their testing efficiency and effectiveness by generating test cases automatically and increasing test coverage. It can be used to test different types of software, such as embedded systems, web applications, and mobile apps.

Great Interview Questions When Hiring a Software Tester

Here are some interview questions that can be useful when hiring a software tester:

  1. Can you describe your experience with software testing? What types of testing have you performed (functional, regression, performance, etc.), and what tools have you used?

  2. Can you describe your experience with test automation? What tools and frameworks have you used, and what types of tests have you automated?

  3. Can you walk me through your testing process? How do you plan tests, write test cases, and execute tests? How do you track and report on defects?

  4. Can you give an example of a time when you found a critical defect in a software application? How did you discover the defect, and how did you work with the development team to resolve it?

  5. How do you stay up-to-date with the latest trends and best practices in software testing? What resources do you use to stay informed?

  6. Can you describe your experience with exploratory testing? How do you approach exploratory testing, and what techniques do you use?

  7. How do you prioritize testing activities? What factors do you consider when deciding which tests to perform and in what order?

  8. Can you give an example of a time when you had to work under tight deadlines? How did you manage your time and prioritize your testing activities to ensure that the software met its quality goals?

  9. Can you describe your experience with performance testing? What tools and frameworks have you used to measure software performance, and what metrics do you track?

  10. How do you work with other team members, such as developers and product managers, to ensure that software meets the requirements and quality goals? What strategies do you use to communicate and collaborate effectively?

These interview questions can help you assess a candidate's experience, skills, and approach to software testing, as well as their ability to work effectively with other team members and adapt to different situations. You can also ask follow-up questions to get more detail and context around a candidate's responses.

A Summary of Common Software Development Life-cycles

The software development life-cycle (SDLC) is the process of planning, designing, building, testing, and deploying software. There are various models for the SDLC, each with its own advantages and disadvantages. Here are some of the most common models:

  1. Waterfall model: The waterfall model is a linear, sequential approach to software development, where each stage is completed before the next one begins. The stages include requirements gathering, design, implementation, testing, and maintenance. This model works well for projects with well-defined and stable requirements, but can be inflexible and slow to respond to changing requirements.

  2. Agile model: The agile model is an iterative and incremental approach to software development, where software is developed in short cycles called sprints. The stages include planning, requirements gathering, design, implementation, testing, and deployment, and the process is repeated in each sprint. This model works well for projects with evolving requirements and a need for flexibility, but can be challenging for teams that are new to agile methodologies.

  3. Spiral model: The spiral model is a risk-driven approach to software development, where risks are identified and addressed in each stage of the SDLC. The stages include planning, risk analysis, design, implementation, testing, and deployment, and the process is repeated in a spiral fashion as risks are identified and addressed. This model works well for projects with high levels of complexity and uncertainty, but can be time-consuming and expensive.

  4. V model: The V model is a variant of the waterfall model that emphasizes testing and verification throughout the SDLC. The stages include requirements gathering, design, implementation, testing, and maintenance, and the testing process runs in parallel with the development process. This model works well for projects with a strong emphasis on testing and verification, but can be inflexible and slow to respond to changing requirements.

  5. DevOps model: The DevOps model is a collaborative approach to software development that emphasizes the integration of development and operations teams. The stages include planning, development, testing, deployment, and monitoring, and the process is highly automated to enable rapid and frequent releases. This model works well for projects with a need for rapid release cycles and a high degree of automation, but can be challenging for teams that are new to DevOps methodologies.

Each of these models has its own strengths and weaknesses, and the choice of model will depend on the specific requirements of the project, the experience and preferences of the team, and the organizational culture. Successful software development requires careful planning, effective communication, and a willingness to adapt to changing requirements and circumstances.

Test Levels, Test Types, and Test Approaches in CTFL

The ISTQB-CTFL (Certified Tester Foundation Level) curriculum details 5 levels of testing, 4 test types, and 3 testing approaches.

The five levels of testing are as follows: Component Testing, Integration Testing, System Testing, User Acceptance Testing, and Maintenance Testing. In my view, they are missing a needed sixth level, Smoke Testing; perhaps one day they will add it. Here is a description of all six.

Smoke Testing: A quick run-through of all of the screens and reports within a system to ensure the developers gave you everything. This ensures you don't get two days into testing a large app only to discover a ".dll not found" error because the developers left something out of the install package. Smoke testing should take no longer than 20 minutes or so. Two added benefits once an organization has a smoke test in place: the implementation team can use it to verify a new install, and if a production system goes down and is brought back online, a smoke test can confirm all of it came back up. But I digress....
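A smoke test of this kind can be sketched as a simple checklist run. The screen and report names below are hypothetical, and the check functions are stand-ins for whatever actually drives the UI:

```python
# Smoke-test sketch: quickly touch every screen and report to confirm the
# build is complete before deeper testing begins. The inventory and check
# functions are illustrative assumptions, not a real test harness.

SCREENS = ["login", "dashboard", "customer_detail"]
REPORTS = ["monthly_sales", "aging_summary"]

def open_screen(name):
    # In a real suite this would drive the UI; here we simulate success.
    return True

def run_report(name):
    # Simulate one report missing from the install package.
    return name != "aging_summary"

failures = [s for s in SCREENS if not open_screen(s)]
failures += [r for r in REPORTS if not run_report(r)]

if failures:
    print(f"SMOKE FAILED, missing or broken: {failures}")
else:
    print("Smoke passed: all screens and reports responded")
```

The point is breadth over depth: every item gets touched once, and a single failure stops the deeper test effort before any time is wasted.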

Component Testing: Checking each component of a system. There are 5 types of components in software: 1. Screens, 2. Reports, 3. Interfaces, 4. Logic Modules, and 5. Data Structures. Typically, component testing in software means checking each screen and each report; the interfaces are tested at the next level. While screens and reports are tested using requirements-based "black-box" testing, logic modules are best tested using "white-box" tools like logic trees and logic tables.
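A logic table for a logic module can be exercised very directly in code. The discount module below is a hypothetical example; each row of the table pairs one combination of conditions with its expected output:

```python
# White-box logic-table sketch for a hypothetical discount module.
# The module and the expected values are illustrative assumptions.

def discount(is_member, order_total):
    """Logic module under test (illustrative)."""
    if is_member and order_total >= 100:
        return 0.15
    if is_member:
        return 0.05
    if order_total >= 100:
        return 0.10
    return 0.0

# Logic table: every combination of the two conditions.
logic_table = [
    # (is_member, order_total, expected_discount)
    (True,  150, 0.15),
    (True,   50, 0.05),
    (False, 150, 0.10),
    (False,  50, 0.0),
]

for is_member, total, expected in logic_table:
    actual = discount(is_member, total)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: member={is_member}, total={total} -> {actual}")
```

Because you can see the code, you know there are exactly two conditions to combine, so four rows cover every branch; black-box testing the same module from requirements alone could easily miss one.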

Integration Testing: There are two types of integration testing. Internal integration testing is testing module to module (screen to report, for example). External integration testing is testing system to system (your app to a PayPal interface, for example).
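External integration testing often begins against a stub of the remote system, so your side of the interface can be checked before the real connection exists. The payment gateway below is a hypothetical stand-in, not any real payment API:

```python
# External-integration sketch: our app calling a stubbed payment service.
# The gateway interface and responses are illustrative assumptions.

class StubPaymentGateway:
    """Stands in for the remote system during integration testing."""
    def charge(self, amount_cents):
        if amount_cents <= 0:
            return {"status": "rejected"}
        return {"status": "approved", "charged": amount_cents}

def checkout(gateway, amount_cents):
    """Our side of the integration: call the gateway, map its response."""
    response = gateway.charge(amount_cents)
    return response["status"] == "approved"

gw = StubPaymentGateway()
print(checkout(gw, 2500))  # a valid charge
print(checkout(gw, 0))     # a boundary case the interface should reject
```

Once the stubbed tests pass, the same `checkout` calls can be pointed at the real external system, and any remaining failures are genuinely integration problems rather than bugs on your side.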

System Testing: The best way to system test an app is to create a set of "typical day in the life of a user" flows representing different ways a user would use the app. Provided all of the previous test levels went reasonably well, system testing is a way to ensure everything works together.

User Acceptance Testing: Once the system has been tested enough that the release success criteria have been met, the system should be shown to a real end user for final approval. This should not be a surprise to the end user; in a healthy environment, the end user will have been working closely with the development team as the software was created. Since the product should match requirements, any changes at this point would require updated requirements, and likely a new work order.

Maintenance Testing: This type of testing is used after an OS update, a database change, or some other type of refactoring has occurred.

The Four Test Types found on the CTFL Exam are:

Functional Testing (Black-box Testing): Requirements-based testing, used when you cannot see the code and all you have is a set of requirements or user stories. This is traditionally what people think of when they envision testing software.

Non-functional Testing: Non-functional requirements are "implied" requirements which apply to the entire software app: for example, security requirements, HIPAA requirements, or PCI requirements.

Structural Testing (White-box Testing): This is used for component testing of logic modules, where the developer shares the code with you, and together you create flowcharts, logic trees, and logic tables.

Change Testing: This testing happens after a bug is found and the software is sent back to the developer; the developer fixes the bug and returns the updated app to be tested again.

The Three Testing Approaches are:

Black-box Testing: For testing components with requirements, such as screens and reports. 

White-box Testing: for testing components where you can see the code, like logic modules.

Ad-hoc Testing: for testing an app when no requirements are present. This can be due to time constraints, or perhaps you were just handed an app and must learn about it to create an approach and a set of test plans to test it adequately.

What do you learn from a CTFL Certification?

I've been instructing students on the ISTQB-CTFL Certification curriculum for a number of years now.  I like to tell students the CTFL program is really a mini project-management training program.

Somebody hands you a software app to test. What do you do now? You decompose the app into smaller portions that make sense to you, constructing a Work Breakdown Structure (WBS). Then you identify items to test within the WBS, organizing them into Test Plans. Then you prioritize all test plans into a Test Execution Schedule so you and your test team know in what order to perform the tests.
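That decomposition can be sketched as plain data structures. The app areas, test items, and priorities below are hypothetical, chosen only to show the WBS-to-schedule flow:

```python
# Sketch of the CTFL workflow described above, with hypothetical names:
# decompose the app (WBS), attach test plans, sort into a schedule.

# Work Breakdown Structure: app areas broken into testable items.
wbs = {
    "Orders": ["order entry screen", "order report"],
    "Billing": ["invoice screen", "payment interface"],
}

# One test plan per WBS item, each area given an assumed priority (1 = first).
test_plans = [
    {"plan": f"Test {item}", "area": area, "priority": p}
    for p, (area, items) in enumerate(sorted(wbs.items()), start=1)
    for item in items
]

# Test Execution Schedule: order the plans by priority.
schedule = sorted(test_plans, key=lambda tp: tp["priority"])

for tp in schedule:
    print(f"{tp['priority']}. [{tp['area']}] {tp['plan']}")
```

In practice the priorities would come from risk analysis and stakeholder input rather than alphabetical order, but the mechanics are the same: the WBS drives the plans, and the plans drive the schedule.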

You'll keep track of test coverage and results, reporting on progress as you work through the app. Here's where it's important to communicate with your developers thoroughly so they can fix the bugs you find and get the updated app install back to you for confirmation testing.

Finally, when your testing meets the success criteria, you'll report the final results of your findings and wrap up the testing project, adding lessons learned, estimates vs. actuals, and other information to a test historical repository.

All of these activities are the foundation for project management.