How testRigor's Generative AI Functions and Executable Specifications Opened a New Door in My Software Development Life Cycle
I came across testRigor recently and was amazed at how fast, consistent, and creative the testing process can become with the help of generative AI in testRigor. If I had to summarize my experience, I would divide it into three parts: before knowing about generative AI and testRigor, after discovering them, and my future with these technologies.
1. Before Knowing testRigor and Generative AI
The first phase covers the time I spent on testing before knowing about generative AI and testRigor. I worked in different teams and various roles, and in every group I had to do testing. The common steps were always the same: reading test scripts written in Excel, going into the system, testing functions, taking screenshots and pasting them into Excel files, then sending them to the project managers to sign off. I recognized many challenges in this manual approach: human errors, different testing steps performed by different members, difficulty keeping audit records of the test cases, and so on. It also required a great deal of manual work and was time consuming. I had always wanted a tool that could address those issues, but I never found the right one.
2. How Generative AI and Executable Specifications Change the Testing Experience and the SDLC
2.1 My learning experience
My testing experience was repetitive, boring, and manual until I learned about generative AI and testRigor. When I first read the training documents, I was surprised that test scripts could be written in simple executable language for machines to follow. I had fun reading the materials and applying the knowledge to writing scripts. I learned a lot about writing scripts in testRigor from the sample tests and actual tests generated by its AI functions. I gradually grew to like testRigor and generative AI because they let me use my creativity in writing test scripts and gave me the ability to direct the AI to expand my testing scope across various systems.
2.2 The Advantages of Executable Specifications in testRigor
After discovering testRigor and generative AI, my testing experience changed 180 degrees. I was delighted to learn that users with no programming skills can still write complex test scripts in plain English, or use generative AI to create the cases. The machine understands what needs to be done and creates test cases that execute exactly the steps a human would perform in real life when testing. TestRigor and generative AI bring new competitive advantages to the table by cutting down 99.5% of the time spent on test maintenance, speeding up the testing process by 15x, and enhancing test and execution coverage. Besides, testRigor and generative AI allow stakeholders, management, product owners, business analysts, developers, quality assurance members, and others to write their own test cases. Users only need operational knowledge of the work they do; they can then write simple English commands or ask the AI to generate testing steps for them. This speeds up the testing process and allows test scripts to be executed repeatedly, which is beneficial for enhancement testing, regression testing, and more.
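To give a feel for what a plain-English executable specification looks like, here is a minimal sketch of the kind of script I am describing. The command style follows testRigor's documented language, but the URL and the on-screen labels ("Laptops", "Add to cart", "1 item") are hypothetical placeholders for whatever your application actually shows:

```
open url "https://example.com"
click "Laptops"
click "Add to cart"
click "Cart"
check that page contains "1 item"
```

Anyone who knows how the application is operated can read and write steps like these, which is exactly what makes the approach accessible to non-programmers.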
2.3 How generative AI and testRigor change the quality assurance experience
Further, testRigor's generative AI-powered test generation lets script writers load various data sets into a pre-designed test script and turn that data into multiple test cases originating from a single script. Through generative AI, testRigor provides both versatility and efficiency, and it is especially useful for applications that must be tested against many data sets. Moreover, it enlarges the testing scope, introduces the ability to create context-specific tests, and markedly lowers the need for human intervention. testRigor's generative AI function leaves room for creating, adjusting, and tailoring tests to meet the specific demands of different systems. Users like me can create test cases from simple descriptions and improve the quality assurance process.
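The data-driven idea can be sketched like this: instead of hard-coding literals, a single script references stored values, so each row of a test data set produces a separate test run. The snippet below is an illustrative sketch based on testRigor's documented "stored value" syntax; the data names ("username", "password") and the field labels are hypothetical and would come from your own test data:

```
open url "https://example.com/login"
enter stored value "username" into "Email"
enter stored value "password" into "Password"
click "Sign in"
check that page contains "Welcome"
```

Running this one script against ten rows of data effectively yields ten test cases, which is what makes the approach so efficient for applications that must be validated with many data sets.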
2.4 How generative AI and testRigor change the Software Development Life Cycle
After building test cases with generative AI in testRigor, I can see that the software development life cycle is shortened, bugs are detected faster, and common tests run seamlessly without human errors between cycles, which helps project managers maintain an overview and compare how the developed applications behave. In short, the benefits I experienced for the software development life cycle when using testRigor are:
- A faster testing process: testRigor with generative AI streamlines test generation, which eliminates the need for repetitive manual testing, a competitive advantage in areas such as regression testing. It not only saves valuable time and resources but also frees quality assurance members to work on more complex tasks that need human creativity and intuition. The software testing phase is cut down tremendously, resulting in a shorter software development life cycle, and developers or any project stakeholders can quickly run the generated test cases to check the system's performance and detect bugs in a timely manner.
- Consistent and stable quality: I created multiple tests with the assistance of generative AI and saw a consistency in testing quality that can never be achieved through manual work. When humans test, they can forget steps and make mistakes, whereas test cases run by testRigor produce consistent results and let me compare how enhancements to the applications impact the results. Using AI helps companies maintain high-quality test scripts and reduce the human errors that come with repetitive tasks.
- Improved test scope: Humans cannot generate many test scenarios on their own. With the help of generative AI in testRigor, we can create varied test scenarios from descriptions or data sets. This provides better coverage of the application and detects unexpected bugs that humans would miss, which improves the reliability and consistency of the applications.
- Enhanced test cases: Generative AI can learn from data and test scripts and improve test cases so that they become more accurate and complex over time.
- An upgraded software development life cycle: Thanks to generative AI, overall testing time is cut back and test cases can be created quickly for continuous development and integration. As a result, the speed, delivery timeline, and effectiveness of software development are boosted.
- Lower maintenance and human resource expenses: Maintenance work on test cases is reduced tremendously thanks to testRigor. Companies can also hire employees who can direct the AI to write test cases without having to pay for manual testing labor.
- Better audit and performance comparison: testRigor stores test cases and timestamps of each run, which lets quality assurance members go back, check how testing was done, and detect system issues if needed. For example, if the requirement was to test the front end, the team will know that certain test cases were executed. If project managers see errors when updated functions are released, they can go back and check whether the features were covered in testing; if not, the quality assurance team will need to update the data and test cases to cover those scenarios in the future.
2.5 How I planned my generative AI test cases in testRigor
When writing test cases, I have learned that it is beneficial to plan all the steps and think through how I should use generative AI to generate scripts. Below are the steps I follow to write effective test cases with testRigor's generative AI:
- Think about the goal: Before writing a test case, I note down what I want to achieve by using generative AI in my testing process. For example, the goal can be to check certain functions of the application more thoroughly, to widen test coverage, to discover vulnerabilities and errors I could not imagine myself, and to cut down testing time and manual work.
- List the challenges to avoid and be prepared: Generative AI is an effective tool, but challenges can arise while scripts are generated and run, such as human verification steps that machines cannot complete without intervention, or the use of private data in test cases, which is not recommended. Further, if we use data sets for generative AI to create test cases, we should be careful that the data does not unfairly lead the AI to target only certain functions while other features that need testing are ignored. We should therefore monitor and pick the testing data carefully.
- Prepare resources to run generative AI models: Computational resources are essential for generative AI to work, so we need good infrastructure to support these requirements. Companies and individuals should upgrade hardware or utilize cloud-based solutions.
- Master the skills to use and control generative AI in testRigor: To monitor, adjust, troubleshoot, and fix tests created by generative AI, as well as issues that arise while using it, we need skills and knowledge of the system and an understanding of the commands that control it. testRigor has a great training website, so I highly recommend that new quality assurance members check it out and develop their skills from there.
- Do hands-on work and monitor: After the steps above, we can start using generative AI to implement the testing strategies we have outlined. We can begin by writing commands to introduce the areas we want the AI to focus on. After monitoring the results carefully, we can expand the testing scope gradually to determine whether the performance and test cases generated by the AI are meaningful. From there, we can make changes as needed to guide the AI toward better test cases.
2.6 Some notes on using generative AI with testRigor
While writing test cases in testRigor, I noted down some points I hope other testers keep in mind when writing scripts and using generative AI:
- Practice, and practice again: To write complex, meaningful test cases and learn to control generative AI for quality assurance, you should read testRigor's training materials to become familiar with the commands and with how to change test steps (https://testrigor.com/docs/language/#commands). That way, you can adjust AI-generated test cases as needed. Writing your own scripts is also a good, quick way to learn. Failures and errors are not bad things, since they give you a chance to fix the tests and watch them improve until they pass. Through plenty of trial and practice, you will become familiar with how testRigor works and be able to direct the AI to create scripts with meaningful results.
- Try to avoid bias in test cases: Bias in test cases can tremendously affect testing results, the decision-making process, and the development cycle. The AI models used for quality testing are often trained on large datasets and can learn and mimic the biases present in them. As a result, some bugs or issues can be missed if the training dataset focused on certain types of functions or errors. It is therefore important to use varied training datasets, monitor closely, and adjust the AI models to make sure they are not biased.
- Always protect data privacy: We, as testers, play an important role in ensuring that users' sensitive data is protected with utmost care while testing with generative AI. Before using datasets and inputting them into the testing models, we should review and mask any sensitive data they contain. This prevents generative AI from learning users' sensitive information.
3. Future with generative AI and testRigor
After spending time learning testRigor and writing test cases for the website I developed, I can see myself using testRigor to rigorously test my apps and projects. I definitely don't want the knowledge I spent time learning to rust, and I will keep improving my testing skills and quality with testRigor. I understand that questions will arise about the reduced need for human intervention, along with worries about job roles and job prospects in the quality assurance industry.
My view is that, thanks to generative AI, the role of quality assurance analysts will be elevated to the next level, where humans no longer have to do mundane, repetitive tasks. Instead, we will do work that requires creativity: supervising and managing AI-driven test cases. Humans will need to make sure that the test cases generated by AI work as intended and that there are no subjective biases in the scripts. We will need to understand and translate the testing results produced by AI into meaningful context and make useful development decisions based on them.
Further, thanks to AI, human skills will be upgraded, which means we need to learn how AI works, learn to control and use it, and improve our ability to apply AI effectively in testing environments. People who use testRigor, like me, will take responsibility for training and adjusting AI models to ensure they match the purpose of the tests. We will also need to resolve any bugs or issues we encounter during testing.
In short, thanks to testRigor, testers like me will move into more technical, strategic, and analytical roles that require creativity and bring more job satisfaction. I am very excited about the future I will have with testRigor and definitely want to improve my testRigor skills by challenging myself with advanced, complicated cases using generative AI. I think it will be fun, since I get to use my creativity to direct the AI to generate meaningful test cases.
4. Conclusion
As generative AI is used more and more in the quality assurance industry, this revolution is transforming the testing landscape and opening a new era of application testing. The evolution shows the technology industry's continuous effort to improve the accuracy, consistency, stability, coverage, and efficiency of quality assurance.
As someone riding the wave of getting to know generative AI and testRigor, I can see a future in which generative AI creates more effective, context-specific tests while reducing human intervention and tremendously enhancing test coverage. We can expect complexities when using generative AI, but the benefits, such as cutting manual work, enabling continuous integration and deployment, and accelerating the software development life cycle, far outweigh the challenges.
Bringing generative AI into the quality testing process requires not only the use of testRigor but also changes in how we approach testing strategically. We need to master the skills to manage generative AI, set clear strategic goals, understand the demands, and prepare the infrastructure for expanding the adoption of generative AI and testRigor across large teams or whole organizations.
I foresee a creative and joyful journey with generative AI and testRigor, in which I can keep improving my skills and transition into more strategic roles as an AI controller and manager.
Built With
- testrigor