On a recent project, we had a particularly timely and difficult challenge. It was April and the world was shutting down due to the coronavirus, yet we had User Acceptance Testing (UAT) scheduled for June. It was more than likely that travel restrictions would not be lifted in time for our scheduled UAT, and we needed to find a way to test successfully on a global scale. It was time to look at testing from a new perspective – a virtual one.
There were several challenges that we needed to tackle as we planned for our Virtual UAT:
There are currently many virtual collaboration tools on the market – many getting substantial use during lockdowns. We chose to use Microsoft Teams, primarily because it was a meeting tool familiar to many of our testers and the support team. While Microsoft Teams had its challenges, it proved to be a more than suitable format for testing:
Testers’ response to the use of Teams was overwhelmingly positive. As expected, most would have preferred an onsite group setting, but this online format was a suitable substitute – allowing access from individual remote environments. However, it was not without challenges, particularly in regions where internet access was not as robust.
A key challenge we had to resolve was creating an environment that closely mimicked the onsite, large group testing experience. In the past, the testing support team would have traveled to a regional site and worked with testers in a large group or conference room setting. The benefits of this model were numerous:
Having determined that we would use Microsoft Teams, we elected to use Channels to mimic ‘tables’ as part of the tester experience. By region and business group, testers were assigned to channels within Teams (complete with fun names) with a Table Lead. Following each day’s kick-off session, testers logged onto their table through a direct meeting link where they were able to collaborate with their testing group, ask questions, share screens, and ‘chat’. To encourage engagement, Table Leads would include fun activities such as pinpointing each tester’s location on a map and asking questions to be answered by GIF.
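We set up our table channels by hand in the Teams client, but the same structure could be scripted for a larger rollout. The sketch below is illustrative only: it assumes a Microsoft Graph access token with permission to create channels, and the team ID, table names, and Table Lead user IDs are hypothetical placeholders rather than anything from our project.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

# Hypothetical placeholders -- substitute a real team ID, access token,
# and Azure AD user IDs for the Table Leads.
TEAM_ID = "<team-id>"
ACCESS_TOKEN = "<graph-access-token>"

# One entry per 'table': a fun channel name plus the Table Lead who owns it.
TABLES = [
    {"name": "Table 1 - The Round Table", "lead_id": "<user-id-1>"},
    {"name": "Table 2 - Ctrl-Alt-Defeat", "lead_id": "<user-id-2>"},
]


def create_table_channel(table):
    """Create a private channel for one table, with the Table Lead as owner."""
    body = {
        "membershipType": "private",
        "displayName": table["name"],
        "description": "Virtual UAT table - daily testing, questions, and chat",
        "members": [
            {
                "@odata.type": "#microsoft.graph.aadUserConversationMember",
                "user@odata.bind": f"{GRAPH}/users('{table['lead_id']}')",
                "roles": ["owner"],
            }
        ],
    }
    resp = requests.post(
        f"{GRAPH}/teams/{TEAM_ID}/channels",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json=body,
    )
    resp.raise_for_status()
    print(f"Created channel '{table['name']}' (HTTP {resp.status_code})")


if __name__ == "__main__":
    for table in TABLES:
        create_table_channel(table)
```

Whether channels are created by hand or by script, the important design choice is the same: one private space per table, owned by its Table Lead, so that questions and chatter stay within the testing group.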
To further encourage tester engagement and morale, each testing day began with a short kick-off session in a ‘General’ channel. Testers were reminded of the process, provided information on how well they were doing with their test scenarios, and received any large group communications. The Testing Team also took that opportunity to gather feedback from testers so that real-time adjustments could be made to testing formats, if needed.
Due to the global nature of this implementation, the UAT Team was spread across multiple regions and time zones. Compounding this were Covid-19 restrictions, under which many testers were working remotely. We needed solutions that would allow testers to work together while also working with a support team that was primarily US-based.
We employed several methods to assist with testing and support across these regions. We divided the testing teams into three regions – Americas, APAC, and EMEA – and scheduled testing hours accordingly. While this did mean some early or late hours for our US-based team, we avoided any overnight (midnight to 05:00 US) hours to ease strain on the support team.
In turn, to minimize strain on testers, we limited testing to 6-hour blocks – and were mindful of tester locations in an effort not to schedule exceedingly early or late local times. Understanding that the schedule was still not ideal for many testers – especially in larger regions – we engaged some Table Leads local to EMEA and APAC to provide support to testers who chose to continue testing outside of the scheduled hours.
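When weighing a proposed schedule, it helped to see what each region’s 6-hour block looked like in testers’ local time and whether it would push the US-based support team into the midnight-to-05:00 window. The short script below is a minimal sketch of that kind of check; the block start times and sample tester time zones are assumptions for illustration, not our actual schedule.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

US_EASTERN = ZoneInfo("America/New_York")
BLOCK_HOURS = 6  # testing was limited to 6-hour blocks

# Assumed block start times (US Eastern) and sample tester time zones per
# region -- illustrative only, not the actual UAT schedule.
REGION_BLOCKS = {
    "Americas": (datetime(2020, 6, 8, 9, 0, tzinfo=US_EASTERN),
                 ["America/New_York", "America/Chicago", "America/Sao_Paulo"]),
    "EMEA":     (datetime(2020, 6, 8, 5, 0, tzinfo=US_EASTERN),
                 ["Europe/London", "Europe/Berlin", "Africa/Johannesburg"]),
    "APAC":     (datetime(2020, 6, 8, 18, 0, tzinfo=US_EASTERN),
                 ["Asia/Tokyo", "Asia/Singapore", "Australia/Sydney"]),
}


def overlaps_us_overnight(start_us, end_us):
    """True if any part of the block falls in the midnight-05:00 US window."""
    for day_offset in (0, 1):
        window_start = (start_us + timedelta(days=day_offset)).replace(
            hour=0, minute=0, second=0, microsecond=0)
        window_end = window_start + timedelta(hours=5)
        if start_us < window_end and end_us > window_start:
            return True
    return False


for region, (start_us, tester_zones) in REGION_BLOCKS.items():
    end_us = start_us + timedelta(hours=BLOCK_HOURS)
    flag = "  ** overlaps US overnight **" if overlaps_us_overnight(start_us, end_us) else ""
    print(f"\n{region}: {start_us:%H:%M}-{end_us:%H:%M} US Eastern{flag}")
    for tz_name in tester_zones:
        local_start = start_us.astimezone(ZoneInfo(tz_name))
        local_end = end_us.astimezone(ZoneInfo(tz_name))
        print(f"  {tz_name:22s} {local_start:%a %H:%M} - {local_end:%a %H:%M}")
```

A quick table like this makes the trade-off visible: shifting a block to spare testers an early start can be exactly what pushes the support team past midnight.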
Unlike in an onsite, in-person testing environment, the support team could not walk around answering questions. We needed an online format that would allow for quick triage and response without overloading the support team.
The decision was made to employ a three-tier support structure, with our Table Leads acting as Tier 1 support. Testers praised the immediacy of feedback provided by the Table Leads and, as testing progressed, the Leads were able to get ahead of common issues. To bolster the Table Leads further, a separate, private channel was set up for them to share information, have candid discussions regarding issues or questions, and seek feedback from admins.
Tier 2 Support comprised members of the HRIS team. If an issue could not be resolved by the Table Leads, it would be escalated to Tier 2 Support through the dedicated Tier 2 Support Channel.
The Tier 3 Support Team was given its own dedicated, moderated channel. To limit noise on that channel, only the Tier 3 and Tier 2 support teams were listed as moderators and, therefore, only these support team members were able to open new posts. This was greatly appreciated by the Tier 3 Support Team, as they could focus only on the issues that required their escalated attention.
Lastly, daily wrap-up sessions were held for the Testing Admin and Support Team to discuss the day’s testing, issues, and trends. These calls allowed the Admin and Support Team to set guidelines for the following day’s testing and make adjustments, as needed, to maximize productivity and support.
This format was so successful for our UAT testing group that it was quickly employed for other testing teams within our organization as we moved toward Go-Live. In discussions with other groups, even outside of our organization, we began to realize that this format could be applicable to many different scenarios, such as training and education, as we continue working in a primarily virtual environment. In the end, we were excited to complete UAT successfully in the midst of a global pandemic, and we hope that our model will lead others to conduct effective virtual testing.