Next we will take a look at the Selenium tools. For recording the raw tests, I use the Selenium IDE. The Selenium IDE is a Firefox plugin used to record clicks on a webpage; it adds wait conditions and verification steps.
We export a captured test case as a Python script, which can later be used as a basis for the implementation of the DSL commands. For the execution of the test cases we use another tool, the Selenium Server. The Selenium Server provides the test profile for the web browser, starts and stops the browser, and handles all communication between our test suite and the browser. I cover the usage of Selenium in more detail in the article “Functional testing of web applications with Selenium” on my website.
Selenium Testing in DSL Infrastructure
It is relatively easy to identify which parts of the script implement certain aspects (search, zoom, login, …) of the test case. It takes a little while to get used to reading the captured test cases, but after that you will slice the scripts into reusable commands in no time. Sometimes test cases that worked before, or that worked for another browser, will fail; this is usually caused by the asynchronous behavior of AJAX implementations.
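Such AJAX-related failures are typically fixed by replacing fixed sleeps with explicit wait conditions. As a minimal sketch of the idea, here is a generic polling helper (the name `wait_until` and the parameters are my own, not from the original test suite); in a real test the condition would query the browser through Selenium:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Returns the truthy value, or raises TimeoutError. This replaces fragile
    fixed sleeps in captured test cases with an explicit wait.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(interval)

# In a real test, the condition would be something like
# lambda: browser.find_elements_by_css_selector(".result")
# so the test only proceeds once the AJAX result has appeared.
```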
After some tweaking, the implementation of your external DSL commands would look something like this:
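The original code listing is not reproduced here, so the following is only a sketch of what such DSL commands might look like. The command names, the exact `@dsl` decorator implementation, and the stub browser are my assumptions; the real commands would drive Selenium instead of recording actions:

```python
import re

# Registry built up by the @dsl decorator: (compiled pattern, function).
DSL_COMMANDS = []

def dsl(pattern):
    """Minimal sketch of the @dsl decorator: it links a regular
    expression to the decorated function."""
    def decorate(func):
        DSL_COMMANDS.append((re.compile(pattern), func))
        return func
    return decorate

class StubBrowser:
    """Stand-in for the Selenium-driven browser; it records actions so
    that this sketch runs without a real browser."""
    def __init__(self):
        self.actions = []

@dsl(r"login as (\w+) with password (\w+)")
def login(browser, user, password):
    # The real command would drive Selenium here, e.g. fill in the
    # login form fields and submit; the stub just records the step.
    browser.actions.append(("login", user, password))

@dsl(r"search for '([^']+)'")
def search(browser, term):
    browser.actions.append(("search", term))
```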
Note that the functions in the above source code block, which form the DSL commands, are modified by the @dsl decorator. The @dsl decorator links a regular expression to the function signature. This regular expression is used during execution of a test case scenario to identify which DSL command will be executed. The grammar and other infrastructure of the functional testing DSL are implemented as an open-source Python nose unit test plugin.
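To make the dispatch step concrete, here is a standalone miniature of that matching logic. The registry entries and step texts are hypothetical; the point is only how a scenario line selects a command and how the captured groups become its arguments:

```python
import re

# A miniature registry as the @dsl decorator would build it: each entry
# links a regular expression to a command function (entries hypothetical).
registry = [
    (re.compile(r"zoom to level (\d+)"), lambda level: ("zoom", int(level))),
    (re.compile(r"log out"), lambda: ("logout",)),
]

def execute_step(step):
    """Find the first pattern matching the scenario line and call the
    linked DSL command with the captured groups as arguments."""
    for pattern, command in registry:
        match = pattern.match(step)
        if match:
            return command(*match.groups())
    raise ValueError("no DSL command matches step: %r" % step)

print(execute_step("zoom to level 12"))  # ('zoom', 12)
```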
I think this is really worth the effort. With a little additional coding it is possible to break up the capture-and-replay tests into reusable DSL commands, which makes it easy to extend the test suite with new test cases. The work spent on fixing timing issues and writing additional verification steps pays off as soon as you formulate new test cases based on these commands.