06 March 2013
In my last post on PLSQL testing, I explored how I created a simple test framework using Ruby, but I skipped over the more complicated test scenarios that require some pre-test setup. This post explores some ideas on how to tackle this sort of test.
Without some care, it is easy to create code that is difficult to test. TDD advocates are quick to highlight that code must be designed to be testable. With TDD you write tests before code, which actively encourages testable design right from the start. In the Object Oriented world, that means short methods, loosely coupled objects and sensible APIs.
To create testable PLSQL code, we need to keep similar things in mind. Avoid procedures and functions with thousands of lines of code, and think about the public procedures and functions in a package - ideally you want small procedures and functions that can be tested in isolation.
Good code design is well beyond the scope of this post, but I thought it was worth mentioning. If you start to find it too difficult to test some of your PLSQL code, take a step back and see if it could be refactored into smaller units.
When testing PLSQL code, you cannot get far without needing some data in the database to manipulate. This immediately causes a problem - something needs to populate the database with the correct data before a test is executed. There may also be data created by other users, or left behind by earlier test runs, which can interfere with results.
When I am running unit tests, I prefer to do it on a sandbox database where I can destroy all the relevant data before each test - that way, you know the environment is in a clean state before the test starts.
When faced with the task of testing a procedure, the first thing I do is figure out which tables it queries and modifies, and then delete all the transactional data from them as part of the test setup. This gets the tables into a known state.
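As a sketch of that setup step, a small helper can turn the list of tables a procedure touches into the delete statements to run - clean_tables_sql and the table names here are invented for illustration and are not part of the framework:

```ruby
# Hypothetical helper: given the tables a procedure under test touches,
# build the delete statements the test setup would execute, one per table.
def clean_tables_sql(tables)
  tables.map { |t| "delete from #{t}" }
end

# In a real setup method each statement would be passed to
# @@db_interface.execute_sql; here we just build the strings.
sqls = clean_tables_sql(['user_account', 'user_audit'])
```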
Then I load whatever data is necessary for the test. For example, a procedure designed to update user_account details will require a row in the users table to update.
Putting that row in the users table could easily be achieved with an insert statement, but insert statements are fairly verbose. It can also be hard to keep track of which columns are nullable, and if the table has a lot of columns, the inserts are painful to code up.
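To illustrate the problem, even a hypothetical helper that generates a bind-variable insert from a hash of columns still forces the test to name a value for every NOT NULL column - build_insert below is invented for illustration:

```ruby
# Hypothetical helper that builds a bind-variable insert statement from a
# table name and a hash of column => value pairs. Even with this, the test
# must still supply a value for every NOT NULL column in the table.
def build_insert(table, values)
  cols  = values.keys.join(', ')
  binds = values.keys.map { |c| ":#{c}" }.join(', ')
  "insert into #{table} (#{cols}) values (#{binds})"
end

sql = build_insert('user_account', :username => 'sodonnell', :status => 'active')
# "insert into user_account (username, status) values (:username, :status)"
```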
I created the data_factory gem and added it to PLSQL_Unit_Test to solve this problem. Data Factory reads the table definition from the database and generates random data for all non null columns. When you create a data_factory object, you pass a value for columns you care about for the current test, and let it set all the others to a random value.
An example is probably the best way to explain this. Consider a test for a function that returns 1 if a user_account is locked, and zero otherwise. Of all the columns in the user_account table, for this test we only care about the username and the status - all others can be set to any valid value.
require './test_helper'

module DF
  class UserAccount < DataFactory::Base
    set_table_name 'user_account'
  end
end

class UserAccountLockTest < Test::Unit::TestCase

  def setup
    # Remove all the data from the user_account table
    @@db_interface.execute_sql('delete from user_account')
    @proc_call = @@db_interface.prepare_call("
      begin
        :output := is_user_locked(:username);
      end;")
  end

  def teardown
    @proc_call.close
  end

  def test_locked_account_returns_1
    # Use Data Factory to set up a row in the user_account table,
    # passing values for only the username and status columns
    DF::UserAccount.create(:username => 'sodonnell', :status => 'locked')
    @proc_call.execute([Integer, nil, :out], 'sodonnell')
    assert_equal(1, @proc_call[1])
  end

  def test_unlocked_account_returns_0
    DF::UserAccount.create(:username => 'sodonnell', :status => 'active')
    @proc_call.execute([Integer, nil, :out], 'sodonnell')
    assert_equal(0, @proc_call[1])
  end

end
In this test, notice the block of code after the require line. Ideally this should live in the test_helper file, but I have included it all in one place for clarity. This code creates a class called DF::UserAccount that is linked to the user_account table in the database. We can use this class to create data for the actual tests with very little code, without resorting to insert statements and without needing to set values for columns that are not relevant to this test.
This example is simple, but you can easily code up calls to several Data Factory classes to set up as many rows, across as many tables, as your test requires.
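For instance, staging a user together with a related address row might follow the pattern below. UserAddress and its columns are invented for illustration, and a stand-in create is used so the sketch runs without a database - the real classes would inherit from DataFactory::Base, insert a row, and fill any unset NOT NULL columns with random values:

```ruby
# Stand-in for DataFactory::Base so the call pattern runs without a database.
# The real gem inserts a row and generates values for unset NOT NULL columns;
# this fake simply returns the column hash it was given.
class FakeFactory
  def self.create(cols)
    cols
  end
end

module DF
  class UserAccount < FakeFactory; end
  class UserAddress < FakeFactory; end   # hypothetical child table
end

# Stage a user and a related address row, sharing the foreign key value
user = DF::UserAccount.create(:user_id => 1, :username => 'sodonnell', :status => 'active')
addr = DF::UserAddress.create(:user_id => user[:user_id], :city => 'Edinburgh')
```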
Data Factory is a fairly simple gem, but it is worth checking the manual, as it has a few features to allow columns to be given custom default values instead of random data.
When testing an application, it is a good idea to create one Ruby test file for each procedure under test. If you have more than a few procedures, you need a way to run all the test files instead of running them one at a time. Doing this is fairly easy. Create a file in your test directory called test_runner.rb and put the following code in it:
current_dir = File.expand_path(File.dirname(__FILE__))

# Find all files that end in _test.rb and require them ...
files = Dir.entries(current_dir).grep(/^[^#].+_test\.rb$/).sort
files.each do |f|
  require File.join(current_dir, f)
end
This code assumes that all files containing tests have a filename ending "_test.rb". To run the entire test suite, just execute test_runner.rb:
$ jruby test_runner.rb
Run options:
# Running tests:
....................................................................
Finished tests in 4.154000s, 46.9427 tests/s, 109.0515 assertions/s.
195 tests, 453 assertions, 0 failures, 0 errors, 0 skips
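As an aside, the filename filter used in test_runner.rb can be checked in isolation - the regex keeps files ending in _test.rb and skips editor backup files that start with a '#' (the sample filenames here are invented):

```ruby
# Demonstrate the test_runner.rb filter against some sample filenames
entries = ['user_account_test.rb', '#user_account_test.rb#', 'test_helper.rb', 'notes.txt']
matches = entries.grep(/^[^#].+_test\.rb$/).sort
# matches contains only 'user_account_test.rb'
```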
Over the last three posts, I have explored my strategy for unit testing PLSQL code with Ruby. So far I have created a few hundred tests using this approach, mostly against quite simple stored procedures, and it is working well for me. The tests run quickly, can easily be integrated into a CI environment, and are quite readable and maintainable.
More complex procedures result in messier test code, especially if many rows of related data need to be staged in many different tables, but in many ways that is unavoidable.
So far I have resisted adding too many features to the test framework - combined with SimpleOracleJDBC and Data Factory, it is easy to query and set up the database, and for more complicated tests the full power of the Ruby programming language is there to do pretty much anything.
If you decide to try this approach out, start with simple tests and build up to more complex ones. If the PLSQL code is nicely written with small procedures and functions, then the tests should be simple too - if that is not the case, maybe it is time for some refactoring!