test_utils Package

config_change_validation Module

Module for testing config file changes.

author: Kristof Katus and Plamen Dimitrov
copyright: Intra2net AG 2012
license: GPL v2

autotest.client.shared.test_utils.config_change_validation.assert_config_change(actual_result, expected_result)[source]

Wrapper around assert_config_change_dict() that returns True if no unexpected config changes were detected.

autotest.client.shared.test_utils.config_change_validation.assert_config_change_dict(actual_result, expected_result)[source]

Calculates unexpected line changes.

The arguments actual_result and expected_result are of the same data structure type: Dict[file_path] -> (adds, removes), where adds = [added_line, ...] and removes = [removed_line, ...].

The return value has the following structure: Dict[file_path] -> (unexpected_adds, not_present_adds, unexpected_removes, not_present_removes).

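
The bookkeeping above can be sketched in plain Python (an illustrative re-implementation of the documented data structures, not the actual module code; the paths and lines are made up):

```python
# Both inputs map a config file path to a pair of line lists:
#     {file_path: (added_lines, removed_lines)}
actual = {"/etc/example.conf": (["timeout=30", "debug=1"], ["timeout=10"])}
expected = {"/etc/example.conf": (["timeout=30"], ["timeout=10"])}

change_diffs = {}
for path, (actual_adds, actual_removes) in actual.items():
    expected_adds, expected_removes = expected.get(path, ([], []))
    change_diffs[path] = (
        [l for l in actual_adds if l not in expected_adds],        # unexpected_adds
        [l for l in expected_adds if l not in actual_adds],        # not_present_adds
        [l for l in actual_removes if l not in expected_removes],  # unexpected_removes
        [l for l in expected_removes if l not in actual_removes],  # not_present_removes
    )
```

Here the extra "debug=1" line would be reported as an unexpected addition.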
autotest.client.shared.test_utils.config_change_validation.del_temp_file_copies(file_paths)[source]

Deletes all the provided files

autotest.client.shared.test_utils.config_change_validation.extract_config_changes(file_paths, compared_file_paths=[])[source]

Extracts diff information based on the new and temporarily saved old config files

Returns a dictionary of file path and corresponding diff information key-value pairs.

autotest.client.shared.test_utils.config_change_validation.get_temp_file_path(file_path)[source]

Generates a temporary filename

autotest.client.shared.test_utils.config_change_validation.make_temp_file_copies(file_paths)[source]

Creates temporary copies of the provided files

autotest.client.shared.test_utils.config_change_validation.parse_unified_diff_output(lines)[source]

Parses the unified diff output of two files

Returns a pair of adds and removes, where each is a list of trimmed lines
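
The trimming behavior can be approximated with the standard library's difflib (a sketch of the concept only; the module's own parser may differ in details):

```python
import difflib

# Made-up old and new config file contents.
old = ["timeout=10\n", "debug=0\n"]
new = ["timeout=30\n", "debug=0\n"]

adds, removes = [], []
for line in difflib.unified_diff(old, new):
    # Skip the "+++"/"---" file headers; collect changed lines, trimmed.
    if line.startswith("+") and not line.startswith("+++"):
        adds.append(line[1:].strip())
    elif line.startswith("-") and not line.startswith("---"):
        removes.append(line[1:].strip())
```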

autotest.client.shared.test_utils.config_change_validation.print_change_diffs(change_diffs)[source]

Pretty prints the output of the evaluate_config_changes function

functools_24 Module

autotest.client.shared.test_utils.functools_24.compose(*args)[source]
autotest.client.shared.test_utils.functools_24.fastcut(*sargs, **skw)[source]

mock Module

exception autotest.client.shared.test_utils.mock.CheckPlaybackError[source]

Bases: exceptions.Exception

Raised when mock playback does not match recorded calls.

class autotest.client.shared.test_utils.mock.SaveDataAfterCloseStringIO(buf='')[source]

Bases: StringIO.StringIO

Saves the contents in a final_data property when close() is called.

Useful as a mock output file object to test both that the file was closed and what was written.

Properties:
final_data: Set to the StringIO’s getvalue() data when close() is
called. None if close() has not been called.
close()[source]
final_data = None
exception autotest.client.shared.test_utils.mock.StubNotFoundError[source]

Bases: exceptions.Exception

Raised when god is asked to unstub an attribute that was not stubbed

class autotest.client.shared.test_utils.mock.anything_comparator[source]

Bases: autotest.client.shared.test_utils.mock.argument_comparator

is_satisfied_by(parameter)[source]
class autotest.client.shared.test_utils.mock.argument_comparator[source]

Bases: object

is_satisfied_by(parameter)[source]
class autotest.client.shared.test_utils.mock.base_mapping(symbol, return_obj, *args, **dargs)[source]

Bases: object

match(*args, **dargs)[source]
class autotest.client.shared.test_utils.mock.equality_comparator(value)[source]

Bases: autotest.client.shared.test_utils.mock.argument_comparator

is_satisfied_by(parameter)[source]
class autotest.client.shared.test_utils.mock.function_any_args_mapping(symbol, return_val, *args, **dargs)[source]

Bases: autotest.client.shared.test_utils.mock.function_mapping

A mock function mapping that doesn’t verify its arguments.

match(*args, **dargs)[source]
class autotest.client.shared.test_utils.mock.function_mapping(symbol, return_val, *args, **dargs)[source]

Bases: autotest.client.shared.test_utils.mock.base_mapping

and_raises(error)[source]
and_return(return_obj)[source]
class autotest.client.shared.test_utils.mock.is_instance_comparator(cls)[source]

Bases: autotest.client.shared.test_utils.mock.argument_comparator

is_satisfied_by(parameter)[source]
class autotest.client.shared.test_utils.mock.is_string_comparator[source]

Bases: autotest.client.shared.test_utils.mock.argument_comparator

is_satisfied_by(parameter)[source]
class autotest.client.shared.test_utils.mock.mask_function(symbol, original_function, default_return_val=None, record=None, playback=None)[source]

Bases: autotest.client.shared.test_utils.mock.mock_function

run_original_function(*args, **dargs)[source]
class autotest.client.shared.test_utils.mock.mock_class(cls, name, default_ret_val=None, record=None, playback=None)[source]

Bases: object

class autotest.client.shared.test_utils.mock.mock_function(symbol, default_return_val=None, record=None, playback=None)[source]

Bases: object

expect_any_call()[source]

Like expect_call but don’t give a hoot what arguments are passed.

expect_call(*args, **dargs)[source]
class autotest.client.shared.test_utils.mock.mock_god(debug=False, fail_fast=True, ut=None)[source]

Bases: object

NONEXISTENT_ATTRIBUTE = <object object>
check_playback()[source]

Report any errors that were encountered during calls to __method_playback().
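
The record/playback cycle that check_playback verifies can be illustrated with a deliberately minimal stand-in (the names and logic below are invented for illustration; the real mock_god is far more featureful):

```python
class MiniGod:
    """Toy record/playback checker, illustrating the concept only."""

    def __init__(self):
        self.expected = []  # calls recorded up front, in order
        self.actual = []    # calls observed during playback

    def expect_call(self, symbol, *args):
        self.expected.append((symbol, args))

    def playback_call(self, symbol, *args):
        self.actual.append((symbol, args))

    def check_playback(self):
        # Raise if the calls made do not match the recorded ones.
        if self.actual != self.expected:
            raise AssertionError("playback does not match recorded calls")

god = MiniGod()
god.expect_call("open", "/etc/example.conf")
god.playback_call("open", "/etc/example.conf")
god.check_playback()  # matches, so no exception
```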

create_mock_class(cls, name, default_ret_val=None)[source]

Given something that defines a namespace cls (class, object, module), and a (hopefully unique) name, will create a mock_class object with that name and that possesses all the public attributes of cls. default_ret_val sets the default_ret_val on all methods of the cls mock.

create_mock_class_obj(cls, name, default_ret_val=None)[source]
create_mock_function(symbol, default_return_val=None)[source]

Create a mock_function named symbol with a default return value of default_return_val.

mock_io()[source]

Mocks and saves the stdout & stderr output

mock_up(obj, name, default_ret_val=None)[source]

Given an object (class instance or module) and a registration name, replace all of its methods with mock function objects (passing the original functions to the mock functions).

set_fail_fast(fail_fast)[source]
stub_class(namespace, symbol)[source]
stub_class_method(cls, symbol)[source]
stub_function(namespace, symbol)[source]
stub_function_to_return(namespace, symbol, object_to_return)[source]

Stub out a function with one that always returns a fixed value.

:param namespace: The namespace containing the function to stub out.
:param symbol: The attribute within the namespace to stub out.
:param object_to_return: The value that the stub should return whenever it is called.
stub_with(namespace, symbol, new_attribute)[source]
unmock_io()[source]

Restores the stdout & stderr, and returns both output strings
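
The mock_io()/unmock_io() pair amounts to swapping sys.stdout and sys.stderr for in-memory buffers and later restoring them, roughly like this sketch:

```python
import io
import sys

# Save the real streams, then substitute StringIO buffers.
saved_out, saved_err = sys.stdout, sys.stderr
sys.stdout, sys.stderr = io.StringIO(), io.StringIO()
try:
    print("hello from the test")  # captured, not shown on the terminal
finally:
    # Collect what was written, then restore the real streams.
    captured_out = sys.stdout.getvalue()
    captured_err = sys.stderr.getvalue()
    sys.stdout, sys.stderr = saved_out, saved_err
```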

unstub(namespace, symbol)[source]
unstub_all()[source]
class autotest.client.shared.test_utils.mock.regex_comparator(pattern, flags=0)[source]

Bases: autotest.client.shared.test_utils.mock.argument_comparator

is_satisfied_by(parameter)[source]

mock_demo Module

mock_demo_MUT Module

autotest.client.shared.test_utils.mock_demo_MUT.do_create_stuff()[source]

unittest Module

Python unit testing framework, based on Erich Gamma’s JUnit and Kent Beck’s Smalltalk testing framework.

This module contains the core framework classes that form the basis of specific test cases and suites (TestCase, TestSuite etc.), and also a text-based utility class for running the tests and reporting the results (TextTestRunner).

Simple usage:

    import unittest

    class IntegerArithmeticTestCase(unittest.TestCase):
        def testAdd(self):  # test method names begin with 'test'
            self.assertEqual((1 + 2), 3)
            self.assertEqual(0 + 1, 1)

        def testMultiply(self):
            self.assertEqual((0 * 10), 0)
            self.assertEqual((5 * 8), 40)

    if __name__ == '__main__':
        unittest.main()

Further information is available in the bundled documentation, and from

Copyright (c) 1999-2003 Steve Purcell
Copyright (c) 2003-2009 Python Software Foundation
Copyright (c) 2009 Garrett Cooper

This module is free software, and you may redistribute it and/or modify it under the same terms as Python itself, so long as this copyright message and disclaimer are retained in their original form.

IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS CODE, EVEN IF THE AUTHOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE CODE PROVIDED HEREUNDER IS ON AN “AS IS” BASIS, AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

Garrett: This module was backported using source from r71263 with fixes noted in Issue 5771.

class autotest.client.shared.test_utils.unittest.TestResult[source]

Bases: object

Holder for test result information.

Test results are automatically managed by the TestCase and TestSuite classes, and do not need to be explicitly manipulated by writers of tests.

Each instance holds the total number of tests run, and collections of failures and errors that occurred among those test runs. The collections contain tuples of (testcase, exceptioninfo), where exceptioninfo is the formatted traceback of the error that occurred.

addError(test, err)[source]

Called when an error has occurred. ‘err’ is a tuple of values as returned by sys.exc_info().

addExpectedFailure(test, err)[source]

Called when an expected failure/error occurred.

addFailure(test, err)[source]

Called when a failure has occurred. ‘err’ is a tuple of values as returned by sys.exc_info().

addSkip(test, reason)[source]

Called when a test is skipped.

addSuccess(test)[source]

Called when a test has completed successfully

addUnexpectedSuccess(test)[source]

Called when a test was expected to fail, but succeeded.

startTest(test)[source]

Called when the given test is about to be run

stop()[source]

Indicates that the tests should be aborted

stopTest(test)[source]

Called when the given test has been run

wasSuccessful()[source]

Tells whether or not this result was a success

class autotest.client.shared.test_utils.unittest.TestCase(methodName='runTest')[source]

Bases: object

A class whose instances are single test cases.

By default, the test code itself should be placed in a method named ‘runTest’.

If the fixture may be used for many test cases, create as many test methods as are needed. When instantiating such a TestCase subclass, specify in the constructor arguments the name of the test method that the instance is to execute.

Test authors should subclass TestCase for their own tests. Construction and deconstruction of the test’s environment (‘fixture’) can be implemented by overriding the ‘setUp’ and ‘tearDown’ methods respectively.

If it is necessary to override the __init__ method, the base class __init__ method must always be called. It is important that subclasses should not change the signature of their __init__ method, since instances of the classes are instantiated automatically by parts of the framework in order to be run.
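
A minimal fixture-based subclass, shown against the standard library's unittest (which this module backports):

```python
import unittest

class FixtureTest(unittest.TestCase):
    def setUp(self):
        # Runs before every test method: build the fixture.
        self.items = [1, 2, 3]

    def tearDown(self):
        # Runs after every test method: tear the fixture down.
        self.items = None

    def test_length(self):
        self.assertEqual(len(self.items), 3)

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(FixtureTest).run(result)
```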

addTypeEqualityFunc(typeobj, function)[source]

Add a type specific assertEqual style function to compare a type.

This method is for use by TestCase subclasses that need to register their own type equality functions to provide nicer error messages.

Args:
    typeobj: The data type to call this function on when both values
        are of the same type in assertEqual().
    function: The callable taking two arguments and an optional msg=
        argument that raises self.failureException with a useful
        error message when the two arguments are not equal.
assertAlmostEqual(first, second, places=7, msg=None)[source]

Fail if the two objects are unequal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero.

Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).

assertAlmostEquals(first, second, places=7, msg=None)

Fail if the two objects are unequal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero.

Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).

assertDictContainsSubset(expected, actual, msg=None)[source]

Checks whether actual is a superset of expected.

assertDictEqual(d1, d2, msg=None)[source]
assertEqual(first, second, msg=None)[source]

Fail if the two objects are unequal as determined by the ‘==’ operator.

assertEquals(first, second, msg=None)

Fail if the two objects are unequal as determined by the ‘==’ operator.

assertFalse(expr, msg=None)[source]

Fail the test if the expression is true.

assertGreater(a, b, msg=None)[source]

Just like self.assertTrue(a > b), but with a nicer default message.

assertGreaterEqual(a, b, msg=None)[source]

Just like self.assertTrue(a >= b), but with a nicer default message.

assertIn(member, container, msg=None)[source]

Just like self.assertTrue(a in b), but with a nicer default message.

assertIs(expr1, expr2, msg=None)[source]

Just like self.assertTrue(a is b), but with a nicer default message.

assertIsNone(obj, msg=None)[source]

Same as self.assertTrue(obj is None), with a nicer default message.

assertIsNot(expr1, expr2, msg=None)[source]

Just like self.assertTrue(a is not b), but with a nicer default message.

assertIsNotNone(obj, msg=None)[source]

Included for symmetry with assertIsNone.

assertLess(a, b, msg=None)[source]

Just like self.assertTrue(a < b), but with a nicer default message.

assertLessEqual(a, b, msg=None)[source]

Just like self.assertTrue(a <= b), but with a nicer default message.

assertListEqual(list1, list2, msg=None)[source]

A list-specific equality assertion.

Args:
    list1: The first list to compare.
    list2: The second list to compare.
    msg: Optional message to use on failure instead of a list of
        differences.
assertMultiLineEqual(first, second, msg=None)[source]

Assert that two multi-line strings are equal.

assertNotAlmostEqual(first, second, places=7, msg=None)[source]

Fail if the two objects are equal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero.

Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).

assertNotAlmostEquals(first, second, places=7, msg=None)

Fail if the two objects are equal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero.

Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).

assertNotEqual(first, second, msg=None)[source]

Fail if the two objects are equal as determined by the ‘==’ operator.

assertNotEquals(first, second, msg=None)

Fail if the two objects are equal as determined by the ‘==’ operator.

assertNotIn(member, container, msg=None)[source]

Just like self.assertTrue(a not in b), but with a nicer default message.

assertRaises(excClass, callableObj=None, *args, **kwargs)[source]

Fail unless an exception of class excClass is thrown by callableObj when invoked with arguments args and keyword arguments kwargs. If a different type of exception is thrown, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.

If called with callableObj omitted or None, will return a context object used like this:

with self.assertRaises(some_error_class):
    do_something()
assertRaisesRegexp(expected_exception, expected_regexp, callable_obj=None, *args, **kwargs)[source]

Asserts that the message in a raised exception matches a regexp.

Args:
    expected_exception: Exception class expected to be raised.
    expected_regexp: Regexp (re pattern object or string) expected
        to be found in error message.
    callable_obj: Function to be called.
    args: Extra args.
    kwargs: Extra kwargs.

assertRegexpMatches(text, expected_regex, msg=None)[source]
assertSameElements(expected_seq, actual_seq, msg=None)[source]

An unordered sequence specific comparison.

Raises with an error message listing which elements of expected_seq are missing from actual_seq and vice versa if any.

assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)[source]

An equality assertion for ordered sequences (like lists and tuples).

For the purposes of this function, a valid ordered sequence type is one which can be indexed, has a length, and has an equality operator.

Args:
    seq1: The first sequence to compare.
    seq2: The second sequence to compare.
    seq_type: The expected datatype of the sequences, or None if no
        datatype should be enforced.
    msg: Optional message to use on failure instead of a list of
        differences.
assertSetEqual(set1, set2, msg=None)[source]

A set-specific equality assertion.

Args:
    set1: The first set to compare.
    set2: The second set to compare.
    msg: Optional message to use on failure instead of a list of
        differences.

For more general containership equality, assertSameElements will work with things other than sets. This uses ducktyping to support different types of sets, and is optimized for sets specifically (parameters must support a difference method).

assertTrue(expr, msg=None)[source]

Fail the test unless the expression is true.

assertTupleEqual(tuple1, tuple2, msg=None)[source]

A tuple-specific equality assertion.

Args:
    tuple1: The first tuple to compare.
    tuple2: The second tuple to compare.
    msg: Optional message to use on failure instead of a list of
        differences.
assert_(expr, msg=None)

Fail the test unless the expression is true.

countTestCases()[source]
debug()[source]

Run the test without collecting errors in a TestResult

defaultTestResult()[source]
fail(msg=None)[source]

Fail immediately, with the given message.

failIf(*args, **kwargs)
failIfAlmostEqual(*args, **kwargs)
failIfEqual(*args, **kwargs)
failUnless(*args, **kwargs)
failUnlessAlmostEqual(*args, **kwargs)
failUnlessEqual(*args, **kwargs)
failUnlessRaises(*args, **kwargs)
failureException

alias of AssertionError

id()[source]
longMessage = False
run(result=None)[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

shortDescription()[source]

Returns both the test method name and first line of its docstring.

If no docstring is given, only returns the method name.

This method overrides unittest.TestCase.shortDescription(), which only returns the first line of the docstring, obscuring the name of the test upon failure.

skipTest(reason)[source]

Skip this test.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

class autotest.client.shared.test_utils.unittest.TestSuite(tests=())[source]

Bases: object

A test suite is a composite test consisting of a number of TestCases.

For use, create an instance of TestSuite, then add test case instances. When all tests have been added, the suite can be passed to a test runner, such as TextTestRunner. It will run the individual test cases in the order in which they were added, aggregating the results. When subclassing, do not forget to call the base class constructor.
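
That workflow, sketched with the standard library's equivalents:

```python
import io
import unittest

class SuiteDemo(unittest.TestCase):
    def test_one(self):
        self.assertTrue(True)

# Build a suite by hand, then pass it to a text runner.
suite = unittest.TestSuite()
suite.addTest(SuiteDemo("test_one"))

runner = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0)
run_result = runner.run(suite)
```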

addTest(test)[source]
addTests(tests)[source]
countTestCases()[source]
debug()[source]

Run the tests without collecting errors in a TestResult

run(result)[source]
class autotest.client.shared.test_utils.unittest.ClassTestSuite(tests, class_collected_from)[source]

Bases: autotest.client.shared.test_utils.unittest.TestSuite

Suite of tests derived from a single TestCase class.

id()[source]
run(result)[source]
shortDescription()
class autotest.client.shared.test_utils.unittest.TextTestRunner(stream=<open file '<stderr>', mode 'w'>, descriptions=1, verbosity=1)[source]

Bases: object

A test runner class that displays results in textual form.

It prints out the names of tests as they are run, errors as they occur, and a summary of the results at the end of the test run.

run(test)[source]

Run the given test case or test suite.

class autotest.client.shared.test_utils.unittest.TestLoader[source]

Bases: object

This class is responsible for loading tests according to various criteria and returning them wrapped in a TestSuite

classSuiteClass

alias of ClassTestSuite

getTestCaseNames(testCaseClass)[source]

Return a sorted sequence of method names found within testCaseClass

loadTestsFromModule(module)[source]

Return a suite of all test cases contained in the given module

loadTestsFromName(name, module=None)[source]

Return a suite of all test cases given a string specifier.

The name may resolve either to a module, a test case class, a test method within a test case class, or a callable object which returns a TestCase or TestSuite instance.

The method optionally resolves the names relative to a given module.
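
Loader behavior can be illustrated with the standard library's equivalents (which this module backports):

```python
import unittest

class LoaderDemo(unittest.TestCase):
    def test_alpha(self):
        pass

    def test_beta(self):
        pass

loader = unittest.TestLoader()
names = loader.getTestCaseNames(LoaderDemo)       # sorted method names
suite = loader.loadTestsFromTestCase(LoaderDemo)  # one test case per name
```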

loadTestsFromNames(names, module=None)[source]

Return a suite of all test cases found using the given sequence of string specifiers. See ‘loadTestsFromName()’.

loadTestsFromTestCase(testCaseClass)[source]

Return a suite of all test cases contained in testCaseClass

sortTestMethodsUsing()

cmp(x, y) -> integer

Return negative if x<y, zero if x==y, positive if x>y.

suiteClass

alias of TestSuite

testMethodPrefix = 'test'
class autotest.client.shared.test_utils.unittest.FunctionTestCase(testFunc, setUp=None, tearDown=None, description=None)[source]

Bases: autotest.client.shared.test_utils.unittest.TestCase

A test case that wraps a test function.

This is useful for slipping pre-existing test functions into the unittest framework. Optionally, set-up and tidy-up functions can be supplied. As with TestCase, the tidy-up (‘tearDown’) function will always be called if the set-up (‘setUp’) function ran successfully.
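
Wrapping such a pre-existing function looks like this (stdlib unittest shown; this backport exposes the same class):

```python
import unittest

def check_addition():
    # A pre-existing, framework-agnostic test function.
    assert 2 + 2 == 4

case = unittest.FunctionTestCase(check_addition,
                                 description="standalone addition check")
result = unittest.TestResult()
case.run(result)
```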

id()[source]
runTest()[source]
setUp()[source]
shortDescription()[source]
tearDown()[source]
autotest.client.shared.test_utils.unittest.main

alias of TestProgram

exception autotest.client.shared.test_utils.unittest.SkipTest[source]

Bases: exceptions.Exception

Raise this exception in a test to skip it.

Usually you can use TestResult.skip() or one of the skipping decorators instead of raising this directly.

autotest.client.shared.test_utils.unittest.skip(reason)[source]

Unconditionally skip a test.

autotest.client.shared.test_utils.unittest.skipIf(condition, reason)[source]

Skip a test if the condition is true.

autotest.client.shared.test_utils.unittest.skipUnless(condition, reason)[source]

Skip a test unless the condition is true.

autotest.client.shared.test_utils.unittest.expectedFailure(func)[source]
autotest.client.shared.test_utils.unittest.getTestCaseNames(testCaseClass, prefix, sortUsing=<built-in function cmp>)[source]
autotest.client.shared.test_utils.unittest.makeSuite(testCaseClass, prefix='test', sortUsing=<built-in function cmp>, suiteClass=<class 'autotest.client.shared.test_utils.unittest.TestSuite'>)[source]
autotest.client.shared.test_utils.unittest.findTestCases(module, prefix='test', sortUsing=<built-in function cmp>, suiteClass=<class 'autotest.client.shared.test_utils.unittest.TestSuite'>)[source]