### Rust’s Quickcheck

Quickcheck is a Haskell property-based testing library. What this means for a layman (like myself) is that instead of sitting down and painstakingly creating edge cases for unit tests (and forgetting most of them), the library generates many possible inputs for you.

There’s a Rust port of quickcheck. Here’s a simplified example where I validate my trie data structure using quickcheck:

```
use super::*;
use quickcheck::TestResult;
use std::collections::BTreeMap;

fn insertion_test_helper(v: Vec<(String, String)>) -> TestResult {
    let mut t = Trie::default();
    let mut bt = BTreeMap::new();
    for (i, j) in v.iter() {
        // Insert into both; the trie must agree with the reference BTreeMap.
        assert_eq!(t.insert(i.clone(), j.clone()), bt.insert(i.clone(), j.clone()));
    }
    for (i, _) in v.iter() {
        assert_eq!(t.get(i), bt.get(i));
    }
    TestResult::from_bool(true)
}

quickcheck! {
    fn random_insertion(v: Vec<(String, String)>) -> TestResult {
        insertion_test_helper(v)
    }
}
```

Quickcheck automatically generates `(String, String)` pairs based on the function signature of the test. In the test I compare my trie against the standard library’s `BTreeMap` for inserts and gets.

### Python’s Hypothesis

Another property-based testing framework is Hypothesis. Hypothesis’ homepage describes property-based testing better than I did:

> Normal “automated” software testing is surprisingly manual. Every scenario the computer runs, someone had to write by hand.
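To make that concrete, here is a minimal, self-contained Hypothesis property of my own (a toy example, not from the Cholesky tests below): Hypothesis generates the input lists, and we assert properties that should hold for *any* input.

```python
from hypothesis import given
import hypothesis.strategies as st

# Toy property test: for any list of integers, sorting is idempotent
# and preserves length. Hypothesis generates the inputs.
@given(st.lists(st.integers()))
def test_sort_properties(xs):
    s = sorted(xs)
    assert sorted(s) == s       # sorting twice changes nothing
    assert len(s) == len(xs)    # no elements gained or lost

# Calling the decorated function runs it against many generated lists.
test_sort_properties()
```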

I wanted to test some of my C implementations of Cholesky matrix decomposition. Cholesky decomposition works on a specific set of inputs: square, symmetric, positive-definite matrices.

I used Hypothesis and the Hypothesis numpy package to generate these:

```
@given(arrays(np.double,
              array_shapes(min_dims=2, max_dims=2,
                           min_side=int(sys.argv[1]),
                           max_side=int(sys.argv[1])),
              elements=st.floats(0, 1000, allow_nan=False)))
def test_rand(self, a):
    assume(a.shape[0] == a.shape[1])
    size = a.shape[0]
    spdm = np.dot(a, np.transpose(a)) + size*np.identity(size)
    assume(np.array_equal(spdm, np.transpose(spdm)) and
           (np.linalg.eigvals(spdm) > 0).all())
    ...
```

There’s a lot going on here, so let’s deconstruct it.

#### min_dims=2, max_dims=2

A relic of numpy’s n-dimensional ndarrays: these parameters restrict the generated arrays to 2D matrices.
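A quick sketch of what this constraint buys us (my own standalone check, assuming `hypothesis` with its numpy extra is installed): every shape the strategy draws has exactly two dimensions.

```python
from hypothesis import given
from hypothesis.extra.numpy import array_shapes

# With min_dims == max_dims == 2, every drawn shape is a (rows, cols) pair.
@given(array_shapes(min_dims=2, max_dims=2))
def check_two_dimensional(shape):
    assert len(shape) == 2

check_two_dimensional()
```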

#### min_side=int(sys.argv[1]), max_side=int(sys.argv[1])

This generates NxN matrices, where N is the first command-line argument. There’s unfortunately no cleaner way to pass a variable to a decorator than `sys.argv[1]`.

#### elements=st.floats(0, 1000, allow_nan=False)

`st.floats` is Hypothesis’ float strategy; the parameters make it generate floats in `[0, 1000]` without any NaN values.
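A small standalone check of that claim (a toy sketch of mine, not part of the original tests):

```python
import math
from hypothesis import given
import hypothesis.strategies as st

# Every float drawn from this strategy is within bounds and never NaN.
@given(st.floats(0, 1000, allow_nan=False))
def check_bounds(x):
    assert 0 <= x <= 1000
    assert not math.isnan(x)

check_bounds()
```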

#### assume(a.shape[0] == a.shape[1])

Calling `assume(CONDITION)` within a Hypothesis test discards any generated inputs that violate CONDITION. The documentation explains:

> The “assume” call will halt execution by throwing an exception if it’s not satisfied. This will not cause the test to fail. Instead Hypothesis will try to control its data generation so that it avoids data that is likely to violate your requirements.

Here we ensure the matrices are square.
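The mechanics of `assume()` can be shown with a toy property of my own (not from the original test suite): filtered inputs are skipped rather than counted as failures.

```python
from hypothesis import given, assume
import hypothesis.strategies as st

# assume() discards n == 0 instead of failing the test on division by zero.
@given(st.integers(-100, 100))
def check_self_division(n):
    assume(n != 0)
    assert n // n == 1

check_self_division()
```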

#### spdm = np.dot(a, np.transpose(a)) + size*np.identity(size)

Here we massage the input array into something positive-definite: `A·Aᵀ` is always symmetric and positive semi-definite, and adding `size * I` shifts every eigenvalue up, making the result positive-definite.
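Why this works can be sketched with plain numpy on one concrete matrix (this standalone check is mine, mirroring the test’s construction):

```python
import numpy as np

# Build an SPD matrix the same way the test does, from one fixed input.
rng = np.random.default_rng(0)
a = rng.uniform(0, 1000, size=(4, 4))
size = a.shape[0]
spdm = np.dot(a, np.transpose(a)) + size * np.identity(size)

assert np.array_equal(spdm, np.transpose(spdm))   # symmetric
assert (np.linalg.eigvals(spdm) > 0).all()        # positive-definite
np.linalg.cholesky(spdm)  # raises LinAlgError unless spdm is SPD
```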

#### assume(np.array_equal(spdm, np.transpose(spdm)) and (np.linalg.eigvals(spdm) > 0).all())

Even though we just massaged the values, here we use another `assume()` call to make absolutely sure that the matrix is symmetric (i.e. A = A^T) and positive-definite (all eigenvalues are positive).

#### ctypes

Finally, since the Hypothesis numpy plugin generates numpy ndarrays, we get all the advantages of ndarrays, such as obtaining C pointers from them:

```
import ctypes as c

spdm.ctypes.data_as(c.POINTER(c.c_double))
```

This way we can pass the Hypothesis testcases to C functions.
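As a self-contained sketch of that pointer plumbing (substituting libc’s `memcpy` for the C Cholesky routine, and assuming a Unix-like system where `ctypes.CDLL(None)` loads the C runtime):

```python
import ctypes as c
import numpy as np

libc = c.CDLL(None)  # load the already-linked C runtime (Unix-like systems)

src = np.arange(4, dtype=np.double)
dst = np.zeros(4, dtype=np.double)

# Hand the raw double* pointers of both ndarrays to a C function.
libc.memcpy(dst.ctypes.data_as(c.POINTER(c.c_double)),
            src.ctypes.data_as(c.POINTER(c.c_double)),
            src.nbytes)

assert np.array_equal(dst, src)
```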