### Overview

This tutorial will show you how to generate the `validate()` method in addition to the standard `eval()` method, and how to use it to validate that the C++ implementation of your network model matches the TensorFlow original. It builds upon the convolutional MNIST example used in tutorials 1 & 2; you'll need to add some code to both the Python and C++ sources.

### Generating the Validate Method

In order to generate the `validate()` method we need a set of example input data to be used for the validation case. This data is passed to `generate()` as a dictionary whose keys are the input placeholder names and whose values are numpy.ndarrays.
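
For illustration, a dictionary for a model with a single placeholder could be built as in the sketch below; the placeholder name and the flattened (1, 784) MNIST input shape are assumptions that match the model used in this example.

```python
import numpy as np

# Hypothetical illustration of the validation inputs dictionary: the key is
# the placeholder name in the TensorFlow graph, the value is a numpy.ndarray
# with the shape the placeholder expects (here a single flattened 28x28
# MNIST image, i.e. shape (1, 784)).
example_validation_inputs = {"input/x-input": np.zeros((1, 784), dtype=np.float32)}
```

In practice you would use real data rather than zeros, as in the next step.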

In this example we'll use the first element of the MNIST test data for validation (slicing with `[:1]` keeps the batch dimension):

```python
validation_input = mnist.test.images[:1]
```

Add this to the call to `generate()` as shown below, and set the `validation_type` parameter to `Full`.

```python
c_exporter.generate("tfmin_generated/mnist_model",
                    "mnistModel",
                    validation_inputs={"input/x-input": validation_input},
                    validation_type='Full',
                    timing=True,
                    layout='RowMajor')
```

If you now run the Python script again, you should see the following lines at the bottom of its output:

```
Analysed flow-graph
Optimised memory map.
Generated constructor
Generated inference method.
Generated validation method.
Generated timing method.
Generated data header.
Complete
```
### The Validate Method

The declaration of the `validate()` method that has been generated by the modified Python script is shown below:

```cpp
template <typename Device>
bool YourObjectName::validate(const Device &d);
```

Since the `validate()` method works on the provided validation data, it doesn't require any input or output buffers like the eval or timing methods do. It prints the validation result of each operation of the model as it is processed; if an operation fails validation, it stops executing the model and returns false. If all operations pass validation, the method returns true.
### Using the Validate Method in your C++ Project

The `validate()` method is simpler to use than either the eval or timing methods: its only parameter is the Eigen device object, and it returns a Boolean indicating the result of the validation.

```cpp
std::cout << "about to verify model." << std::endl;
if (mnist.validate(device))
    std::cout << "Verification Passed." << std::endl;
else
    std::cout << "Error: Verification Failed." << std::endl;
```

Copy and paste this code snippet into the test_mnist.cpp source file of the mnist_conv example, just after the timing call, and it should display a successful validation result as shown below:

```
Validating model computations against stored results from TensorFlow
Eigen Version 3.3.90
About to perform Reshape operation [Reshape]
Passed
About to perform Conv2D operation [Conv1/convolution]
Passed
About to perform Mul operation [Conv1/activations/mul]
Passed
About to perform Maximum operation [Conv1/activations/Maximum]
Passed
About to perform MaxPool operation [Conv1/pooling]
Passed
About to perform Reshape operation [Conv1/Reshape_4]
Passed
About to perform MatMul operation [Dense1/Wx_plus_b/MatMul]
Passed
About to perform Add operation [Dense1/Wx_plus_b/add]
Passed
About to perform Mul operation [Dense1/activation/mul]
Passed
About to perform Maximum operation [Dense1/activation/Maximum]
Passed
About to perform MatMul operation [layer2/Wx_plus_b/MatMul]
Passed
About to perform Add operation [layer2/Wx_plus_b/add]
Passed
Verification Passed.
```
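
The snippet you pasted into test_mnist.cpp assumes that the `mnist` model object and the Eigen `device` already exist in that file from tutorials 1 & 2. If you would rather run the validation as a small standalone program, a minimal sketch could look like the following; the header name `tfmin_generated/mnist_model.h`, a default-constructible `mnistModel` class, and the use of `Eigen::DefaultDevice` are assumptions based on the arguments passed to `generate()` above, so adjust them to match your generated sources and build setup.

```cpp
#include <iostream>
#include <unsupported/Eigen/CXX11/Tensor>  // provides Eigen::DefaultDevice
#include "tfmin_generated/mnist_model.h"   // assumed name of the generated header

int main()
{
  Eigen::DefaultDevice device;  // single-threaded CPU device
  mnistModel mnist;             // generated model object (constructor assumed to take no arguments)

  // validate() runs the stored validation inputs through every operation
  // and compares the results against the values recorded from TensorFlow.
  if (mnist.validate(device))
  {
    std::cout << "Verification Passed." << std::endl;
    return 0;
  }

  std::cout << "Error: Verification Failed." << std::endl;
  return 1;
}
```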
### Summary

This tutorial has demonstrated how to generate and use the `validate()` method of the MNIST inference model to validate that the C++ implementation produces the same results as the TensorFlow original.

---

[Previous Tutorial](/Tutorials/Tutorial-2-Evaluating-the-Runtime-of-a-Model) - Evaluating the Runtime of a Model

[Next Tutorial](/Tutorials/Tutorial-4-Adding-Support-for-More-Operations) - Adding Support for More Operations