Neural Network Utilities¶
These are a couple of classes that help us perform the operations we need on our neural networks and linear regressions.
TensorMinMax (sklearn MinMaxScaler)¶
class SiPANN.import_nn.TensorMinMax(feature_range=(0, 1), copy=True)¶
Copy of sklearn’s MinMaxScaler implemented to work with tensorflow.
When used, tensorflow is able to take gradients through the transformation as well as through the network itself, allowing for gradient-based optimization in inverse design problems.
Parameters: - feature_range (2-tuple, optional) – Desired range of transformed data. Defaults to (0, 1).
- copy (bool, optional) – Set to False to perform in-place operations. Defaults to True.
fit(X)¶
Fits the transformer to the data.
Essentially finds the original min and max of the data so it can be shifted.
Parameters: X (tensor or ndarray) – Data to fit
transform(X, mode='numpy')¶
Performs the transform.
Parameters: - X (tensor or ndarray) – Data to transform
- mode ({'numpy' or 'tensor'}, optional) – Whether to use numpy or tensorflow operations.
Returns: X – Transformed data
Return type: tensor or ndarray
inverse_transform(X, mode='numpy')¶
Undoes the transform.
Parameters: - X (tensor or ndarray) – Data to inverse transform
- mode ({'numpy' or 'tensor'}, optional) – Whether to use numpy or tensorflow operations.
Returns: X – Inverse transformed data
Return type: tensor or ndarray
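As a rough sketch of what the scaler computes, here is a plain-numpy analogue of the fit/transform/inverse_transform cycle. This is illustrative only — it is not the tensorflow-capable implementation, and the class name `MinMaxSketch` and its attribute names are made up for this example:

```python
import numpy as np

# Minimal numpy analogue of TensorMinMax's fit/transform/inverse_transform.
# The real class additionally supports mode='tensor' so tensorflow can
# differentiate through the scaling.
class MinMaxSketch:
    def __init__(self, feature_range=(0, 1)):
        self.feature_range = feature_range

    def fit(self, X):
        # Record the per-feature min and max of the training data.
        self.data_min_ = X.min(axis=0)
        self.data_max_ = X.max(axis=0)
        lo, hi = self.feature_range
        self.scale_ = (hi - lo) / (self.data_max_ - self.data_min_)
        self.min_ = lo - self.data_min_ * self.scale_
        return self

    def transform(self, X):
        return X * self.scale_ + self.min_

    def inverse_transform(self, X):
        return (X - self.min_) / self.scale_

X = np.array([[1.0, 10.0], [3.0, 30.0], [2.0, 20.0]])
scaler = MinMaxSketch().fit(X)
Xt = scaler.transform(X)           # each column now spans [0, 1]
Xr = scaler.inverse_transform(Xt)  # recovers the original data
```

Because the transform is a simple affine map, a framework that traces these operations can take gradients through it — which is the point of the tensorflow version.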
Neural Network Importer¶
class SiPANN.import_nn.ImportNN(directory)¶
Class to import a trained NN.
This is the way we’ve been saving and using our neural networks. After saving them, we can simply import them using this class, and it keeps them open for as many operations as we desire.
normX¶
Norm of the inputs.
Type: TensorMinMax
normY¶
Norm of the outputs.
Type: TensorMinMax
s_data¶
Dimensions (size) of the inputs and outputs.
Type: 2-tuple
Parameters: directory (str) – The directory where the model has been stored.
validate_input(input)¶
Used to check for valid input.
If the input is only a single data point, expands its dimensions so it fits properly.
Parameters: input (ndarray) – Numpy array with width s_data[0] (hopefully)
Returns: input – Numpy array with width s_data[0] (hopefully) and height 1
Return type: ndarray
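The dimension-expansion step can be sketched in plain numpy. The function name and error handling below are illustrative, not the class's actual code:

```python
import numpy as np

def validate_input_sketch(input, width):
    """Sketch of the check: a single sample of shape (width,) is
    promoted to shape (1, width) so downstream code sees a batch."""
    input = np.asarray(input)
    if input.ndim == 1:
        input = input[np.newaxis, :]   # height 1
    if input.shape[1] != width:
        raise ValueError(f"expected width {width}, got {input.shape[1]}")
    return input

single = validate_input_sketch([0.5, 1.5, 2.5], width=3)   # shape (1, 3)
batch = validate_input_sketch(np.zeros((4, 3)), width=3)   # unchanged, (4, 3)
```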
output(input, kp=1)¶
Runs input through the neural network.
Parameters: - input (ndarray) – Numpy array with width s_data[0]
- kp (float, optional) – Keep probability from 0 to 1; 1 performs no dropout on nodes, 0 drops all of them. Defaults to 1.
Returns: output – Numpy array with width s_data[1]
Return type: ndarray
differentiate(input, d, kp=1)¶
Returns a partial derivative of the neural network.
Parameters: - input (ndarray) – Numpy array with width s_data[0]
- d (3-tuple of ints) – Partial derivative of the first element with respect to the second element, to the order of the third element
- kp (float, optional) – Keep probability from 0 to 1; 1 performs no dropout on nodes, 0 drops all of them. Defaults to 1.
Returns: output – Numpy array with width s_data[1]
Return type: ndarray
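A common way to sanity-check such network partials is a central finite difference. The sketch below does this for a toy function standing in for the network; the function and helper names are made up for illustration:

```python
import numpy as np

def toy_net(x):
    # Stand-in for a scalar network output: f(x0, x1) = x0**2 * x1
    return x[0] ** 2 * x[1]

def central_diff(f, x, i, order=1, h=1e-4):
    """First- or second-order partial of f with respect to x[i]
    via central finite differences."""
    e = np.zeros_like(x)
    e[i] = h
    if order == 1:
        return (f(x + e) - f(x - e)) / (2 * h)
    # second-order partial
    return (f(x + e) - 2 * f(x) + f(x - e)) / h ** 2

x = np.array([2.0, 3.0])
d1 = central_diff(toy_net, x, 0)           # d/dx0  = 2*x0*x1 = 12
d2 = central_diff(toy_net, x, 0, order=2)  # d2/dx0^2 = 2*x1  = 6
```

Comparing a network's `differentiate` output against this kind of numerical estimate is a cheap way to catch indexing mistakes in the `d` tuple.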
rel_error(input, output, kp=1)¶
Returns the relative error of the network.
Parameters: - input (ndarray) – Numpy array with width s_data[0]
- output (ndarray) – Numpy array with width s_data[1]
- kp (float, optional) – Keep probability from 0 to 1; 1 performs no dropout on nodes, 0 drops all of them. Defaults to 1.
Returns: relative error – The relative error of inputs/outputs
Return type: scalar
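A plausible reading of this metric — the exact norm the class uses is not shown here, so this is an assumption — is the norm of the prediction error divided by the norm of the expected outputs:

```python
import numpy as np

def rel_error_sketch(predicted, expected):
    # Relative error: ||predicted - expected|| / ||expected||
    predicted = np.asarray(predicted, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return np.linalg.norm(predicted - expected) / np.linalg.norm(expected)

err = rel_error_sketch([1.0, 2.1, 2.9], [1.0, 2.0, 3.0])  # small scalar
```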
Linear Regression Importer¶
class SiPANN.import_nn.ImportLR(directory)¶
Class to import a trained Linear Regression.
To remove the dependence on sklearn and its updates, we manually implement an sklearn Pipeline consisting of (PolynomialFeatures, LinearRegression). We use the actual sklearn implementation to train, save the coefficients, and then implement the pipeline here. To see how to save such a pipeline for use here, see SiPANN/LR/regress.py.
coef_¶
Linear Regression coefficients.
Type: ndarray
degree_¶
Degree to be used in PolynomialFeatures.
Type: float
s_data¶
Dimensions of the inputs and outputs.
Type: 2-tuple
Parameters: directory (str) – The directory where the model has been stored.
make_combos(X)¶
Duplicates PolynomialFeatures.
Takes an input X and makes all possible combinations of it using polynomials of the specified degree.
Parameters: X (ndarray) – Numpy array of size (N, s_data[0])
Returns: polyCombos – Numpy array of size (N, )
Return type: ndarray
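What "all possible combinations" means can be sketched with itertools. This mirrors sklearn's PolynomialFeatures only loosely (the exact column ordering is not guaranteed to match); the helper name is made up for illustration:

```python
import numpy as np
from itertools import combinations_with_replacement

def make_combos_sketch(X, degree):
    """All monomials of the input features up to the given degree,
    including the constant term, one row per sample."""
    X = np.atleast_2d(X)
    n_features = X.shape[1]
    columns = [np.ones(X.shape[0])]  # bias / degree-0 term
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n_features), d):
            # Product of the selected features, e.g. idx=(0, 1) -> a*b
            columns.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(columns)

# Two features (a, b), degree 2 -> [1, a, b, a^2, a*b, b^2]
combos = make_combos_sketch(np.array([[2.0, 3.0]]), degree=2)
```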
validate_input(input)¶
Used to check for valid input.
If the input is only a single data point, expands its dimensions so it fits properly.
Parameters: input (ndarray) – Numpy array with width s_data[0] (hopefully)
Returns: input – Numpy array with width s_data[0] (hopefully) and height 1
Return type: ndarray
predict(X)¶
Predict values.
Runs X through the Pipeline to make a prediction.
Parameters: X (ndarray) – Numpy array of size (N, s_data[0])
Returns: polyCombos – Numpy array of size (N, )
Return type: ndarray
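Putting the pieces together, the prediction is just the polynomial features multiplied by the stored coefficients. A self-contained sketch follows — the coefficient values here are made up, whereas a real ImportLR loads `coef_` and `degree_` from a saved model directory:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    # Monomials up to `degree`, constant term first (see make_combos above).
    X = np.atleast_2d(X)
    cols = [np.ones(X.shape[0])]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

# Made-up coefficients for y = 1 + 2*a + 3*b with degree-1 features [1, a, b].
coef_ = np.array([1.0, 2.0, 3.0])

def predict_sketch(X, coef, degree=1):
    # The pipeline: expand to polynomial features, then apply the
    # linear-regression coefficients.
    return poly_features(X, degree) @ coef

y = predict_sketch(np.array([[1.0, 1.0], [0.0, 2.0]]), coef_)  # [6.0, 7.0]
```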