harold package

Submodules

harold.harold module

The MIT License (MIT)

Copyright (c) 2015 Ilhan Polat

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

class harold.harold.Transfer(num, den=None, dt=False)

Bases: object

Transfer is one of the two main system classes in harold (together with State()).

The main mode of instantiation of this class depends on whether the user wants to create a Single Input/Single Output (SISO) or a Multiple Input/Multiple Output (MIMO) model.

For SISO system creation, 1D lists or 1D numpy arrays are expected, e.g.,:

>>> G = Transfer(1,[1,2,1])

For MIMO systems, depending on whether the denominators are shared, there are two distinct ways of entering a MIMO transfer function:

1. Entering a “list of lists of lists” such that every element of the inner lists is numpy array-able (this is explicitly checked) for the numerator, and entering a 1D list or 1D numpy array for the denominator (or the other way around):

>>> G = Transfer([[[1,3,2],[1,3]],[[1],[1,0]]],[1,4,5,2])
>>> G.shape
(2, 2)

2. Entering the denominator also as a list of lists for individual entries, as a bracket nightmare (thanks to Python’s nonnative support for arrays and tedious array syntax):

>>> G = Transfer([
...       [ [1,3,2], [1,3] ],
...       [   [1]  , [1,0] ]
...     ], # end of num
...     [
...       [ [1,2,1] ,  [1,3,3]  ],
...       [ [1,0,0] , [1,2,3,4] ]
...     ])
>>> G.shape
(2, 2)

There is a very involved validator, and if you would like to know why or how a given input is handled, provide the same numerator and denominator to the static method below with the ‘verbose=True’ keyword argument, e.g.,

>>> n , d , shape , is_it_static = Transfer.validate_arguments(
...       [1,3,2],                                 # common numerator
...       [[[1,2,1],[1,3,3]],[[1,0,0],[1,2,3,4]]], # explicit den
...       verbose=True                             # print the logic it followed
...       )

This would print information about the decision logic and return the regularized numerator and denominator, the resulting system shape, and a boolean indicating whether or not the system has dynamics.

However, the preferred way is to make every entry a numpy array inside the list of lists; that skips many compatibility checks. Once created, the shape of the numerator and denominator cannot be changed, but compatibly sized arrays can be supplied, and the pole/zero locations and related properties are recalculated automatically.

The sampling period can be given as the last argument, as a keyword argument with the ‘dt’ key, or changed later via property access:

>>> G = Transfer([1],[1,4,4],0.5)
>>> G.SamplingSet
'Z'
>>> G.SamplingPeriod
0.5
>>> F = Transfer([1],[1,2])
>>> F.SamplingSet
'R'
>>> F.SamplingPeriod = 0.5
>>> F.SamplingSet
'Z'
>>> F.SamplingPeriod
0.5

Setting the SamplingPeriod property to False makes the system continuous time again, and the relevant properties are reset to their continuous-time values.

Warning

Unlike matlab or other tools, a discrete time system needs a specified sampling period (and possibly a discretization method if applicable) because a model without a sampling period doesn’t make sense for analysis. If you don’t care, then make up a number, say, a million, since you don’t care.

SamplingSet

If this property is called G.SamplingSet then returns the set Z or R for discrete and continuous models respectively. This is a read only property and cannot be set. Instead an appropriate setting should be given to the SamplingPeriod property.

NumberOfInputs

A read only property that holds the number of inputs.

NumberOfOutputs

A read only property that holds the number of outputs.

shape

A read only property that holds the shape of the system as a tuple such that the result is (# of inputs, # of outputs).

polynomials

A read only property that returns the model numerator and the denominator as the outputs.

SamplingPeriod

If this property is called G.SamplingPeriod then returns the sampling period data. If this property is set to False, the model is assumed to be a continuous model. Otherwise, a discrete time model is assumed. Upon changing this value, relevant system properties are recalculated.

num

If this property is called G.num then returns the numerator data. Alternatively, if this property is set then the provided value is first validated with the existing denominator shape and causality.

den

If this property is called G.den then returns the denominator data. Alternatively, if this property is set then the provided value is first validated with the existing numerator shape and causality.

DiscretizedWith

This property is used internally to keep track of (if applicable) the original method used for discretization. It is used by the undiscretize() function to reach back to the continuous model that would hopefully minimize the discretization errors. It is also possible to manually set this property such that undiscretize uses the provided method.

DiscretizationMatrix

This matrix, denoted by \(Q\), is internally used to represent the upper linear fractional transformation of the operation \(\frac{1}{s} I = \frac{1}{z} I \star Q\). For example, the typical tustin and forward/backward difference methods can be represented with

\[\begin{split}Q = \begin{bmatrix} I & \sqrt{T}I \\ \sqrt{T}I & \alpha TI \end{bmatrix}\end{split}\]

Different \(\alpha\) values then correspond to the transformations given below:

\(\alpha\) method
\(0\) backward difference (euler)
\(0.5\) tustin
\(1\) forward difference (euler)

This operation is usually justified with a Riemann sum argument; however, for control-theoretic purposes, a proper mapping argument immediately suggests more precise control over the domain to which the left half plane is mapped. For this reason, a discretization matrix option is provided to the user.

The available methods (and their aliases) can be accessed via the internal _KnownDiscretizationMethods variable.
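As a numerical illustration (this is a sanity check of the formula above, not harold's code), the scalar tustin case \(\alpha = 0.5\) can be verified by closing the upper LFT by hand; the sampling period and the test point below are arbitrary:

```python
import numpy as np

# Blocks of the 2x2 Q above for the scalar case, with alpha = 0.5 (tustin)
T, alpha = 0.1, 0.5
Q11, Q12 = 1.0, np.sqrt(T)
Q21, Q22 = np.sqrt(T), alpha * T

z = 2.0                    # arbitrary evaluation point
delta = 1.0 / z            # the (1/z) block that closes the upper LFT
lft = Q22 + Q21 * delta / (1.0 - Q11 * delta) * Q12

# Direct tustin image of 1/s:  (T/2)(z+1)/(z-1)
tustin = (T / 2.0) * (z + 1.0) / (z - 1.0)
print(lft, tustin)         # both evaluate to 0.15
```

For \(\alpha = 0.5\) the closed LFT and the direct bilinear substitution agree at every test point, which is the symmetric case where the forward/backward convention does not matter.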

Note

The common discretization techniques can be selected with a keyword argument and this matrix business can safely be avoided. This is a rather technical issue and it is best to be used sparingly. For the experts, I have to note that the transformation is currently not tested for well-posedness.

Note

SciPy actually uses a variant of this LFT representation as given in the paper of Zhang et al.

PrewarpFrequency

If the discretization method is tustin, then a frequency warping correction might be required to match the discrete time system response at the frequency band of interest. Via this property, the prewarp frequency can be provided.

static validate_arguments(num, den, verbose=False)

A helper function to validate whether given arguments to a Transfer instance are valid and compatible for instantiation.

Since there are many cases that might lead to a valid Transfer instance, the Pythonic “try, except” machinery is not very helpful for checking every possibility, and it is equally challenging to branch off. A few examples of issues that need to be addressed are static gains, a single entry for a MIMO system with common denominators, and so on.

Thus, this function provides a front-end to the laborious size and type checking which would make the Transfer object itself seemingly compatible with duck-typing while keeping the nasty branching implementation internal.

The resulting output is compatible with the main harold Transfer class convention such that

  • If the recognized context is MIMO the resulting outputs are list of lists with numpy arrays being the polynomial coefficient entries.
  • If the recognized context is SISO the entries are numpy arrays with any list structure stripped off.
Parameters:
  • num

    The polynomial coefficient containers. Either one (but not both) can be None, in which case the context is derived from the other for static gains. Otherwise both are expected to be one of np.array, int, float, list, or list of lists of lists or numpy arrays.

    For MIMO context, element counts and causality checks are performed such that each inner numerator array is no longer than the corresponding inner array of the denominator entries.

    For SISO context, causality check is performed between numerator and denominator arrays.

  • den – Same as num
  • verbose (boolean) – A boolean switch to print out what this method thinks about the argument context.
Returns:

  • num (List of lists or numpy array (MIMO/SISO))
  • den (List of lists or numpy array (MIMO/SISO))
  • shape (2-tuple) – Returns the recognized shape of the system
  • Gain_flag (Boolean) – Returns True if the system is recognized as a static gain, False otherwise (for both SISO and MIMO)

class harold.harold.State(a, b=None, c=None, d=None, dt=False)

Bases: object

State() is one of the two main system classes in harold (together with Transfer()).

A State object can be instantiated in a straightforward manner by entering 2D arrays, floats, 1D arrays for row vectors, and so on:

>>> G = State([[0,1],[-4,-5]],[[0],[1]],[[1,0]],1)

However, the preferred way is to make every matrix a numpy array; that skips many compatibility checks. Once created, the shape of the system matrices cannot be changed, but compatibly sized arrays can be supplied, and the pole/zero locations and related properties are recalculated automatically.

The sampling period can be given as the last argument, as a keyword argument with the ‘dt’ key, or changed later via property access:

>>> G = State([[0,1],[-4,-5]],[[0],[1]],[[1,0]],[1],0.5)
>>> G.SamplingSet
'Z'
>>> G.SamplingPeriod
0.5
>>> F = State(1,2,3,4)
>>> F.SamplingSet
'R'
>>> F.SamplingPeriod = 0.5
>>> F.SamplingSet
'Z'
>>> F.SamplingPeriod
0.5

Setting the SamplingPeriod property to False makes the system continuous time again, and the relevant properties are reset to their continuous-time values.

Warning: Unlike matlab or other tools, a discrete time system needs a specified sampling period (and possibly a discretization method if applicable) because a model without a sampling period doesn’t make sense for analysis. If you don’t care, then make up a number, say, a million, since you don’t care.

SamplingSet

If this property is called G.SamplingSet then returns the set Z or R for discrete and continuous models respectively. This is a read only property and cannot be set. Instead an appropriate setting should be given to the SamplingPeriod property.

NumberOfStates

A read only property that holds the number of states.

NumberOfInputs

A read only property that holds the number of inputs.

NumberOfOutputs

A read only property that holds the number of outputs.

shape

A read only property that holds the shape of the system as a tuple such that the result is (# of inputs, # of outputs).

matrices

A read only property that returns the model matrices.

a

If this property is called G.a then returns the matrix data. Alternatively, if this property is set then the provided value is first validated with the existing system shape and number of states.

b

If this property is called G.b then returns the matrix data. Alternatively, if this property is set then the provided value is first validated with the existing system shape and number of states.

c

If this property is called G.c then returns the matrix data. Alternatively, if this property is set then the provided value is first validated with the existing system shape and number of states.

d

If this property is called G.d then returns the matrix data. Alternatively, if this property is set then the provided value is first validated with the existing system shape.

SamplingPeriod

If this property is called G.SamplingPeriod then returns the sampling period data. If this property is set to False, the model is assumed to be a continuous model. Otherwise, a discrete time model is assumed. Upon changing this value, relevant system properties are recalculated.

DiscretizedWith

This property is used internally to keep track of (if applicable) the original method used for discretization. It is used by the undiscretize() function to reach back to the continuous model that would hopefully minimize the discretization errors. It is also possible to manually set this property such that undiscretize uses the provided method.

DiscretizationMatrix

This matrix, denoted by \(Q\), is internally used to represent the upper linear fractional transformation of the operation \(\frac{1}{s} I = \frac{1}{z} I \star Q\). For example, the typical tustin and forward/backward difference methods can be represented with

\[\begin{split}Q = \begin{bmatrix} I & \sqrt{T}I \\ \sqrt{T}I & \alpha TI \end{bmatrix}\end{split}\]

Different \(\alpha\) values then correspond to the transformations given below:

\(\alpha\) method
\(0\) backward difference (euler)
\(0.5\) tustin
\(1\) forward difference (euler)

This operation is usually justified with a Riemann sum argument; however, for control-theoretic purposes, a proper mapping argument immediately suggests more precise control over the domain to which the left half plane is mapped. For this reason, a discretization matrix option is provided to the user.

The available methods (and their aliases) can be accessed via the internal _KnownDiscretizationMethods variable.

Note

The common discretization techniques can be selected with a keyword argument and this matrix business can safely be avoided. This is a rather technical issue and it is best to be used sparingly. For the experts, I have to note that the transformation is currently not tested for well-posedness.

Note

SciPy actually uses a variant of this LFT representation as given in the paper of Zhang et al.

PrewarpFrequency

If the discretization method is tustin, then a frequency warping correction might be required to match the discrete time system response at the frequency band of interest. Via this property, the prewarp frequency can be provided.

static validate_arguments(a, b, c, d, verbose=False)

An internal command to validate whether given arguments to a State() instance are valid and compatible.

It also checks if the lists are 2D numpy.array’able entries.

harold.harold.statetotransfer(*state_or_abcd, output='system')

Given a State() object or a tuple of A,B,C,D array-likes, converts the argument into the transfer representation. The output can be selected as a Transfer() object or as the numerator and denominator if the ‘output’ keyword is given the option ‘polynomials’.

If the input is a Transfer() object it returns the argument with no modifications.

The algorithm first obtains the minimal realization of the State() representation, then implements the conversion à la Varga and Sima (1981), which can be summarized as iterating over every row/column of B and C to get SISO Transfer representations via \(c(sI-A)^{-1}b+d\).
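For a SISO triple, the resolvent expression can be sketched with a determinant identity instead of the Varga-Sima iteration (purely illustrative, not harold's implementation): since \(\det(sI-A+bc) = \det(sI-A)\,(1 + c(sI-A)^{-1}b)\), the numerator of \(c(sI-A)^{-1}b + d\) is \(\mathrm{poly}(A-bc) - \mathrm{poly}(A) + d\,\mathrm{poly}(A)\) over the denominator \(\mathrm{poly}(A)\):

```python
import numpy as np

# G(s) = 1/(s^2 + 3s + 2) realized in controllable companion form
A = np.array([[0., 1.], [-2., -3.]])
b = np.array([[0.], [1.]])
c = np.array([[1., 0.]])
d = 0.0

den = np.poly(A)                       # characteristic polynomial of A
num = np.poly(A - b @ c) - den + d * den
print(num, den)                        # [0. 0. 1.] [1. 3. 2.]
```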

Parameters:
  • state_or_abcd (State() or a tuple of A,B,C,D matrices.) –
  • output ({'system','polynomials'}) – Selects whether a Transfer() object or the individual numerator and denominator will be returned.
Returns:

  • G (Transfer()) – If ‘output’ keyword is set to ‘system’
  • num ({list of lists of 2D numpy arrays for the MIMO case, 2D numpy arrays for the SISO case}) – If the ‘output’ keyword is set to ‘polynomials’
  • den (Same as num)

harold.harold.transfertostate(*tf_or_numden, output='system')

Given a Transfer() object or a tuple of numerator and denominator, converts the argument into the state representation. The output can be selected as a State() object or the A,B,C,D matrices if the ‘output’ keyword is given the option ‘matrices’.

If the input is a State() object it returns the argument with no modifications.

For SISO systems, the algorithm returns the controllable companion form.
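A minimal numpy sketch of a controllable companion construction for a strictly proper SISO pair (the function name is mine; this is illustrative, not harold's implementation):

```python
import numpy as np

def companion_realization(num, den):
    """Controllable companion (A, B, C, D) for a strictly proper num/den."""
    num = np.atleast_1d(np.asarray(num, dtype=float))
    den = np.atleast_1d(np.asarray(den, dtype=float))
    num, den = num / den[0], den / den[0]       # make the denominator monic
    n = den.size - 1
    num = np.concatenate([np.zeros(n - num.size), num])  # pad to length n
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)                  # shift structure
    A[-1, :] = -den[1:][::-1]                   # last row: -a0, -a1, ...
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = num[::-1].reshape(1, -1)                # numerator coefficients
    D = np.zeros((1, 1))
    return A, B, C, D

A, B, C, D = companion_realization([1.], [1., 3., 2.])   # G(s) = 1/(s^2+3s+2)
print(np.sort(np.linalg.eigvals(A)))            # poles -2, -1 = roots of den
```

By construction the eigenvalues of A are the roots of the denominator, which is a quick way to check the realization.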

For MIMO systems, a variant of the algorithm given in Section 4.4 of W.A. Wolovich, Linear Multivariable Systems (1974) is used. The denominators are equalized with the haroldlcm() least common multiple function.

Parameters:
  • tf_or_numden (Transfer() or a tuple of numerator and denominator.) – For MIMO numerator and denominator arguments see Transfer() docstring.
  • output ({'system','matrices'}) – Selects whether a State() object or individual state matrices will be returned.
Returns:

  • G (State()) – If ‘output’ keyword is set to ‘system’
  • A,B,C,D ({(n,n),(n,m),(p,n),(p,m)} 2D Numpy arrays) – If the ‘output’ keyword is set to ‘matrices’

harold.harold.transmission_zeros(A, B, C, D)

Computes the transmission zeros of a (A,B,C,D) system matrix quartet.

Note

This is a straightforward implementation of the algorithm of Misra, van Dooren, Varga 1994 but skipping the descriptor matrix, which in turn becomes Emami-Naeini, van Dooren 1979. I don’t know if anyone actually uses descriptor systems in practice, so I removed the descriptor parts to reduce the clutter. Hence, it is possible to directly row/column compress the matrices without caring about the upper Hessenberg structure of the E matrix.

Parameters:A,B,C,D ({(n,n),(n,m),(p,n),(p,m)} 2D Numpy arrays) –
Returns:z – The array of computed transmission zeros. The array is returned empty if no transmission zeros are found.
Return type:{1D Numpy array}
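For a square system the underlying idea can be sketched as a generalized eigenvalue problem on the system matrix pencil (without the row/column compression steps of the actual algorithm; the example is mine):

```python
import numpy as np
from scipy.linalg import eigvals

# G(s) = (s+2)/(s+1) via A=-1, B=1, C=1, D=1; its only transmission zero is -2
A, B, C, D = (np.array([[v]]) for v in (-1., 1., 1., 1.))

M = np.block([[A, B], [C, D]])       # system matrix pencil M - z*N
N = np.zeros_like(M)
N[:1, :1] = np.eye(1)                # N = diag(I_n, 0): only states carry z
w = eigvals(M, N)
zeros = w[np.isfinite(w)]            # drop the infinite eigenvalues
print(zeros)                         # the single finite eigenvalue is -2
```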
harold.harold.discretize(G, dt, method='tustin', PrewarpAt=0.0, q=None)
harold.harold.undiscretize(G, OverrideWith=None)

Converts a discrete time system model to a continuous time system model. If the model has its discretization method set, that method is used to reach back to the continuous system model, which hopefully minimizes the discretization errors.

harold.harold.rediscretize(G, dt, method='tustin', alpha=0.5)

Todo

Not implemented yet!

harold.harold.kalman_controllability(G, compress=False)

Computes the Kalman controllability related quantities. The algorithm is the literal computation of the controllability matrix with increasing powers of A. Numerically, this test is not robust and is prone to errors if the A matrix is not well-conditioned or its entries have varying orders of magnitude, since at each additional power of A the entries blow up or converge to zero rapidly.
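The "literal computation" mentioned above can be sketched in a few lines of numpy (the function name is mine, and this sketch shares the conditioning caveats just described):

```python
import numpy as np

def kalman_controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] column-wise."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    blocks, AkB = [], B
    for _ in range(A.shape[0]):
        blocks.append(AkB)
        AkB = A @ AkB                # one more power of A per iteration
    return np.hstack(blocks)

A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
Cc = kalman_controllability_matrix(A, B)
print(np.linalg.matrix_rank(Cc))     # 2: the pair (A, B) is controllable
```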

Parameters:
  • G (State() or tuple of {(n,n),(n,m)} array_like matrices) – System or matrices to be tested
  • compress (Boolean) – If set to True, then the returned controllability matrix is row compressed, and in case of uncontrollable modes, has that many zero rows.
Returns:

  • Cc ({(n,nxm)} 2D numpy array) – Kalman Controllability Matrix
  • T ((n,n) 2D numpy arrays) – The transformation matrix such that T^T * Cc is row compressed and the number of zero rows at the bottom corresponds to the number of uncontrollable modes.
  • r (integer) – Numerical rank of the controllability matrix

harold.harold.kalman_observability(G, compress=False)

Computes the Kalman observability related objects. The algorithm is the literal computation of the observability matrix with increasing powers of A. Numerically, this test is not robust and is prone to errors if the A matrix is not well-conditioned or too big, since at each additional power of A the entries blow up or converge to zero rapidly.

Parameters:
  • G (State() or {(n,n),(n,m)} array_like matrices) – System or matrices to be tested
  • compress (Boolean) – If set to True, then the returned observability matrix is row compressed, and in case of unobservable modes, has that many zero rows.
Returns:

  • Co ({(n,nxm)} 2D numpy array) – Kalman observability matrix
  • T ((n,n) 2D numpy arrays) – The transformation matrix such that T^T * Co is row compressed and the number of zero rows at the bottom corresponds to the number of unobservable modes.
  • r (integer) – Numerical rank of the observability matrix

harold.harold.kalman_decomposition(G, compute_T=False, output='system', cleanup_threshold=1e-09)

By performing a sequence of similarity transformations the State representation is transformed into a special structure such that if the system has uncontrollable/unobservable modes, the corresponding rows/columns of the B/C matrices have zero blocks and the modes are isolated in the A matrix. That is to say, there is no contribution of the controllable/observable states on the dynamics of these modes.

Note that Kalman operations are numerically not robust. Hence the resulting decomposition might miss some ‘almost’ pole-zero cancellations. Therefore, this should be used as a rough assessment tool but not as an actual minimality check, or perhaps for academic purposes to demonstrate the modal decomposition. Use the canceldistance() and minimal_realization() functions instead; they have better numerical properties.

Example usage and verification :

G = State([[2,1,1],[5,3,6],[-5,-1,-4]],[[1],[0],[0]],[[1,0,0]],0)
print("Is it Kalman Cont'ble ? ", is_kalman_controllable(G))
print("Is it Kalman Obsv'ble ? ", is_kalman_observable(G))
F = kalman_decomposition(G)
print(F.a, F.b, F.c)
H = minimal_realization(F.a, F.b, F.c)
print('The minimal system matrices are:', *H)

Expected output :

Is it Kalman Cont'ble ?  False
Is it Kalman Obsv'ble ?  False
[[ 2.          0.         -1.41421356]
 [ 7.07106781 -3.         -7.        ]
 [ 0.          0.          2.        ]]

[[-1.]
 [ 0.]
 [ 0.]]

[[-1.  0.  0.]]

The minimal system matrices are:
 [[ 2.]] [[ 1.]] [[ 1.]]

Warning

Kalman decomposition is often described in an ambiguous fashion in the literature. I would like to thank Joaquin Carrasco for his generous help on this matter and his lucid argument as to why this is probably happening. This is going to be reimplemented with better tests on pathological models.

Parameters:
  • G (State()) – The state representation that is to be converted into the block triangular form such that unobservable/uncontrollable modes corresponds to zero blocks in B/C matrices
  • compute_T (boolean) – Selects whether the similarity transformation matrix will be returned.
  • output ({'system','matrices'}) – Selects whether a State() object or individual state matrices will be returned.
  • cleanup_threshold (float) – After the similarity transformation, the matrix entries smaller than this threshold in absolute value would be zeroed. Setting this value to zero turns this behavior off.
Returns:

  • Gk (State() or if output = ‘matrices’ is selected (A,B,C,D) tuple) – Returns a state representation or its matrices as a tuple

  • T ((nxn) 2D-numpy array) – If compute_T is True, returns the similarity transform matrix that brings the state representation in the resulting decomposed form such that

    Gk.a = inv(T)*G.a*T
    Gk.b = inv(T)*G.b
    Gk.c = G.c*T
    Gk.d = G.d

harold.harold.is_kalman_controllable(G)

Tests the rank of the Kalman controllability matrix and compares it with the A matrix size, returns a boolean depending on the outcome.

Parameters:G (State() or tuple of {(n,n),(n,m)} array_like matrices) – The system or the (A,B) matrix tuple
Returns:test_bool – Returns True if the input is Kalman controllable
Return type:Boolean
harold.harold.is_kalman_observable(G)

Tests the rank of the Kalman observability matrix and compares it with the A matrix size, returns a boolean depending on the outcome.

Parameters:G (State() or tuple of {(n,n),(p,n)} array_like matrices) – The system or the (A,C) matrix tuple
Returns:test_bool – Returns True if the input is Kalman observable
Return type:Boolean
harold.harold.staircase(A, B, C, compute_T=False, form='c', invert=False, block_indices=False)

The staircase form is used very often to assess system properties. Given a state system matrix triplet A,B,C, this function computes the so-called controller/observer-Hessenberg form such that the resulting system matrices have the block-form (x denoting the nonzero blocks)

\[\begin{split}\begin{array}{c|c} \begin{bmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & \times \end{bmatrix} & \begin{bmatrix} \times \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \\ \hline \begin{bmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \end{bmatrix} \end{array}\end{split}\]

For controllability and observability, the existence of zero-rank subdiagonal blocks can be checked, as opposed to forming the Kalman matrix and checking the rank. The staircase method can be numerically more stable, since for certain matrices A^n computations can introduce large errors (for some A whose entries have varying orders of magnitude), but it is also prone to numerical rank estimation mismatches.

Notice that if we use the pertransposed data, we obtain the observer form; the user is usually asked to supply the data as \(A,B,C \Rightarrow A^T,C^T,B^T\) and then transpose back the result. It is just silly to ask the user to do that. Hence the additional form option, denoting whether the observer or the controller form is requested.

Parameters:
  • A,B,C ({(n,n),(n,m),(p,n)} array_like) – System Matrices to be converted
  • compute_T (bool, optional) – Whether the transformation matrix T should be computed or not
  • form ({ 'c' , 'o' }, optional) – Determines whether the controller- or observer-Hessenberg form will be computed.
  • invert (bool, optional) – Whether to select which side the B or C matrix will be compressed. For example, the default case returns the B matrix with (if any) zero rows at the bottom. invert option flips this choice either in B or C matrices depending on the “form” switch.
  • block_indices (bool, optional) – Whether the controllable/observable block sizes identified during block diagonalization should be returned.
Returns:

  • Ah,Bh,Ch ({(n,n),(n,m),(p,n)} 2D numpy arrays) – Converted system matrices

  • T ((n,n) 2D numpy array) – If the boolean compute_T is true, returns the transformation matrix such that

    \[\begin{split}\left[\begin{array}{c|c} T^{-1}AT &T^{-1}B \\ \hline CT & D \end{array}\right]\end{split}\]

    is in the desired staircase form.

  • k (Numpy array) – If the boolean block_indices is true, returns the array of controllable/observable block sizes identified during block diagonalization

harold.harold.canceldistance(F, G)

Given matrices \(F,G\), computes the upper and lower bounds of the perturbation needed to render the pencil \(\left[ \begin{array}{c|c}F-pI & G\end{array}\right]\) rank deficient. It is used for assessing the controllability/observability degenerate distance and hence for minimality assessment.

Implements the algorithm given in D.Boley SIMAX vol.11(4) 1990.

Parameters:F,G (2D arrays) – Pencil matrices to be checked for rank deficiency distance
Returns:
  • upper2 (float) – Upper bound on the norm of the perturbation \(\left[\begin{array}{c|c}dF & dG\end{array}\right]\) such that \(\left[\begin{array}{c|c}F+dF-pI & G+dG \end{array} \right]\) is rank deficient.
  • upper1 (float) – A theoretically softer upper bound than the upper2 for the same quantity.
  • lower0 (float) – Lower bound on the same quantity given in upper2
  • e_f (complex) – Indicates the eigenvalue that renders [F + dF - pI | G + dG ] rank deficient i.e. equals to the p value at the closest rank deficiency.
  • radius (float) – The perturbation with the norm bound “upper2” is located within a disk in the complex plane whose center is on “e_f” and whose radius is bounded by this output.
harold.harold.minimal_realization(A, B, C, mu_tol=1e-09)

Given state matrices \(A,B,C\) computes minimal state matrices such that the system is controllable and observable within the given tolerance \(\mu\).

Implements a basic two-pass algorithm: first, the distance to mode cancellation is computed, and the Hessenberg form is obtained together with the identified observable/controllable block numbers. If the staircase form reports no cancellations but the distance is less than the tolerance, the distance wins and the respective mode is removed.

Uses canceldistance() and staircase() for the tests.

Parameters:
  • A,B,C ({(n,n), (n,m), (p,n)} array_like) – System matrices to be checked for minimality
  • mu_tol (float) – The sensitivity threshold for the cancellation, compared with the first default output of the canceldistance() function. The default value is \(10^{-9}\).
Returns:

A,B,C – System matrices that are identified as minimal with k states instead of the original n where (k <= n)

Return type:

{(k,k), (k,m), (p,k)} array_like

harold.harold.haroldsvd(D, also_rank=False, rank_tol=None)

This is a wrapper/container function for both the SVD decomposition and the rank computation. Since the regular rank computation is implemented via SVD, it doesn’t make much sense to recompute the SVD if we already have the rank information. Thus, instead of typing two commands back to back for both the SVD and the rank, we return both. To reduce the clutter, the rank information is suppressed by default.

numpy's svd is a bit strange because it compresses and loses the S matrix structure. The manual advises using u.dot(np.diag(s).dot(v)) to recover the original matrix, but that won’t work for rectangular matrices. Hence this function recreates the rectangular S matrix of the U, S, V triplet.
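The reconstruction described above can be sketched as follows (a numpy-only illustration of the behavior, not harold's code):

```python
import numpy as np

# Rebuild the rectangular S so that U @ S @ V recovers D even when D is
# not square (numpy's svd returns the compressed 1D singular values).
D = np.array([[1., 0., 2.], [0., 3., 0.]])        # a 2x3 matrix
U, s, V = np.linalg.svd(D)                        # V here is the "vh" factor
S = np.zeros(D.shape)
S[:s.size, :s.size] = np.diag(s)                  # embed the singular values
print(np.allclose(U @ S @ V, D))                  # True
```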

Parameters:
  • D ((m,n) array_like) – Matrix to be decomposed
  • also_rank (bool, optional) – Whether the rank of the matrix should also be reported or not. The returned rank is computed via the definition taken from the official numpy.linalg.matrix_rank and appended here.
  • rank_tol ({None,float} optional) – The tolerance used for deciding the numerical rank. The default is set to None and uses the default definition of matrix_rank() from numpy.
Returns:

  • U,S,V ({(m,m),(m,n),(n,n)} 2D numpy arrays) – Decomposed-form matrices
  • r (integer) – If the boolean “also_rank” is true, this variable is the numerical rank of the matrix D

harold.harold.haroldker(N, side='right')

This function is a straightforward basis computation for the right/left nullspace for rank deficient or fat/tall matrices.

It simply returns the remaining columns of the right factor of the singular value decomposition whenever applicable. Otherwise returns a zero vector such that it has the same number of rows as the columns of the argument, hence the dot product makes sense.

The basis columns have unity 2-norm except for the trivial zeros.
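A numpy-only sketch of the right-nullspace computation described above (the function name is mine; harold's haroldker additionally handles the left side):

```python
import numpy as np

def right_nullspace(N, rtol=1e-12):
    """Orthonormal right-nullspace basis from the trailing rows of Vh."""
    N = np.atleast_2d(N)
    U, s, Vh = np.linalg.svd(N)
    rank = int(np.sum(s > rtol * (s[0] if s.size else 1.0)))
    basis = Vh[rank:].conj().T           # remaining rows of Vh, transposed
    if basis.shape[1] == 0:              # trivial nullspace: return a zero
        basis = np.zeros((N.shape[1], 1))   # column so N @ basis still works
    return basis

N = np.array([[1., 2., 3.]])             # rank-1 fat matrix, 2D nullspace
K = right_nullspace(N)
print(N @ K)                             # numerically zero, shape (1, 2)
```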

Parameters:
  • N ((m,n) array_like) – Matrix for which the nullspace basis to be computed
  • side ({'right','left'} string) – The switch for the right or left nullspace computation.
Returns:

Nn – Basis for the nullspace. dim is the dimension of the nullspace. If the nullspace is trivial then dim is 1 for consistent 2D array output

Return type:

(n,dim) array_like

harold.harold.ssconcat(G)

Takes a State() model as input and returns the matrix

\[\begin{split}\left[\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right]\end{split}\]
Parameters:G (State()) –
Returns:M
Return type:2D Numpy array
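On raw matrices (rather than a State() object), the concatenation itself is a one-liner with numpy (illustrative only):

```python
import numpy as np

# The matrices of the example system State([[0,1],[-4,-5]],...) above
A = np.array([[0., 1.], [-4., -5.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
D = np.array([[1.]])

M = np.block([[A, B], [C, D]])   # [[A, B], [C, D]] as one (n+p) x (n+m) array
print(M.shape)                   # (3, 3)
```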
harold.harold.ssslice(H, n)

Takes a two dimensional array of size \(p\times m\) and slices into four parts such that

\[\begin{split}\left[\begin{array}{c|c}A&B\\C&D\end{array}\right]\end{split}\]

For non-square slicing, see the function matrixslice()

Parameters:
  • M (2D Numpy array) –
  • n (int) – For a meaningful output, this number must staisfy \(n<p,m\).
Returns:

  • A (2D Numpy array) – The upper left \(n\times n\) block of \(M\)
  • B (2D Numpy array) – The upper right \(n\times (m-n)\) block of \(M\)
  • C (2D Numpy array) – The lower left \((p-n)\times n\) block of \(M\)
  • D (2D Numpy array) – The lower right \((p-n)\times (m-n)\) block of \(M\)
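The slicing itself is ordinary numpy indexing; a sketch on a small example array (not harold's implementation):

```python
import numpy as np

# Split a p x m array into A (n x n), B, C, D blocks as ssslice does.
M = np.arange(12.).reshape(3, 4)      # p = 3, m = 4
n = 2
A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]
```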

harold.harold.matrixslice(M, M11shape)

Takes a two dimensional array of size \(p\times m\) and slices into four parts such that

\[\begin{split}\left[\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right]\end{split}\]

where the shape of \(A\) is determined by M11shape=(r,q).

Parameters:
  • M (2D Numpy array) –
  • M11shape (tuple) – An integer valued 2-tuple for the shape of \((1,1)\) block element
Returns:

  • A (2D Numpy array) – The upper left \(r\times q\) block of \(M\)
  • B (2D Numpy array) – The upper right \(r\times (m-q)\) block of \(M\)
  • C (2D Numpy array) – The lower left \((p-r)\times q\) block of \(M\)
  • D (2D Numpy array) – The lower right \((p-r)\times (m-q)\) block of \(M\)

harold.harold.eyecolumn(width, nth=0)

Returns the nth column of the identity matrix with shape (width,width). Slicing is permitted with the nth parameter.

harold.harold.system_norm(state_or_transfer, p=inf, validate=False, verbose=False, max_iter_limit=100, hinf_tolerance=1e-10, eig_tolerance=1e-12)

Computes the system p-norm. Currently, no balancing is done on the system; a scaling step of some sort may be introduced in the future. Another shortcoming is that, despite the general-sounding name, only the \(\mathcal{H}_2\) and \(\mathcal{H}_\infty\) norms are supported.

For the \(\mathcal{H}_2\) norm, the standard Gramian definition via the controllability Gramian, which can be found elsewhere, is used.

Currently, the \(\mathcal{H}_\infty\) norm is computed via the so-called Boyd-Balakrishnan-Bruinsma-Steinbuch algorithm (see e.g. [2]).

However, (with kind and generous help of Melina Freitag) the algorithm given in [1] is being implemented and depending on the speed benefit might be replaced as the default.

[1] M.A. Freitag, A Spence, P. Van Dooren: Calculating the \(\mathcal{H}_\infty\)-norm using the implicit determinant method. SIAM J. Matrix Anal. Appl., 35(2), 619-635, 2014

[2] N.A. Bruinsma, M. Steinbuch: Fast computation of the \(\mathcal{H}_\infty\)-norm of a transfer function. Systems and Control Letters, 14, 1990

Parameters:
  • state_or_transfer ({State,Transfer}) – System for which the norm is computed
  • p ({int,Inf}) – The norm to be computed; currently only 2 and Inf are supported, corresponding to the \(\mathcal{H}_2\) and \(\mathcal{H}_\infty\) norms.
  • validate (boolean) – If applicable and if the resulting norm is finite, the result is validated via other means.
  • verbose (boolean) – If True, the (some) internal progress is printed out.
  • max_iter_limit (int) – Stops the iteration after max_iter_limit many times spinning the loop. Very unlikely but might be required for pathological examples.
  • hinf_tolerance (float) – Convergence check value such that when the progress is below this tolerance the result is accepted as converged.
  • eig_tolerance (float) – The algorithm relies on checking the eigenvalues of the Hamiltonian being on the imaginary axis or not. This value is the threshold such that the absolute real value of the eigenvalues smaller than this value will be accepted as pure imaginary eigenvalues.
Returns:

  • n (float) – Computed norm. In NumPy, infinity is also float-type
  • omega (float) – For Hinf norm, omega is the frequency where the maximum is attained (technically this is a numerical approximation of the supremum).
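The controllability-Gramian route for the \(\mathcal{H}_2\) norm mentioned above can be cross-checked with scipy's Lyapunov solver; this is a hand-rolled sanity check on a textbook example, not harold's system_norm:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# H2 norm of G(s) = C (sI - A)^{-1} B for the stable SISO system
# G(s) = 1/(s + 1); the exact value is 1/sqrt(2).
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])

# The controllability Gramian P solves A P + P A^T + B B^T = 0,
# i.e. A P + P A^T = -B B^T in scipy's sign convention.
P = solve_continuous_lyapunov(A, -B @ B.T)
h2_norm = np.sqrt(np.trace(C @ P @ C.T))
```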

harold.harold.haroldlcm(*args, compute_multipliers=True, cleanup_threshold=1e-09)

Takes n-many 1D numpy arrays and computes the numerical least common multiple polynomial. The polynomials are assumed to be in decreasing powers, e.g. s^2 + 5 should be given as numpy.array([1,0,5])

Returns a numpy array holding the polynomial coefficients of LCM and a list, of which entries are the polynomial multipliers to arrive at the LCM of each input element.

For the multiplier computation, a variant of Karcanias, Mitrouli, System theoretic based characterisation and computation of the least common multiple of a set of polynomials, Lin Alg App, 381, 2004, is used.

Parameters:
  • args (1D Numpy array) –
  • compute_multipliers (boolean) – After the computation of the LCM, this switch decides whether the multipliers of the given arguments should be computed or skipped. A multiplier in this context is [1,3] for the argument [1,2] if the LCM turns out to be [1,5,6].
  • cleanup_threshold (float) – The computed polynomials might contain some numerical noise and after finishing everything this value is used to clean up the tiny entries. Set this value to zero to turn off this behavior. The default value is \(10^{-9}\).
Returns:

  • lcmpoly (1D Numpy array) – Resulting polynomial coefficients for the LCM.
  • mults (List of 1D Numpy arrays) – The multipliers for each given argument.

Example

>>>> a , b = haroldlcm(*map(np.array,([1,3,0,-4],[1,-4,-3,18],[1,-4,3],[1,-2,-8])))
>>>> a
    array([   1.,   -7.,    3.,   59.,  -68., -132.,  144.])

>>>> b
    [array([  1., -10.,  33., -36.]),
     array([  1.,  -3.,  -6.,   8.]),
     array([  1.,  -3., -12.,  20.,  48.]),
     array([  1.,  -5.,   1.,  21., -18.])]

>>>> np.convolve([1,3,0,-4],b[0]) # or haroldpolymul() for poly mult
    array([   1.,   -7.,    3.,   59.,  -68., -132.,  144.])
harold.harold.haroldgcd(*args)

Takes 1D numpy arrays and computes the numerical greatest common divisor polynomial. The polynomials are assumed to be in decreasing powers, e.g. \(s^2 + 5\) should be given as numpy.array([1,0,5]).

Returns a numpy array holding the polynomial coefficients of GCD. The GCD does not cancel scalars but returns only monic roots. In other words, the GCD of polynomials \(2\) and \(2s+4\) is still computed as \(1\).

Parameters:args (1D Numpy arrays) –
Returns:gcdpoly
Return type:1D Numpy array

Example

>>>> a = haroldgcd(*map(haroldpoly,([-1,-1,-2,-1j,1j],[-2,-3,-4,-5],[-2]*10)))
>>>> a
     array([ 1.,  2.])

Warning

It uses the LU factorization of the Sylvester matrix. Use responsibly. It does not check any certificate of success by any means (maybe it will in the future). I have played around with ERES method but probably due to my implementation, couldn’t get satisfactory results. Hence I’ve switched to matrix-based methods. I am still interested in better methods though, so please contact me if you have a working implementation that improves over this.

harold.harold.haroldcompanion(somearray)

Takes a 1D numpy array or list and returns the companion matrix of the monic polynomial of somearray. Hence [0.5,1,2] will be first converted to [1,2,4].

Example:

>>>> haroldcompanion([2,4,6])
    array([[ 0.,  1.],
           [-3., -2.]])

>>>> haroldcompanion([1,3])
    array([[-3.]])

>>>> haroldcompanion([1])
    array([], dtype=float64)
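The examples above can be reproduced with a short numpy sketch of the companion construction (an illustration of the concept, not harold's code):

```python
import numpy as np

def companion(poly):
    """Companion matrix of the monic-normalized polynomial, a sketch of
    what haroldcompanion computes (coefficients in decreasing powers)."""
    p = np.atleast_1d(np.asarray(poly, dtype=float))
    p = p / p[0]                      # make monic, e.g. [2,4,6] -> [1,2,3]
    n = p.size - 1
    M = np.zeros((n, n))
    M[:-1, 1:] = np.eye(n - 1)        # superdiagonal of ones
    M[-1] = -p[:0:-1]                 # last row: negated coefficients, reversed
    return M

M = companion([2, 4, 6])              # matches the first example above
```

The eigenvalues of the companion matrix are the roots of the monic polynomial, which is why it appears in controllable canonical realizations.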
harold.harold.haroldtrimleftzeros(somearray)

Trims the insignificant zeros in an array on the left hand side, e.g., [0,0,2,3,1,0] becomes [2,3,1,0].

Parameters:somearray (1D Numpy array) –
Returns:anotherarray
Return type:1D Numpy array
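For one-off use outside harold, numpy's own trim_zeros with the 'f' (front) switch gives the same behavior on the example above:

```python
import numpy as np

# Trim insignificant leading zeros: [0,0,2,3,1,0] -> [2,3,1,0]
arr = np.array([0, 0, 2, 3, 1, 0])
trimmed = np.trim_zeros(arr, 'f')
```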
harold.harold.haroldpoly(rootlist)

Takes a 1D array-like of numerical elements as roots and forms the corresponding polynomial coefficients.

harold.harold.haroldpolyadd(*args, trimzeros=True)

Similar to the official polyadd from numpy but allows for multiple args and doesn't invert the order.

harold.harold.haroldpolymul(*args, trimzeros=True)

Simple wrapper around the scipy convolve function for polynomial multiplication with multiple args. The arguments are passed through the left zero trimming function first.

Example:

>>>> haroldpolymul([0,2,0],[0,0,0,1,3,3,1],[0,0.5,0.5])
array([ 1.,  4.,  6.,  4.,  1.,  0.])
harold.harold.haroldpolydiv(dividend, divisor)

Polynomial division wrapped around scipy deconvolve function. Takes two arguments and divides the first by the second.

Returns two arrays: the quotient and the remainder, both passed through the left-zero trimming function.
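The underlying scipy deconvolve call behaves as follows on a simple exact division, \((s^2+5s+6)/(s+2) = s+3\) with zero remainder (a direct scipy illustration, not haroldpolydiv itself, so no zero trimming is applied here):

```python
import numpy as np
from scipy.signal import deconvolve

# Divide s^2 + 5s + 6 by s + 2: quotient s + 3, zero remainder.
quotient, remainder = deconvolve([1.0, 5.0, 6.0], [1.0, 2.0])
```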

harold.harold.frequency_response(G, custom_grid=None, high=None, low=None, samples=None, custom_logspace=None, input_freq_unit='Hz', output_freq_unit='Hz')

Computes the frequency response matrix of a State() or Transfer() object.

The State representations are always checked for minimality and, if any, unobservable/uncontrollable modes are removed.

Parameters:
  • G ({State,Transfer}) – The realization for which the frequency response is computed
  • custom_grid (array_like) – An array of sorted positive numbers denoting the frequencies
  • high (float) – Power of 10 denoting the maximum frequency. If a discrete realization is given this is overridden by the Nyquist frequency.
  • low (float) – Power of 10 denoting the minimum frequency.
  • samples (int) – Number of samples to be created between high and low
  • custom_logspace (3-tuple) –
Returns:

  • freq_resp_array (Complex_valued numpy array) – The frequency response of the system G
  • w (1D numpy array) – Frequency grid that is used to evaluate the frequency response
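The quantity being computed is \(H(j\omega) = C(j\omega I - A)^{-1}B + D\) evaluated on the grid; a direct plain-numpy evaluation on the example system \(G(s)=1/(s+1)\) (a sketch of the definition, not harold's implementation):

```python
import numpy as np

# Evaluate H(jw) = C (jw I - A)^{-1} B + D on a log-spaced grid
# for the SISO system G(s) = 1/(s + 1).
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

w = np.logspace(-2, 2, 5)             # rad/s grid: 0.01, 0.1, 1, 10, 100
n = A.shape[0]
H = np.array([C @ np.linalg.solve(1j * wk * np.eye(n) - A, B) + D
              for wk in w]).squeeze()
```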

harold.harold.pair_complex_numbers(somearray, tol=1e-09, realness_tol=1e-09, positives_first=False, reals_first=True)

Given an array-like somearray, it first tests and clears out small imaginary parts via numpy.real_if_close. Then pairs complex numbers together as consecutive entries. A real array is returned as is.

Parameters:
  • somearray (array_like) – Array-like object whose entries are to be paired
  • tol (float) – The sensitivity threshold for the real and complex parts to be assumed as equal.
  • realness_tol (float) – The sensitivity threshold for the complex parts to be assumed as zero.
  • positives_first (bool) – The boolean that defines whether the positive complex part should come first for the conjugate pairs
  • reals_first (bool) – The boolean that defines whether the real numbers are at the beginning or the end of the resulting array.
Returns:

paired_array – The resulting array with the conjugate pairs placed as consecutive entries

Return type:

array_like

harold.harold.matrix_printer(M, num_format='f', var_name=None)

This is a simple helper function for quickly carrying matrix data to matlab. The array is assumed to be 1- or 2D. If the array is 3D or more then use a slice in a for loop with explicit naming since matlab does not have multidimensional entry syntax.

The var_name can be used in a loop for such purposes e.g.:

matrix_printer(M,var_name = 'A(:,:,2)')

which will include the string A(:,:,2) = at the beginning of the output string.

This function is inspired by Lee Archer's gist (https://gist.github.com/lbn/836313e283f5d47d2e4e)

Parameters:
  • M (Numpy array) – The numpy array that is going to be pretty printed in matlab format.
  • num_format (str) – String specifier (See Python documentation Section 6.1)
  • var_name (str) – The name of the variable to be created in matlab.
Returns:

mat – Resulting string

Return type:

str
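A minimal stand-alone sketch of producing such a matlab-style string from a 2D array (an illustration of the output format, not matrix_printer's code; the helper name is hypothetical):

```python
import numpy as np

def to_matlab(M, num_format='f', var_name=None):
    """Render a 1D/2D numpy array as a matlab assignment string."""
    M = np.atleast_2d(M)
    # Each row becomes a space-separated list; rows are joined with ';'.
    rows = ['  '.join(format(x, num_format) for x in row) for row in M]
    body = '[' + ';\n '.join(rows) + ']'
    return (var_name + ' = ' if var_name else '') + body + ';'

s = to_matlab(np.array([[1., 2.], [3., 4.]]), num_format='.0f', var_name='A')
```

Passing var_name='A(:,:,2)' inside a loop reproduces the multidimensional workaround described above.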

harold.harold.bodeplot(G, w=None, dont_draw=False)

Draws the Bode plot of the system G. As the name implies, this only creates a plot and for the data that is used frequency_response() should be used.

Parameters:
  • G ({State,Transfer}) – The system for which the Bode plot will be drawn
  • w (array_like) – Range of frequencies
  • dont_draw (bool) – If True, the figure handle is returned instead of drawing directly, to be used elsewhere. The figure has no applied styles such as title, grid etc.
Returns:

plot – If dont_draw key is set to True then this returns the figure object.

Return type:

matplotlib.figure.Figure

harold.harold.lyapunov_eq_solver(A, Y, E=None, form='c')

This function solves the Lyapunov and the generalized Lyapunov equations of the forms

  (1) X A + A^T X + Y = 0

 (1') A^T X A - X + Y = 0

and

  (2) E^T X A + A^T X E + Y = 0

 (2') A^T X A - E^T X E + Y = 0

for the unknown matrix X given square matrices A, E, and Y. The numbered (primed) equations are the so-called continuous (discrete) time forms. The form keyword selects between the two.

For (1), (1’), the A matrix is brought to real Schur form and for (2), (2’) the QZ decomposition is used. Then all have a similar forward substitution step. The method is a modified implementation of T. Penzl (1998).

If the argument E is not exactly a None-type then (2) is assumed. Moreover, if (2) is assumed, the constant Y term should be symmetric.

Parameters:
  • A, Y, E (2D Numpy arrays) – Data matrices for the equation. Y is a symmetric matrix.
  • form ('c' , 'continuous' , 'd' , 'discrete') – The string selector to define which form of Lyapunov equation is going to be used.
Returns:

X – The solution of the Lyapunov equation

Return type:

nxn numpy array
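For the continuous form (1), the result can be cross-checked against scipy's Lyapunov solver; note the sign and transposition convention (a sanity check, not harold's Penzl-based solver):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Form (1): X A + A^T X + Y = 0. scipy solves M X + X M^H = Q,
# so take M = A^T and Q = -Y.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
Y = np.array([[2.0, 0.0], [0.0, 2.0]])

X = solve_continuous_lyapunov(A.T, -Y)
residual = X @ A + A.T @ X + Y        # should be numerically zero
```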

Module contents