- Inverse of a matrix
where adj(A) is the adjoint (adjugate) of A, i.e. the transpose of the matrix of cofactors of the original matrix.
Inverse of a diagonal matrix
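The formulas for these two bullets appear to have been lost in extraction; the standard results are:

```latex
A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)} \quad (\det A \neq 0), \qquad
\operatorname{diag}(d_1,\dots,d_n)^{-1} = \operatorname{diag}\!\left(\tfrac{1}{d_1},\dots,\tfrac{1}{d_n}\right)
```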
- Adjoint of a matrix
Let A be an n×n matrix; then
- Trace of a matrix
Let M be an n×n matrix; then
where λ1, λ2, …, λn are the eigenvalues of matrix M.
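The trace formula itself seems to be missing; the standard identity is:

```latex
\operatorname{tr}(M) = \sum_{i=1}^{n} m_{ii} = \lambda_1 + \lambda_2 + \dots + \lambda_n
```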
- Matrices of co-factors
Let B be the matrix of cofactors of A; then
- Area of triangle determinant format
Three points A (a1, b1, c1), B (a2, b2, c2), C (a3, b3, c3). Area of triangle ABC
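The determinant itself appears to be missing. For points in the plane (taking the third entries as 1, which seems to be the intended format) the standard form is:

```latex
\text{Area} = \frac{1}{2}\left|\det\begin{pmatrix} a_1 & b_1 & 1 \\ a_2 & b_2 & 1 \\ a_3 & b_3 & 1 \end{pmatrix}\right|
```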
- Differential equation
- Integrating factor solution for differential equation
- For roots α ± iβ of the characteristic equation, solution
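The formulas for these three bullets appear to be missing. For a linear first-order equation dy/dx + P(x)y = Q(x), and for complex characteristic roots, the standard results are:

```latex
\text{IF} = e^{\int P\,dx}, \qquad y\,e^{\int P\,dx} = \int Q\,e^{\int P\,dx}\,dx + C
```

```latex
\text{Roots } \alpha \pm i\beta: \qquad y = e^{\alpha x}\left(C_1\cos\beta x + C_2\sin\beta x\right)
```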
- Odd function
A function f(t) is odd if f(−t) = −f(t). Note that e−|t| satisfies e−|−t| = e−|t|, so it is an even function, not an odd one.
- Calculating curl of a vector
Curl of a vector is defined as follows
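The defining determinant seems to have been dropped; the standard form is:

```latex
\nabla\times\vec{F} = \begin{vmatrix}
\hat{i} & \hat{j} & \hat{k} \\
\frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\
F_x & F_y & F_z
\end{vmatrix}
```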
- Taylor series
Taylor series: an infinite sum giving the value of a function f(z) in the neighbourhood of a point “a” in terms of the derivatives of the function evaluated at “a”.
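The series itself appears to be missing; the standard expansion is:

```latex
f(z) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(z-a)^n
```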
- Ratio test
Ratio test for finding out whether a given series is convergent or divergent
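The test criterion seems to be missing; the standard statement is:

```latex
L = \lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|; \qquad
L<1:\ \text{convergent}, \quad L>1:\ \text{divergent}, \quad L=1:\ \text{inconclusive}
```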
- Some hyperbolic functions
The following hyperbolic functions are defined:
sinh x, cosh x, tanh x, and the limit of tanh x as x → ∞
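The definitions themselves appear to be missing; the standard ones are:

```latex
\sinh x = \frac{e^x - e^{-x}}{2}, \quad
\cosh x = \frac{e^x + e^{-x}}{2}, \quad
\tanh x = \frac{\sinh x}{\cosh x}, \quad
\lim_{x\to\infty}\tanh x = 1
```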
- Eigen values
The characteristic equation for the eigenvalues is det(A − λI) = 0. Solve it to find the eigenvalues.
If a matrix A has eigenvector X, then AX = λX; an eigenvector must be a non-zero vector.
- Average value of f(x)
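The formula seems to be missing; the standard definition over [a, b] is:

```latex
f_{\text{avg}} = \frac{1}{b-a}\int_a^b f(x)\,dx
```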
- Finding saddle point
where fx is the derivative of f w.r.t. x and fxx is the second derivative of f w.r.t. x; similarly for y.
A function is harmonic if fxx + fyy = 0.
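The second-derivative test itself appears to be missing; the standard criterion is:

```latex
D = f_{xx}f_{yy} - f_{xy}^2; \qquad
D<0 \Rightarrow \text{saddle point}, \quad
D>0,\ f_{xx}>0 \Rightarrow \text{minimum}, \quad
D>0,\ f_{xx}<0 \Rightarrow \text{maximum}
```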
- Summation of tan-1A and tan-1B
tan-1A + tan-1B
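The identity itself seems to be missing; the standard result is:

```latex
\tan^{-1}A + \tan^{-1}B = \tan^{-1}\!\frac{A+B}{1-AB} \qquad (AB<1)
```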
- Calculating gradient of a function
To find the unit vector normal to a surface, calculate the gradient (grad ɸ).
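The formulas appear to be missing; the standard expressions are:

```latex
\nabla\phi = \frac{\partial\phi}{\partial x}\hat{i} + \frac{\partial\phi}{\partial y}\hat{j} + \frac{\partial\phi}{\partial z}\hat{k},
\qquad \hat{n} = \frac{\nabla\phi}{|\nabla\phi|}
```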
- Divergence, curl and gradient of a vector
Divergence of curl of vector a
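The identities themselves seem to be missing; the standard vector-calculus results are:

```latex
\nabla\cdot(\nabla\times\vec{a}) = 0, \qquad \nabla\times(\nabla\phi) = \vec{0}
```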
- Cauchy Riemann equation
The Cauchy–Riemann equations relate a pair of real-valued functions of two real variables, u(x, y) and v(x, y). The function f = u + iv is complex-differentiable at a point if and only if the partial derivatives of u and v satisfy the Cauchy–Riemann equations there.
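The equations themselves appear to be missing; they are:

```latex
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}
```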
- Laplace transformation of function f(t)
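The defining integral seems to be missing; the standard definition is:

```latex
\mathcal{L}\{f(t)\} = F(s) = \int_0^{\infty} e^{-st}f(t)\,dt
```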
- Periodic function
Laplace transfer function of periodic function with period T
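The formula itself appears to be missing; for a function of period T the standard result is:

```latex
\mathcal{L}\{f(t)\} = \frac{1}{1-e^{-sT}}\int_0^{T} e^{-st}f(t)\,dt
```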
- Improper integral by Laplace transformation
Finding the value of an improper integral by Laplace transformation
- Complex equation
- Polar form of a complex number
Polar form of the complex number x + iy
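The conversion formulas seem to be missing; the standard ones are:

```latex
x+iy = r(\cos\theta + i\sin\theta) = re^{i\theta}, \qquad
r=\sqrt{x^2+y^2}, \quad \theta = \tan^{-1}\frac{y}{x}
```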
- Definite integral
- For even function
- Integral of inverse of 1+x2
- Breaking limits of definite integral
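The formulas for these three bullets appear to be missing; the standard properties are:

```latex
\int_{-a}^{a} f(x)\,dx = 2\int_0^a f(x)\,dx \ \ (f\ \text{even}), \qquad
\int\frac{dx}{1+x^2} = \tan^{-1}x + C, \qquad
\int_a^b f\,dx = \int_a^c f\,dx + \int_c^b f\,dx
```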
- Green’s Theorem
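The statement itself seems to be missing; the standard plane form is:

```latex
\oint_C (M\,dx + N\,dy) = \iint_R \left(\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}\right) dx\,dy
```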
- Fourier series
- Infinite fourier series
The solution is an infinite Fourier series
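The series and its coefficients appear to be missing; for a function of period 2L the standard form is:

```latex
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L}\right),
```

```latex
a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx, \qquad
b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx
```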
- Probability density function
The probability density function is defined as
The mean and variance of a probability density function are defined as
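The formulas themselves seem to be missing; the standard definitions for a continuous density f(x) are:

```latex
\int_{-\infty}^{\infty} f(x)\,dx = 1, \qquad
\mu = \int_{-\infty}^{\infty} x f(x)\,dx, \qquad
\sigma^2 = \int_{-\infty}^{\infty} x^2 f(x)\,dx - \mu^2
```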
- Random variable probability function (P(X))
where X = random variable.
For a continuous random variable, the total area under the curve = 1.
- Normal distribution
Normal distribution is symmetrical about mean (µ)
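The density itself appears to be missing; the standard form is:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```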
- Cauchy residue theorem / Cauchy integral formula
- Residue of a function f(z)
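The formulas for these two bullets seem to be missing; the standard statements are:

```latex
\oint_C f(z)\,dz = 2\pi i \sum_k \operatorname{Res}(f, z_k), \qquad
\operatorname{Res}(f,a) = \lim_{z\to a}(z-a)f(z)\ \ \text{(simple pole at } a\text{)}
```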
- Newton Raphson method
The Newton–Raphson method converges in one step if the function is linear.
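The iteration formula itself seems to be missing. A minimal sketch of the method, x_{n+1} = x_n − f(x_n)/f′(x_n); the function name, tolerance, and examples below are illustrative choices, not from the original:

```python
# Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Find a root of f given its derivative df, starting from x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:  # close enough to a root
            break
        x -= fx / df(x)
    return x

# Example: root of x^2 - 2, i.e. sqrt(2)
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)

# A linear function converges in a single step, as noted above:
linear_root = newton_raphson(lambda x: 3 * x - 6, lambda x: 3.0, 100.0)
```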
- Double integral
- Euler’s method
Step size h is mentioned in the question.
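The update formula appears to be missing. A minimal sketch of Euler's method, y_{n+1} = y_n + h·f(x_n, y_n); the function name and the dy/dx = y example are illustrative, not from the original:

```python
# Euler's method for dy/dx = f(x, y): y_{n+1} = y_n + h * f(x_n, y_n)
def euler(f, x0, y0, h, n_steps):
    """March the solution forward n_steps times with step size h."""
    x, y = x0, y0
    for _ in range(n_steps):
        y += h * f(x, y)
        x += h
    return y

# Example: dy/dx = y with y(0) = 1; y(1) should approximate e
approx_e = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
```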
- Numerical integration
- Trapezoidal rule
- Simpson’s 1/3 rule
- Simpson’s 3/8 rule
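The formulas for these rules seem to be missing; with equally spaced ordinates y0, …, yn and step h, the standard forms are:

```latex
\text{Trapezoidal: } \int_{x_0}^{x_n} y\,dx \approx \frac{h}{2}\left[y_0 + y_n + 2(y_1+\dots+y_{n-1})\right]
```

```latex
\text{Simpson 1/3: } \approx \frac{h}{3}\left[y_0 + y_n + 4(y_1+y_3+\dots) + 2(y_2+y_4+\dots)\right]
```

```latex
\text{Simpson 3/8: } \approx \frac{3h}{8}\left[y_0 + y_n + 3(y_1+y_2+y_4+y_5+\dots) + 2(y_3+y_6+\dots)\right]
```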
- Poisson’s distribution
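The mass function itself appears to be missing; the standard form is:

```latex
P(X=k) = \frac{e^{-\lambda}\lambda^k}{k!}, \qquad \text{mean} = \text{variance} = \lambda
```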
- i raised to the power i
- Summation n
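The results for these two bullets seem to be missing; the standard ones are:

```latex
i^i = e^{-\pi/2} \ \ \text{(principal value)}, \qquad \sum_{k=1}^{n} k = \frac{n(n+1)}{2}
```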
- Probability A union B
- Probability A intersection B
- Del of f
- probability A given B
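The formulas for the probability bullets above appear to be missing; the standard identities are:

```latex
P(A\cup B) = P(A)+P(B)-P(A\cap B), \qquad
P(A\cap B) = P(A)\,P(B)\ \text{(independent events)}, \qquad
P(A\mid B) = \frac{P(A\cap B)}{P(B)}
```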
- For infinitely many solutions of set of equations
For infinitely many solutions of a set of equations, the rank of A, R(A), should be such that
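The rank condition itself seems to be missing; for the augmented matrix [A | b] and n unknowns, the standard condition is:

```latex
R(A) = R([A\mid b]) < n
```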
- Summation involving both AP and GP
- Taylor series expansion of ez
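The expansion itself appears to be missing; it is:

```latex
e^z = \sum_{n=0}^{\infty}\frac{z^n}{n!} = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \dots
```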
- Linearly independent set of equations
For a set of functions f(x), g(x), h(x) to be linearly independent, the Wronskian W ≠ 0.
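The Wronskian determinant itself seems to be missing; for three functions it is:

```latex
W = \begin{vmatrix} f & g & h \\ f' & g' & h' \\ f'' & g'' & h'' \end{vmatrix}
```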
- Residue of a function at z = a
To find the residue of a function at a particular point
For a matrix A:
If A = AT, A is a symmetric matrix; if A = −AT, A is a skew-symmetric matrix.
- Cumulative distribution function
Cumulative distribution function of random variable X
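The defining integral appears to be missing; the standard definition is:

```latex
F(x) = P(X\le x) = \int_{-\infty}^{x} f(t)\,dt
```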
- Idempotent matrix
If AB is an idempotent matrix, then (AB)² = AB.
- Regula-Falsi method
Regula – Falsi method for convergence and divergence
- Ramp function
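The definition and transform seem to be missing; the standard unit ramp is:

```latex
r(t) = \begin{cases} t, & t\ge 0 \\ 0, & t<0 \end{cases}, \qquad
\mathcal{L}\{r(t)\} = \frac{1}{s^2}
```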
- Analytic function
Let f(z) be defined as
For f(z) to be analytic
If f(z) is analytic
- Types of errors
- Integral square error (ISE)
- Integral absolute error (IAE)
- Integral time weighted absolute error (ITAE)
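The defining integrals for these three error criteria appear to be missing; for an error signal e(t), the standard definitions are:

```latex
\text{ISE} = \int_0^{\infty} e^2(t)\,dt, \qquad
\text{IAE} = \int_0^{\infty} |e(t)|\,dt, \qquad
\text{ITAE} = \int_0^{\infty} t\,|e(t)|\,dt
```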
- Directional derivative
The directional derivative ∇uf(xo, yo, zo) is the rate at which the function f(x, y, z) changes at the point (xo, yo, zo) in the direction u. The direction of the maximum rate of change is given by the gradient.
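The formula itself seems to be missing; for a unit vector u the standard expression is:

```latex
\nabla_u f = \nabla f \cdot \hat{u}
```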
- Exact differential
Mdx + Ndy is exact differential when
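The exactness condition appears to be missing; it is:

```latex
\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}
```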
- Initial value problem
A differential equation with all conditions given at the same point, e.g. y(0) = 0 and y′(0) = 1, is called an initial value problem.
- Linear differential equation
A first order differential equation is said to be linear if it can be written as
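The standard form itself seems to be missing; it is:

```latex
\frac{dy}{dx} + P(x)\,y = Q(x)
```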
- Sum of squares of errors
Sum of squares of error is defined as
where yi is the value calculated from known correlations and Yi is the measured data point. It is used to fit the value of a variable in an equation on the basis of various data points.
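The formula itself appears to be missing; with yi the calculated values and Yi the data points as above, it is:

```latex
S = \sum_{i} \left(Y_i - y_i\right)^2
```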
- Few random mathematical formulas
Eigenvalues of a symmetric matrix are real.
The transpose of a square matrix A has the same eigenvalues as A.
The gradient of a scalar quantity is always a vector.
If A and B are 3×3 matrices and AB = 0, then rank(A) + rank(B) ≤ 3.
Three equations are independent and have a unique solution only if their coefficient determinant is non-vanishing.
For the solution of a differential equation to be y = (C1 + C2x)emx, the roots of the characteristic equation need to be equal.
A set of homogeneous equations will have non-trivial solutions if det[A] = 0.
If A is an m×n matrix with rank n and B is an n×p matrix with rank p, then, assuming m ≥ n ≥ p, rank(AB) ≤ min(n, p) = p. The rank of a matrix equals the number of linearly independent rows.
Trapezoidal rule will give exact integral when f(x) is linear.
For vectors to be coplanar, determinant should be zero