#SKEW HERMITIAN ?!?!?!!!!
Text
the more linear algebra i learn the more im convinced that it isn't math and is actually some sort of fucked up old branch of magic
#what the actual fuck is it with mathematicians in linear algebra and naming?!?!???#my linalg prof today said to us. if any of u ever become mathematicians and discover some really cool property in math#dont give that property a name along the lines of "normal"#BECAUSE THERE ARE SO MANY THINGS IN MATH CALLED "NORMAL" ITS ACTUALLY CRAZY#but then u also get things called the "memoryless property" of geometric and exponential distributions#the "law of the unconscious statistician" aka me after 1 lecture#the fucking..; cauchy schwartz inequality. the#gram schmidt orthogonalization procedure..??????.?!..!!#cayley hamilton theorem??????#SPECTRAL THEOREM????????? SP ECTRAL#SKEW HERMITIAN ?!?!?!!!!#i am finding it very difficult to take this course seriously because what do you mean i have to memorize an inequality called the fucking#cauchy schwartz inequality#i love math <//3#rambling about stuff#wait omg lets not forget the hessian matrix and the jacobian from calculus my detested <//3
0 notes
Video
THE EIGENVALUES OF A SKEW HERMITIAN MATRIX ARE EITHER PURELY IMAGINARY O...
0 notes
Text
Matrices
Trace of a Matrix
The sum of the diagonal elements of a square matrix A is called the trace of A, denoted by trace (A) or tr (A).
Properties of Trace of a Matrix
Trace (A ± B)= Trace (A) ± Trace (B)
Trace (kA)= k Trace (A)
Trace (A′) = Trace (A)
Trace (In)= n
Trace (0) = 0
Trace (AB) ≠ Trace (A) × Trace (B), in general
Trace (AA′) ≥ 0
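These identities are easy to sanity-check numerically. A minimal NumPy sketch (the matrix sizes, seed and scalar k are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
k = 2.5

# Trace = sum of diagonal elements.
assert np.isclose(np.trace(A), A.diagonal().sum())
# Linearity: Trace(A + B) = Trace(A) + Trace(B) and Trace(kA) = k Trace(A).
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
assert np.isclose(np.trace(k * A), k * np.trace(A))
# Invariance under transpose: Trace(A') = Trace(A).
assert np.isclose(np.trace(A.T), np.trace(A))
# Trace(AA') >= 0: it equals the sum of squares of all entries of A.
assert np.trace(A @ A.T) >= 0
# Trace(AB) generally differs from Trace(A) * Trace(B).
print(np.trace(A @ B), np.trace(A) * np.trace(B))
```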
Conjugate of a Matrix
The matrix obtained from a matrix A with complex entries by replacing each element with its complex conjugate is called the conjugate of A and is denoted by Ā.
Properties of Conjugate of a Matrix
If A is a matrix of order m × n, then
(i) the conjugate of Ā is A itself;
(ii) the conjugate of A + B is Ā + B̄;
(iii) the conjugate of kA is k̄Ā for any complex number k;
(iv) the conjugate of AB is ĀB̄.
Transpose Conjugate of a Matrix
The transpose of the conjugate of a matrix A is called the transpose conjugate of A and is denoted by Aᶿ or A*.
i.e., (Ā)′, which is the same as the conjugate of A′, is denoted by Aᶿ or A*.
Properties of Transpose Conjugate of a Matrix
(i) (A*)* = A
(ii) (A + B)* = A* + B*
(iii) (kA)* = k̄A*, where k̄ is the conjugate of the complex number k
(iv) (AB)* = B*A*
(v) (Aⁿ)* = (A*)ⁿ
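In NumPy the transpose conjugate of A is A.conj().T; a short sketch checking these rules on arbitrary complex matrices (note the conjugated scalar in rule (iii)):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
k = 2 - 3j

def star(M):
    """Transpose conjugate M* of a matrix M."""
    return M.conj().T

assert np.allclose(star(star(A)), A)                      # (A*)* = A
assert np.allclose(star(A + B), star(A) + star(B))        # (A + B)* = A* + B*
assert np.allclose(star(k * A), k.conjugate() * star(A))  # (kA)* = conj(k) A*
assert np.allclose(star(A @ B), star(B) @ star(A))        # (AB)* = B* A*
```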
Some Special Types of Matrices
1. Orthogonal Matrix
A square matrix A of order n is said to be orthogonal, if AA′ = In = A′A
Properties of Orthogonal Matrix
(i) If A is an orthogonal matrix, then A′ is also an orthogonal matrix.
(ii) For any two orthogonal matrices A and B, AB and BA are also orthogonal matrices.
(iii) If A is an orthogonal matrix, then A⁻¹ is also an orthogonal matrix.
2. Idempotent Matrix
A square matrix A is said to be idempotent, if A² = A.
Properties of Idempotent Matrix
(i) If A and B are two idempotent matrices, then
AB is idempotent, if AB = BA.
A + B is an idempotent matrix, iff AB = BA = 0.
If AB = A and BA = B, then A and B are idempotent, i.e., A² = A and B² = B.
(ii)If A is an idempotent matrix and A + B = I, then B is an idempotent and AB = BA= 0.
diag (1, 1, 1, …, 1) is an idempotent matrix.
If l1, l2 and l3 are direction cosines, then the 3 × 3 matrix Δ whose (i, j)th entry is li lj
is idempotent, as l1² + l2² + l3² = 1.
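A concrete instance of the direction-cosine example, with an arbitrary unit vector standing in for (l1, l2, l3):

```python
import numpy as np

# Direction cosines of an arbitrary direction: a unit vector (l1, l2, l3).
l = np.array([1.0, 2.0, 2.0])
l /= np.linalg.norm(l)            # now l1^2 + l2^2 + l3^2 = 1

D = np.outer(l, l)                # D[i, j] = l_i * l_j
assert np.allclose(D @ D, D)      # D^2 = D, so D is idempotent
```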
3. Involutory Matrix
A square matrix A is said to be involutory, if A² = I
4. Nilpotent Matrix
A square matrix A is said to be a nilpotent matrix, if there exists a positive integer m such that Aᵐ = 0. If m is the least positive integer such that Aᵐ = 0, then m is called the index of the nilpotent matrix A.
5. Unitary Matrix
A square matrix A is said to be unitary, if A*A = I
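For instance, any rotation matrix is orthogonal, and rescaling one of its columns by a unit complex phase gives a unitary matrix; a minimal check (the angle and phase are arbitrary):

```python
import numpy as np

t = 0.7
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])       # plane rotation
assert np.allclose(Q @ Q.T, np.eye(2))        # QQ' = I: orthogonal

U = Q.astype(complex)
U[:, 0] *= np.exp(0.3j)                       # unit phase keeps columns orthonormal
assert np.allclose(U.conj().T @ U, np.eye(2)) # U*U = I: unitary
```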
6. Hermitian Matrix
A square matrix A is said to be a hermitian matrix, if A = A*,
i.e., āji = aij for every i and j.
Properties of Hermitian Matrix
If A is hermitian matrix, then kA is also hermitian matrix for any non-zero real number k.
If A and B are hermitian matrices of the same order, then λ1A + λ2B is also hermitian for any non-zero real numbers λ1 and λ2.
If A is any square matrix, then AA* and A* A are also hermitian.
If A and B are hermitian, then AB is also hermitian, iff AB = BA
If A is a hermitian matrix, then Ā is also hermitian.
If A and B are hermitian matrices of the same order, then AB + BA is also hermitian.
If A is a square matrix, then A + A* is also hermitian.
Any square matrix can be uniquely expressed as P + iQ, where P and Q are hermitian matrices.
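A quick numerical illustration of the last few properties, for an arbitrary complex matrix A:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

def is_hermitian(M):
    return np.allclose(M, M.conj().T)

assert is_hermitian(A + A.conj().T)   # A + A* is hermitian
assert is_hermitian(A @ A.conj().T)   # AA* is hermitian
assert is_hermitian(A.conj().T @ A)   # A*A is hermitian
```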
7. Skew-Hermitian Matrix
A square matrix A is said to be skew-hermitian if A* = −A, i.e., āji = −aij for every i and j.
Properties of Skew-Hermitian Matrix
If A is a skew-hermitian matrix, then kA is also skew-hermitian, where k is any non-zero real number.
If A and B are skew-hermitian matrices of the same order, then λ1A + λ2B is also skew-hermitian for any real numbers λ1 and λ2.
If A and B are hermitian matrices of the same order, then AB − BA is skew-hermitian.
If A is any square matrix, then A − A* is a skew-hermitian matrix.
Every square matrix can be uniquely expressed as the sum of a hermitian and a skew-hermitian matrix.
If A is a skew-hermitian matrix, then iA is a hermitian matrix.
If A is a skew-hermitian matrix, then Ā is also a skew-hermitian matrix.
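The decomposition in the property above is explicit: H = (A + A*) / 2 is the hermitian part and S = (A − A*) / 2 is the skew-hermitian part. A minimal sketch, which also checks that iS is hermitian:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

H = (A + A.conj().T) / 2   # hermitian part
S = (A - A.conj().T) / 2   # skew-hermitian part

assert np.allclose(H, H.conj().T)               # H* = H
assert np.allclose(S, -S.conj().T)              # S* = -S
assert np.allclose(H + S, A)                    # A = H + S, uniquely
assert np.allclose(1j * S, (1j * S).conj().T)   # iS is hermitian
```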
Adjoint of a Square Matrix
Let A = [aij]n × n be a square matrix of order n and let Cij be the cofactor of aij in the determinant |A|. Then the adjoint of A, denoted by adj (A), is defined as the transpose of the matrix formed by the cofactors of A.
Properties of Adjoint of a Square Matrix
If A and B are square matrices of order n,
A (adj A) = (adj A) A = |A| In
adj (A′) = (adj A)′
adj (AB) = (adj B) (adj A)
adj (kA) = k^(n − 1) (adj A), k ∈ R
adj (A^m) = (adj A)^m
adj (adj A) = |A|^(n − 2) A, if A is a non-singular matrix.
|adj A| = |A|^(n − 1), if A is a non-singular matrix.
|adj (adj A)| = |A|^((n − 1)²), if A is a non-singular matrix.
Adjoint of a diagonal matrix is a diagonal matrix.
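NumPy has no built-in adjugate, but adj (A) can be assembled straight from the cofactor definition; a sketch verifying A (adj A) = |A| I and |adj A| = |A|^(n − 1) on an arbitrary 3 × 3 matrix:

```python
import numpy as np

def adjugate(A):
    """adj(A): transpose of the cofactor matrix of a square matrix A."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2., 1., 0.],
              [1., 3., 4.],
              [0., 1., 5.]])
adjA = adjugate(A)
assert np.allclose(A @ adjA, np.linalg.det(A) * np.eye(3))      # A (adj A) = |A| I
assert np.allclose(np.linalg.det(adjA), np.linalg.det(A) ** 2)  # |adj A| = |A|^(n-1), n = 3
```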
0 notes
Photo
Introduction to Linear Algebra | Matrix | TRB, TNPSC AE, TNEB Mathematics by Raju G | Sparks Academy
This is an introduction part of L...
0 notes
Text
Applied Mathematics - 1 Online Tutor - Online Tutoring
Applied Mathematics – 1 Online Tutor – Online Tutoring
Online University Tutor Aims To Provide Online Study Support To The Students Of Indian & International Universities. Online B.Tech Tuitions In India, Find Best Online B.Tech Tutor – Online University Tutor, Online All Btech subjects tutors – Online University Tutor, B.Tech Online Tutor, B.Tech Tuition Teacher, BTech Online tuition, Online…
View On WordPress
0 notes
Text
Read More Test
Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.
In the 18th century Leonhard Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.[8] Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.[9] In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[10] Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.[11]
Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.[12] Charles-François Sturm developed Fourier's ideas further and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues.[13] This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices.[14] Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[13] and Alfred Clebsch found the corresponding result for skew-symmetric matrices.[14] Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.[13]
In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called SturmâLiouville theory.[15] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while PoincarĂ© studied Poisson's equation a few years later.[16]
At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[17] He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904,[18] though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.[19]
0 notes
Text
IIT JAM Mathematics Syllabus
Always remember that before starting the preparation for any exam, you must check the complete syllabus; it's essential.
This will help you in making a strategy for your preparation.
Check out the JAM Mathematics Syllabus here.
You can also check important topics, best books for preparation, tips and get a free pdf of previous papers from HERE – IIT JAM Mathematics.
 IIT-JAM Mathematics Syllabus
Sequences and Series of Real Numbers: Sequences and series of real numbers, convergent and divergent sequences, bounded and monotone sequences, convergence criteria for sequences of real numbers, Cauchy sequences, absolute and conditional convergence; tests of convergence for series of positive terms – comparison test, ratio test, root test; Leibnitz test for convergence of alternating series.
Functions of One Variable: Limit, continuity, differentiation, Rolle's theorem, mean value theorem, Taylor's theorem, maxima and minima.
Functions of Two Real Variables: Limit, continuity, partial derivatives, differentiability, maxima and minima, method of Lagrange multipliers, homogeneous functions including Euler's theorem.
Integral Calculus:Â Integration as the inverse process of differentiation, definite integrals and their properties, Fundamental theorem of integral calculus. Double and triple integrals, change of order of integration. Calculating surface areas and volumes using double integrals and applications. Calculating volumes using triple integrals and applications.
Differential Equations: Ordinary differential equations of the first order of the form y' = f(x, y), Bernoulli's equation, exact differential equations, integrating factor, orthogonal trajectories, homogeneous differential equations - separable solutions, linear differential equations of second and higher order with constant coefficients, method of variation of parameters, Cauchy-Euler equation.
Vector Calculus:Â Scalar and vector fields, gradient, divergence, curl and Laplacian. Scalar line integrals and vector line integrals, scalar surface integrals and vector surface integrals, Green's, Stokes and Gauss theorems and their applications.
Group Theory:Â Groups, subgroups, Abelian groups, non-abelian groups, cyclic groups, permutation groups; Normal subgroups, Lagrange's Theorem for finite groups, group homomorphisms and basic concepts of quotient groups (only group theory).
Linear Algebra:Â Vector spaces, Linear dependence of vectors, basis, dimension, linear transformations, matrix representation with respect to an ordered basis, Range space and null space, rank-nullity theorem; Rank and inverse of a matrix, determinant, solutions of systems of linear equations, consistency conditions. Eigenvalues and eigenvectors. Cayley-Hamilton theorem. Symmetric, skew-symmetric, hermitian, skew-hermitian, orthogonal and unitary matrices.
Real Analysis: Interior points, limit points, open sets, closed sets, bounded sets, connected sets, compact sets; completeness of R, power series (of real variable) including Taylor's and Maclaurin's, domain of convergence, term-wise differentiation and integration of power series.
 Thank You!!
0 notes
Link
This lecture includes : Types of Matrices * Square Matrix * Diagonal Matrix * Scalar Matrix * Identity and Null Matrix * Row and Column Matrix * Upper Triangular Matrix * Lower Triangular Matrix * Periodic Matrix * Idempotent and Nilpotent Matrix * Involutory Matrix * Symmetric and Anti Symmetric or Skew Symmetric Matrix * Hermitian and Anti-Hermitian Matrix * Orthogonal and Unitary Matrix
Kindly subscribe to my youtube channel RICHA SACHDEVA PHYSICS for more such videos.
0 notes
Text
BCC at Birmingham, days 1-3
This week I am in Birmingham for the British Combinatorial Conference.
The organisation of the conference is outstanding. For one small example, yesterday, fifteen minutes before the Business Meeting was due to start, the Chairman noticed that we didn't have the minutes of the previous Business Meeting to approve. The Secretary had the file on a laptop, and before the meeting started we had fifty printed copies to distribute.
After the excitement about ADE last week, these diagrams reappeared twice in the first couple of days. Hendrik Van Maldeghem (who talked about geometrical and combinatorial constructions of buildings) showed us all the crystallographic Coxeter–Dynkin diagrams. In a completely different context, Alexander Gavrilyuk mentioned the fact that connected simple graphs with spectral radius at most 2 are the ADE diagrams and the extended ADE diagrams. He attributed this to Smith (1969) and Lemmens and Seidel (1973). I think it would be not unjust to say that this result was part of the classification of the complex simple Lie algebras by Cartan and Killing in the last decade of the nineteenth century. That aside, Alexander was extending this to directed graphs, using a Hermitian adjacency matrix with entries 1 if there are arcs both ways between two vertices, while single arcs from v to w have i (a complex fourth root of 1) in position (v,w) and −i in position (w,v). This had been done by Guo and Mohar, but some small corrections were necessary; he used results of Greaves and McKee to achieve these. (As a footnote to this, it seems to me that to use ω, a complex sixth root of unity, in place of i would be more natural, since the sum of ω and its complex conjugate is 1 rather than 0.)
In fact, for the graph case, much more is known: the graphs whose greatest eigenvalue is at most 2 are the ADE diagrams and their extensions, but the graphs whose least eigenvalue is at least −2 can also be described.
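For concreteness, here is a sketch of the Hermitian adjacency matrix convention described above, built for a small arbitrary digraph (NumPy assumed); being Hermitian, the matrix automatically has a real spectrum:

```python
import numpy as np

n = 4
H = np.zeros((n, n), dtype=complex)

digons = [(0, 1)]                        # arcs both ways: entry 1 in both positions
single_arcs = [(1, 2), (2, 3), (3, 0)]   # arc v -> w: i at (v, w) and -i at (w, v)

for v, w in digons:
    H[v, w] = H[w, v] = 1
for v, w in single_arcs:
    H[v, w] = 1j
    H[w, v] = -1j

assert np.allclose(H, H.conj().T)   # Hermitian by construction, so ...
print(np.linalg.eigvalsh(H))        # ... the spectrum is real
```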
The conference featured mini-symposia, and I organised one on "Designs and Finite Geometries", which in my opinion has had some beautiful talks so far, from Ian Wanless on plexes in Latin squares, Rosemary Bailey on designs related to the Sylvester graphs (and the wrong turnings on the way to finding them), Peter Keevash on his and others' results on existence of designs (including the fact that estimates for the number of Steiner systems, asymptotic in the logarithm, are now available, and hinting that he had constructions of large sets of Steiner systems for large admissible orders), and Moura Paterson on authentication schemes.
One of the most exciting talks was by Igor Pak. He has formulae, and good asymptotic estimates, for the numbers of standard Young tableaux for various skew Young diagrams. This was a mix of all kinds of things, including counting linear extensions of posets, rhombus tilings, plane partitions, counting disjoint paths, Vershik's limiting tableau shapes, and a remarkable formula of Coxeter, which (if I copied it correctly) says
Σ (φⁿ/n²) cos(2πn/5) = π²/100
(the sum over all positive integers n).
Coxeter's discovery of this formula was based on the existence of the 600-cell (a regular polytope in 4 dimensions) and some spherical geometry. As far as I can tell, the formula was not actually used in the talk, but the philosophy of it led to some of the things that came later.
Two things about the talk were a pity. First, there was no paper in the Proceedings. (In the history of the BCC, it has happened a few times that a speaker provided no talk; indeed I was the editor of the first "published-in-advance" volume, at Royal Holloway in 1975, where I failed to get papers from either Conway or Kasteleyn.) So I am unable to check these details. Second, Igor started in a bit of a rush, and some things were not clearly explained. For example, I think some nodding acquaintance with Plancherel measure is needed to make sense of the Vershik asymptotic shape of a random Young diagram, and I didn't find that in the talk. But it was so full of amazing stuff that it is perhaps churlish to complain.
Apart from these I will be very selective in my reporting. One contributed talk I really enjoyed was by Natasha Dobrinen, on the Ramsey theory of Henson's homogeneous Kn-free graphs, which included a description of them in terms of trees. It went past rather fast (the talks were only 20 minutes), but I wonder whether this leads to a probabilistic approach to Henson's graph. I have reported before how I laboured over this, and how Anatoly Vershik explained to me his construction with Petrov in a leisurely afternoon in Penderel's Oak in London – a construction which is clearly related to the topic of graphons, the subject of Dan Král's talk.
Then there was a sequence of three nice talks on quite different topics, but all related to permutations (in the combinatorial rather than the group-theoretic sense). Simon Blackburn proved a nice asymptotic result about random permutations for the uniform measure. At the end, Robert Johnson asked whether there were similar results for other measures. This was because Robert's talk, which was next, was able to prove some of the results for wider classes of measures, though not for the Boltzmann measure, which he gave as an open problem. Then Fred Galvin talked. One of his results was that, far from being monotonic, the sequence of coefficients (excluding the constant term) in the independent set polynomial of a graph with independence number m can be any permutation of {1, …, m}. This suggested to me another interesting measure on permutations. Choose n much larger than m, and choose a random graph on n vertices with independence number m; this induces a probability measure on the permutations. Does this measure tend to a limit as n → ∞? If so, this could claim to be a "natural" measure on permutations. Fred thought this was an interesting question.
Any ideas?
We had a reception in the remarkable Barber Institute of Fine Arts. Guided tours of the gallery were offered. We went upstairs, and the first picture we saw was René Magritte's famous picture "The flavour of tears". Tuesday was the concert, and apart from having to move to a different room because the piano hadn't been unlocked, we had a remarkable evening's entertainment; there are several outstanding pianists at the conference. Today is the excursion, to the Museum of Black Country Living; but I have work to do …
from Peter Cameron's Blog https://ift.tt/2Kt7JnJ from Blogger https://ift.tt/317149m
0 notes
Text
A Unique Connection for Born Geometry. (arXiv:1806.05992v1 [hep-th])
It has been known for a while that the effective geometrical description of compactified strings on $d$-dimensional target spaces implies a generalization of geometry with a doubling of the sets of tangent space directions. This generalized geometry involves an $O(d,d)$ pairing $\eta$ and an $O(2d)$ generalized metric $\mathcal{H}$. More recently it has been shown that in order to include T-duality as an effective symmetry, the generalized geometry also needs to carry a phase space structure or more generally a para-Hermitian structure encoded into a skew-symmetric pairing $\omega$. The consistency of string dynamics requires this geometry to satisfy a set of compatibility relations that form what we call a Born geometry. In this work we prove an analogue of the fundamental theorem of Riemannian geometry for Born geometry. We show that there exists a unique connection which preserves the Born structure $(\eta,\omega,\mathcal{H})$ and which is torsionless in a generalized sense. This resolves a fundamental ambiguity that is present in the double field theory formulation of effective string dynamics.
from gr-qc updates on arXiv.org https://ift.tt/2Mz1w9V
0 notes
Text
Mathematics Plus Engineering Aptitude (Syllabus)
SYLLABUS PART A
General aptitude with emphasis on logical reasoning, graphical analysis, analytical and numerical ability, quantitative comparisons, series formation, puzzles, and so forth.
SYLLABUS PART B
Mathematics Plus Engineering Aptitude: Linear Algebra, Calculus, Complex Variables, Vector Calculus, Ordinary Differential Equations.
Algebra of matrices, inverse, rank, system of linear equations, symmetric, skew-symmetric and orthogonal matrices. Hermitian, skew-Hermitian and unitary matrices. Eigenvalues and eigenvectors, diagonalisation of matrices.
Functions of a single variable, limit, continuity and differentiability, mean value theorems, indeterminate forms and L'Hospital's rule, maxima and minima, Taylor's series, Newton's method for finding roots of polynomials. Fundamental and mean value theorems of integral calculus. Numerical integration by trapezoidal and Simpson's rules. Evaluation of definite and improper integrals, Beta and Gamma functions. Functions of two variables, limit, continuity, partial derivatives, Euler's theorem for homogeneous functions, total derivatives, maxima and minima, Lagrange method of multipliers, double integrals and their applications. Sequences and series, tests for convergence, power series, Fourier series, half-range sine and cosine series.
Analytic functions, Cauchy-Riemann equations, line integral, Cauchy's integral theorem and integral formula, Taylor's and Laurent's series, residue theorem and its applications.
Gradient, divergence and curl, vector identities, directional derivatives, line, surface and volume integrals, Stokes, Gauss and Green's theorems and their applications.
First order equations (linear and non-linear), second order linear differential equations with variable coefficients, variation of parameters method, higher order linear differential equations with constant coefficients, Cauchy-Euler's equations, power series solutions, Legendre polynomials and Bessel's functions of the first kind and their properties. Numerical solutions of first order ordinary differential equations by Euler's and Runge-Kutta methods.
Definitions of probability and simple theorems, conditional probability, Bayes' theorem.
Rigid Body Motion and Fluid Motion; Energetics; Electron Transport; Electromagnetics; Materials:
Particle dynamics; projectiles; rigid body dynamics; Lagrangian formulation; Eulerian formulation; Bernoulli's equation; continuity equation; surface tension; viscosity; Brownian motion.
Laws of thermodynamics; concept of free energy; enthalpy and entropy; equation of state; thermodynamic relations.
Structure of atoms, concept of energy levels, bond concept; definition of conductors, semiconductors and insulators; diode; half wave and full wave rectification; amplifiers and oscillators; truth tables.
Theory of electric and magnetic potential and field; Biot-Savart's law; theory of dipole; theory of oscillation of the electron; Maxwell's equations; transmission theory; amplitude and frequency modulation.
Periodic table; properties of elements; classification of materials (inorganic materials); basic understanding of monomeric and polymeric compounds; organometallic compounds; crystal structure and symmetry; structure-property correlation in metals, ceramics and polymers.
0 notes
Text
Determinants
Determinant
Every square matrix A is associated with a number, called its determinant and it is denoted by det (A) or |A| .
Only square matrices have determinants; matrices which are not square do not have determinants.
(i) First Order Determinant
If A = [a], then det (A) = |A| = a
(ii) Second Order Determinant
|A| = a11a22 − a21a12
(iii) Third Order Determinant
Evaluation of Determinant of Square Matrix of Order 3 by Sarrus Rule
then the determinant can be formed by enlarging the matrix, adjoining the first two columns on the right, and drawing lines parallel and perpendicular to the diagonal as shown below.
The value of the determinant is then the sum of the products of elements in lines parallel to the diagonal minus the sum of the products of elements in lines perpendicular to the diagonal. Thus,
Δ = a11a22a33 + a12a23a31 + a13a21a32 − a13a22a31 − a11a23a32 − a12a21a33
Note: This method does not work for determinants of order greater than 3.
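A direct transcription of the Sarrus expansion, checked against np.linalg.det on an arbitrary matrix:

```python
import numpy as np

def sarrus(a):
    """3x3 determinant by the rule of Sarrus (order 3 only)."""
    return (a[0, 0] * a[1, 1] * a[2, 2] + a[0, 1] * a[1, 2] * a[2, 0]
            + a[0, 2] * a[1, 0] * a[2, 1] - a[0, 2] * a[1, 1] * a[2, 0]
            - a[0, 0] * a[1, 2] * a[2, 1] - a[0, 1] * a[1, 0] * a[2, 2])

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
assert np.isclose(sarrus(A), np.linalg.det(A))
```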
Properties of Determinants
(i) The value of the determinant remains unchanged, if rows are changed into columns and columns are changed into rows e.g.,
|A′| = |A|
(ii) If A = [aij]n x n , n > 1 and B be the matrix obtained from A by interchanging two of its rows or columns, then
det (B) = − det (A)
(iii) If two rows (or columns) of a square matrix A are proportional, then |A| = 0.
(iv) |B| = k |A|, where B is the matrix obtained from A by multiplying one row (or column) of A by k.
(v) |kA| = kⁿ|A|, where A is a matrix of order n × n.
(vi) If each element of a row (or column) of a determinant is the sum of two or more terms, then the determinant can be expressed as the sum of two or more determinants, e.g.,
(vii) If the same multiple of the elements of any row (or column) of a determinant are added to the corresponding elements of any other row (or column), then the value of the new determinant remains unchanged, e.g.,
(viii) If each element of a row (or column) of a determinant is zero, then its value is zero.
(ix) If any two rows (columns) of a determinant are identical, then its value is zero.
(x) If each element of row (column) of a determinant is expressed as a sum of two or more terms, then the determinant can be expressed as the sum of two or more determinants.
Important Results on Determinants
(i) |AB| = |A||B| , where A and B are square matrices of the same order.
(ii) |Aⁿ| = |A|ⁿ
(iii) If A, B and C are square matrices of the same order such that the ith column (or row) of A is the sum of the ith columns (or rows) of B and C, and all other columns (or rows) of A, B and C are identical, then |A| = |B| + |C|
(iv) |In| = 1, where In is the identity matrix of order n
(v) |On| = 0, where On is a zero matrix of order n
(vi) If Δ(x) is a third-order determinant having polynomials as its elements,
(a) if Δ(a) has 2 rows (or columns) proportional, then (x − a) is a factor of Δ(x);
(b) if Δ(a) has 3 rows (or columns) proportional, then (x − a)² is a factor of Δ(x).
(vii) A square matrix A is non-singular if |A| ≠ 0 and singular if |A| = 0.
(viii) Determinant of a skew-symmetric matrix of odd order is zero and of even order is a non-zero perfect square.
(ix) In general, |B + C| ≠ |B| + |C|
(x) Determinant of a diagonal matrix = Product of its diagonal elements
(xi) Determinant of a triangular matrix = Product of its diagonal elements
(xii) A square matrix of order n is non-singular if its rank r = n, i.e., if |A| ≠ 0, then rank (A) = n
(xiii) If A is a non-singular matrix, then |A⁻¹| = 1 / |A| = |A|⁻¹
(xiv) Determinant of an orthogonal matrix = 1 or −1.
(xv) Determinant of a hermitian matrix is purely real.
(xvi) If A and B are non-zero square matrices and AB = 0, then |A| = 0 and |B| = 0.
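Several of these results are one-line numerical checks; a sketch with arbitrary matrices (result (viii) above appears as the odd-order skew-symmetric case):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
k, n = 2.0, 3

assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))  # |AB| = |A||B|
assert np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))              # |kA| = k^n |A|
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))                       # |A'| = |A|
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))      # |A^-1| = 1/|A|

S = A - A.T                               # skew-symmetric, odd order
assert np.isclose(np.linalg.det(S), 0)    # its determinant is zero
```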
Minors and Cofactors
If A = [aij] is a square matrix, then the minor Mij of the element aij is the determinant obtained by deleting the ith row and jth column.
The cofactor of the element aij is Cij = (−1)^(i + j) Mij
Adjoint of a Matrix: the adjoint of a matrix is the transpose of the matrix of cofactors of the given matrix, i.e.,
Properties of Minors and Cofactors
(i) The sum of the products of elements of any row (or column) of a determinant with the cofactors of the corresponding elements of any other row (or column) is zero, i.e., if
then a11C31 + a12C32 + a13C33 = 0 and so on.
(ii) The sum of the products of elements of any row (or column) of a determinant with the cofactors of the corresponding elements of the same row (or column) is Δ
Differentiation of Determinant
If the elements of a determinant are differentiable functions of x, then its derivative is the sum of the determinants obtained by differentiating one row (or column) at a time, leaving the other rows (or columns) unchanged.
Integration of Determinant
If the elements of only one row (or column) are functions of x, the determinant can be integrated by integrating each element of that row (or column). If the elements of more than one row or column are functions of x, then the integration can be done only after evaluation/expansion of the determinant.
Solution of Linear Equations by Determinant/Cramer's Rule
Case 1. The solution of the system of simultaneous linear equations
a1x + b1y = c1 …(i)
a2x + b2y = c2 …(ii)
is given by x = D1 / D, y = D2 / D, where D = |a1 b1; a2 b2|, D1 = |c1 b1; c2 b2| and D2 = |a1 c1; a2 c2|.
(i) If D ≠ 0, then the given system of equations is consistent and has a unique solution given by x = D1 / D, y = D2 / D
(ii) If D = 0 and D1 = D2 = 0, then the system is consistent and has infinitely many solutions.
(iii) If D = 0 and one of D1 and D2 is non-zero, then the system is inconsistent.
Case II. Let the system of equations be
a1x + b1y + c1z = d1
a2x + b2y + c2z = d2
a3x + b3y + c3z = d3
Then, the solution of the system of equation is
x = D1 / D, y = D2 / D, z = D3 / D. This is called Cramer's rule.
(i) If D ≠ 0, then the system of equations is consistent with a unique solution.
(ii) If D = 0 and at least one of the determinants D1, D2, D3 is non-zero, then the given system is inconsistent, i.e., it has no solution.
(iii) If D = 0 and D1 = D2 = D3 = 0, then the system is consistent, with infinitely many solutions.
(iv) If D ≠ 0 and D1 = D2 = D3 = 0, then the system has only the trivial solution (x = y = z = 0).
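Cramer's rule translates directly into code: Di is the determinant of the coefficient matrix with its ith column replaced by the right-hand side. A minimal sketch for the three-variable case (the system below is an arbitrary example):

```python
import numpy as np

def cramer(M, d):
    """Solve Mx = d by Cramer's rule (requires det(M) != 0)."""
    D = np.linalg.det(M)
    if np.isclose(D, 0):
        raise ValueError("D = 0: no unique solution")
    x = np.empty(len(d))
    for i in range(len(d)):
        Mi = M.copy()
        Mi[:, i] = d                      # replace ith column by the RHS
        x[i] = np.linalg.det(Mi) / D      # x_i = D_i / D
    return x

M = np.array([[2., 1., 1.],
              [1., 3., 2.],
              [1., 0., 0.]])
d = np.array([4., 5., 6.])
assert np.allclose(cramer(M, d), np.linalg.solve(M, d))
```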
Cayley-Hamilton Theorem
Every square matrix satisfies its characteristic equation, i.e., if A is a square matrix, then |A − xI| = 0 is the characteristic equation of A. The values of x are called the eigenvalues of A.
i.e., if x³ − 4x² + 5x − 7 = 0 is the characteristic equation of A, then
A³ − 4A² + 5A − 7I = 0
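This can be checked numerically: np.poly(A) returns the coefficients of the characteristic polynomial of A, and evaluating that polynomial at A itself (Horner's scheme below) should give the zero matrix. A sketch with an arbitrary 3 × 3 matrix:

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [3., 1., 2.],
              [0., 1., 1.]])

c = np.poly(A)          # coefficients of det(xI - A), highest power first
P = np.zeros_like(A)
for coeff in c:         # Horner: P = c0 A^n + c1 A^(n-1) + ... + cn I
    P = P @ A + coeff * np.eye(3)

assert np.allclose(P, 0)   # A satisfies its own characteristic equation
```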
Properties of Characteristic Equation
(i) The sum of the eigenvalues of A is equal to its trace.
(ii) The product of the eigenvalues of A is equal to its determinant.
(iii) The eigenvalues of an orthogonal matrix are of unit modulus.
(iv) The eigenvalues of a unitary matrix are of unit modulus.
(v) A and A′ have the same eigenvalues.
(vi) The eigenvalues of a skew-hermitian matrix are either purely imaginary or zero.
(vii) If x is an eigenvalue of A, then x̄ (the complex conjugate of x) is an eigenvalue of A*.
(viii) The eigenvalues of a triangular matrix are its diagonal elements.
(ix) If x is an eigenvalue of A and |A| ≠ 0, then 1 / x is an eigenvalue of A⁻¹.
(x) If x is an eigenvalue of A and |A| ≠ 0, then |A| / x is an eigenvalue of adj (A).
(xi) If x1, x2, x3, …, xn are the eigenvalues of A, then the eigenvalues of A² are x1², x2², …, xn².
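A sketch of the properties most relevant to this page, including the one shouted in its title: the eigenvalues of a skew-hermitian matrix are purely imaginary or zero (the matrices and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S = (M - M.conj().T) / 2               # skew-hermitian: S* = -S

assert np.allclose(np.linalg.eigvals(S).real, 0)   # eigenvalues purely imaginary or zero

A = rng.standard_normal((4, 4))
w = np.linalg.eigvals(A)
assert np.isclose(w.sum(), np.trace(A))            # sum of eigenvalues = trace
assert np.isclose(w.prod(), np.linalg.det(A))      # product of eigenvalues = determinant
```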
Cyclic Determinants
Applications of Determinant in Geometry
Let three points in a plane be A(x1, y1), B(x2, y2) and C(x3, y3). Then the area of ΔABC
= 1 / 2 |x1 y1 1; x2 y2 1; x3 y3 1|
= 1 / 2 [x1 (y2 − y3) + x2 (y3 − y1) + x3 (y1 − y2)]
Maximum and Minimum Value of Determinants
where each entry aij ∈ [α1, α2, …, αn]
Then, |A|max is obtained when the diagonal elements are
{ min (α1, α2, …, αn)}
and the non-diagonal elements are
{ max (α1, α2, …, αn)}
Also, |A|min = − |A|max
0 notes