
EQUATIONS

Volume 9 · 37,223 words · 1860 Edition

1. In all the applications of algebra, it is not the magnitudes concerned that we immediately consider, but merely their proportions. In every class of quantities of the same kind, one being adopted as the unit of comparison, all the rest are referred to this standard, and are represented by the proportions they bear to it. The letters of the alphabet, or other symbols, used in algebra, are not, therefore, properly speaking, the representatives of magnitudes; they denote ratios, or abstract numbers, viewed, as in the fifth book of Euclid, in the most general manner, and independently of any particular system of arithmetic or numeration.

The ancient geometry follows a different procedure. In that science the attention is in every case confined to the magnitudes under actual consideration. A general property of triangles is established, by showing that it is true of any particular triangle that comes under the proposed hypothesis. The geometer contemplates particular instances, presenting for the most part relations not very complex, and easily kept in view. On this account he carries on his investigations with the greatest clearness, and is in no danger of falling into contradiction or paradox. But his science is little susceptible of general methods. If any process within the compass of the ancient geometry be entitled to that appellation, it is what is called the method of exhaustion. Every geometer perceives that all the demonstrations under this head have the closest analogy. Yet, after a hundred applications, it is still necessary, in any new case, to pursue the reasoning through all its details, without deriving assistance from any general conclusion previously obtained.

Algebra possesses a great advantage over geometry in generalizing its processes. Problems relating to magnitudes of the most different kinds, nevertheless, lead to similar expressions in numbers. Questions in geometry, in mechanics, or concerning mercantile business, are made to depend on the same rules for their solution. It may be said that algebra and the modern analysis accomplish, for all the mathematical sciences, the project, entertained by some ingenious men, of an universal and philosophical language, which, being founded on an exact scrutiny into the nature of things, and on what they possess in common, might greatly facilitate the acquisition and the extension of our knowledge.

The spirit of generalization peculiar to algebra is nowhere more conspicuous than in the doctrine of equations. Every determinate problem that can occupy the attention of the mathematician, is ultimately reduced to the finding of such numbers as are necessary to determine the unknown quantity or quantities, by means of the equations that subsist between those numbers, and others which are given in the question. A wide field of mathematical investigation is thus brought under a limited number of algebraic expressions.

In treating of equations, it will not be necessary to begin with laying down a formal definition. We confine ourselves, in this article, to the consideration of such equations as contain only one unknown quantity. We further suppose that the elementary operations preparatory to solution are already performed; so that the unknown quantity is clear of radical signs, and is nowhere found in the denominator of a fraction; likewise that all the separate terms are brought to one side of the sign of equality, and arranged in such a manner that the first term, which must always be positive, and have unit for its co-efficient, contains the highest power of the unknown quantity, or \(x\); the second term contains the next highest power, and so on, the term which does not contain \(x\) being placed last. This arrangement must always be understood when any term is distinguished by the order it stands in; but it will sometimes be convenient to write the terms in an inverted order, arranging them according to the ascending indices of the unknown quantity.

Equations are divided into different classes or orders, according to the highest power of the unknown quantity found in their terms.

An equation of the first degree, or a simple equation, is one which contains \(x\) only, without any of its powers, as \(x - A = 0\).

A quadratic equation, or one of the second degree, contains the square of \(x\), as \(x^2 - A = 0\), or \(x^2 - Ax + B = 0\).

A cubic equation, or one of the third degree, contains the cube, or third power of \(x\), as \(x^3 - A = 0\), or \(x^3 - Ax^2 + Bx - C = 0\).

A biquadratic equation, or one of the fourth degree, contains the fourth power, or biquadrate of \(x\), as \(x^4 - A = 0\), or \(x^4 - Ax^3 + Bx^2 - Cx + D = 0\).

And, in general, an equation of the \(n\)th degree contains the \(n\)th power of \(x\), and the powers inferior to the \(n\)th, such as

\[x^n - Ax^{n-1} + Bx^{n-2} - \cdots - Mx + N = 0.\]

A root of an equation is a value of the unknown number \(x\). Thus, if \(a\) represent a number, and if its powers, \(a, a^2, a^3, \ldots\), when they are substituted in the equation for \(x, x^2, x^3, \ldots\), produce an equality between the positive and negative terms, then \(a\) is a root of the equation, and it is a positive root; but if, for \(x, x^2, x^3, \ldots\), we must substitute \(-a, a^2, -a^3, \ldots\), which are the powers of \(-a\), in order to obtain the like equality, then \(a\) is a negative root of the equation.

What we have here called roots are more generally named real roots, to distinguish them from those expressions to which the appellation of imaginary or impossible roots has been given. As it will conduce to perspicuity, we shall always use the word root in the sense here defined, unless when imaginary or impossible roots are expressly mentioned.

From the definitions laid down, it follows that the negative roots of the equation,

\[0 = N + Mx + Lx^2 + Kx^3 + \ldots,\]

are the same with the positive roots of the equation,

\[0 = N - Mx + Lx^2 - Kx^3 + \ldots,\]

in which the signs only of all the terms containing the odd powers of \(x\) are changed. For the same result is obtained, whether we make \(x\) equal to \(-a\) in the first equation, or to \(+a\) in the second.
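This remark admits of a direct numerical check; the following sketch (in modern Python notation, with a cubic chosen merely for illustration) changes the signs of the odd-power terms and verifies that the negative roots of the one equation are the positive roots of the other.

```python
# Numerical check: the negative roots of f(x) = 0 are the positive roots of
# the equation with the signs of its odd-power terms changed, since either
# substitution amounts to evaluating f(-a). The cubic is chosen merely for
# illustration.

def eval_poly(coeffs, x):
    """Evaluate a polynomial given ascending coefficients [N, M, L, K, ...]."""
    total = 0
    for c in reversed(coeffs):
        total = total * x + c
    return total

def flip_odd_signs(coeffs):
    """Change the sign of every term containing an odd power of x."""
    return [-c if k % 2 == 1 else c for k, c in enumerate(coeffs)]

# f(x) = x^3 + 2x^2 - x - 2 = (x - 1)(x + 1)(x + 2); roots 1, -1, -2.
f = [-2, -1, 2, 1]        # ascending order: N, M, L, K
g = flip_odd_signs(f)     # -x^3 + 2x^2 + x - 2

print(eval_poly(f, -1), eval_poly(g, 1))   # 0 0
print(eval_poly(f, -2), eval_poly(g, 2))   # 0 0
```

Both evaluations amount to computing \(f(-a)\), which is why the two equations must agree.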

2. A great advantage has resulted from the practice introduced by Harriot, of writing all the terms of an equation on one side of the sign of equality. The polynomials formed by all the terms thus brought together are rational and integral functions of the unknown quantity; and the question is, to find in what circumstances such expressions are equal to zero. The most likely way of succeeding in this research, is to resolve the functions into their most simple component factors. Harriot supposed that every rational function can be produced by the continued multiplication of binomial factors, and in this he has been followed by succeeding algebraists. The modern theory of equations is entirely founded on this supposition, which, although it has not been demonstrated, has yet, in some measure, been verified in the progress of the science, and by the admission of those artificial expressions called imaginary or impossible quantities. But there is a distinction between the real and impossible binomial factors of a rational polynomial; for the first are expressions complete and significant by themselves, without reference to other quantities; whereas one impossible factor necessarily supposes the existence of another, the two related expressions being such that their multiplication produces one real factor of the second degree. Thus, every pair of impossible factors is equivalent to a real quadratic factor; and, by an unavoidable consequence of the forced supposition made by Harriot, the attention of algebraists has been drawn to the two impossible expressions, instead of the real one which they compose. In order to place the doctrine of equations and the theory of impossible roots on a solid foundation, it appears necessary to attempt the resolution of rational functions into their component factors by a rigorous analysis, free from arbitrary suppositions.

To resolve the rational function \( f(x) \) into its component factors, we must begin with inquiring whether it can be divided without a remainder, by a divisor such as \( x-a \), or \( x+a \). If it can, the proposed function will be equal to \( (x-a) \times f'(x) \), where \( f'(x) \), the quotient of the division, is a function similar to \( f(x) \), but of an order one degree lower. In like manner, it may be possible to reduce \( f'(x) \) to a degree still lower, by means of one or more divisors of the same form; and, in certain cases, the first function may be entirely exhausted by successive binomial divisors. When this happens, the divisors \( x-a \), \( x-b \), \( x-c \), &c., will be equal in number to the exponent of the highest power of \( x \), and their continued product will be equal to \( f(x) \). It is evident, that by multiplying together a proper number of such factors, an algebraic expression may be formed similar to any rational and integral function, and the co-efficients of this product will likewise contain as many quantities to be determined at pleasure as there are co-efficients in the given function. But we should reason badly if, from this process of composition, we should infer that a product arising from the multiplication of a certain number of simple factors may have any given co-efficients, or will coincide with any proposed polynomial of the same degree. This is a point that can be ascertained only by a process of analysis or resolution, and by seeking all the binomial divisors any given function admits of. In fact, the cases are extremely rare in which an algebraic function can be completely exhausted by real binomial divisors. There are many polynomials which have not a single divisor of this kind; and, in the progress of resolution, we generally arrive at a function which cannot be further divided.
When this is the case, it must be tried whether a quadratic divisor, as \( x^2 + mx + n \), will not be successful in lowering the function. But here it must be observed that such divisors are of two kinds; one, as \( (x-\xi)^2 - r^2 \), which can be resolved into two binomial factors; and one, as \( (x-\xi)^2 + r^2 \), which cannot be so resolved without introducing imaginary or impossible expressions. Now, to divide by a divisor of the first kind is the same thing as to divide by the two binomial factors of which it is composed; and, therefore, it is the second kind of quadratic factors only that need be tried, or that can succeed, in lowering a function already deprived of all its simple divisors. After quadratic divisors, those of the third degree would naturally come to be considered; but this is unnecessary, because algebraists have found that every rational function may be completely exhausted by simple and quadratic factors.
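The pairing of impossible factors described above may be exhibited in modern notation: a conjugate pair \(p \pm q\sqrt{-1}\) always yields the single real quadratic factor \((x-p)^2 + q^2\). The values below are arbitrary illustrations.

```python
# A conjugate pair p ± qi of "impossible" factors combines into the single
# real quadratic factor x^2 - 2px + (p^2 + q^2) = (x - p)^2 + q^2.
# The values p = 3, q = 2 are arbitrary illustrations.

def real_quadratic_from_conjugates(p, q):
    """Return (b, c) such that x^2 + bx + c = (x - (p+qi)) (x - (p-qi))."""
    return (-2 * p, p * p + q * q)

b, c = real_quadratic_from_conjugates(3, 2)   # roots 3 ± 2i
print(b, c)                                    # -6 13

# Multiplying the two complex factors directly gives the same coefficients.
z = complex(3, 2)
prod_lin = -(z + z.conjugate()).real           # b = -(sum of roots)
prod_const = (z * z.conjugate()).real          # c = product of roots
print(prod_lin, prod_const)                    # -6.0 13.0
```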

What has now been said naturally distributes the subject under two heads; one treating of the simple or binomial factors, and the other of the quadratic or trinomial factors, of algebraic equations.

**Binomial Factors.**

3. The first object of inquiry must be to find the conditions necessary, in order that a binomial quantity, as \( x-a \), or \( x+a \), shall divide a rational polynomial without a remainder. Suppose that \( x-a \) is a divisor of the polynomial,

\[ x^n + Ax^{n-1} + Bx^{n-2} + \cdots + Mx + N, \]

which we shall denote by \( f(x) \); then we shall have

\[ f(x) = N + Mx + Lx^2 + Kx^3 + \cdots, \] \[ f(a) = N + Ma + La^2 + Ka^3 + \cdots, \]

wherefore, by subtracting and dividing by \( x-a \), we get

\[ \frac{f(x) - f(a)}{x-a} = M\,\frac{x-a}{x-a} + L\,\frac{x^2-a^2}{x-a} + K\,\frac{x^3-a^3}{x-a} + \cdots. \]

Now, it is known that the difference between any like powers of two numbers is exactly divisible by the difference of those numbers; hence all the quantities on the right-hand side of the sign of equality form an integral expression. But as \( f(a) \) does not contain \( x \), it cannot be divisible by \( x-a \); it follows, therefore, that \( f(x) \) cannot be divisible by \( x-a \), unless \( f(a) = 0 \); and it is obvious that this condition is the only one necessary. Thus, the polynomial \( f(x) \) will be divisible by \( x-a \) when \( a \) is a positive root of the equation \( f(x)=0 \), otherwise not.
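The criterion just proved is the substance of what is now called synthetic division: dividing \(f(x)\) by \(x-a\) leaves the remainder \(f(a)\), so the division is exact precisely when \(a\) is a root. A sketch in modern notation, with a cubic chosen for illustration:

```python
# Synthetic division: dividing f(x) by x - a leaves the remainder f(a),
# so x - a divides f(x) exactly when a is a root.
# Coefficients are in descending order, as in the text.

def divide_by_binomial(coeffs, a):
    """Divide a polynomial (descending coefficients) by x - a.
    Returns (quotient_coeffs, remainder); the remainder equals f(a)."""
    out = []
    carry = 0
    for c in coeffs:
        carry = carry * a + c
        out.append(carry)
    return out[:-1], out[-1]

# f(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
f = [1, -6, 11, -6]
q, r = divide_by_binomial(f, 2)
print(q, r)        # [1, -4, 3] 0  -> f(x) = (x-2)(x^2 - 4x + 3)
q, r = divide_by_binomial(f, 5)
print(r)           # 24 = f(5), so x - 5 is not a divisor
```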

Again, let the divisor be \( x+a \); then,

\[ f(x) = N + Mx + Lx^2 + Kx^3 + \cdots, \] \[ f(-a) = N - Ma + La^2 - Ka^3 + \cdots, \]

and by proceeding as before,

\[ \frac{f(x) - f(-a)}{x+a} = M\,\frac{x+a}{x+a} + L\,\frac{x^2-a^2}{x+a} + K\,\frac{x^3+a^3}{x+a} + \cdots. \]

Here again all the divisions on the right-hand side of the sign of equality can be exactly performed, since the difference of like even powers, and the sum of like odd powers, of two numbers are each divisible by the sum of the numbers; and we must, therefore, conclude that \( f(x) \) will be divisible by \( x+a \) only when \( f(-a) = 0 \), that is, when \( a \) is a negative root of the equation \( f(x)=0 \).

Now \( x-a \) being a divisor of \( f(x) \), the quotient, which we may denote by \( f'(x) \), will be a polynomial of \((n-1)\) dimensions, or one degree lower than \( f(x) \); and we shall have

\[ f(x) = (x-a) \times f'(x). \]

From this equation it appears that every value of \( x \) that makes \( f'(x) \) equal to zero, will likewise make \( f(x) \) equal to zero; consequently every binomial divisor of the first function will likewise be a divisor of the second. And if \( f'(x) \) has no roots, and no binomial divisors, neither will \( f(x) \) have any roots except \( x=a \), nor any binomial divisors except \( x-a \). Suppose that the polynomials \( f(x) \) and \( f'(x) \) have the common root \( x=b \); they will likewise have the common divisor \( x-b \); and if we put \( f''(x) \) for the quotient arising from the division of \( f'(x) \) by \( x-b \), so that \( f'(x) = (x-b) \cdot f''(x) \), we shall have

\[ f(x) = (x-a) \cdot (x-b) \cdot f''(x), \]

in which equation \( f''(x) \) is a polynomial of \( n-2 \) dimensions, or two degrees lower than \( f(x) \).

It is evident we may continue to reason in the same manner either till, after successive divisions, we come at last to a binomial quotient, in which case the original polynomial \( f(x) \) will be completely resolved into binomial factors; or till we come to a quotient that has no roots, in which case \( f(x) \) will have no binomial factors except those previously found. We may therefore conclude that "a rational polynome has as many binomial factors as it has roots, and no more; every positive root producing a factor of the form \( x - a \), and every negative root one of the form \( x + a \); and since the number of binomial factors can never be greater than the dimensions of the polynome, its roots cannot exceed the same number."
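The successive divisions described above may be sketched as follows; the routine peels off binomial factors with integer roots (the candidates being divisors of the last term) until the quotient admits no more, the two polynomes below being chosen for illustration:

```python
# Successive deflation: strip off binomial factors x - a one root at a time
# until the quotient has no further integer roots. Candidate integer roots
# are the divisors of the last term.

def divide_by_binomial(coeffs, a):
    """Divide (descending coefficients) by x - a; remainder equals f(a)."""
    out, carry = [], 0
    for c in coeffs:
        carry = carry * a + c
        out.append(carry)
    return out[:-1], out[-1]

def binomial_factors(coeffs):
    """Peel off integer-root factors x - a; return (roots, final quotient)."""
    roots = []
    while len(coeffs) > 1:
        last = coeffs[-1]
        candidates = [0] if last == 0 else [
            d for k in range(1, abs(last) + 1) if abs(last) % k == 0
            for d in (k, -k)]
        for a in candidates:
            q, r = divide_by_binomial(coeffs, a)
            if r == 0:
                roots.append(a)
                coeffs = q
                break
        else:
            break   # quotient has no integer roots; stop
    return roots, coeffs

# x^4 - 5x^2 + 4 = (x-1)(x+1)(x-2)(x+2): completely exhausted
print(binomial_factors([1, 0, -5, 0, 4]))
# x^4 + 1 has no binomial factors at all
print(binomial_factors([1, 0, 0, 0, 1]))
```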

4. There are very few cases in which it can be known immediately and by inspection that an equation has one or more roots. These cases depend upon the following proposition, viz., "If \( \varphi(x) \) denote a rational polynome, having \( x \), or some integral power of \( x \), in every one of its terms, and likewise having the term that contains the greatest power of \( x \) positive, a value of \( x \) may be found that will make \( \varphi(x) \) equal to any positive quantity, as \( s \)."

Suppose, first, that all the terms of \( \varphi(x) \) are positive; then \( x^n \) being the first term, or that in which \( x \) rises to the highest power, if \( t \) be taken so that \( t^n = s \), and \( \lambda > t \), it is manifest that

\[ \varphi(\lambda) \ge \lambda^n > t^n = s. \]

Therefore, while \( x \) increases from 0 to be equal to \( \lambda \), the function \( \varphi(x) \) increases from 0 to be greater than \( s \); and as the variations of \( \varphi(x) \), however irregular they may be, are connected by the law of continuity, the function will pass through every gradation of magnitude between 0 and the greatest limit \( \varphi(\lambda) \). Consequently, there is a value of \( x \) between 0 and \( \lambda \), that will make \( \varphi(x) \) equal to \( s \).

When the terms of \( \varphi(x) \) are not all positive, let all the positive terms except \( x^n \) be rejected, and all the negative terms be retained, and we shall have \( \varphi(x) \) equal to, or greater than,

\[ x^n - Fx^{n-1} - Gx^{n-2} - \cdots - Hx^{n-r}. \]

But, \( s \) being equal to \( t^n \), we have

\[ s = x^n - (x-t) \cdot \left( x^{n-1} + tx^{n-2} + t^2x^{n-3} + \cdots + t^{n-1} \right) \]

Now, by equating the negative terms of the first expression to the terms containing the like powers of \( x \) in the value of \( t^n \), we shall get

\[ x - t = F, \quad (x-t)\,t = G, \quad \ldots \quad (x-t)\,t^{r-1} = H. \]

And hence,

\[ x = t + F, \quad x = t + \frac{G}{t}, \quad \ldots \quad x = t + \frac{H}{t^{r-1}}. \]

Let \( \lambda \) be either equal to or exceed the greatest of those values of \( x \); then we shall have

\[ \varphi(\lambda) \ge t^n = s. \]

Wherefore, as before, there is a value of \( x \) between 0 and \( \lambda \), that will make \( \varphi(x) \) equal to \( s \).
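The continuity argument of this section is, in modern terms, the intermediate-value theorem, and may be realized numerically by repeated bisection; the polynome \(\varphi\) below is a hypothetical example with \(x\) in every term and its leading term positive:

```python
# Bisection realizes the continuity argument: phi(0) = 0 and phi(hi) >= s,
# so some x between them gives phi(x) = s. The cubic phi below is a
# hypothetical example (its derivative is everywhere positive).

def phi(x):
    return x**3 - 2*x**2 + 5*x      # x in every term, leading term positive

def solve_for(s, lo=0.0, hi=None):
    """Find x with phi(x) = s by bisection on [lo, hi]."""
    if hi is None:
        hi = 1.0
        while phi(hi) < s:          # push the upper bound past the target
            hi *= 2
    for _ in range(60):
        mid = (lo + hi) / 2
        if phi(mid) < s:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = solve_for(100.0)
print(round(x, 6), round(phi(x), 6))   # x = 5.0 satisfies phi(x) = 100
```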

From what has now been proved, we derive the following properties of equations.

1. "Every equation of odd dimensions has at least one positive root when the last term is negative, and one negative root when the last term is positive."

If the last term be negative, as in this instance,

\[ x^{2n+1} + Ax^{2n} + Bx^{2n-1} + \cdots + Mx - N = 0, \]

according to what has been proved, a value of \( x \), viz. \( a \), may be found that will satisfy the condition,

\[ a^{2n+1} + Aa^{2n} + Ba^{2n-1} + \cdots + Ma = N; \]

then \( a \) is a positive root of the equation.

When the last term is positive, as in this equation,

\[ x^{2n+1} + Ax^{2n} + Bx^{2n-1} + \cdots + Mx + N = 0, \]

change the sign of the last term, and the signs of all the terms that contain the even powers of \( x \), then the polynome will become

\[ x^{2n+1} - Ax^{2n} + Bx^{2n-1} + \cdots + Mx - N; \]

and a value of \( x \), viz. \( a \), may be found such that

\[ a^{2n+1} - Aa^{2n} + Ba^{2n-1} + \cdots + Ma = N. \]

Now transpose \( N \), and then change the signs of all the terms, and we shall get

\[ -a^{2n+1} + Aa^{2n} - Ba^{2n-1} + \cdots - Ma + N = 0, \]

which shows that \( a \) is a negative root of the equation.

2. "Every equation of even dimensions having its last term negative, has two roots, one positive and one negative."

Let the equation be

\[ x^{2n} + Ax^{2n-1} + Bx^{2n-2} + \cdots + Mx - N = 0; \]

and consider the polynomes,

\[ x^{2n} + Ax^{2n-1} + Bx^{2n-2} + \cdots + Mx - N, \] \[ x^{2n} - Ax^{2n-1} + Bx^{2n-2} - \cdots - Mx - N, \]

in the latter of which the signs of all the terms containing the odd powers of \( x \) are changed; then there are two values of \( x \), viz. \( a \) and \( b \), such as to answer the conditions,

\[ a^{2n} + Aa^{2n-1} + \cdots + Ma = N, \]

\[ b^{2n} - Ab^{2n-1} + \cdots - Mb = N; \]

consequently \( a \) is a positive and \( b \) a negative root of the equation.
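The proposition may be checked on a particular quartic with a negative last term: \(f(0)\) is negative while \(f(x)\) is positive for large \(x\) of either sign, so a sign change, and hence a root, lies on each side of zero. A bisection sketch, the quartic being a hypothetical example:

```python
# A quartic with negative last term: f(0) = -6 < 0 while f(x) -> +infinity
# in both directions, so f has a root on each side of 0.

def f(x):
    return x**4 - 3*x**2 + x - 6

def bisect_root(lo, hi, n=80):
    """Bisection on an interval across which f changes sign."""
    for _ in range(n):
        mid = (lo + hi) / 2
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

pos = bisect_root(0, 10)     # f(0) = -6 < 0, f(10) > 0
neg = bisect_root(-10, 0)    # f(-10) > 0, f(0) < 0

print(pos > 0, neg < 0)      # True True
print(abs(f(pos)) < 1e-9, abs(f(neg)) < 1e-9)
```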

3. "A polynome of even dimensions, which has no binomial factors, is always positive, whatever value be substituted for the unknown quantity."

Let the polynome be \( f(x) \) or

\[ x^{2n} + Ax^{2n-1} + \cdots + Mx + N; \]

then the last term, or that term which does not contain \( x \), must be positive; for otherwise the polynome would have two roots and two binomial factors, contrary to the hypothesis. Now, if it be possible, let the polynome have a negative value when \( \lambda \) is substituted for \( x \), so that \( f(\lambda) = -P \). Then, when \( x = 0 \), \( f(x) \) is equal to the positive quantity \( N \); and, when \( x = \lambda \), the same function is equal to \(-P\); but since \( f(x) \) passes through all degrees of magnitude between \( N \) and \(-P\), while \( x \) varies from 0 to \( \lambda \), it will become equal to zero when \( x \) has some intermediate value; therefore the polynome has one root between 0 and \( \lambda \), and one binomial divisor corresponding to that root, contrary to the hypothesis.

It may be observed, that the converse of this proposition is not true; for a polynome of even dimensions, that has such factors as \((x-a)^2, (x-b)^2, (x-c)^2\), may never become negative, although it is capable of being equal to zero.

5. The properties demonstrated in the last section lead to this general proposition relating to the number of roots in any equation, viz., "In any equation the number of all the roots is even when the dimensions are even, and odd when the dimensions are odd."

For every equation has as many binomial divisors as it has roots; and if we suppose an odd number of roots in an equation of even dimensions, or an even number in one of odd dimensions, the last quotient, after dividing successively by all the divisors, would be a polynome of odd dimensions, having at least one root, which would likewise be a root of the proposed equation. Therefore the number of all the roots of an equation cannot be even when the dimensions are odd, nor odd when the dimensions are even.

And again, since every polynome is equal to the continued product of all its binomial divisors, and the quotient last found, after dividing by them all successively, we obtain the following proposition, viz.: "Every rational polynome is equal either to the continued product of as many binomial factors as it has dimensions; or to the continued product of an even or odd number of such factors, according as the dimensions of the polynome are even or odd, and of a polynome of even dimensions, which, having no binomial factors, is always positive, whatever value be substituted for the unknown quantity."

6. When several of the binomial factors of an equation are equal to one another, it is said to have so many equal roots. In this case the equation can be divided a number of times successively by the same binomial divisor. Thus, an equation which is twice divisible by \(x-a\), or, what is the same thing, once by \((x-a)^2\), has two roots equal to \(a\); and if it can be divided by \((x-a)^m\), it has \(m\) roots equal to \(a\).

The most obvious way of finding the conditions on which the equality of the roots depends would therefore be to expand the divisor \((x-a)^m\) by the binomial theorem, and then divide the equation by it; for, after the integral quotient is obtained, the required conditions will be found by making the several parts of the remainder separately equal to zero. The number of the conditions found in this manner is equal to the exponent of the divisor; for of so many parts will the remainder of the division consist. But, in a complex operation, it is difficult to ascertain the remainder; and, besides, it is not necessary to consider all the equations obtained by this process, because both the number and the value of the equal roots can be found by means of two of them only.

The inconveniences just mentioned will be avoided by proceeding in the following manner: Let the equation be

\[x^n + Ax^{n-1} + Bx^{n-2} + \ldots + Mx + N = 0;\]

then, if it be divisible by \((x-a)^m\), the quotient will be a polynome of \(n-m\) dimensions; and we may therefore suppose that the expression

\[x^n + Ax^{n-1} + Bx^{n-2} + \ldots + Mx + N\]

is equal to the product,

\[(x-a)^m \times \{x^{n-m} + A'x^{n-m-1} + B'x^{n-m-2} + \ldots + M'\}.\]

In these expressions, \(x\) may have any value whatever; and therefore the equality between them will still subsist if we substitute \(x+i\) for \(x\), \(i\) being any arbitrary number; therefore the expression

\[(x+i)^n + A(x+i)^{n-1} + B(x+i)^{n-2} + \ldots + M(x+i) + N\]

will be equal to the product

\[(x-a+i)^m \times \{(x+i)^{n-m} + A'(x+i)^{n-m-1} + \ldots + M'\}.\]

Now, let the several powers of \((x+i)\) be expanded by the binomial theorem, and put

\[X = x^n + Ax^{n-1} + Bx^{n-2} + \ldots + Mx + N,\] \[Y = nx^{n-1} + (n-1)Ax^{n-2} + (n-2)Bx^{n-3} + \ldots + M,\] \[Z = \frac{n(n-1)}{2}x^{n-2} + \frac{(n-1)(n-2)}{2}Ax^{n-3} + \ldots + L,\] \[V = \frac{n(n-1)(n-2)}{2 \cdot 3}x^{n-3} + \frac{(n-1)(n-2)(n-3)}{2 \cdot 3}Ax^{n-4} + \ldots + K,\]

then the given polynome of \(n\) dimensions will become

\[X + Yi + Zi^2 + Vi^3 + \ldots + i^n. \quad (A)\]

And if the like operations be performed on the polynome of \(n-m\) dimensions, giving the analogous functions \(X', Y', Z', \ldots\), and \((x-a+i)^m\) be expanded by the binomial theorem, the product of these two expressions will become

\[\left\{(x-a)^m + m(x-a)^{m-1}i + \frac{m(m-1)}{2}(x-a)^{m-2}i^2 + \ldots + i^m\right\} \times \left\{X' + Y'i + Z'i^2 + \ldots + i^{n-m}\right\}. \quad (B)\]

The expression \((A)\) being equal to the product \((B)\), whatever \(i\) stands for, the co-efficients of the like powers of \(i\) must be equal; and hence, by equating the terms in which \(i\) is wanting, and likewise the terms that contain the first power of \(i\), we get

\[X = (x-a)^m X'\] \[Y = (x-a)^m Y' + m(x-a)^{m-1}X';\]

which proves that \((x-a)^{m-1}\) is a common divisor of \(X\) and \(Y\). If, therefore, by means of the usual process, we seek the greatest common measure of the two polynomes \(X, Y\), or,

\[x^n + Ax^{n-1} + Bx^{n-2} + \ldots + Mx + N,\] \[nx^{n-1} + (n-1)Ax^{n-2} + (n-2)Bx^{n-3} + \ldots + M;\]

we shall obtain the factor \((x-a)^{m-1}\); and the given polynome \(X\) will be divisible by \((x-a)^m\); that is, it will contain the common factor \(x-a\) once more than the polynome \(Y\) contains it.

If we proceed farther, and equate the co-efficients of \(i^2\) in the expressions \((A)\) and \((B)\), we shall get

\[Z = (x-a)^m Z' + m(x-a)^{m-1}Y' + \frac{m(m-1)}{2}(x-a)^{m-2}X';\]

which shows that \(Z\) is divisible by \((x-a)^{m-2}\). In the same manner, it may be proved, that \(V\) is divisible by \((x-a)^{m-3}\), and so on. It appears, therefore, that the first \(m\) co-efficients of the expression \((A)\) are respectively divisible by \((x-a)^m, (x-a)^{m-1}, (x-a)^{m-2}\), &c.; and consequently we shall have

\[X = 0, Y = 0, Z = 0, V = 0, \ldots\]

when the common root \(a\) is substituted for \(x\).

If the polynome \(X\) is divisible by \((x-b)^{\mu}\), it may be proved in like manner that \((x-b)^{\mu-1}\) will be a common divisor of \(X\) and \(Y\).

We may therefore lay down the following rule for finding all the double, triple, etc. divisors of any given polynome \(X\): "Find \(R\), the greatest common measure of \(X\) and \(Y\), and resolve it into its elementary factors; then each of these factors will be contained in \(X\) once more than in \(R\)."
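The rule may be carried out with exact arithmetic: \(Y\) is obtained from \(X\) as in the preceding articles, and Euclid's process applied to the two polynomes yields the common measure \(R\). A sketch with \(X=(x-1)^3(x+2)\) chosen for illustration:

```python
# Equal roots by the greatest common measure of X and its derivative Y,
# computed exactly with rational arithmetic.
from fractions import Fraction

def derivative(p):
    """p has descending coefficients; return the derivative polynome Y."""
    n = len(p) - 1
    return [c * (n - k) for k, c in enumerate(p[:-1])]

def poly_mod(a, b):
    """Remainder of a divided by b (descending coefficients)."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b) and any(a):
        factor = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= factor * b[i]
        a.pop(0)
    while len(a) > 1 and a[0] == 0:   # strip leading zeros
        a.pop(0)
    return a

def gcd_poly(a, b):
    """Euclid's process on polynomes; the result is made monic."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while any(b):
        a, b = b, poly_mod(a, b)
    return [c / a[0] for c in a]

# X = (x-1)^3 (x+2) = x^4 - x^3 - 3x^2 + 5x - 2
X = [1, -1, -3, 5, -2]
R = gcd_poly(X, derivative(X))
print([int(c) for c in R])   # [1, -2, 1], i.e. R = (x-1)^2
```

Since \(R=(x-1)^2\), the factor \(x-1\) is contained in \(X\) once more than in \(R\), that is, three times.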

7. If it be required to find how many of the roots of an equation are positive, and how many are negative, we have for this purpose the rule first published in the Geometry of Descartes. This celebrated rule seems to have been discovered by induction; at least its author gave no demonstration of it; and disputes arose about its true import. It was demonstrated for the first time by Du Gua, in the Mémoires de Paris; but many other demonstrations of it have since appeared, of which that of Segner, in the Mémoires de Berlin, 1756, is not only the most simple, but probably the most simple that will ever be invented.

Segner deduced the rule of Descartes from the following analytical proposition, viz.

"If any rational polynome be multiplied by \(x-a\), the changes from one sign to another, from \(+\) to \(-\), and from \(-\) to \(+\), will be at least one more in the product than in the given polynome; and if it be multiplied by \(x+a\), the successions of the same sign, of \(+\) to \(+\), and of \(-\) to \(-\), will be at least one more."

Let the proposed polynome be

\[x^n \pm Ax^{n-1} \pm Bx^{n-2} \pm \ldots \pm Mx \pm N;\]

then, according to the usual process, the product of the polynome by \(x-a\) will be found by adding these two lines, viz.

\[ x^{n+1} \pm Ax^n \pm Bx^{n-1} \pm \ldots \pm Mx^2 \pm Nx \\ - ax^n \mp Aax^{n-1} \mp Bax^{n-2} \mp \ldots \mp Max \mp Na, \]

the signs of the several terms remaining unchanged in the first line, and being all changed in the second line. It is evident, therefore, that the terms of the product will have the same signs with the respective terms of the proposed polynome, except when a co-efficient in the second line is greater than the one above it, and likewise has a contrary sign; the sign of the last term of the product being always the same with the sign of the last term of the second line. Now, beginning on the left hand, pass over the terms of the first line, so long as they have the same signs with the terms of the product. When this ceases to be the case, the signs in the product will be the same as in the second line, and contrary to those in the first line; wherefore descend to the second line, and pass along its terms till the signs in the product are again the same as those in the first line, and then ascend to that line. Continue thus descending and ascending alternately till all the terms in both lines are taken in. At the conclusion, it is evident that the descendings are always one more than the ascendings, because the passing from one line to another both begins and ends with descending.

If we descend from \( \pm Ax^n \) in the first line, to \( \mp Aax^{n-1} \) in the second line, it is evident that the signs of \( \pm Ax^n \) and \( \pm Bx^{n-1} \) in the first line will be the same, both being contrary to the sign of \( \mp Aax^{n-1} \) in the second line. Therefore, in the given polynome, the second and third terms have the same sign. But in the product the like terms have contrary signs; for the second term of the product has the same sign with \( \pm Ax^n \) in the first line, and the third term of the product has the same sign with \( \mp Aax^{n-1} \) in the second line. Thus it appears that a variation from one sign to another is introduced in the product, instead of a continuation of the same sign that takes place in the given polynome; and the same thing will happen at every descending.

In ascending from the second line to the first, there may either be a continuation in the product instead of a variation in the given polynome, or the contrary; but one of these two must take place.

Now, so long as we keep on the first line, the signs in the product are the same with those of the given polynome; and, so long as we keep on the second line, the signs in the product are contrary to those in the polynome. In both cases, therefore, the variations from \( + \) to \( - \), and from \( - \) to \( + \), are the same in the product and in the polynome. Every descending introduces a variation in the product, instead of a continuation that takes place in the polynome; and although it be supposed that every ascending introduces a continuation in the product instead of a variation that exists in the polynome, yet, on the whole, the variations introduced must be one more than the continuations, because the descendings are one more than the ascendings.

Again, if the given polynome be multiplied by \( x + a \), the product will be the sum of these two lines, viz.

\[ x^{n+1} \pm Ax^n \pm Bx^{n-1} \pm \ldots \pm Mx^2 \pm Nx \\ + ax^n \pm Aax^{n-1} \pm Bax^{n-2} \pm \ldots \pm Max \pm Na. \]

Here the terms of both lines have the same signs; and, as before, the signs in the product will be the same with the signs of the proposed polynome, unless when a co-efficient in the second line is greater than the one above it, and likewise has a contrary sign; the sign of the last term of the product being always the same with the sign of the last term in the second line. Now, if we pass along all the terms of both lines, descending from the first line to the second, when the signs in the product change from being the same with those in the given polynome, to be contrary to them; and ascending from the second line to the first, when the signs in the product change from being contrary to those in the polynome, to be the same with them; it is evident that the descendings will be one more than the ascendings, as in the former case.

If we descend from \( \pm Ax^n \) in the first line, to \( \pm Aax^{n-1} \) in the second line, the two terms \( \pm Ax^n \) and \( \pm Bx^{n-1} \) in the first line will have different signs; for, on account of the descending, \( \pm Bx^{n-1} \) has a contrary sign to the term \( \pm Aax^{n-1} \) below it, and, consequently, to \( \pm Ax^n \) in the first line. Therefore the second and third terms in the polynome have different signs. But the like terms in the product have the same sign; for the second term in the product has the same sign with \( \pm Ax^n \) in the first line; and the third term of the product has the same sign with \( \pm Aax^{n-1} \) in the second line. Thus there is a continuation of the same sign introduced in the product, instead of a variation from one sign to another that takes place in the polynome; and the same thing is true at every descending.

In ascending from the second line to the first, there may either be a variation in the product instead of a continuation that exists in the polynome, or the contrary. But one of these two must take place.

Now it is evident that, except at the descendings and ascendings, there is the same number of continuations of the same sign, and the same number of variations from one sign to another, in the product and in the given polynome. Every descending introduces a continuation in the product instead of a variation existing in the polynome. And even if we suppose that every ascending introduces a variation in the product instead of a continuation that takes place in the polynome, yet, on the whole, there will be one continuation more in the product than in the polynome, because the descendings are one more than the ascendings.

In the preceding demonstration, it is supposed that all the ascendings have a contrary effect to the descendings, by which means there is introduced in the product the least possible number of variations from one sign to another in the one case, and the least possible number of continuations of the same sign in the other. But if, in the first case, we suppose that, at one ascending, there is a variation in the product, and a continuation in the polynome, this will add one to the variations in the product, and one to the continuations in the polynome; so that the variations in the product will now exceed those in the polynome by three, namely, by two more than in the circumstances supposed in the demonstration. And if we extend the like reasoning to two, three, &c. ascendings, the variations in the product will exceed those in the polynome respectively by five, seven, &c. The like conclusion is evidently true of the second case, mutatis mutandis; and hence the preceding proposition, when it is generalized as much as it can be, may be thus enunciated:

"If any rational polynome be multiplied by \( x - a \), the variations from one sign to another in the product will exceed those in the polynome by one, or three, or five, or by some odd number; and if it be multiplied by \( x + a \), the continuations of the same sign in the product will exceed those in the polynome by one, or three, or five, or by some odd number."

Now, if we conceive that any rational polynome is resolved into its binomial factors, there will be a factor of the form \( x - a \) for every positive root, and one of the form \( x + a \) for every negative root; and when all the factors are multiplied together in order to reproduce the polynome, it follows, from what has been proved, that the product will contain at least one change from \( + \) to \( - \), or from \( - \) to \( + \), for every factor of the form \( x - a \), or for every positive root; and at least one succession of \( + \) to \( + \), or of \( - \) to \( - \), for every factor of the form \( x + a \), or for every negative root. Hence this rule, viz. "An equation cannot have more positive roots than it has variations from one sign to another, nor more negative roots than it has continuations of the same sign."
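The rule may be verified numerically. The following sketch, in modern notation (the helper names are ours, and a complete polynome with no vanishing terms is assumed), counts the variations and continuations in a list of co-efficients:

```python
def sign_list(coeffs):
    """Signs of the co-efficients, taken in order (none may vanish)."""
    return [1 if c > 0 else -1 for c in coeffs]

def variations(coeffs):
    """Changes from + to -, or from - to +, between consecutive terms."""
    s = sign_list(coeffs)
    return sum(1 for a, b in zip(s, s[1:]) if a != b)

def continuations(coeffs):
    """Successions of + to +, or of - to -, between consecutive terms."""
    s = sign_list(coeffs)
    return sum(1 for a, b in zip(s, s[1:]) if a == b)

# (x - 1)(x - 2)(x + 4) = x^3 + x^2 - 10x + 8:
# two positive roots and one negative root.
coeffs = [1, 1, -10, 8]
print(variations(coeffs), continuations(coeffs))  # 2 variations, 1 continuation
```

Here the bounds of the rule are attained, as they must be whenever all the roots are real.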

In general, this rule merely points out limits which the number of the positive and negative roots of an equation cannot exceed. But it gives no criterion by which we can certainly know that an equation has even one positive or one negative root, much less does it ascertain the exact number of each kind.

But if the proposed equation can be completely resolved into real binomial factors; in which case the total number of its roots will be equal to its dimensions, and consequently to the sum of all the variations from one sign to another, and of all the continuations of the same sign; it is evident that the number of the positive roots will be precisely equal to that of the variations, and the number of the negative roots precisely equal to that of the continuations. In this case, therefore, and in this case only, the rule of Descartes is perfect, ascertaining the exact number of each kind of roots in the proposed equation.

We subjoin some consequences that result from the principles laid down.

"If a polynome \( f(x) \) of \( n \) dimensions be multiplied by \( x - a \), or \( x + a \); and, in the first case, if the number of variations from one sign to another be augmented by the odd number \( 2i + 1 \); or, in the second case, if the number of continuations of the same sign be augmented by \( 2i + 1 \); then the total number of the roots, positive and negative, of the proposed polynome, cannot be greater than \( n - 2i \)."

For, when the multiplier is \( x - a \), let \( m \) denote the number of the variations from one sign to another in the proposed polynome \( f(x) \); then \( m + 2i + 1 \) will be the total number of variations in the product \( (x - a) \times f(x) \); consequently the total number of continuations in \( (x - a) \times f(x) \) will be equal to \( (n + 1) - (m + 2i + 1) \), or \( n - m - 2i \). But a polynome cannot have more negative roots than it has continuations of the same sign; wherefore the number of the negative roots of \( (x - a) \times f(x) \) cannot be greater than \( n - m - 2i \). Now, the two polynomes \( f(x) \) and \( (x - a) \times f(x) \) have the same negative roots; and hence the number of the negative roots of \( f(x) \) cannot exceed \( n - m - 2i \). But the number of the positive roots of \( f(x) \) cannot exceed \( m \); consequently the total number of the roots of \( f(x) \) cannot be greater than \( m + n - m - 2i \); that is, than \( n - 2i \). And the proposition may be demonstrated in a similar manner when the multiplier is \( x + a \).

"If one or several consecutive terms of an equation be wanting, and if the next terms on each side of those wanting have the same sign, the equation cannot have as many roots as it has dimensions."

Let the equation be \( P + Q = 0 \), \( P \) and \( Q \) denoting the two parts on each side of the terms wanting. Having multiplied \( P + Q \) by \( x - a \), the product will be \( (x - a)P + (x - a)Q \); and it is evident that we may consider \( P, Q, (x - a)P, (x - a)Q \) as separate polynomes; hence, in each of the polynomes \( (x - a)P \) and \( (x - a)Q \), there will be at least one more variation from one sign to another than there is in \( P \) and \( Q \). Again, in the polynome \( P + Q \), there will be a continuation of the same sign in passing from \( P \) to \( Q \); because the last term of \( P \) is supposed to have the same sign with the first term of \( Q \). On the other hand, because the last term of \( (x - a)P \) has a contrary sign to the last term of \( P \); and the first term of \( (x - a)Q \), the same sign with the first term of \( Q \), it follows that, in the polynome \( (x - a)P + (x - a)Q \), there will be a variation from one sign to another in passing from \( (x - a)P \) to \( (x - a)Q \). Therefore, on the whole, there will be at least three variations from one sign to another in \( (x - a)P + (x - a)Q \), more than there is in \( P + Q \). Consequently, by the last proposition, the number of all the roots of the proposed equation must be at least two less than its dimensions.

8. An important inquiry is, to find how many roots, that is, real roots, there are in any proposed equation. Much has been written on this subject, but not very successfully; no general method has been found that is practically useful. Many criteria have been contrived, by means of which we can certainly discover that roots are wanting in an equation, although we cannot infer the existence of the roots when the same criteria fail. But great value cannot be attached to such rules, since they are neither sufficient guides in practice, nor have much tendency to throw light on the theory.

Waring first, and nearly about the same time Lagrange, proposed a method which is successful in finding the conditions necessary in order that an equation have as many roots as it has dimensions, and which in all cases points out a limit that the number of the roots cannot exceed. This is effected by an auxiliary equation, and merely by the signs of its co-efficients, without requiring the computation of any of its roots. This procedure answers very well for equations of the third and fourth degrees; and it has even been extended by Waring to those of the fifth degree; but in this last case the calculation is very long, and would be altogether impracticable in the higher orders of equations. It is also not a little probable that this rule employs more conditions than are absolutely necessary for determining the point in question; there being great reason to think that some of them are implied in the rest, and are deducible from them. The method here alluded to depends upon the theory of trinomial divisors; and as it is much referred to by algebraists of the present day, we shall, in a subsequent part of this article, briefly explain the principles on which it is founded.

There is also another way of finding the number of real roots in an equation, which is general for all orders, and requires the solution of such equations only as are of lower dimensions than the one proposed. As to practical utility, indeed, this method is of little avail in equations passing the third and fourth degrees, or at most the fifth degree; but it is nevertheless not without interest, both because it is founded on the principles essential to the inquiry, and because it leads to some useful properties.

Algebraists differ from one another in their exposition of this method. Some derive it from the theory of Harriot, namely, that every rational polynome is the product of as many binomial factors as it has dimensions; in which manner of proceeding the impossible roots are the occasion of uncertainty and embarrassment. Others, again, deduce it from the variations of magnitude which a rational polynome undergoes when the unknown quantity is made to pass through all possible degrees of increasing and decreasing. This last mode of investigation seems greatly to deserve the preference, being in reality the only one that is entirely unexceptionable, and requires no principles foreign to the research.

Suppose an equation, \( x^n + Ax^{n-1} + Bx^{n-2} + \ldots + Mx + N = 0 \), which we may denote by \( f(x) = 0 \): substitute \( x - i \) in place of \( x \), and put

\[ X = f(x) = x^n + Ax^{n-1} + \ldots + Mx + N, \]

\[ X' = nx^{n-1} + (n-1)Ax^{n-2} + (n-2)Bx^{n-3} + \ldots + M, \]

\[ X'' = \frac{n(n-1)}{2}x^{n-2} + \frac{(n-1)(n-2)}{2}Ax^{n-3} + \ldots, \]

then the function \( f(x-i) \) will be transformed into

\[ X - X'i + X''i^2 - X'''i^3 + \ldots \]

If we adopt the notation of the differential calculus, the same transformation will be thus represented:

\[ f(x-i) = f(x) - \frac{d}{dx}f(x)\,i + \frac{1}{2}\frac{d^2}{dx^2}f(x)\,i^2 - \cdots \]

which has the advantage of pointing out in what manner the several functions, \( X, X', \ldots \), are derived from one another, and from the first function \( X \), or \( f(x) \).
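The transformation may be checked numerically for a particular polynome. In the sketch below (the helper names are ours), the functions \( X, X', X'', \ldots \) are obtained by repeated differentiation, each divided by \( 1, 1, 1\cdot 2, 1\cdot 2\cdot 3 \), &c., as the formula requires:

```python
from math import factorial

def derivative(coeffs):
    """Differentiate a polynome given by its co-efficients, highest power first."""
    n = len(coeffs) - 1
    return [c * (n - k) for k, c in enumerate(coeffs[:-1])]

def horner(coeffs, x):
    """Evaluate the polynome at x."""
    value = 0.0
    for c in coeffs:
        value = value * x + c
    return value

# f(x) = x^3 - 6x^2 + 11x - 6
coeffs = [1.0, -6.0, 11.0, -6.0]
x, i = 2.5, 0.75

# X - X'i + X''i^2 - X'''i^3, with the k-th function = (d^k/dx^k) f(x) / k!
total, cs = 0.0, coeffs
for k in range(len(coeffs)):
    total += (-i) ** k * horner(cs, x) / factorial(k)
    cs = derivative(cs)

assert abs(total - horner(coeffs, x - i)) < 1e-9   # equals f(x - i)
```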

Let \( \alpha, \beta, \gamma, \ldots \) denote the real roots of the equation \( X = 0 \), or \( f(x) = 0 \), arranged according to the order of their magnitude, that is, \( \alpha \) greater than \( \beta \), \( \beta \) greater than \( \gamma \), and so on. In like manner, observing the same order of arrangement, let \( \alpha', \beta', \gamma', \ldots \) represent the roots of

\[ X' = 0, \quad \text{or} \quad \frac{d}{dx} \cdot f(x) = 0; \]

and, for the sake of simplicity, suppose that the equation \( X' = 0 \) has no equal roots.

The relations which the variations of the polynome \( X \) bear to the variations of \( x \), depend upon the functions \( X, X', \ldots \), and principally upon the first of these. If \( X' \) be positive, \( X \) will decrease as \( x \) decreases; if \( X' \) be negative, \( X \) will increase as \( x \) decreases; and if \( X' \) pass from being positive to become negative, or the contrary, then \( x \) continuing to decrease, \( X \) will change from decreasing to increasing, or the contrary; that is, it will attain a minimum or a maximum value. What is here said is the foundation of the method taught in the differential calculus, for finding the maxima and minima of algebraic quantities.

Now, when \( x \) has a value great enough, the polynome \( X' \) will have the same sign with its first term, that is, it will be positive; and it will continue positive so long as \( x \) is greater than \( \alpha' \), the greatest root of the equation \( X' = 0 \); after which it will become negative. Hence, while \( x \) decreases to the limit \( \alpha' \), the polynome \( f(x) \), which is positive when \( x \) is sufficiently great, will continually decrease; and when \( x = \alpha' \), \( f(x) \) will pass from decreasing to increasing, or it will have a minimum value. Now if this minimum \( f(\alpha') \) be positive, \( f(x) \) has not decreased to zero, and the given equation will have no root greater than \( \alpha' \). If \( f(\alpha') = 0 \), then, because the two equations \( X = 0 \) and \( X' = 0 \) take place at the same time, the given equation will have two roots equal to \( \alpha' \). (Sect. 6.) Lastly, if \( f(\alpha') \) be negative, the polynome \( f(x) \) has decreased from being positive to be negative; and therefore it has passed through zero, and the given equation will have one root, viz. \( \alpha \), greater than \( \alpha' \).

As \( x \) continues to decrease from \( \alpha' \) to \( \beta' \), the polynome \( X' \) being negative, \( f(x) \) will continually increase. At the limit \( x = \beta' \), \( X' \) is first equal to zero, and then becomes positive; and \( f(x) \) will therefore change from increasing to decreasing, or will attain a maximum value. If this maximum \( f(\beta') \) be negative, the polynome \( f(x) \) has not increased to zero, and the given equation will have no root between \( \alpha' \) and \( \beta' \); if \( f(\beta') = 0 \), it will have two roots equal to \( \beta' \); and if \( f(\beta') \) be positive, \( f(x) \), in increasing from the negative quantity \( f(\alpha') \) to the positive quantity \( f(\beta') \), must have passed through zero, and the given equation will have one root, viz. \( \beta \), between \( \alpha' \) and \( \beta' \).

In like manner, \( x \) continuing to decrease from \( \beta' \) to \( \gamma' \), the polynome \( f(x) \) will decrease from the maximum \( f(\beta') \) to the minimum \( f(\gamma') \): if \( f(\gamma') \) be positive, the proposed equation will have no root between \( \beta' \) and \( \gamma' \); if \( f(\gamma') = 0 \), it will have two roots equal to \( \gamma' \); and if \( f(\gamma') \) be negative, it will have one root, viz. \( \gamma \), between the limits \( \beta' \) and \( \gamma' \).

As the function \( f(x) \) must become a minimum or a maximum, or must pass from decreasing to increasing, or the contrary, between every two contiguous roots of the equation \( f(x) = 0 \); and as the limits where the changes take place are determined by the roots of the equation \( X' = 0 \); it follows that there must be at least one root of this last equation between every two contiguous roots of the first. Hence the equation \( f(x) = 0 \) cannot have as many roots as dimensions, unless the equation \( X' = 0 \) likewise have as many roots as dimensions; and in general we have this rule, which determines a limit that the number of the roots of an equation cannot surpass, although it may fall short of it: "The roots of an equation \( f(x) = 0 \) cannot exceed in number those of the equation \( \frac{d}{dx} \cdot f(x) = 0 \), by more than one."

But if we can find the roots of the equation \( X' = 0 \), which is always one degree lower than the proposed equation, we can thence discover exactly both the number and the limits of the roots of this last. For let \( \alpha', \beta', \gamma', \ldots \) be substituted in the polynome \( f(x) \), and let the results be arranged in order, viz.

\[ f(\alpha'), f(\beta'), f(\gamma'), f(\delta'), \ldots \]

if these quantities are alternately negative and positive; the first, third, fifth, &c., which are all minima, having the sign minus; and the second, fourth, &c., which are all maxima, having the sign plus; then the proposed equation \( f(x) = 0 \) will have just one root more than the equation \( X' = 0 \). When some of the conditions fail, the roots of the proposed equation will fall short of the number specified. If one maximum have the sign minus, or one minimum the sign plus, two roots will be wanting in the proposed equation; and in general as many roots will disappear, as there are consecutive minima and maxima that have the same sign, deducting one; unless the minima and maxima precede the greatest root, or come after the least root, in which cases there will be as many roots wanting as there are minima and maxima that have the same sign.
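For a cubic the procedure can be carried through completely, since \( X' = 0 \) is a quadratic. A sketch (the example and names are ours), for the equation \( x^3 - 6x^2 + 11x - 6 = 0 \), whose roots are 1, 2, 3:

```python
import math

def f(x):
    # f(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
    return x**3 - 6*x**2 + 11*x - 6

# X' = 3x^2 - 12x + 11 = 0 gives the limits alpha', beta'
disc = math.sqrt(12**2 - 4 * 3 * 11)
alpha_p = (12 + disc) / 6    # the greater root: f attains a minimum here
beta_p = (12 - disc) / 6     # the lesser root: f attains a maximum here

# The minimum is negative and the maximum positive, alternately;
# hence the cubic has one root more than X' = 0, i.e. three real roots.
print(f(alpha_p) < 0, f(beta_p) > 0)  # True True
```

Had the minimum \( f(\alpha') \) come out positive, or the maximum \( f(\beta') \) negative, two real roots would have been wanting.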

Since the series of functions, \( X, X', X'', \ldots \), are derived similarly from one another, we may prove, as has been done with respect to the two first, that the roots of any one are contained between the roots of that which follows it. Hence, if the given equation have as many roots as dimensions, every equation in the series will likewise have as many roots as dimensions; and if there be roots wanting in any one, there will be at least as many wanting in every equation preceding it in the series.

The connected equations necessarily terminate in one of the first degree, which gives a limit between the two roots of the quadratic immediately before it; in like manner, the roots of the quadratic are the limits of the roots of the cubic preceding it; and in this manner, by going through all the successive equations, we shall finally arrive at the limits of the roots of the proposed equation. This process has been called la Methode des Cascades; but the length of the calculations renders it useless in practice.

The procedure explained above would enable us to find the number of roots in an equation of any order, if we were in possession of rules for solving equations of the inferior degrees. For want of such rules, the practical advantage that can be derived from it is very limited. Mathematicians have therefore turned their attention to determine the point in question in a way that should not require the resolution of equations. They have sought to investigate rational functions of the co-efficients, which, by means of the signs they are affected with in every particular case, might indicate the number of roots the equation possesses. Of this nature is the method which De Gua has given in the Memoires de Paris, 1741, for finding the conditions necessary in order that an equation have as many roots as dimensions. By a process analogous to that of De Gua, M. Cauchy, in an excellent Memoir, published in the sixteenth volume of the Journal de l'Ecole Polytechnique, has shown not only that the total number of the roots may in every case be discovered, but likewise that the numbers of the positive and negative roots may be separately ascertained. The principles of both these methods are to be found in the theory explained above; but as many considerations of some intricacy are involved in them, a particular account of them would exceed the limits of this article.

In what goes before, we have supposed that all the roots of the equation $X = 0$ are unequal; and in order to complete the theory, it remains to notice the consequences that follow when the case is otherwise. Suppose, then, that $X' = (x - \lambda)^i \times Q$: And, in the first place, if $\lambda$ be a root of the equation $f(x) = 0$, there will in reality be no exception to the general conclusion; because in this case it is known that the polynome $f(x)$ will be divisible by $(x - \lambda)^{i+1}$. (Sect. 6.) Now, the case just mentioned being set aside, if $i$ be an even number, the polynome $X'$, or $(x - \lambda)^i \cdot Q$, will be equal to zero when $x = \lambda$; but it will not change its sign when $x$, from being less, comes to be greater than $\lambda$. Hence the polynome $f(x)$ will neither attain a maximum nor a minimum value at the same limit; and it will have no root, either between $\lambda$ and the next greater root of the equation $X = 0$, or between $\lambda$ and the next less root of the same equation. It appears, therefore, that when $i$ is even, the number of the roots of the equation $f(x) = 0$, and their limits, will depend entirely upon the equation $Q = 0$. Again, when $i$ is an odd number, the polynome $X'$ will be equal to zero when $x = \lambda$, and it will likewise change its sign when $x$ is taken on contrary sides of that limit: Consequently, when $x = \lambda$, the polynome $f(x)$ will be a maximum or a minimum; and the nature of its roots will depend upon the equation $(x - \lambda) \cdot Q = 0$. It is evident that we may extend the same conclusions to any two adjacent equations in the series,

$$X = 0, X' = 0, X'' = 0, X''' = 0, \text{&c.}$$

provided the one which stands lower in the series is reducible to the form $(x - \lambda)^i \cdot Q$; and that $x - \lambda$ is not a common divisor of both. We may likewise draw this general inference from the principles that have been explained, viz. "If, in the series of connected equations, any one be found which is divisible by $(x - \lambda)^{2i}$, or $(x - \lambda)^{2i+1}$, at the same time that $x - \lambda$ is not a divisor of the equation immediately preceding, there will be at least $2i$ roots wanting in this last equation, and in all that stand before it in the series."

The following not inelegant proposition is a consequence of what has just been proved: "The number of the roots of an equation of $n$ dimensions, in which $2i$ or $2i + 1$, consecutive terms, are wanting, cannot be greater than $n - 2i$."

Let the equation be represented by

$$P + Q = 0;$$

supposing that $2i$, or $2i + 1$ terms, are wanting between $P$ and $Q$. Therefore, if the first term of $Q$ contain $x^m$, the last term of $P$ will contain $x^{m+2i+1}$, or $x^{m+2i+2}$.

Now, in the series of equations, we shall at length arrive at one from which all the quantities of $Q$ are exterminated; which equation, if we use the notation of the differential calculus, is equivalent to

$$\frac{d^{m+1}P}{dx^{m+1}} = 0;$$

and it is divisible by $x^{2i}$, or $x^{2i+1}$. And as the one immediately preceding it in the series, viz.

$$\frac{d^{m}P}{dx^{m}} + \frac{d^{m}Q}{dx^{m}} = 0,$$

is not divisible by $x$, it follows, from what has been shown, that there will be at least $2i$ roots wanting in this last equation, and in all those that stand before it; consequently the proposed equation cannot have more than $n - 2i$ roots.

From this we learn that it is not always possible, at least by any operations with real quantities, to transform an equation into another in which any proposed number of the intermediate terms shall be wanting. For the terms to be taken away may be such that the transformed equation could not have the same number of real roots as the one given; but it is impossible, without introducing imaginary quantities, to transform an equation with a certain number of real roots into another with a different number of such roots.

9. In what goes before, we have sought for the roots and binomial divisors in the nature of the polynome. We are now to take an inverted view of the subject, and to consider a rational polynome as produced by the continued multiplication of as many binomial factors as it has dimensions; from which source there arises an interesting set of properties.

If we take the words root and binomial factor strictly in the sense in which we have hitherto used them, and as denoting real quantities only, nothing is more certain than that all polynomes cannot be generated by binomial factors. But it will afterwards be proved that every rational polynome can be completely exhausted by binomial and trinomial divisors; and if we admit the resolution of every trinomial divisor into two imaginary factors, we shall arrive, with all the rigour of which the investigation is capable, at the genesis of equations supposed by Harriot, which represents them as entirely composed of binomial factors, possible or impossible. Besides, in extending to all equations the conclusions obtained from the manner of generating them, it may be observed that the properties so obtained, being ultimately expressed in functions of the co-efficients from which the roots and generating factors have disappeared, are in a manner independent of the method of investigation. Such is the structure of the language of algebra, that the conclusions to which it leads, although deduced by reasoning from a hypothesis not strictly general, are nevertheless true in all cases, when they are finally disengaged from what is peculiar in the analysis.

Suppose a polynome, as

$$x^n - A^{(1)}x^{n-1} + A^{(2)}x^{n-2} - \ldots \mp A^{(n-1)}x \pm A^{(n)},$$

which is produced by the multiplication of the $n$ factors,

$$(x - \alpha)(x - \beta)(x - \gamma)(x - \delta), \text{&c.}$$

then, by actually multiplying the factors, and equating the like terms of the equivalent expressions, we shall get

$$A^{(1)} = \alpha + \beta + \gamma + \delta + \text{&c.}$$

$$A^{(2)} = \alpha\beta + \alpha\gamma + \alpha\delta + \beta\gamma + \beta\delta + \text{&c.}$$

$$A^{(3)} = \alpha\beta\gamma + \alpha\beta\delta + \alpha\gamma\delta + \beta\gamma\delta + \text{&c.}$$

$$A^{(4)} = \alpha\beta\gamma\delta + \text{&c.}$$

&c.

Hence it appears that the co-efficient of the second term of the polynome, or $-A^{(1)}$, is equal to the sum of all the roots with their signs changed; the co-efficient of the third term, or $+A^{(2)}$, to the sum of all the products of every two roots; the co-efficient of the fourth term, or $-A^{(3)}$, to the sum of all the products of every three roots with their signs changed, and so on, the signs of the roots being always changed in the products of an odd number; and finally, the last term is the product of all the roots with their signs changed or not, according as their number is odd or even.
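These relations between the roots and the co-efficients are readily verified; the following sketch (the helper name is ours) forms $A^{(1)}, A^{(2)}, \ldots, A^{(n)}$ directly as sums of products of the roots:

```python
from itertools import combinations
from math import prod

def elementary_symmetric(roots):
    """A^(1), ..., A^(n): the sums of the products of the roots
    taken 1, 2, ..., n at a time."""
    n = len(roots)
    return [sum(prod(c) for c in combinations(roots, k))
            for k in range(1, n + 1)]

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
print(elementary_symmetric([1, 2, 3]))  # [6, 11, 6]
```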

It is evident that the ultimate product of the binomial factors will always be the same, in whatever order they are multiplied; and hence the co-efficients of the polynome will consist of the same products, however the roots be interchanged among one another. Expressions of the kind just mentioned, which have constantly the same value, whatever change is made in the order of the quantities they contain, are called invariable, or symmetrical, functions. The co-efficients of an equation are the most simple symmetrical functions of the roots, from which it may be required, on the one hand to deduce all other functions of the like kind, and on the other to go back to the roots themselves. Most inquiries relating to equations are connected with one or other of these two problems; of which the first, like most direct methods, is attended with little difficulty, and has been completely solved; while the other, past equations of the fourth degree, has eluded all the attempts of algebraists.

After the co-efficients of the polynome, the next most simple symmetrical functions of the roots are the sums of the squares, cubes, &c. In the universal arithmetic of Sir Isaac Newton, a very elegant rule is given for computing the sum of any proposed powers of the roots; and as this rule is a fundamental point in the theory of equations, we subjoin an elementary investigation of it.

Of the binomial factors before set down, let the first $x-\alpha$ be left out, and having multiplied the rest together, let the product be

$$x^{n-1} - \varphi^{(1)}x^{n-2} + \varphi^{(2)}x^{n-3} - \varphi^{(3)}x^{n-4} + \ldots,$$

in which expression $\varphi^{(1)}$ is the sum of all the roots $\beta, \gamma, \delta$, &c. except the first $\alpha$; $\varphi^{(2)}$ is the sum of the products of every two of them, and so on. Now, multiply by $x-\alpha$, and the product will be equivalent to the given polynome; hence we get

$$A^{(1)} = \alpha + \varphi^{(1)},$$ $$A^{(2)} = \alpha\varphi^{(1)} + \varphi^{(2)},$$ $$A^{(3)} = \alpha\varphi^{(2)} + \varphi^{(3)},$$ $$\vdots$$ $$A^{(r)} = \alpha\varphi^{(r-1)} + \varphi^{(r)}.$$

Again, multiply these formulae in order by $\alpha^{r-1}$, $\alpha^{r-2}$, $\alpha^{r-3}$, &c.; then

$$\alpha^r = \alpha^r,$$ $$A^{(1)}\alpha^{r-1} = \alpha^r + \alpha^{r-1}\varphi^{(1)},$$ $$A^{(2)}\alpha^{r-2} = \alpha^{r-1}\varphi^{(1)} + \alpha^{r-2}\varphi^{(2)},$$ $$\vdots$$ $$A^{(r-1)}\alpha = \alpha^2\varphi^{(r-2)} + \alpha\varphi^{(r-1)},$$ $$A^{(r)} = \alpha\varphi^{(r-1)} + \varphi^{(r)};$$

and, by adding and subtracting alternately, we get

$$\alpha^r - A^{(1)}\alpha^{r-1} + A^{(2)}\alpha^{r-2} - \ldots \mp A^{(r-1)}\alpha \pm A^{(r)} = \pm\varphi^{(r)},$$

in which expression $\varphi^{(r)}$ is the sum of all the products of $r$ dimensions of the roots $\beta, \gamma, \delta$, &c. leaving out the first $\alpha$.

In like manner, if we leave out the factor $x-\beta$, and multiply all the rest, and proceed as before, we shall get

$$\beta^r - A^{(1)}\beta^{r-1} + A^{(2)}\beta^{r-2} - \ldots \mp A^{(r-1)}\beta \pm A^{(r)} = \pm{\varphi'}^{(r)},$$

the symbol ${\varphi'}^{(r)}$ being the sum of the products of $r$ dimensions of all the roots, $\alpha, \gamma, \delta$, &c. except the second $\beta$.

And if we next leave out the factor $x-\gamma$, and follow a like procedure, we shall get

$$\gamma^r - A^{(1)}\gamma^{r-1} + A^{(2)}\gamma^{r-2} - \ldots \mp A^{(r-1)}\gamma \pm A^{(r)} = \pm{\varphi''}^{(r)};$$

where ${\varphi''}^{(r)}$ represents the sum of the products of $r$ dimensions of all the roots $\alpha, \beta, \delta$, &c. except the third $\gamma$.

If we proceed similarly till every one of the $n$ factors is left out in its turn, and then add all the results, we shall get

$$S_r - A^{(1)}S_{r-1} + A^{(2)}S_{r-2} - \ldots \mp A^{(r-1)}S_1 \pm nA^{(r)} = \pm\left(\varphi^{(r)} + {\varphi'}^{(r)} + {\varphi''}^{(r)} + \ldots\right),$$

in which expression $S_r$ is written for the sum of the $r$ powers of the roots; $S_{r-1}$ for the sum of the $(r-1)$ powers, and so on.

Every product in any one of the aggregate quantities $\varphi^{(r)}, {\varphi'}^{(r)}, {\varphi''}^{(r)}$, &c. is found in $A^{(r)}$, which is the sum of the products of $r$ dimensions of all the roots; and hence it is easy to perceive that the sum of all the aggregates must be a multiple of $A^{(r)}$. Take any product in $A^{(r)}$; then that product will not be contained in $r$ of the quantities $\varphi^{(r)}, {\varphi'}^{(r)}, {\varphi''}^{(r)}$, &c.; because, in so many of them, one or other of the letters of the product will be wanting; but the same product will be contained once in every one of the $n-r$ remaining quantities, because in every one of these all the letters of the product will be contained.

Every product in $A^{(r)}$ is therefore repeated $n-r$ times in the sum of the quantities $\varphi^{(r)}, {\varphi'}^{(r)}, {\varphi''}^{(r)}$, &c.; consequently,

$$\varphi^{(r)} + {\varphi'}^{(r)} + {\varphi''}^{(r)} + \ldots = (n-r)A^{(r)}.$$

Substitute this value in the formula obtained above, and after transposing and cancelling $nA^{(r)}$, which appears with contrary signs, we shall get

$$S_r - A^{(1)}S_{r-1} + A^{(2)}S_{r-2} - \ldots \mp A^{(r-1)}S_1 \pm rA^{(r)} = 0.$$

This is the rule of Sir Isaac Newton, and contains all his particular formulae, as will readily appear by putting 1, 2, 3, &c. successively for $r$.
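In modern form the rule yields a recurrence by which the sums $S_1, S_2, S_3, \ldots$ are computed one after another from the co-efficients; the sketch below (the names are ours) follows the sign convention of the polynome $x^n - A^{(1)}x^{n-1} + A^{(2)}x^{n-2} - \ldots$:

```python
def power_sums(A, r_max):
    """Compute S_1, ..., S_r_max from A = [A^(1), ..., A^(n)] by
    S_r - A^(1) S_(r-1) + A^(2) S_(r-2) - ... -/+ A^(r-1) S_1 +/- r A^(r) = 0,
    co-efficients after A^(n) being taken as nothing."""
    S = []
    for r in range(1, r_max + 1):
        s = 0
        for k in range(1, r):
            Ak = A[k - 1] if k <= len(A) else 0
            s += (-1) ** (k + 1) * Ak * S[r - k - 1]
        Ar = A[r - 1] if r <= len(A) else 0
        s += (-1) ** (r + 1) * r * Ar
        S.append(s)
    return S

# x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3:
# S_1 = 1+2+3, S_2 = 1+4+9, S_3 = 1+8+27, S_4 = 1+16+81
print(power_sums([6, 11, 6], 4))  # [6, 14, 36, 98]
```

The fourth sum illustrates the remark below: $r = 4$ exceeds the dimensions, and $A^{(4)}$ is simply taken as nothing.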

The preceding formula will enable us to compute, in succession, the sums of all the positive powers of the roots, both when $r$ is less and when it is greater than the dimensions of the equation. But, in applying the formula in the latter case, we must observe that all the co-efficients of the polynome after $A^{(n)}$ are wanting, or equal to nothing.

If, in the first step of the preceding investigation, we take the co-efficients that follow $A^{(r)}$, we shall get

$$A^{(r+1)} = \alpha\varphi^{(r)} + \varphi^{(r+1)},$$ $$A^{(r+2)} = \alpha\varphi^{(r+1)} + \varphi^{(r+2)},$$ $$\vdots$$ $$A^{(n-1)} = \alpha\varphi^{(n-2)} + \varphi^{(n-1)},$$ $$A^{(n)} = \alpha\varphi^{(n-1)}.$$

And, by first dividing by \( a, a^2, a^3, \) &c., in order, and then subtracting and adding alternately, we shall obtain

\[ \frac{A^{(r+1)}}{a} - \frac{A^{(r+2)}}{a^2} + \frac{A^{(r+3)}}{a^3}, \ldots, \text{&c.} = \varphi^{(r)}. \]

In a similar manner we get

\[ \frac{A^{(r+1)}}{\beta} - \frac{A^{(r+2)}}{\beta^2} + \frac{A^{(r+3)}}{\beta^3} - \ldots, \text{&c.} = \varphi_\beta(r). \]

Therefore, by adding all these formulae, and substituting for the sum of \( \varphi_\alpha(r), \varphi_\beta(r), \) &c., the value of it already found, we shall finally obtain

\[ A^{(r+1)} S_1 - A^{(r+2)} S_2 + A^{(r+3)} S_3 - \ldots, \text{&c.} = (n-r) A^{(r)}, \]

the symbols \( S_1, S_2, \ldots \) being put for the sums of the negative powers of the roots according to the indices underwritten. This formula will enable us to compute the sums of the negative powers of the roots.

If, in the formula for the sums of the positive powers of the roots, we make \( r \) successively equal to 1, 2, 3, &c., we shall get

\[ A^{(1)} = S_1, \] \[ -2A^{(2)} = -A^{(1)} S_1 + S_2, \] \[ 3A^{(3)} = A^{(2)} S_1 - A^{(1)} S_2 + S_3, \] \[ -4A^{(4)} = -A^{(3)} S_1 + A^{(2)} S_2 - A^{(1)} S_3 + S_4, \ldots \]

and from this we learn that the quantities \( S_1, S_2, S_3, \ldots \) may be found by means of this expression, viz.

\[ \frac{A^{(1)} - 2A^{(2)}z + 3A^{(3)}z^2 - \ldots}{1 - A^{(1)}z + A^{(2)}z^2 - \ldots \pm A^{(n)}z^n} = S_1 + S_2 z + S_3 z^2 + \ldots, \text{&c.}; \]

for if we multiply the series on the right-hand side of the sign of equality, by the denominator of the fraction on the other side, and then equate the co-efficients of the product to the like co-efficients of the numerator, we shall obtain the very formulae set down above. Hence the sums of the powers of the roots expressed in terms of the co-efficients of the polynomial will be found by developing the fraction in a series. In effecting the development different analytical methods may be followed; and the quantities sought will thus be obtained by different rules, or exhibited in expressions of different forms, such as those given by Waring, Vandermonde, Euler, and La Grange.
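The development spoken of may be imitated by plain long division of power series. The sketch below is our own (the name and the convention `e[k]` for $A^{(k)}$ are assumptions): it divides the numerator of the fraction by the denominator, term by term, and the successive co-efficients of the quotient are the sums $S_1, S_2, S_3, \ldots$; with integer co-efficients the arithmetic is exact throughout.

```python
def power_sums_series(e, m):
    """Develop (A1 - 2 A2 z + 3 A3 z^2 - ...) / (1 - A1 z + A2 z^2 - ...)
    as S1 + S2 z + S3 z^2 + ... by series long division, keeping m terms.
    e[k] = A^(k), taken as 0 for k beyond the degree."""
    n = len(e) - 1
    A = lambda k: e[k] if 1 <= k <= n else 0
    num = [(-1) ** (k - 1) * k * A(k) for k in range(1, m + 1)]
    den = [1] + [(-1) ** k * A(k) for k in range(1, m + 1)]
    S = []
    for i in range(m):
        # quotient term: num[i] minus the contributions of earlier terms
        c = num[i] - sum(den[j] * S[i - j] for j in range(1, i + 1))
        S.append(c)
    return S
```

Applied to $x^3 - 6x^2 + 11x - 6$ it reproduces the same sums 6, 14, 36, 98 found by Newton's rule, as the text asserts it must.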

And in like manner, if, in the formula for the sums of the negative powers of the roots, we make \( r \) successively equal to \( n-1, n-2, n-3, \ldots \), we shall get

\[ A^{(n-1)} = A^{(n)} S_1, \] \[ -2A^{(n-2)} = -A^{(n-1)} S_1 + A^{(n)} S_2, \] \[ 3A^{(n-3)} = A^{(n-2)} S_1 - A^{(n-1)} S_2 + A^{(n)} S_3, \] \[ -4A^{(n-4)} = -A^{(n-3)} S_1 + A^{(n-2)} S_2 - A^{(n-1)} S_3 + A^{(n)} S_4, \ldots \]

from which it appears that the values of all the quantities \( S_1, S_2, S_3, \ldots \) will be obtained by means of this expression, viz.

\[ \frac{A^{(n-1)} - 2A^{(n-2)}z + 3A^{(n-3)}z^2 - \ldots}{A^{(n)} - A^{(n-1)}z + A^{(n-2)}z^2 - \ldots \pm z^n} = S_1 + S_2 z + S_3 z^2 + \ldots, \text{&c.} \]

Two kinds of quantities only can enter into any rational and symmetrical function of the roots of an equation; and these are the sums of the like powers of the roots, and the sums of such products as \( a^i b^j c^k \), &c., which arise from multiplying different powers of the roots, two and two, three and three, &c. We shall now shortly point out in what manner the latter sums are deduced from the sums of the like powers, for the computation of which rules have already been given; by which means we shall be enabled to find the value of any proposed function of the kind above mentioned.

Let it be required to find the sum of all the products, such as \( a^i b^j \), that arise from combining two powers of the roots in all possible ways; which sum may be denoted by the symbol \( \Sigma a^i b^j \). Now it is evident that the product, \( S_i \times S_j \), will contain two sorts of terms only, namely, powers of single roots, such as \( a^{i+j} \), and the products of which the sum is sought; therefore

\[ \Sigma a^i b^j = S_i \times S_j - S_{i+j}. \]

Next let it be required to find \( \Sigma a^i b^j c^k \), or the sum of all the products of three powers of the roots. Now \( \Sigma a^i b^j \times S_k \) will contain three sorts of terms, namely, products, such as \( a^{i+k} b^j \) and \( a^i b^{j+k} \), in which two roots only are combined, and the products of which the sum is required; therefore

\[ \Sigma a^i b^j c^k = \Sigma a^i b^j \times S_k \]

\[ - \Sigma a^{i+k} b^j \]

\[ - \Sigma a^i b^{j+k} : \]

but, according to the last case,

\[ \Sigma a^i b^j c^k = S_i \times S_j \times S_k \]

\[ - S_{i+j} \times S_k \]

\[ - S_{i+k} \times S_j \]

\[ - S_{j+k} \times S_i \]

\[ + 2S_{i+j+k}. \]

In like manner, when four different powers of the roots are multiplied together, we get

\[ \Sigma a^i b^j c^k d^l = \Sigma a^i b^j c^k \times S_l \]

\[ - \Sigma a^{i+l} b^j c^k \]

\[ - \Sigma a^i b^{j+l} c^k \]

\[ - \Sigma a^i b^j c^{k+l} ; \]

and we have only to apply the preceding case in order to obtain the expression of the quantity sought in terms of the sums of the like powers of the roots.
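The reductions just described are easily checked numerically. In the sketch below (the names are illustrative choices of our own), `sigma3` forms the sum $\Sigma a^i b^j c^k$ directly, over ordered triples of distinct roots, and the five-term expression in the sums of like powers is computed beside it for comparison.

```python
from itertools import permutations

def S(roots, k):
    # sum of the k-th powers of the roots
    return sum(x ** k for x in roots)

def sigma3(roots, i, j, k):
    # direct sum of a^i b^j c^k over ordered triples of distinct roots
    return sum(a**i * b**j * c**k for a, b, c in permutations(roots, 3))

roots = [2, 3, 5, 7]
i, j, k = 1, 2, 3
direct = sigma3(roots, i, j, k)
formula = (S(roots, i) * S(roots, j) * S(roots, k)
           - S(roots, i + j) * S(roots, k)
           - S(roots, i + k) * S(roots, j)
           - S(roots, j + k) * S(roots, i)
           + 2 * S(roots, i + j + k))
```

The two quantities agree, as does the simpler two-power case $\Sigma a^i b^j = S_i S_j - S_{i+j}$ when checked the same way.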

According to the procedure just explained, the case where any number of powers are multiplied together, is reduced to the simpler case where the powers multiplied are one less. There would be no great difficulty in deducing a general formula for the sum when the products contain any proposed number of different powers; but this would lead to calculations incompatible with the length of this article; and it may be doubted whether the use of such a formula is preferable, in any cases likely to occur in practice, to the application of the principles here laid down.

The theory of symmetrical functions is of the most extensive use in every branch of the doctrine of equations. Thus, if it be required to form an equation, the roots of which shall be any combinations of the roots of a given equation; it is manifest that the co-efficients of the equation sought will be symmetrical functions of the roots of the given equation; and hence they may be found by calculating these functions in terms of the co-efficients of the given equation.

The theory of symmetrical functions is also of use in approximating to the roots of numerical equations. Sir Isaac Newton seems to have had this application in view in giving his rule for computing the sums of the like powers of the roots. He observes that the powers of a great number increase in a much higher ratio than the same powers of less numbers; and hence the $2r$th power of the greatest root of an equation will approach nearer to the sum of the $2r$th powers of all the roots as $r$ is greater. Therefore, neglecting the distinction between positive and negative roots, if we calculate $S_{2r}$ and then extract its $2r$th root, we shall have an approximation to the root of the equation greatest in point of magnitude; and the approximation will be so much more accurate as $r$ is greater.

But there is a more convenient way of approximating to the greatest and least roots of an equation, by means of symmetrical functions. For, since

$$S_{r+1} = \alpha^{r+1} + \beta^{r+1} + \ldots,$$

$$S_r = \alpha^r + \beta^r + \ldots,$$

we have

$$\frac{S_{r+1}}{S_r} = \frac{\alpha^{r+1} + \beta^{r+1} + \ldots}{\alpha^r + \beta^r + \ldots} = \alpha \times \frac{1 + \left(\frac{\beta}{\alpha}\right)^{r+1} + \ldots}{1 + \left(\frac{\beta}{\alpha}\right)^{r} + \ldots}.$$

Now, $\alpha$ being the greatest root, the fraction on the right-hand side will approach to unity when $r$ is sufficiently large, in which case $\frac{S_{r+1}}{S_r}$ will be nearly equal to $\alpha$.

Hence, if we compute a series of consecutive sums, viz., $S_r, S_{r+1}, S_{r+2}, \ldots$, the values

$$\frac{S_{r+1}}{S_r}, \frac{S_{r+2}}{S_{r+1}}, \frac{S_{r+3}}{S_{r+2}}, \ldots,$$

will approach nearer and nearer to the greatest root of the equation.

In like manner, if we take the sums of the negative powers of the roots, we shall have

$$\frac{S_{-r}}{S_{-r-1}} = \alpha \times \frac{1 + \left(\frac{\alpha}{\beta}\right)^{r} + \ldots}{1 + \left(\frac{\alpha}{\beta}\right)^{r+1} + \ldots},$$

from which it appears that $\frac{S_{-r}}{S_{-r-1}}$ will approximate so much more to $\alpha$, the least root of the equation, as $r$ is greater.
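Both approximations may be carried out without ever extracting a root, the sums being supplied by Newton's rule from the co-efficients alone; the sums of the negative powers would serve the least root in the same manner. A sketch, with names of our own choosing, for the greatest root only:

```python
def ratio_approx(e, steps):
    """Approximate the numerically greatest root of
    x^n - e[1] x^(n-1) + e[2] x^(n-2) - ... = 0 by the quotient
    S_{r+1}/S_r, the power sums S_r coming from Newton's rule."""
    n = len(e) - 1
    A = lambda k: e[k] if 1 <= k <= n else 0
    S = [0]
    for r in range(1, steps + 2):
        s = sum((-1) ** (j - 1) * A(j) * S[r - j] for j in range(1, r))
        s += (-1) ** (r - 1) * r * A(r)
        S.append(s)
    return S[steps + 1] / S[steps]
```

For $x^2 - 5x + 6 = 0$, whose roots are 2 and 3, the quotients tend to 3, and do so the more nearly as more steps are taken, exactly as the text describes.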

**Trinomial Divisors.**

10. We proceed next to consider the trinomial divisors of a given polynome; and, in order to avoid reference to other treatises, we shall begin with a short investigation of a preliminary point.

We have this identical expression,

$$x^2 - y^2 = (x+y) \cdot (x-y);$$

consequently,

$$(x^2 - y^2)^n = (x+y)^n \cdot (x-y)^n;$$

and again,

$$(x^2 - y^2)^n = \frac{1}{4} \left\{ (x+y)^n + (x-y)^n \right\}^2$$

$$- \frac{1}{4} \left\{ (x+y)^n - (x-y)^n \right\}^2.$$

Now, using the letters $H$ and $G$ as the characteristics of the particular functions under consideration, let

$$H_n(x, y^2) = \frac{1}{2} \left\{ (x+y)^n + (x-y)^n \right\};$$

$$G_n(x, y^2) = \frac{1}{2y} \left\{ (x+y)^n - (x-y)^n \right\};$$

or by expanding the binomial quantities in series,

$$H_n(x, y^2) = x^n + n \cdot \frac{n-1}{2} \cdot x^{n-2} y^2 + \ldots,$$

$$G_n(x, y^2) = nx^{n-1} + n \cdot \frac{n-1}{2} \cdot \frac{n-2}{3} \cdot x^{n-3} y^2 + \ldots,$$

then, by means of these notations, the preceding expression will be thus written, viz.

$$(x^2 - y^2)^n = \left\{ H_n(x, y^2) \right\}^2 - y^2 \cdot \left\{ G_n(x, y^2) \right\}^2.$$

This equation is identical; that is, when the expressions on both sides of the sign of equality are expanded in series of terms containing the powers of $y^2$, they will consist of the same quantities with the same signs. It is evident, therefore, that the equation will still be identical if we change $y^2$ into $-y^2$; for by this change the simple quantities of the developed expressions will not be affected, and no alteration will be produced, except in the signs of the odd powers of $y^2$, which will now be contrary to what they were before. We therefore have

$$(x^2 + y^2)^n = \left\{ H_n(x, -y^2) \right\}^2 + y^2 \cdot \left\{ G_n(x, -y^2) \right\}^2;$$

in which equation it is to be observed that the functional expressions are not, as in the former instance, susceptible of an abridged algebraic notation, at least without introducing a new sign; but they can be exhibited in series, viz.,

$$H_n(x, -y^2) = x^n - n \cdot \frac{n-1}{2} \cdot x^{n-2} y^2 + n \cdot \frac{n-1}{2} \cdot \frac{n-2}{3} \cdot \frac{n-3}{4} \cdot x^{n-4} y^4 - \ldots,$$

$$G_n(x, -y^2) = nx^{n-1} - n \cdot \frac{n-1}{2} \cdot \frac{n-2}{3} \cdot x^{n-3} y^2 + \ldots.$$

Now put $x = r \cos \varphi$, $y = r \sin \varphi$, $x^2 + y^2 = r^2$; and let $\varphi(n)$ denote an arc, depending, in a certain manner, not yet discovered, upon the arc $\varphi$ and the index $n$; then, in consequence of the equation obtained above, we shall have

$$r^n \cos \varphi(n) = H_n(x, -y^2),$$

$$r^n \sin \varphi(n) = y\,G_n(x, -y^2).$$

Again, multiply both sides of the same equation last referred to by $x^2 + y^2$; then

\[(x^2 + y^2)^{n+1} = \left\{ x \cdot H_n(x, -y^2) - y^2 G_n(x, -y^2) \right\}^2 + y^2 \left\{ H_n(x, -y^2) + x G_n(x, -y^2) \right\}^2;\]

but, since the equation alluded to is general for all the values of \(n\), we may write \(n + 1\) for \(n\); and thus we get

\[(x^2 + y^2)^{n+1} = \left\{ H_{n+1}(x, -y^2) \right\}^2 + y^2 \left\{ G_{n+1}(x, -y^2) \right\}^2;\]

therefore, by comparing the two values of \((x^2 + y^2)^{n+1}\)

\[H_{n+1}(x, -y^2) = x \cdot H_n(x, -y^2) - y^2 G_n(x, -y^2),\] \[G_{n+1}(x, -y^2) = H_n(x, -y^2) + x G_n(x, -y^2);\]

and finally, by substituting the values of the functions in terms of the arcs, \(\varphi(n)\), \(\varphi(n+1)\), we shall obtain

\[\cos \varphi(n+1) = \cos \varphi(n) \cos \varphi - \sin \varphi(n) \sin \varphi = \cos (\varphi(n) + \varphi),\] \[\sin \varphi(n+1) = \sin \varphi(n) \cos \varphi + \cos \varphi(n) \sin \varphi = \sin (\varphi(n) + \varphi),\] \[\varphi(n+1) = \varphi(n) + \varphi.\]

Now, if we make \(n\) successively equal to 1, 2, 3, &c. the results will be,

\[\varphi(2) = 2\varphi,\] \[\varphi(3) = 3\varphi,\] \[\text{&c.}\]

and generally, \(\varphi(n) = n\varphi\).

Thus it appears that

\[r^n \cos n\varphi = H_n(x, -y^2),\] \[r^{n-1} \times \frac{\sin n\varphi}{\sin \varphi} = G_n(x, -y^2);\]

or, if we take the expanded expressions of the functions,

\[r^n \cos n\varphi = x^n - n \cdot \frac{n-1}{2} \cdot x^{n-2} y^2 + n \cdot \frac{n-1}{2} \cdot \frac{n-2}{3} \cdot \frac{n-3}{4} \cdot x^{n-4} y^4 - \text{&c.},\] \[r^{n-1} \times \frac{\sin n\varphi}{\sin \varphi} = nx^{n-1} - n \cdot \frac{n-1}{2} \cdot \frac{n-2}{3} \cdot x^{n-3} y^2 + \text{&c.},\]

in which formulae, \(x = r \cos \varphi, y = r \sin \varphi\).
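These expansions are, in modern terms, the alternating binomial sums $\sum_j (-1)^j \binom{n}{2j} x^{n-2j} y^{2j}$ and $\sum_j (-1)^j \binom{n}{2j+1} x^{n-2j-1} y^{2j}$, and they may be verified numerically, as in the following sketch; the function names and the test values are arbitrary choices of ours.

```python
from math import comb, cos, sin

def H(n, x, y):
    # even binomial terms: equals r^n cos(n phi) when x = r cos(phi), y = r sin(phi)
    return sum((-1) ** j * comb(n, 2 * j) * x ** (n - 2 * j) * y ** (2 * j)
               for j in range(n // 2 + 1))

def G(n, x, y):
    # odd binomial terms divided by y: equals r^(n-1) sin(n phi)/sin(phi)
    return sum((-1) ** j * comb(n, 2 * j + 1) * x ** (n - 2 * j - 1) * y ** (2 * j)
               for j in range((n + 1) // 2))

# check at r = 2, phi = 0.7, n = 5 (arbitrary values)
r, phi, n = 2.0, 0.7, 5
x, y = r * cos(phi), r * sin(phi)
cos_side = H(n, x, y)       # should equal r^n cos(n phi)
sin_side = y * G(n, x, y)   # should equal r^n sin(n phi)
```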

The functions here designated by the letters \(H\) and \(G\) may be expressed by means of the imaginary sign; for we have

\[H_n(x, -y^2) = \frac{(x + y\sqrt{-1})^n + (x - y\sqrt{-1})^n}{2},\] \[G_n(x, -y^2) = \frac{(x + y\sqrt{-1})^n - (x - y\sqrt{-1})^n}{2y\sqrt{-1}};\]

And, in the case of \(r = 1\), the formulae obtained above are equivalent to the expressions known in analysis since the time of De Moivre, viz.

\[\cos n\varphi = \frac{(\cos \varphi + \sin \varphi\sqrt{-1})^n + (\cos \varphi - \sin \varphi\sqrt{-1})^n}{2},\] \[\sin n\varphi = \frac{(\cos \varphi + \sin \varphi\sqrt{-1})^n - (\cos \varphi - \sin \varphi\sqrt{-1})^n}{2\sqrt{-1}}.\]

But the mode of investigation we have followed is rigorous; and it has the advantage of leading to the true import of the imaginary sign, and of putting in a clear light its real effect in analytical operations. The real use of this sign may be shortly described by saying that it performs for even and odd functions the same office that the negative sign does for ordinary functions; in other words, when, by means of the ordinary operations of analysis, it has been proved that an even or odd function of an indeterminate quantity is equal to zero, it is by means of the impossible sign that the same equation is extended to the case when the square of the indeterminate quantity is negative. Every function of the indeterminate quantity \(x\) may be thus represented, viz.

\[ F(x) = \varphi(x^2) + x \cdot \psi(x^2); \]

and the substitution of \(x\sqrt{-1}\) in place of \(x\), has no other effect than to change the preceding expression into the one following, viz.

\[ F(x\sqrt{-1}) = \varphi(-x^2) + x\sqrt{-1} \cdot \psi(-x^2); \]

and from this it is obvious, that the same operations which, in the one case, lead us to the equations \(\varphi(x^2) = 0\) and \(x \cdot \psi(x^2) = 0\), will, in the other, necessarily conduct us to the equations \(\varphi(-x^2) = 0\) and \(x \cdot \psi(-x^2) = 0\). It is to be observed, too, that the truth of the two latter equations is involved in that of the former. For the former equations cannot be generally true for all values of \(x^2\), unless they are identical, or consist of equal quantities with opposite signs that mutually destroy one another; in which case the latter equations will also be identical. The sign of impossibility, as it has been called, is therefore one as truly significant as any other in analysis. It has, indeed, no consistent meaning when we consider it as only affecting \(x\), or the indeterminate quantity to which it is joined; but it becomes perfectly intelligible when we contemplate the real changes produced by it in the functions of even and odd dimensions, in which its conclusions are always ultimately expressed. When the true import and real effect of the imaginary sign are clearly apprehended, the truth of its conclusion is no longer doubtful or mysterious, but follows as a necessary consequence of a fundamental principle of analytical language. Proceeding on this principle, we may even lay aside the imaginary character; and, in every particular case, with the assistance of a proper notation, arrive, by the ordinary operations, at the same conclusion to which it leads, as has been done in the preceding instance. It is to be observed further, that the imaginary arithmetic is not merely a short method of calculation convenient in practice, and that it may be dispensed with; it is strictly a necessary branch of analysis, without which, or some equivalent mode of investigation, that science would be extremely imperfect.
The equations \(\varphi(x^2) = 0\) and \(x \cdot \psi(x^2) = 0\), are unchangeable by any operations with the signs commonly received, by the use of which alone it is impossible to deduce, in a direct manner, the related equations \(\varphi(-x^2) = 0\) and \(x \cdot \psi(-x^2) = 0\); although the latter are equally true, of as frequent occurrence, and as extensive application, as the former. Without the impossible sign the operations of algebra would, therefore, be defective; since there are analytical truths that could not be investigated in a direct manner by means of the elementary signs usually admitted. It is to supply this defect that the imaginary arithmetic has been introduced, and has grown up to be an extensive branch of analysis; advancing at first by slow steps, because the true import of the character it employs, and the real effect of its operations, were neither clearly perceived nor fully understood. But having premised what is conducive to our present purpose, we proceed to the investigation of the trinomial divisors of rational functions.

11. Every polynomial of odd dimensions having at least one binomial factor, it may, by dividing by that factor, be reduced to another polynomial one degree lower. And hence, in this part of our subject, we may confine our attention to polynomials of even dimensions. We may also suppose that the even polynomials under consideration have no double, triple, &c. factors of any kind; since, in case any such are present, they can be found separately and removed by division.

Suppose, then, that \( f(x) \) represents any polynome of even dimensions; let \( z + u \) be substituted in place of \( x \); and, by using the notation of the differential calculus, the given polynome will be transformed into

\[ f(z) + \frac{df(z)}{dz} \cdot u + \frac{1}{2} \cdot \frac{d^2 f(z)}{dz^2} \cdot u^2 + \frac{1}{6} \cdot \frac{d^3 f(z)}{dz^3} \cdot u^3 + \ldots \]

Since \( f(x) \) is an even polynome, the equation \( \frac{df(x)}{dx} = 0 \) will be one of odd dimensions, having at least one real root.

Let \( z \) be the sole root of \( \frac{df(x)}{dx} = 0 \), when it has but one, and the greatest root when it has several; then, because \( \frac{df(z)}{dz} = 0 \), the transformed function will become

\[ f(z) + \frac{1}{2} \cdot \frac{d^2 f(z)}{dz^2} \cdot u^2 + \frac{1}{6} \cdot \frac{d^3 f(z)}{dz^3} \cdot u^3 + \ldots \]

It readily appears, from what was formerly proved (Sect. 8), that \( z \), the greatest root of \( \frac{df(x)}{dx} = 0 \), exceeds the greatest root of any of the equations, \( \frac{1}{2} \cdot \frac{d^2 f(x)}{dx^2} = 0 \),

\[ \frac{1}{6} \cdot \frac{d^3 f(x)}{dx^3} = 0, \ldots \]

and because, in any equation, the substitution of a value greater than the greatest root must give a positive result, all the quantities \( \frac{1}{2} \cdot \frac{d^2 f(z)}{dz^2}, \frac{1}{6} \cdot \frac{d^3 f(z)}{dz^3}, \ldots \)

will be positive. With regard to \( f(z) \) it may be either positive or negative, but not equal to zero; since this last case can happen only when the polynome has equal roots.

The original polynome will, therefore, assume this form, viz.

\[ \pm y + A(2)u^2 + A(3)u^3 + A(4)u^4 + \ldots + A(2n-1)u^{2n-1} + u^{2n}, \]

in which expression \( y, A(2), A(3), \ldots \) represent any positive quantities.

The most interesting proposition in the branch of the subject under consideration, is to prove that every polynome of even dimensions has a quadratic divisor, either of the form \( (u + a)^2 - r^2 \), which admits two real binomial factors, or of the form \( (u - a)^2 + r^2 \), which has two imaginary factors. By the preceding transformation this proposition is brought under two cases, according as \( y \) is affected with the sign minus or plus; the quadratic divisor being always of the form \( (u + a)^2 - r^2 \) in the first case, and always of the form \( (u - a)^2 + r^2 \) in the other case; a distinction that agrees with what was before proved, Sect. 8.

Now the first of these cases is attended with no difficulty. For two values of \( u \), one negative and one positive, may be found that will satisfy the equation, Sect. 4.

\[ y = A(2)u^2 + A(3)u^3 + A(4)u^4 + \ldots + u^{2n}. \]

Of these values, it is obvious that the negative one will be always greater than the positive one; and they may, therefore, be represented by \( -(r + a) \) and \( r - a \); wherefore, the polynome

\[ -y + A(2)u^2 + A(3)u^3 + A(4)u^4 + \ldots + u^{2n}, \]

will be divisible by each of the binomial factors,

\[ u + r + a, \]

\[ u - r - a; \]

and likewise by the quadratic factor,

\[ (u + a)^2 - r^2, \]

produced by their multiplication.

But the same mode of reasoning will not apply when \( y \) has the sign plus; in which case the demonstration must be deduced from other principles.

12. If we put

\[ \varphi(u) = A(2)u^2 + A(3)u^3 + A(4)u^4 + \ldots + u^{2n}, \]

the transformed polynome, supposing \( y \) to have the sign plus, will become

\[ y + \varphi(u). \]

Let \( (u - a)^2 + r^2 \) be a quadratic divisor of this polynome, and put \( u - a = z \), or \( u = a + z \); then, by substituting \( a + z \) for \( u \), and writing all the terms of the transformed function \( \varphi(a + z) \) in two lines, one containing all the even and the other all the odd powers of \( z \), the polynome \( y + \varphi(u) \) will be equal to

\[ y + \varphi(a) + \frac{1}{2} \cdot \frac{d^2 \varphi(a)}{da^2} \cdot z^2 + \frac{1}{24} \cdot \frac{d^4 \varphi(a)}{da^4} \cdot z^4 + \ldots \]

\[ + z \left\{ \frac{d \varphi(a)}{da} + \frac{1}{6} \cdot \frac{d^3 \varphi(a)}{da^3} \cdot z^2 + \ldots \right\}. \]

By the same substitution of \( z \) for \( u - a \), the divisor \( (u - a)^2 + r^2 \) is changed into the binomial quantity \( z^2 + r^2 \); which will be a divisor of each of the preceding lines, if \( -r^2 \), when it is substituted for \( z^2 \), render each of them equal to zero, Sect. 3. Hence we obtain the following equations, viz.

\[ 0 = y + \varphi(a) - \frac{1}{2} \cdot \frac{d^2 \varphi(a)}{da^2} \cdot r^2 + \frac{1}{24} \cdot \frac{d^4 \varphi(a)}{da^4} \cdot r^4 - \ldots \]

\[ 0 = \frac{d \varphi(a)}{da} - \frac{1}{6} \cdot \frac{d^3 \varphi(a)}{da^3} \cdot r^2 + \ldots \]

(C)

If two numbers, \( a \) and \( r^2 \), can be found that will satisfy these equations, it is evident that \( z^2 + r^2 \) will be a divisor of each of the two lines that compose the transformed function \( y + \varphi(a + z) \); consequently it will be a divisor of the sum of both lines, or of the function itself, that is, \( (u - a)^2 + r^2 \) will be a divisor of the proposed polynome \( y + \varphi(u) \). We are now to prove that two such numbers may be found.
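For a polynome of the fourth degree the two equations (C) can be resolved quite directly: the second equation gives \( r^2 = 6\,\varphi'(a)/\varphi'''(a) \), and this, put into the first, leaves a single equation in \( a \), to which bisection may be applied. The sketch below is ours alone (the names, the bracket `[lo, hi]`, and the restriction to the quartic are assumptions; the article's own proof, which follows, is general).

```python
def quadratic_divisor(y, A2, A3, lo=1e-9, hi=10.0, tol=1e-12):
    """Find a, r2 with (u - a)^2 + r2 dividing y + A2 u^2 + A3 u^3 + u^4,
    by the two equations (C) specialised to the quartic: the odd
    equation gives r2 = 6 phi'(a)/phi'''(a); the even equation is
    then solved for a by bisection on the assumed bracket [lo, hi]."""
    phi = lambda u: A2 * u**2 + A3 * u**3 + u**4
    d1  = lambda u: 2 * A2 * u + 3 * A3 * u**2 + 4 * u**3
    d2  = lambda u: 2 * A2 + 6 * A3 * u + 12 * u**2
    d3  = lambda u: 6 * A3 + 24 * u
    r2  = lambda a: 6 * d1(a) / d3(a)
    # even equation: 0 = y + phi(a) - (1/2) phi''(a) r^2 + (1/24) phi''''(a) r^4,
    # with phi'''' = 24 for the quartic
    F   = lambda a: y + phi(a) - 0.5 * d2(a) * r2(a) + r2(a) ** 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    a = 0.5 * (lo + hi)
    return a, r2(a)
```

For \( u^4 + u^2 + 1 \) (so that \( y = 1 \), \( A(2) = 1 \), \( A(3) = 0 \)) this yields \( a = \frac{1}{2} \), \( r^2 = \frac{3}{4} \); and indeed \( (u - \frac{1}{2})^2 + \frac{3}{4} = u^2 - u + 1 \) is a divisor of \( u^4 + u^2 + 1 \).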

Substitute \( \lambda^2 a^2 - s \) for \( r^2 \) in the equations (C), \( \lambda \) being a quantity to be afterwards determined; and, in order to shorten expressions, put

\[ M = \varphi(a) - \frac{1}{2} \cdot \frac{d^2 \varphi(a)}{da^2} \cdot (\lambda^2 a^2 - s) + \frac{1}{24} \cdot \frac{d^4 \varphi(a)}{da^4} \cdot (\lambda^2 a^2 - s)^2 - \ldots \]

\[ N = \frac{d \varphi(a)}{da} - \frac{1}{6} \cdot \frac{d^3 \varphi(a)}{da^3} \cdot (\lambda^2 a^2 - s) + \ldots \]

And the two equations (C) will be thus written, viz.

\[ y + M = 0, \]

\[ N = 0. \]

In these equations \( a \) and \( s \) are always supposed to represent positive numbers, in which case the equation \( N = 0 \) cannot take place when \( s \) is greater than \( \lambda^2 a^2 \); for then all the terms of \( N \) would be positive.

Considering \( N \) as a function of \( a \), the part of it that does not contain \( a \) is evidently

\[ A(3)s + A(5)s^2 + A(7)s^3 + \ldots, \]

which is always positive. The highest power of \( a \) contained in the same function is \( a^{2n-1} \); and we shall obtain all the terms of \( N \) that contain this power by putting \( a^{2n} \) for \( \varphi(a) \) in the expression

\[ \frac{d \varphi(a)}{da} - \frac{1}{6} \cdot \frac{d^3 \varphi(a)}{da^3} \cdot \lambda^2 a^2 + \frac{1}{120} \cdot \frac{d^5 \varphi(a)}{da^5} \cdot \lambda^4 a^4 - \ldots \]

which terms are therefore as follows, viz.

\[ a^{2n-1} \left\{ 2n - 2n \cdot \frac{2n-1}{2} \cdot \frac{2n-2}{3} \cdot \lambda^2 + \ldots \right\}. \] Now, in the expression obtained in Sect. 10, viz.

\[ r^{2n-1} \times \frac{\sin 2n\varphi}{\sin \varphi} = 2n x^{2n-1} - 2n \cdot \frac{2n-1}{2} \cdot \frac{2n-2}{3} \cdot x^{2n-3} y^2 + \ldots, \]

if we put \( x = 1 \) and \( y^2 = \tan^2 \varphi \), so that \( r^2 = 1 + \tan^2 \varphi = \frac{1}{\cos^2 \varphi} \), we shall obtain

\[ \frac{\sin 2n\varphi}{\sin \varphi \cdot \cos^{2n-1} \varphi} = 2n - 2n \cdot \frac{2n-1}{2} \cdot \frac{2n-2}{3} \cdot \tan^2 \varphi + \ldots, \]

from which formula it follows, that the polynome on the right-hand side of the sign of equality will be equal to nothing, where \( \varphi = \pm \frac{m}{n} \times 90^\circ \), \( m \) being any integer number less than \( n \), zero not included. Therefore the first, third, &c. roots of the polynome, regarded as a function of \( \lambda^2 = \tan^2 \varphi \), will be expressed by the formula

\[ \lambda^2 = \tan^2 \frac{2k + 1}{n} \cdot 90^\circ, \]

\( 2k + 1 \) representing any odd number less than \( n \); and the second, fourth, &c. roots by the formula

\[ \lambda^2 = \tan^2 \frac{2k + 2}{n} \cdot 90^\circ, \]

\( 2k + 2 \) being any even number less than \( n \). And it is evident that the polynome will be negative for every value of \( \lambda^2 \) that lies between any odd root and the next even root, that is, for every value between these limits, viz.

\[ \lambda^2 > \tan^2 \frac{2k + 1}{n} \cdot 90^\circ, \] \[ \lambda^2 < \tan^2 \frac{2k + 2}{n} \cdot 90^\circ. \]

Thus, an indefinite number of values of \( \lambda^2 \) may be found that will make the polynome negative.

Having assumed such a value of \( \lambda^2 \), let any positive number whatever be substituted for \( s \), and \( N \) will be converted into a rational function of \( a \); the greatest power of \( a \), or \( a^{2n-1} \), being odd, and having a negative coefficient; and the term which does not contain \( a \) being positive. Therefore at least one positive value of \( a \) may be found that will satisfy the equation \( N = 0 \); and, as has already been observed, this value of \( a \) will be such as to make \( \lambda^2 a^2 - s \) a positive quantity. It is possible indeed that, in the equation \( N = 0 \), there may be several values of \( a \) for every assumed value of \( s \); but we here confine our attention to the least positive value, which is distinguished by this circumstance, that it vanishes with the absolute term of the equation, or with \( s \); whereas, when \( s \) is equal to zero, all the other roots of the equation \( N = 0 \) have finite values depending upon the given coefficients.

Now, if we suppose \( s \) to increase from zero to infinity, and assume two values, \( s \) and \( s + \delta s \), very near one another, according to what has been proved, we shall have the corresponding values \( a \) and \( a + \delta a \), such, that the equation \( N = 0 \) will be satisfied by substituting both \( s \) and \( a \), and likewise \( s + \delta s \) and \( a + \delta a \). Hence, because \( N = 0 \), and \( \delta N = 0 \), we get

\[ \frac{dN}{da} \cdot \delta a + \frac{dN}{ds} \cdot \delta s = 0, \]

and, \( \delta a = - \frac{dN}{ds} \cdot \delta s \div \frac{dN}{da}. \)

Again, if we substitute first \( s \) and \( a \), and then \( s + \delta s \) and \( a + \delta a \), in the function \( M \), we shall get

\[ \delta M = \frac{dM}{da} \cdot \delta a + \frac{dM}{ds} \cdot \delta s. \]

But, by comparing the functions \( M \) and \( N \), the following properties will readily be discovered, viz.

\[ \frac{dM}{da} + 2\lambda^2 a \cdot \frac{dM}{ds} = N - 2 \frac{dN}{ds} (\lambda^2 a^2 - s), \] \[ \frac{dM}{ds} = \frac{1}{2} \frac{dN}{da} + \lambda^2 a \cdot \frac{dN}{ds}, \]

whence,

\[ \frac{dM}{da} = N - 2 \frac{dN}{ds} (\lambda^2 a^2 - s) - \lambda^2 a \cdot \frac{dN}{da} - 2\lambda^4 a^2 \cdot \frac{dN}{ds}. \]

Consequently,

\[ \delta M = \left\{ N - 2 \frac{dN}{ds} (\lambda^2 a^2 - s) - \lambda^2 a \cdot \frac{dN}{da} - 2\lambda^4 a^2 \cdot \frac{dN}{ds} \right\} \cdot \delta a + \left\{ \frac{1}{2} \frac{dN}{da} + \lambda^2 a \cdot \frac{dN}{ds} \right\} \cdot \delta s, \]

and, if we observe that \( N = 0 \), and substitute the value of \( \delta a \) found above, we shall get

\[ \delta M = \frac{\delta s}{\frac{dN}{da}} \cdot \left\{ \frac{1}{2} \left( \frac{dN}{da} + 2\lambda^2 a \cdot \frac{dN}{ds} \right)^2 + 2 \left( \frac{dN}{ds} \right)^2 (\lambda^2 a^2 - s) \right\}, \]

in which expression all the quantities are essentially positive, except \( \frac{dN}{da} \), which is always negative, as may be thus proved.

The quantity \( s \) remaining invariable, if we make \( a = 0 \), the function \( N \) will be positive; for it is equal to

\[ A(3)s + A(5)s^2 + A(7)s^3 + \ldots, \]

and the same function will continue positive, while \( a \) increases from zero to the least root of the equation \( N = 0 \). At this limit \( N \) is first equal to zero, and then becomes negative; it must, therefore, be decreasing, and consequently \( \frac{dN}{da} \) is negative. It may indeed happen, that, for particular values of \( s \), the co-efficients of \( N \) may be such, that \( N \) and \( \frac{dN}{da} \) shall be both equal to zero at the same time; but, in such cases, it will readily appear that \( \frac{dN}{ds} \) and \( \delta M \) will likewise be equal to zero. Wherefore \( \delta M \) will be negative; at least, if it become equal to zero for any particular values of \( s \) and \( a \), it cannot become positive. It follows, therefore, that the function \( M \) itself will be invariably negative, while \( s \) and \( a \) increase together from zero to be infinitely great.

Now assume a series of values of \( s \) increasing from zero without limit, viz.

\[ 0, s^{(1)}, s^{(2)}, s^{(3)}, \ldots, s^{(x)}, s^{(x+1)}, \ldots \]

and having substituted these in the function \( N \), find, by means of the equation \( N = 0 \), the corresponding values of \( a \), viz.

\[ 0, a^{(1)}, a^{(2)}, a^{(3)}, \ldots, a^{(x)}, a^{(x+1)}, \ldots \]

then, by substituting these values in \( M \), we shall obtain a series of results all negative, and increasing from zero without limit, viz.

\[ 0, -M^{(1)}, -M^{(2)}, -M^{(3)}, \ldots, -M^{(x)}, -M^{(x+1)}, \ldots \] and whatever be the magnitude of the positive quantity \( y \), it must be contained between two consecutive terms of this last series, viz. between \( M^{(x)} \) and \( M^{(x+1)} \). But as the values of \( s \) may be assumed as near one another as we please, it follows that \( M^{(x)} \) and \( M^{(x+1)} \) may be made to approach to one another and to \( y \), within any required degree of accuracy. Thus two values of \( s \) and \( a \) may be found that will satisfy both the equations,

\[ y + M = 0, \\ N = 0; \]

and having found these values, we shall obtain the quadratic divisor of the proposed polynomial \( y + \varphi(u) \), viz.

\[ (u - a)^2 + r^2, \]

or

\[ (u - a)^2 + \lambda^2a^2 - s. \]

In the preceding demonstration it is supposed that \( M \) increases without limit, as \( s \) becomes indefinitely great; which may be thus proved: The values of \( M \) and \( N \) will coincide nearly with the terms containing the highest powers of \( s \) and \( a \), when these quantities are very great; and ultimately the functions may be considered as equal to those terms alone. In such circumstances, therefore, the values of the functions will be found by writing \( a^{2n} \) for \( \varphi(a) \); whence we get

\[ M = a^{2n} - 2n \cdot \frac{2n-1}{2} \cdot a^{2n-2} (\lambda^2a^2 - s) + \ldots \]

\[ N = 2n \cdot a^{2n-1} - 2n \cdot \frac{2n-1}{2} \cdot \frac{2n-2}{3} \cdot a^{2n-3} (\lambda^2a^2 - s) + \ldots \]

and if we put \( \lambda^2a^2 - s = \ell^2a^2 \), or \( a^2 = \frac{s}{\lambda^2 - \ell^2} \),

then \( M = a^{2n} \left( 1 - 2n \cdot \frac{2n-1}{2} \cdot \ell^2 + \ldots \right) \)

\[ N = a^{2n-1} \left( 2n - 2n \cdot \frac{2n-1}{2} \cdot \frac{2n-2}{3} \cdot \ell^2 + \ldots \right) \]

Now, \( s \) remaining invariable, \( a \) will increase as \( \ell \) increases; and the least value of \( a \) that will satisfy the equation \( N = 0 \), corresponds to the least value of \( \ell \) that will make the polynomial in the expression of \( N \) equal to zero; which value, according to what was before shown, is

\[ \ell = \tan \frac{1}{n} \times 90^\circ. \]

But, if we put \( \ell = \tan \varphi \), we shall get

\[ M = a^{2n} \times \frac{\cos 2n\varphi}{\cos^{2n} \varphi}; \]

or, because \( \varphi = \frac{1}{n} \times 90^\circ \), so that \( \cos 2n\varphi = -1 \); and \( \cos \varphi = \frac{1}{\sqrt{1+\ell^2}} \),

and \( a^2 = \frac{s}{\lambda^2 - \ell^2} \);

\[ M = -s^n \times \left( \frac{1+\ell^2}{\lambda^2 - \ell^2} \right)^n; \]

which proves the point assumed in the demonstration.

By a similar mode of reasoning, we may likewise prove the former case of the proposition, when \( y \) is negative. In this case the quadratic divisor is \( (u - a)^2 - r^2 \); and if we proceed as before, or, which is the same thing, if we change the signs of \( y \) and \( r^2 \) in the equations (C) already obtained, and put

\[ M = \varphi(a) + \frac{1}{2} \frac{d^2\varphi(a)}{da^2} \cdot r^2 + \ldots \]

\[ N = \frac{d\varphi(a)}{da} + \frac{1}{6} \frac{d^3\varphi(a)}{da^3} \cdot r^2 + \ldots; \]

we shall get

\[ -y + M = 0, \\ N = 0. \]

Now, by pursuing the steps of the foregoing analysis, we may prove, first, that, for every assumed value of \( r^2 \), a negative value of \( a \) may be found, which will satisfy the equation \( N = 0 \); and, secondly, that when the values which satisfy the equation \( N = 0 \) are substituted in the function \( M \), the results will be invariably positive; whence it follows that a positive value of \( r^2 \), and a negative value of \( a \), may be found that will satisfy both the equations, whatever be the magnitude of \( y \). The analogy between the two cases is thus placed in a strong light; and a little reflection will even bring us to this conclusion, that in reality the one case is a necessary consequence of the other. For since \( a \) and \( r^2 \) depend only upon \( y \), and the given co-efficients of the polynomial, they will be functions of \( y \); therefore, in the equations of the first case, viz.

\[ -y + M = 0, \\ N = 0, \]

\( a \) being negative, and \( r^2 \) positive, we may suppose \( a = -y\varphi(y) \) and \( r^2 = y\psi(y) \), these values being such as to render each of the equations identical; and then the quadratic divisor \( (u - a)^2 - r^2 \) will become

\[ \left( u + y\varphi(y) \right)^2 - y\psi(y). \]

But, because the foregoing equations become identical by the substitution of the values mentioned, it is a necessary consequence that the equations of the second case, viz.

\[ y + M = 0, \\ N = 0, \]

in which the signs of \( y \), \( a \), and \( r^2 \), are contrary to what they were in the former equations, will likewise be identical, when \( a = y\varphi(-y) \) and \( r^2 = -y\psi(-y) \); and the quadratic divisor, \( (u - a)^2 - r^2 \), will now become

\[ \left( u - y\varphi(-y) \right)^2 + y\psi(-y). \]

Thus, when the quadratic divisor of the first case is expressed in terms of \( y \), we have only to change the sign of that quantity, in order to have the quadratic divisor of the second case. It is not difficult to perceive, that what has now been proved is nothing more than another application of the principle employed in Sect. 10; a principle which is the real foundation of the imaginary arithmetic, with the processes of which the preceding investigations are intimately connected. None but real quantities have occurred in the analysis we have pursued, because we have sought to investigate \( r^2 \), which is always real; whereas, if we had proposed to find \( r \), we should inevitably have been led to the real quantity \( \sqrt{y} \) in the one case, and to the impossible quantity \( \sqrt{-y} \) in the other. These few observations are made for the purpose of throwing light upon a part of analysis which is certainly obscure in its principles, although there is no question that it is a useful, and even a necessary branch of the art of calculation. A fuller elucidation of the subject would be unsuitable to this place; but enough has been said to show that we must seek in the principles of analysis itself for the explanation of the operations it employs; and we may with great probability conclude that no satisfactory account of the imaginary calculus will ever be obtained by having recourse to fanciful geometrical constructions, or to the analogy between the circle and the hyperbola, or to the metaphysical proposition, that all processes with general symbols, whether significant or not, are equally entitled to be considered as demonstrative.

13. Having now proved, in a rigorous manner, that every polynome of even dimensions has at least one quadratic divisor of the one kind or the other, it follows that it may be reduced by division to another polynome two degrees lower; in like manner, this last polynome will admit of being lowered two degrees more; and by repeating the same process, the first polynome will at length be completely exhausted by quadratic divisors.

If, therefore, we recollect, that every polynome of odd dimensions has one binomial divisor, we shall arrive at this general conclusion, "That every rational polynome can be completely exhausted by binomial and trinomial divisors; and, consequently, that it is equal to the product of a certain number of factors of the two first degrees."

It appears also that the binomial factors of any polynome are such only as arise from the resolution of the quadratic divisors; and they are, therefore, either real or imaginary. And thus we finally obtain the following proposition, which was assumed by Harriot, and is the foundation of the received theory of equations, namely, "Every rational polynome has as many binomial factors, and as many roots, real and imaginary, as it has dimensions."
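In modern terms the theorem may be exhibited numerically. The following sketch (an editorial addition, not part of the 1860 text; it assumes the numpy library and its approximate root-finder) resolves a real polynomial into real binomial and trinomial factors by pairing each imaginary root with its conjugate:

```python
import numpy as np

def real_factors(coeffs, tol=1e-9):
    """Resolve a monic real polynomial (co-efficients, highest power first)
    into real binomial factors (x - c) and real trinomial factors
    x^2 + b x + c, returned as coefficient tuples."""
    roots = list(np.roots(coeffs))
    linear, quadratic, used = [], [], [False] * len(roots)
    for i, z in enumerate(roots):
        if used[i]:
            continue
        used[i] = True
        if abs(z.imag) < tol:                     # real root -> factor x - z
            linear.append((1.0, -z.real))
        else:                                     # pair z with its conjugate
            for j in range(i + 1, len(roots)):
                if not used[j] and abs(roots[j] - z.conjugate()) < 1e-6:
                    used[j] = True
                    break
            # (x - z)(x - conj z) = x^2 - 2 Re(z) x + |z|^2
            quadratic.append((1.0, -2 * z.real, abs(z) ** 2))
    return linear, quadratic

# x^4 - 1 = (x - 1)(x + 1)(x^2 + 1)
lin, quad = real_factors([1, 0, 0, 0, -1])
print(len(lin), len(quad))   # prints: 2 1
```

The two binomial factors correspond to the roots ±1, and the single trinomial factor is x² + 1, whose roots are imaginary.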

The necessity of confirming, by a general demonstration, the assumed theory of the impossible roots of equations, was early felt; and accordingly, this point has engaged the attention of all the great mathematicians to whom analysis is indebted for the progress it has made in the course of the last and the present centuries. An account of their several researches would greatly exceed the limits of this article; but the reader will find all the information he can wish for in two long notes (9 and 10) of the Traité des Equations Numeriques, by La Grange, in which the author, with his usual elegance, has explained and commented upon the various modes of investigation that have been proposed. It will be sufficient to observe here, that all the demonstrations that have appeared are either calculations with impossible quantities, or they proceed upon the assumption that every equation has as many roots as dimensions, and thus involve the very thing to be proved.

14. The general cases in which mathematicians have been successful in resolving rational functions into their trinomial factors, are confined to the theorem of Cotes, and to a more general proposition of a similar kind, for which we are indebted to De Moivre. These instances are of great importance in analysis, and we shall therefore subjoin an investigation of them, because they are deduced in a very direct manner from the method we have followed.

Suppose, as before, that \( f(x) \), or \( x^n + A^{(1)}x^{n-1} + A^{(2)}x^{n-2} + \ldots + A^{(n-1)}x + A^{(n)} \), is a rational polynome of \( n \) dimensions, and \((x-a)^2 + r^2\) one of its quadratic divisors; put \( z = x - a \), substitute \( a + z \) for \( x \), and write the transformed function in two lines, one containing all the even, and the other all the odd powers of \( z \); then the polynome will be equal to

\[ f(a) + \frac{1}{2} \frac{d^2f(a)}{da^2}z^2 + \frac{1}{24} \frac{d^4f(a)}{da^4}z^4 + \ldots \]

\[ + z \left\{ \frac{df(a)}{da} + \frac{1}{6} \frac{d^3f(a)}{da^3}z^2 + \frac{1}{120} \frac{d^5f(a)}{da^5}z^4 + \ldots \right\} \]

By the same substitution of \( z \) for \( x - a \), the divisor \((x-a)^2 + r^2\) will become \( z^2 + r^2 \); and, as before, the conditions that \( z^2 + r^2 \) shall divide each of the foregoing lines, will be expressed by the following equations, viz.

\[ 0 = f(a) - \frac{1}{2} \frac{d^2f(a)}{da^2}r^2 + \frac{1}{24} \frac{d^4f(a)}{da^4}r^4 - \ldots \]

\[ 0 = \frac{df(a)}{da} - \frac{1}{6} \frac{d^3f(a)}{da^3}r^2 + \frac{1}{120} \frac{d^5f(a)}{da^5}r^4 - \ldots \]

In these formulae substitute the expanded values of \( f(a), \frac{df(a)}{da}, \ldots \); and class together all the homogeneous terms of the same order, that is, all the terms in which the exponents of \( a \) and \( r \) amount to the same sum, then we shall have

\[ 0 = a^n - n \cdot \frac{n-1}{2} \cdot a^{n-2}r^2 + \ldots \]

\[ + A^{(1)} \left\{ a^{n-1} - (n-1) \cdot \frac{n-2}{2} \cdot a^{n-3}r^2 + \ldots \right\} \]

\[ + A^{(2)} \left\{ a^{n-2} - (n-2) \cdot \frac{n-3}{2} \cdot a^{n-4}r^2 + \ldots \right\} \]

\[ \ldots \]

\[ 0 = n \cdot a^{n-1} - n \cdot \frac{n-1}{2} \cdot \frac{n-2}{3} \cdot a^{n-3}r^2 + \ldots \]

\[ + A^{(1)} \left\{ (n-1) \cdot a^{n-2} - (n-1) \cdot \frac{n-2}{2} \cdot \frac{n-3}{3} \cdot a^{n-4}r^2 + \ldots \right\} \]

\[ + A^{(2)} \left\{ (n-2) \cdot a^{n-3} - (n-2) \cdot \frac{n-3}{2} \cdot \frac{n-4}{3} \cdot a^{n-5}r^2 + \ldots \right\} \]

Now, put \( a = r \cos \varphi \), and write \( r \sin \varphi \) in place of the former \( r \); then, by what was proved in Sect. 10, the two foregoing equations will become

\[ r^n \cos n\varphi + A^{(1)}r^{n-1} \cos (n-1)\varphi + \ldots + A^{(n-1)}r \cos \varphi + A^{(n)} = 0, \]

\[ \frac{1}{\sin \varphi} \left\{ r^{n-1} \sin n\varphi + A^{(1)}r^{n-2} \sin (n-1)\varphi + \ldots + A^{(n-1)}\sin \varphi \right\} = 0; \quad \text{(E)} \]

And the quadratic divisor \((x-a)^2 + r^2\) will be changed into

\[ x^2 - 2r \cos \varphi \cdot x + r^2. \]

When \(\sin \varphi = 0\), and \(\varphi = 0\) or \(180^\circ\), the preceding equations coincide with these, viz.

\[ r^n + A^{(1)}r^{n-1} + A^{(2)}r^{n-2} + \ldots + A^{(n)} = 0, \]

\[ nr^{n-1} + (n-1)A^{(1)}r^{n-2} + (n-2)A^{(2)}r^{n-3} + \ldots + A^{(n-1)} = 0, \]

which express the condition that the given polynome has two or more factors equal to \( x - r \); at which limits a quadratic divisor changes from being of the form \((x-a)^2 - r^2\) to be of the form \((x-a)^2 + r^2\), or the contrary. Thus we learn that, in the equations (E), \(\sin \varphi\) must always have a finite value, and then the denominator of the second equation may be neglected.
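The two equations (E) admit a simple modern reading: up to the factor \( r \sin \varphi \), they are the real and imaginary parts of \( f(r e^{i\varphi}) = 0 \). The following check (an editorial illustration; the polynomial and root are chosen purely for the example) verifies both conditions for \( f(x) = x^2 - 2x + 2 \), whose quadratic divisor is the polynomial itself:

```python
import math

# f(x) = x^2 - 2x + 2 has the root 1 + i = sqrt(2)*e^{i*45 deg}; its
# quadratic divisor x^2 - 2 r cos(phi) x + r^2 is the polynomial itself.
A = [1.0, -2.0, 2.0]                 # co-efficients, highest power first
n = len(A) - 1
r, phi = math.sqrt(2), math.radians(45)

# first equation of (E):  sum of A_k r^{n-k} cos (n-k)phi = 0
E1 = sum(A[k] * r ** (n - k) * math.cos((n - k) * phi) for k in range(n + 1))
# second equation of (E): (1/sin phi) * sum of A_k r^{n-k-1} sin (n-k)phi = 0
E2 = sum(A[k] * r ** (n - k - 1) * math.sin((n - k) * phi)
         for k in range(n + 1)) / math.sin(phi)

print(abs(E1) < 1e-9, abs(E2) < 1e-9)   # prints: True True
```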

Let the preceding investigation be applied to find the quadratic factors of \( x^n - a^n \). In this case the two equations (E) will become

\[ r^n \cos n\varphi - a^n = 0, \]

\[ r^{n-1} \times \frac{\sin n\varphi}{\sin \varphi} = 0; \]

whence

\[ r = a, \] \[ \cos n\varphi = 1, \] \[ \frac{\sin n\varphi}{\sin \varphi} = 0. \]

Now, excluding the cases when \( \varphi = 0 \) and \( \varphi = 180^\circ \), the last equation will be satisfied when \( \varphi = \frac{2k + 1}{n} \times 180^\circ \),

or \( \varphi = \frac{2k}{n} \times 180^\circ \), the numerators of the fractions representing all the odd and even numbers less than the common denominator; but the second equation will be satisfied only when \( \varphi = \frac{2k}{n} \times 180^\circ \); therefore all the quadratic factors of the function \( x^n - a^n \) will be comprehended in the formula

\[ x^2 - 2ax \times \cos \frac{2k}{n} \times 180^\circ + a^2. \]

When \( n \) is an even number, the quadratic factors will amount to \( \frac{n-2}{2} \); and if to them we add the simple factors \( x + a \) and \( x - a \), we shall have the complete resolution of the function. When \( n \) is odd, the number of quadratic factors is \( \frac{n-1}{2} \), to which must be added the binomial factor \( x - a \).
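The factorisation just stated can be verified numerically. The sketch below (an editorial addition assuming the numpy library; the values n = 6 and a = 1.5 are arbitrary) multiplies the two binomial factors by the (n−2)/2 quadratic factors and compares the product with x⁶ − a⁶:

```python
import math
import numpy as np

n, a = 6, 1.5
# the quadratic factors x^2 - 2a cos(2k/n * 180 deg) x + a^2, k = 1..(n-2)/2
factors = [np.array([1.0, -2 * a * math.cos(2 * k * math.pi / n), a * a])
           for k in range(1, (n - 2) // 2 + 1)]

product = np.polymul([1.0, -a], [1.0, a])   # binomial factors x - a and x + a
for f in factors:
    product = np.polymul(product, f)

print(np.allclose(product, [1, 0, 0, 0, 0, 0, -a ** 6]))   # prints: True
```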

By proceeding in a similar manner in the case of the function \( x^n + a^n \), we shall have the equations

\[ r = a, \] \[ \cos n\varphi = -1, \] \[ \frac{\sin n\varphi}{\sin \varphi} = 0. \]

Excluding the cases when \( \varphi = 0 \) and \( \varphi = 180^\circ \), the second and third equations will be both satisfied, when \( \varphi = \frac{2k + 1}{n} \times 180^\circ \), the numerator of the fraction representing any odd number less than \( n \). Therefore all the quadratic factors will be comprehended in the formula

\[ x^2 - 2ax \times \cos \frac{2k + 1}{n} \times 180^\circ + a^2. \]

When \( n \) is even, the number of quadratic factors is \( \frac{n}{2} \) and they exhibit the complete resolution of the function.

When \( n \) is odd, the number of quadratic factors is \( \frac{n-1}{2} \), to which the binomial factor \( x + a \) must be added.

Let us next take the more general function

\[ x^{2n} - 2\beta a^n x^n + a^{2n}. \]

And, in the first place, when \( \beta \) is greater than unit, the function is equal to

\[ \left\{ x^n - a^n (\beta + \sqrt{\beta^2 - 1}) \right\} \times \left\{ x^n - a^n (\beta - \sqrt{\beta^2 - 1}) \right\}; \]

and the quadratic factors may be found by the cases already considered.

When \( \beta \) is less than unit, let \( \beta = \cos \theta \), and the function to be resolved will be

\[ x^{2n} - 2a^n x^n \cos \theta + a^{2n}. \]

By means of the equations (E) we get

\[ r^{2n} \cos 2n\varphi - 2a^n r^n \cos \theta \cos n\varphi + a^{2n} = 0, \] \[ r^{2n-1} \times \frac{\sin 2n\varphi}{\sin \varphi} - 2a^n r^{n-1} \times \frac{\sin n\varphi}{\sin \varphi} \times \cos \theta = 0; \]

and hence

\[ r = a, \] \[ \cos 2n\varphi - 2 \cos \theta \cos n\varphi + 1 = 0, \] \[ \frac{\sin 2n\varphi}{\sin \varphi} - 2 \frac{\sin n\varphi}{\sin \varphi} \times \cos \theta = 0. \]

But, \( \cos 2n\varphi + 1 = 2 \cos^2 n\varphi \); and \( \sin 2n\varphi = 2 \cos n\varphi \times \sin n\varphi \); therefore the two last equations will become

\[ 2 \cos n\varphi (\cos n\varphi - \cos \theta) = 0, \] \[ 2 \frac{\sin n\varphi}{\sin \varphi} (\cos n\varphi - \cos \theta) = 0; \]

and these, supposing \( \cos \theta \) different from unit, can be satisfied only by making \( \cos n\varphi - \cos \theta = 0 \), or \( \cos n\varphi = \cos \theta \).

Now, \( \cos n\varphi = \cos \theta = \cos (m \times 360^\circ + \theta) \), \( m \) being any integer number whatever, zero included; and hence

\[ \varphi = \frac{m \times 360^\circ + \theta}{n}, \]

which formula comprehends all the values of \( \varphi \) that will satisfy the above equations. Therefore all the factors sought will be contained in this general expression, viz.

\[ x^2 - 2ax \cos \frac{m \times 360^\circ + \theta}{n} + a^2; \]

in which, if for \( m \) we substitute all the integer numbers less than \( n \), zero included, we shall obtain the \( n \) quadratic factors of the proposed function.
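The same kind of check applies to De Moivre's more general function. The following sketch (an editorial addition; numpy is assumed, and the values of n, a, and θ are arbitrary) multiplies the n quadratic factors together and compares the product with the proposed function:

```python
import math
import numpy as np

n, a, theta = 3, 1.2, math.radians(50)
product = np.array([1.0])
for m in range(n):                    # m = 0, 1, ..., n-1
    ang = (m * 2 * math.pi + theta) / n
    product = np.polymul(product, [1.0, -2 * a * math.cos(ang), a * a])

# compare with x^{2n} - 2 a^n cos(theta) x^n + a^{2n}
target = np.zeros(2 * n + 1)
target[0] = 1.0
target[n] = -2 * a ** n * math.cos(theta)
target[2 * n] = a ** (2 * n)
print(np.allclose(product, target))   # prints: True
```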

15. The quadratic divisors \( (x - a)^2 - s^2 \) and \( (x - a)^2 + s^2 \), have hitherto been considered separately; but they may be both represented by \( (x - a)^2 - s \), which will coincide with the one or the other, according as \( s \) is positive or negative. And, if we now proceed as before, we shall get the following equations, which express the conditions necessary, in order that the polynome \( f(x) \) of any proposed dimensions, as \( n \), shall be divisible by \( (x - a)^2 - s \), viz.

\[ 0 = f(a) + \frac{1}{2} \frac{d^2f(a)}{da^2} s + \frac{1}{24} \frac{d^4f(a)}{da^4} s^2 + \ldots \] \[ 0 = \frac{df(a)}{da} + \frac{1}{6} \frac{d^3f(a)}{da^3} s + \frac{1}{120} \frac{d^5f(a)}{da^5} s^2 + \ldots \]

By eliminating \( s \) we shall obtain an equation, viz.

\[ \Lambda = 0, \]

in which \( a \) is the unknown quantity. As the process of elimination is independent of the particular values of the co-efficients of \( f(x) \), the degree of the resulting equation will be the same when the polynome \( f(x) \) has as many real roots as dimensions, and when the case is otherwise. But when \( f(x) \) is equal to the product of \( n \) real binomial factors, the multiplication of every two of them will form a quadratic factor. The number of such factors will, therefore, be equal to \( n \times \frac{n-1}{2} \), which expresses all the combinations made with \( n \) things taken two and two. Consequently, there will be just so many different values of \( a \) that will satisfy the equation \( \Lambda = 0 \), which will, therefore, have its exponent equal to \( n \times \frac{n-1}{2} \). It thus appears that the equation \( \Lambda = 0 \) rises in its dimensions very rapidly above the given polynome, on which account little advantage is derived from this procedure. Again, by eliminating \(a\) from the same two equations, we shall obtain one, viz.

\[ S = 0, \]

in which \(s\) is the unknown quantity. This equation, which has already been alluded to (Sect. 8), rises to the same dimensions with the former equation, \(\Lambda = 0\); but it is possessed of some useful properties, derived chiefly from the consideration that every positive root gives a quadratic factor of the form \((x - a)^2 - s^2\) in the polynome \(f(x)\), and every negative root a quadratic factor of the form \((x - a)^2 + s^2\) in the same polynome.

The quadruple of \(s\) is equal to the square of the difference of the two binomial factors of \((x - a)^2 - s\); whence it follows that the quadruples of the several roots of the equation \(S = 0\) are equal to the squares of the differences of the roots of \(f(x) = 0\). If therefore, we put \(x_1, x_2, x_3, \ldots\) for the roots of \(f(x) = 0\), the roots of \(S = 0\) will be

\[ \frac{1}{4}(x_1 - x_2)^2, \frac{1}{4}(x_1 - x_3)^2, \frac{1}{4}(x_1 - x_4)^2, \ldots \]

and from this it is manifest that the co-efficients of the same equation will be known symmetrical functions of the quantities \(x_1, x_2, x_3, \ldots\) or of the roots of \(f(x) = 0\). The rules formerly explained may, therefore, be employed for calculating the co-efficients of \(S = 0\); and this method of forming the equation is not only more convenient than the process of eliminating, but it likewise has the advantage of enabling us to find any one co-efficient separately without computing the rest. Thus, if we put

\[ K^{(n)} = (x_1 - x_2)^2 (x_1 - x_3)^2 (x_1 - x_4)^2 \ldots \]

and expand this product, and in place of the symmetrical functions of which it is composed, substitute their values in terms of the given co-efficients of \(f(x) = 0\), we shall obtain the value of \(K^{(n)}\); and the last term of the equation \(S = 0\) will be equal to

\[ \pm \frac{K^{(n)}}{2^{n(n-1)}}; \]

the upper sign taking place when \(n \times \frac{n-1}{2}\), the dimensions of the equation \(S = 0\) is even, and the lower sign when the same number is odd.

If we suppose the given equation \(f(x) = 0\) to be possessed of as many real roots as dimensions, or to have \(n\) real binomial factors, the product of every two of these will be a quadratic factor \((x - a)^2 - s^2\), in which \(s\) is positive; wherefore, the roots of \(S = 0\) will be all real and all positive. On the other hand, when the given equation \(f(x) = 0\) has not as many real roots as dimensions, it will be divisible by one or more quadratic factors not resolvable into real binomial factors, and in which \(s\) is negative; consequently, the equation \(S = 0\) will have one or more negative roots. It is, therefore, a property of the auxiliary equation \(S = 0\), that when the roots are all real they are all positive, and when they are not all real some of them are negative. Now the rule of Descartes will enable us to find whether the roots are all positive or not; and by this means we shall discover whether the roots of the given equation \(f(x) = 0\) are all real or not. From what has been said, we may lay down this rule: "The proposed equation \(f(x) = 0\) will have all its roots real when the auxiliary equation \(S = 0\) has as many variations from one sign to another as it has dimensions, or when its terms are alternately positive and negative; otherwise the proposed equation will have one or more quadratic factors of the form \((x - a)^2 + s^2\), but the number of such factors cannot exceed the continuations of the same sign in the auxiliary equation."
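The rule may be illustrated numerically. In the sketch below (an editorial addition; it builds S from numerically computed roots merely to exhibit the sign property, whereas the text's method forms the coefficients of S symmetrically from those of f without knowing the roots), a cubic with three real roots yields an auxiliary equation whose terms alternate in sign:

```python
from itertools import combinations
import numpy as np

def squared_difference_poly(coeffs):
    """Co-efficients of S, whose roots are (x_i - x_j)^2 / 4 over all
    pairs of roots of f(x) = 0 (roots found numerically, for illustration)."""
    roots = np.roots(coeffs)
    s_roots = [((u - v) ** 2) / 4 for u, v in combinations(roots, 2)]
    return np.real(np.poly(s_roots))

# x^3 - 7x + 6 = (x-1)(x-2)(x+3): three real roots, so every root of
# S = 0 is positive and the terms of S alternate in sign.
S = squared_difference_poly([1, 0, -7, 6])
signs = np.sign(S)
print(all(signs[i] != signs[i + 1] for i in range(len(S) - 1)))   # prints: True
```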

Again, in the equation \(S = 0\), the polynome \(S\) is equal to a certain number of binomial factors of the forms \(x - a\) and \(x + a\), multiplied into a supplementary polynome of even dimensions, which, not being capable of having a negative value, will have its last term positive (Sect. 5). It is manifest, therefore, that the last term of \(S = 0\) will be positive or negative, according as the number of factors of the form \(x - a\) is even or odd, that is, according as the equation has an even or odd number of real and positive roots. But every two real roots in the equation \(f(x) = 0\) give one real and positive root in the subsidiary equation \(S = 0\); therefore, if \(m\) denote the number of real roots in the former equation, the number of real and positive roots in the latter will be equal to \(m \times \frac{m-1}{2}\); and the last term of the subsidiary equation will be positive or negative, according as \(m \times \frac{m-1}{2}\) is an even or an odd number.

In a cubic equation \(x^3 + px + q = 0\), \(m\) is either one or three. In the first case, the equation \(S = 0\) will have no positive roots, and the last term will be positive; in the second case, it will have three real and positive roots, and the last term will be negative. Now the dimensions of \(S = 0\) being odd, the function \(K^{(3)}\) will be negative in the first case and positive in the second. Therefore the given cubic equation will have one real root, or three, according as the function \(K^{(3)}\), that is,

\[ (x_1 - x_2)^2 (x_1 - x_3)^2 (x_2 - x_3)^2, \]

or \(-4p^3 - 27q^2\), is negative or positive.
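A minimal numerical form of this criterion (an editorial sketch; the function name is ours, and the equation is assumed to have distinct roots, so that the quantity is not zero):

```python
def count_real_roots_cubic(p, q):
    """Number of real roots of x^3 + p x + q = 0, from the sign of
    -4 p^3 - 27 q^2 (assuming the roots are distinct)."""
    K = -4 * p ** 3 - 27 * q ** 2
    return 3 if K > 0 else 1

print(count_real_roots_cubic(-7, 6))   # x^3 - 7x + 6 = (x-1)(x-2)(x+3) -> 3
print(count_real_roots_cubic(1, 1))    # x^3 + x + 1 -> 1
```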

In a biquadratic equation \(x^4 + px^2 + qx + r = 0\), \(m\) is equal to zero, or two, or four. In the first case the equation \(S = 0\) has no positive roots, in the third it has six, and in both cases the last term is positive. In the second case the same equation has only one real and positive root, and the last term is negative. The dimensions of \(S = 0\), equal to \(\frac{4 \times 3}{2}\), being even, the function \(K^{(4)}\) will be positive in the first and third cases, and negative in the second case. Therefore the proposed biquadratic equation will have only two real roots when the function \(K^{(4)}\), that is,

\[ (x_1 - x_2)^2 (x_1 - x_3)^2 (x_1 - x_4)^2 (x_2 - x_3)^2 (x_2 - x_4)^2 (x_3 - x_4)^2, \]

or \(256r^3 - 128p^2r^2 + 144pq^2r + 16p^4r - 4p^3q^2 - 27q^4\), is negative; and when the same function is positive, the proposed equation will have four real roots, if the terms of the auxiliary equation \(S = 0\) be alternately positive and negative; otherwise it will have no real roots.
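The corresponding criterion for the biquadratic may be sketched thus (an editorial addition; `K4` is our name for the function K⁽⁴⁾, and the test polynomials are arbitrary):

```python
def K4(p, q, r):
    """The function K^(4) (the product of the squared differences of the
    roots) for the biquadratic x^4 + p x^2 + q x + r = 0."""
    return (256 * r ** 3 - 128 * p ** 2 * r ** 2 + 144 * p * q ** 2 * r
            + 16 * p ** 4 * r - 4 * p ** 3 * q ** 2 - 27 * q ** 4)

print(K4(-5, 0, 4))    # x^4 - 5x^2 + 4 = (x^2-1)(x^2-4): four real roots -> 5184
print(K4(0, -1, -1))   # x^4 - x - 1: two real and two imaginary roots -> -283
```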

In an equation of the fifth degree, \(m\) is equal to one, or three, or five. In the first and third cases the last term of \(S = 0\) will be positive, for there are either no positive roots or ten; in the second case the last term is negative, the number of positive roots being three. The dimensions of \(S = 0\), equal to \(\frac{5 \times 4}{2}\), being even, the function \(K^{(5)}\) will be positive in the first and third cases, and negative in the second. Therefore the given equation of the fifth degree will have three real roots when the function \(K^{(5)}\) is negative; and when the same function is positive, it will have five real roots if the terms of the auxiliary equation \(S = 0\) be alternately positive and negative; otherwise it will have but one.

Resolution of Algebraic Equations.

16. When the co-efficients of an equation are given in numbers, we may investigate the numerical value of any one root separately, by first seeking the limits between which it lies, and then narrowing those limits to any required degree of approximation. But this process is not what is meant by the general solution of algebraical equations, which supposes that the co-efficients are denoted by general symbols, and consists in finding such a function of those quantities as shall, by the multiplicity of its values, represent all the roots. An algebraical expression is susceptible of many values, by means of the different radical quantities it contains; but these radical quantities being themselves the roots of an equation, it follows that the general formula for the solution of any proposed equation can be nothing more than a function of the given coefficients combined with the roots of another equation.

The solution of quadratic equations has been known since the origin of algebra; it is found in the work of Diophantus, the first treatise on the science extant, if it be not the very first that was written. The Italian mathematicians, who are the founders of the modern algebra, discovered the solution of cubic and biquadratic equations. The rules they invented for this purpose are, however, rather the result of particular artifices than deductions from any profound views of the structure of the equations they considered. In the course of the last and the present centuries, the general solution of equations has been the subject of almost innumerable researches by all the mathematicians of the first rank; but their labours have not been successful in advancing this branch of the science beyond the steps made by the first algebraists.

The rules usually given for the solution of cubic and biquadratic equations are to be found in all the elementary books, and it would be superfluous to repeat them here. An account of the attempts that have been made to obtain a general theory for solving algebraic equations would greatly exceed the limits we must prescribe to ourselves. What has most impeded the progress of algebraists in their researches on this subject, is the difficulty of treating it by a perfect analysis, or of arriving at general conclusions by a process of reasoning founded solely on the principles of the inquiry, and disengaged from particular artifices of calculation, and from particular suppositions. In what follows we shall endeavour to lay before our readers the general principles on which is founded all that has been successfully accomplished in this theory.

Let the three roots of a cubic equation be represented by \(a, b, c\); and having interchanged these letters among one another in all possible ways, we shall get the six permutations following, viz.:

- \(abc, cab, bca\)
- \(acb, bac, cba\)

The combinations that stand first on the left are formed by prefixing the same letter to the permutations made with the other two; and those on each line are derived from one another by making the last letter of one stand first in that which follows, while the other two letters preserve the same order.

Now let \( \varepsilon \) be a root of \( \varepsilon^3 - 1 = 0 \); and let the letters of the permutations of each line be applied in order to the three terms of \( 1 + \varepsilon + \varepsilon^2 \); then we shall get

\[ t = a + \varepsilon b + \varepsilon^2 c, \quad s = a + \varepsilon c + \varepsilon^2 b, \]
\[ \varepsilon t = c + \varepsilon a + \varepsilon^2 b, \quad \varepsilon s = b + \varepsilon a + \varepsilon^2 c, \]
\[ \varepsilon^2 t = b + \varepsilon c + \varepsilon^2 a, \quad \varepsilon^2 s = c + \varepsilon b + \varepsilon^2 a. \]

The six quantities \( t, \varepsilon t, \varepsilon^2 t, s, \varepsilon s, \varepsilon^2 s \) comprehend all the values that can be formed by combining with \( 1 + \varepsilon + \varepsilon^2 \) the three letters taken in any order whatever; and it is obvious that the cubes of all these six quantities, being each equal either to \( t^3 \) or \( s^3 \), have no more than two values.

And because \(t^3\) and \(s^3\) have only one value each, any symmetrical functions of them, as \(t^3 + s^3\) and \(t^3 s^3\), will have determinate values, which remain the same, however the letters \(a, b, c\) be interchanged among one another. The quantities \(t^3 + s^3\) and \(t^3 s^3\) must, therefore, be symmetrical functions of \(a, b, c\); and, consequently, they can be found in terms of the co-efficients of the given equation.

By actually involving to the third power, we get

\[t^3 = a^3 + b^3 + c^3 + 6abc + 3(a^2b + b^2c + c^2a) \cdot \varepsilon + 3(a^2c + b^2a + c^2b) \cdot \varepsilon^2,\]
\[s^3 = a^3 + b^3 + c^3 + 6abc + 3(a^2c + b^2a + c^2b) \cdot \varepsilon + 3(a^2b + b^2c + c^2a) \cdot \varepsilon^2;\]

and likewise

\[(a + b + c)^3 = a^3 + b^3 + c^3 + 6abc\] \[+ 3(a^2b + b^2c + c^2a)\] \[+ 3(a^2c + b^2a + c^2b).\]

Now \(1 + \varepsilon + \varepsilon^2 = 0\), when \(\varepsilon\) is any root of \(\varepsilon^3 - 1 = 0\) different from unit; therefore, by adding the last three expressions, we get

\[t^3 + s^3 = 3(a^3 + b^3 + c^3) + 18abc\] \[- (a + b + c)^3.\]

Again, by actually multiplying

\[ts = a^2 + b^2 + c^2 + (ab + bc + ca) \cdot \varepsilon + (ab + bc + ca) \cdot \varepsilon^2;\]

and, because \(\varepsilon + \varepsilon^2 = -1\),

\[ts = a^2 + b^2 + c^2\] \[- (ab + bc + ca).\]

By means of the preceding formulae, we can compute the values of \(t^3 + s^3\) and \(t^3 s^3\); and these values being the co-efficients of a quadratic equation having its roots equal to \(t^3\) and \(s^3\), we can thence find \(t^3\) and \(s^3\), and \(t\) and \(s\). Now \(t\) and \(s\) being known, we have

\[a + b + c = \text{the sum of the roots, known from the co-efficients},\] \[t = a + \varepsilon b + \varepsilon^2 c,\] \[s = a + \varepsilon c + \varepsilon^2 b;\]

wherefore,

\[a = \frac{1}{3}(a + b + c) + \frac{1}{3}(t + s),\] \[b = \frac{1}{3}(a + b + c) + \frac{1}{3}(\varepsilon^2 t + \varepsilon s),\] \[c = \frac{1}{3}(a + b + c) + \frac{1}{3}(\varepsilon t + \varepsilon^2 s).\]

To apply the foregoing investigation, we shall take a cubic equation, \(x^3 - 3px - 2q = 0\), which is so prepared as to want the second term, then (Sect. 9)

\[a + b + c = 0,\] \[ab + ac + bc = -3p,\] \[a^2 + b^2 + c^2 = 6p,\] \[abc = 2q;\]

consequently \(t^3 + s^3 = 3^3 \times 2q\); \(ts = 9p\), and \(t^3 s^3 = 3^6 \times p^3\). Hence

\[\frac{1}{3}t = (q + \sqrt{q^2 - p^3})^{\frac{1}{3}},\] \[\frac{1}{3}s = (q - \sqrt{q^2 - p^3})^{\frac{1}{3}};\]

Wherefore, by substituting these values in the expressions of the roots, we get

\[a = (q + \sqrt{q^2 - p^3})^{\frac{1}{3}} + (q - \sqrt{q^2 - p^3})^{\frac{1}{3}},\]
\[b = \varepsilon^2 (q + \sqrt{q^2 - p^3})^{\frac{1}{3}} + \varepsilon (q - \sqrt{q^2 - p^3})^{\frac{1}{3}},\]
\[c = \varepsilon (q + \sqrt{q^2 - p^3})^{\frac{1}{3}} + \varepsilon^2 (q - \sqrt{q^2 - p^3})^{\frac{1}{3}}.\]

The preceding investigation, as well as all other methods that have been proposed for cubic equations, leads to the same result with the rule invented by Cardan; and, like that rule, it becomes in some cases insufficient for arithmetical computation, on account of the imaginary quantities that appear in the expressions of the roots. What is now mentioned is not an accidental circumstance, but a necessary consequence of the method of investigation pursued, and of the introduction of the imaginary roots of the equation \(x^3 - 1 = 0\). When \(a, b, c\) are real quantities, the values of \(t\) and \(s\) will be both imaginary, because they involve \(\varepsilon\) and \(\varepsilon^2\), or \(-\frac{1 - \sqrt{-3}}{2}\) and \(-\frac{1 + \sqrt{-3}}{2}\).

In this case, therefore, although the three roots of the proposed equation are all real, yet the algebraic expressions of them are all imaginary, and useless for the purpose of numerical calculation; and the former circumstance is precisely the reason of the latter. On the other hand, when one root \(a\) is real and the other two imaginary, the impossible quantities destroy one another in the expressions of \(t\) and \(s\), which are, therefore, real quantities; and in this case the algebraic formulae answer for finding the numerical values of the roots. The distinction here pointed out depends on the radical \(\sqrt{q^2 - p^3}\), which is real or imaginary, according as the equation has one or three real roots, because \(q^2 - p^3\) is always positive in the first case and negative in the second.
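The resolvent computation described above can be carried out directly with complex arithmetic, which handles the irreducible case as well as the other. The following sketch (an editorial addition; the variable names are ours) solves x³ − 3px − 2q = 0 by forming u = t/3 and v = s/3 with the cube roots paired so that u·v = p:

```python
import cmath

def cubic_roots(p, q):
    """Roots of x^3 - 3 p x - 2 q = 0 by the resolvent method:
    x = u + v, with u^3 = q + sqrt(q^2 - p^3), v^3 = q - sqrt(q^2 - p^3)."""
    eps = complex(-0.5, 3 ** 0.5 / 2)          # a primitive cube root of unity
    d = cmath.sqrt(q * q - p ** 3)             # sqrt(q^2 - p^3), possibly imaginary
    u = (q + d) ** (1 / 3)                     # one cube root of q + sqrt(q^2 - p^3)
    v = p / u if u != 0 else (q - d) ** (1 / 3)   # pair the cube roots: u*v = p
    return [u + v, eps * eps * u + eps * v, eps * u + eps * eps * v]

# irreducible case: x^3 - 7x - 6 = 0 (p = 7/3, q = 3) has roots -2, -1, 3;
# the imaginary parts of u + v etc. destroy one another.
roots = sorted(cubic_roots(7 / 3, 3), key=lambda z: z.real)
print([round(z.real, 6) for z in roots])   # prints: [-2.0, -1.0, 3.0]
```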

Much labour and thought have been bestowed in order to free the formulae for the roots of cubic equations from the imaginary expressions that render them unfit for arithmetical computation. In particular instances the difficulty disappears; namely, when the radical quantities are perfect cubes, in which cases the impossible parts of the cube roots destroy one another, so as to leave none but real quantities in the expressions of the roots of the equation. And, by expanding the radical quantities, we may in all cases obtain the roots of a cubic equation in series of an infinite number of terms free from the imaginary sign. But when it is required to transform the formulae for the case of a cubic equation with three real roots, into finite expressions free from impossible quantities, and to do so without employing any other than the received notations of algebra, all attempts to solve the problem have led to equations in the same circumstances with the one proposed, and have ended in bringing back the same difficulty; in so much that equations of the description mentioned are said to be in the irreducible case.

It is, however, possible to transform the formulae for the roots of a cubic equation in the irreducible case into real expressions, although not so as to fulfil all the conditions above mentioned. Let \(q^2 - p^3 = y^2\); then \(p = (q^2 - y^2)^{\frac{1}{3}}\); wherefore the equation \(x^3 - 3px - 2q = 0\), will become

\[x^3 - 3(q^2 - y^2)^{\frac{1}{3}} x - 2q = 0 \ldots (1).\]

By the preceding formula the value of \(x\) in this equation will be

\[x = (q + y)^{\frac{1}{3}} + (q - y)^{\frac{1}{3}};\]

or, according to the notation of Section 10, making

\[x = 2H_{\frac{1}{3}}(q, y^2).\]

By substituting this value of \(x\), we get

\[\left(2H_{\frac{1}{3}}(q, y^2)\right)^3 - 3(q^2 - y^2)^{\frac{1}{3}} \cdot 2H_{\frac{1}{3}}(q, y^2) - 2q = 0;\]

which equation, being true for all values of \(q\) and \(y^2\), must be identical, or, when expanded, must consist of a series of quantities that mutually destroy one another. Now the equation will still be identical, when \(y^2\) is changed into \(-y^2\); so that we shall have

\[\left(2H_{\frac{1}{3}}(q, -y^2)\right)^3 - 3(q^2 + y^2)^{\frac{1}{3}} \cdot 2H_{\frac{1}{3}}(q, -y^2) - 2q = 0;\]

and this proves that the equation

\[x^3 - 3(q^2 + y^2)^{\frac{1}{3}} x - 2q = 0 \ldots (2)\]

is solved by the formula

\[x = 2H_{\frac{1}{3}}(q, -y^2).\]

As the investigation in Section 10 is equally true, whether \(n\) be a whole or a fractional number, we may apply it to find the value of the symbol \(2H_{\frac{1}{3}}(q, -y^2)\).

For this purpose, let

\[q = r \cos \varphi = r \cos (\varphi + 360^\circ) = r \cos (\varphi + 2 \cdot 360^\circ),\] \[y = r \sin \varphi = r \sin (\varphi + 360^\circ) = r \sin (\varphi + 2 \cdot 360^\circ);\]

then \(r = \sqrt{q^2 + y^2}\); and, according as we take one or other of the angles that have the same sines and cosines, we shall obtain three different values of \(2H_{\frac{1}{3}}(q, -y^2)\), or of \(x\), viz.:

\[a = 2r^{\frac{1}{3}} \cdot \cos \frac{\varphi}{3},\] \[b = 2r^{\frac{1}{3}} \cdot \cos \left(\frac{\varphi}{3} + 120^\circ\right),\] \[c = 2r^{\frac{1}{3}} \cdot \cos \left(\frac{\varphi}{3} + 240^\circ\right).\]

By putting \(p = (q^2 + y^2)^{\frac{1}{3}}\), the equation (2) will assume the same form as at first, namely,

\[x^3 - 3px - 2q = 0;\]

and because \(p^3 = q^2 + y^2 = r^2\), and \(y = \sqrt{p^3 - q^2}\), if we determine the angles by means of their tangents instead of their sines and cosines, we shall get \(\frac{\sqrt{p^3 - q^2}}{q} = \tan \varphi = \tan (\varphi + 360^\circ) = \tan (\varphi + 2 \cdot 360^\circ)\); and the three roots of the equation will be

\[a = 2\sqrt{p} \cdot \cos \frac{\varphi}{3},\] \[b = 2\sqrt{p} \cdot \cos \left(\frac{\varphi}{3} + 120^\circ\right),\] \[c = 2\sqrt{p} \cdot \cos \left(\frac{\varphi}{3} + 240^\circ\right).\]

Every cubic equation falls under one or other of the formulae (1) and (2), except when \(y = 0\), or \(p^3 = q^2\), which takes place when an equation changes from one class to the other; and in this case we have

\[x^3 - 3q^{\frac{2}{3}} x - 2q = (x - 2q^{\frac{1}{3}})(x + q^{\frac{1}{3}})(x + q^{\frac{1}{3}}).\]

The several rules that have now been given, therefore, include every possible case.
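The trigonometric rule for the irreducible case may be illustrated thus (a sketch with arbitrarily chosen \(p\) and \(q\) satisfying \(p^3 > q^2\)): taking \(\cos \varphi = q/p^{\frac{3}{2}}\), the three real roots are \(2\sqrt{p}\cos(\varphi/3 + k \cdot 120^\circ)\), \(k = 0, 1, 2\).

```python
import math

# Sketch of the trigonometric rule for the irreducible case of
# x^3 - 3p*x - 2q = 0 (p^3 > q^2): with cos(phi) = q / p**1.5, the three
# real roots are 2*sqrt(p)*cos(phi/3 + k*120 degrees), k = 0, 1, 2.
def three_real_roots(p, q):
    phi = math.acos(q / p ** 1.5)
    return [2 * math.sqrt(p) * math.cos((phi + 2 * math.pi * k) / 3)
            for k in range(3)]

p, q = 1.0, 0.0                      # x^3 - 3x = 0
roots = three_real_roots(p, q)
for x in roots:
    assert abs(x ** 3 - 3 * p * x - 2 * q) < 1e-9
print(sorted(roots))
```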

The difficulty attending the irreducible case arises from a real distinction between the two subordinate classes of cubic equations, and is insurmountable by the ordinary operations of algebra. There is no permanent distinction between equations belonging to the same order, when we consider their roots as positive or negative; because, in any proposed equation, all the roots, or as many of them as we please, can be changed from positive to negative, by the simple artifice of increasing or diminishing them all by a given quantity. But the case is otherwise when we consider the roots of an equation in their character of real or imaginary quantities. No transformation can change an equation with one real root into another with three real roots, without involving the operations of the impossible arithmetic. If, therefore, we lay down this condition, namely, that the formulae for the roots of equations must be in a shape fit for numerical calculation, we may conclude that in fact there is no resolution of equations except what consists in reducing all those of the same class to some one of that class, the most simple and convenient in its form that can be found. If we examine the preceding investigation, it will appear that it is merely an attempt to reduce all cubic equations to the form $x^3 - A = 0$; and this readily succeeds without impossible operations, when the proposed equation, and that with which it is compared, have their roots of a similar description; and it as surely fails when the case is otherwise.

In geometry, where the relations of the magnitudes under consideration are never lost sight of, there is no tendency to refer the solution of a problem to a class to which it does not belong. The ancient geometer could never be in danger of applying the problem for finding two mean proportionals to a case that can be constructed only by the trisection of an angle. The modern analyst, dismissing the original magnitudes of his problem, and reducing all possible relations to equations in abstract numbers, is apt to overlook distinctions, and sometimes to waste his labour, in seeking to accomplish what a due separation of cases would show to be impossible. There is the same distinction between the class of cubic equations with one real root, and that with three real roots, that there is between the two geometrical problems alluded to above; and the algebraist who attempts, by means of the ordinary operations of his art, to transform Cardan's formula so as to make it apply to the irreducible case, is precisely in the same situation with the geometer who should set about trisecting an angle by finding two mean proportionals.

The power and force of the algebraic method does not consist in breaking down real distinctions, but in connecting, by sure and general principles, many truths which in geometry are joined only by vague analogies, and even have no affinity at all. This advantage is derived chiefly from the doctrine of negative quantities, and from the impossible arithmetic. By means of the first, a formula which is obtained by considering only one state of the data of a problem, applies, necessarily and by the very structure of analytical language, to the same problem in all possible conditions of the data. On the other hand, when the relations of the data vary, the geometer is obliged to subdivide his problem into cases, or into other subordinate problems; and although it may be perceived that great similitude prevails among all the subdivisions, yet it is impossible to reduce the analogy between them to determinate rules, as is done in algebra. But in the whole compass of geometry there is nothing that bears any resemblance to the imaginary arithmetic. When the geometer has fixed the determination of his problem, or ascertained the limits within which it is possible, he has drawn a line that must be the boundary of his investigation. Now it is to truths lying beyond this line that the meaning of the comprehensive expressions of the imaginary arithmetic must be referred. It is not to be understood that a problem can be solved by algebra, which is impossible in geometry; but the analytical formulae, at the same time that they mark the limits of the problem, go beyond them, and point out connected truths, that require only certain changes to be made in the algebraic expressions, in like manner as all the possible cases of the same problem are derived from one only, by means of the variations of the signs.

If $a, b, c, d$ represent the four roots of a biquadratic equation; and if we prefix the same letter $a$ to all the permutations made with the other three, we shall get the six combinations following, viz.

abcd, adbc, acdb,
adcb, acbd, abdc.

In the first line, the letters $b, c, d$, are made to circulate, by placing immediately after the immovable letter $a$ that which stands last in the combination preceding; and in the second line the moveable letters have respectively an inverted order to what they have in the first line.

Let $\varepsilon^2 - 1 = 0$; and let the four letters taken in the several orders of the six combinations be prefixed to the terms of $1 + \varepsilon + \varepsilon^2 + \varepsilon^3$; the results of the first line being $t, t', t''$, and those of the second line $s, s', s''$; then

$$ \begin{align*} t &= a + b\varepsilon + c\varepsilon^2 + d\varepsilon^3, \\ t' &= a + d\varepsilon + b\varepsilon^2 + c\varepsilon^3, \\ t'' &= a + c\varepsilon + d\varepsilon^2 + b\varepsilon^3, \\ s &= a + d\varepsilon + c\varepsilon^2 + b\varepsilon^3, \\ s' &= a + c\varepsilon + b\varepsilon^2 + d\varepsilon^3, \\ s'' &= a + b\varepsilon + d\varepsilon^2 + c\varepsilon^3. \end{align*} $$

Now, in the equation $\varepsilon^2 - 1 = 0$, $\varepsilon$ is either equal to $+1$ or to $-1$; and whether we take the one value or the other, it is apparent that $t = s, t' = s', t'' = s''$.

Again, from every one of the six foregoing combinations, four others are derived by circulating the letters continually from the last place to the first; and in this manner we obtain twenty-four permutations, which are all that can be made with four letters. Thus, if we take abcd, and move the letters as directed, we shall get these four combinations, viz.

abcd, dabc, cdab, bcda.

And if we multiply $t$ by $\varepsilon$ continually, observing to retain the first three powers of $\varepsilon$, and to make $\varepsilon^4 = 1$, we shall get

$$ \begin{align*} t &= a + b\varepsilon + c\varepsilon^2 + d\varepsilon^3, \\ t\varepsilon &= d + a\varepsilon + b\varepsilon^2 + c\varepsilon^3, \\ t\varepsilon^2 &= c + d\varepsilon + a\varepsilon^2 + b\varepsilon^3, \\ t\varepsilon^3 &= b + c\varepsilon + d\varepsilon^2 + a\varepsilon^3, \end{align*} $$

so that $t, t\varepsilon, t\varepsilon^2, t\varepsilon^3$ are the functions formed by prefixing to $1 + \varepsilon + \varepsilon^2 + \varepsilon^3$, the letters of the four combinations; and it is obvious that these functions have all the same square, equal to $t^2$, because $\varepsilon^2 = 1$.

Wherefore, if the four letters, taken in all possible orders, be prefixed to the terms of $1 + \varepsilon + \varepsilon^2 + \varepsilon^3$, the squares of the twenty-four resulting functions will be equal to the square of one or other of the six quantities, $t, t', t'', s, s', s''$; and since it has been proved that $t = s, t' = s', t'' = s''$, it follows that the twenty-four squares have no more than three different values, equal to $t^2, t'^2, t''^2$.

And because $t^2, t'^2, t''^2$, can have no more than one value each, any symmetrical functions of them, viz.

$$ \begin{align*} &t^2 + t'^2 + t''^2, \\ &t^2 t'^2 + t'^2 t''^2 + t''^2 t^2, \\ &t^2 t'^2 t''^2, \end{align*} $$

will have determinate values independent of the order of the letters $a, b, c, d$. The same functions will therefore be symmetrical expressions of the roots of the given biquadratic equation, and they will be known in terms of the co-efficients of that equation.
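The reduction of the twenty-four squares to three values admits a direct trial (the roots below are chosen arbitrarily for illustration):

```python
import itertools

# Sketch: with eps = -1 (a root of eps^2 - 1 = 0), form
# f = a + b*eps + c*eps^2 + d*eps^3 for all 24 orders of four assumed
# roots; the squares of the 24 functions take only three values.
a, b, c, d = 2.0, -1.0, 0.5, 3.0
eps = -1.0
values = {round((w + x*eps + y*eps**2 + z*eps**3) ** 2, 9)
          for w, x, y, z in itertools.permutations((a, b, c, d))}
print(values)   # three distinct squares: t^2, t'^2, t''^2
```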

Supposing $\varepsilon = -1$, we get

$$ \begin{align*} t &= a - b + c - d, \\ t' &= a - d + b - c, \\ t'' &= a - c + d - b; \end{align*} $$

and hence

$$ \begin{align*} t^2 &= a^2 + b^2 + c^2 + d^2 \\ &- 2(ab + ad + bc + cd) + 2(ac + bd) \\ &= (a + b + c + d)^2 - 4\Sigma \cdot ab + 4(ac + bd); \end{align*} $$ the symbol \( \Sigma \cdot ab \) being used here, as in Sect. 9, to denote the sum of the products of every two of the roots.

Therefore, if we put

\[ M = (a + b + c + d)^2 - 4\Sigma \cdot ab, \]

\[ m = ac + bd, \]

\[ m' = ab + cd, \]

\[ m'' = ad + bc, \]

then

\[ t^2 = M + 4m, \]

\[ t'^2 = M + 4m', \]

\[ t''^2 = M + 4m''; \]

and hence

\[ t^2 + t'^2 + t''^2 = 3M + 4(m + m' + m''), \]

\[ t^2 t'^2 + t'^2 t''^2 + t''^2 t^2 = 3M^2 + 8M(m + m' + m'') + 16(mm' + mm'' + m'm''). \]

But it will readily appear that

\[ m + m' + m'' = \Sigma \cdot ab, \]

\[ mm' + mm'' + m'm'' = (a + b + c + d) \times \Sigma \cdot abc - 4abcd. \]

Now, by substituting these values, we get

\[ t^2 + t'^2 + t''^2 = 3(a + b + c + d)^2 - 8\Sigma \cdot ab, \]

\[ t^2 t'^2 + t'^2 t''^2 + t''^2 t^2 = 3(a + b + c + d)^4 - 16(a + b + c + d)^2 \times \Sigma \cdot ab + 16(a + b + c + d) \times \Sigma \cdot abc + 16(\Sigma \cdot ab)^2 - 64abcd. \]

Again, if we multiply the expressions of \( t, t', t'' \), we shall get

\[ tt't'' = (a - c)(a^2 - c^2) + (b - d)(b^2 - d^2) \]

\[ - (a + c)(b - d)^2 - (b + d)(a - c)^2; \]

or,

\[ tt't'' = a^3 + b^3 + c^3 + d^3 - \Sigma \cdot a^2 b + 2\Sigma \cdot abc; \]

and finally, by means of the formulae in Sect. 9,

\[ tt't'' = (a + b + c + d)^3 + 8\Sigma \cdot abc - 4(a + b + c + d) \times \Sigma \cdot ab. \]

If now we substitute the values computed by the preceding formulae in the cubic equation

\[ 0 = u^3 - (t^2 + t'^2 + t''^2)u^2 + (t^2 t'^2 + t'^2 t''^2 + t''^2 t^2)u - t^2 t'^2 t''^2, \]

we shall obtain the values of \( t^2, t'^2, t''^2 \), the roots of that cubic, and consequently of \( t, t', t'' \); and when \( t, t', t'' \), are known, we have

\[ t = a - b + c - d, \]

\[ t' = a + b - c - d, \]

\[ t'' = a - b - c + d; \]

wherefore, the sum \( a + b + c + d \) being known from the co-efficients of the proposed equation, we get, by addition and subtraction,

\[ a = \frac{1}{4} \{ (a + b + c + d) + t + t' + t'' \}, \]

\[ b = \frac{1}{4} \{ (a + b + c + d) - t + t' - t'' \}, \]

\[ c = \frac{1}{4} \{ (a + b + c + d) + t - t' - t'' \}, \]

\[ d = \frac{1}{4} \{ (a + b + c + d) - t - t' + t'' \}. \]
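The recovery of the roots by quarter sums admits a simple numerical check (the values are chosen at pleasure; the signs follow from the definitions \(t = a - b + c - d\), \(t' = a + b - c - d\), \(t'' = a - b - c + d\)):

```python
# Sketch: given the sum s = a+b+c+d and the quantities
# t = a-b+c-d, t1 = a+b-c-d, t2 = a-b-c+d, each root is a
# quarter of a signed sum of the four.
a, b, c, d = 1.0, 4.0, -2.0, 0.5
s, t, t1, t2 = a+b+c+d, a-b+c-d, a+b-c-d, a-b-c+d
ra = (s + t + t1 + t2) / 4
rb = (s - t + t1 - t2) / 4
rc = (s + t - t1 - t2) / 4
rd = (s - t - t1 + t2) / 4
print(ra, rb, rc, rd)   # recovers a, b, c, d
```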

In applying these formulae, it is necessary to observe that the signs of \( t, t', t'' \), cannot be assumed at pleasure. Euler, in his solution of biquadratic equations, directs one set of formulae to be used when \( -8q \) is positive, and the other set when the same quantity is negative. This procedure is not so simple as that we have followed, which requires only one set of formulae. It has even been the occasion of leading into error, in as much as it makes the signs of \( t, t', t'' \), depend entirely upon the sign of the given quantity \( -8q \); whereas it is indispensable that, regard being had to the nature of the quantities \( t, t', t'' \), their signs shall be determined so as to satisfy the equation \( tt't'' = -8q \). This inadvertence of Euler has escaped the observation of most of the authors who have treated of biquadratic equations, and was first noticed by M. Bret, in the second volume of the Correspondance sur l'École Polytechnique.

It may not be improper to notice briefly some of the other rules for biquadratic equations. These are chiefly two; the method of Descartes, which resolves the given equation into two quadratic factors; and the oldest method of all, invented by Louis Ferrari, a pupil of Cardan, which proceeds by transforming the given equation, so as to make it equal to the difference of two complete squares, and then extracting the square roots. However different from one another these two methods may at first seem, they are at bottom the same; and they are so far connected with that already investigated, that all the three lead to the same cubic equation.

Suppose that \( a, b, c, d \), are the roots of the biquadratic equation

\[ x^4 - Ax^3 + Bx^2 - Cx + D = 0; \]

then \( x^2 - (a + b)x + ab = 0 \), and \( x^2 - (c + d)x + cd = 0 \), are two quadratic factors, the product of which is equal to the given equation. Now,

\[ A = a + b + c + d, \] \[ t = a + b - c - d; \]

wherefore, if we put \( ab = p + y \), \( cd = p - y \), the two factors will become

\[ x^2 - \frac{1}{2}(A + t)x + p + y = 0, \] \[ x^2 - \frac{1}{2}(A - t)x + p - y = 0: \]

and if we multiply them, and equate the co-efficients of the product to the co-efficients of the given equation, we shall get

\[ 2p + \frac{1}{4}A^2 - \frac{1}{4}t^2 = B, \] \[ Ap - ty = C, \] \[ p^2 - y^2 = D. \]

And it is to be observed that, on account of the first two of these equations, \( p \) and \( y \) are both real quantities when \( t \) is a real quantity; so that, provided a real value of \( t \) can be found, the given equation is always resolved, by this method, into two quadratic factors free from imaginary expressions.

Now, by combining the equations just found, we shall get

\[ 0 = t^6 - (3A^2 - 8B)t^4 + (3A^4 - 16A^2B + 16B^2 + 16AC - 64D)t^2 - (A^3 - 4AB + 8C)^2, \] \[ p = \frac{1}{2}B - \frac{1}{8}A^2 + \frac{1}{8}t^2, \] \[ y = \sqrt{\left(\frac{1}{2}B - \frac{1}{8}A^2 + \frac{1}{8}t^2\right)^2 - D}. \]

The first of these equations is a cubic, of which the root is \( t^2 \); and it is precisely the same with the cubic of the former method. As the last term of this equation is a square taken with the negative sign, it follows that there is always one positive value of \( t^2 \), and consequently one real value of \( t \); wherefore, in consequence of what has been proved, the values of \( p \) and \( y \), derived from the real value of \( t \), are in every case real quantities, which is no doubt an advantage in the practical application of the method.
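The agreement of the resolvent cubic (regarded as an equation in \(t^2\)) with the quantities \(p\) and \(y\) may be checked numerically for an equation with assumed roots:

```python
# Numerical check of the resolution into quadratic factors: with assumed
# roots a, b, c, d of x^4 - A*x^3 + B*x^2 - C*x + D = 0 and t = a+b-c-d,
# the quantity t^2 satisfies the resolvent cubic, and p, y give the factors.
a, b, c, d = 1.0, 2.0, 3.0, 5.0
A = a + b + c + d
B = a*b + a*c + a*d + b*c + b*d + c*d
C = a*b*c + a*b*d + a*c*d + b*c*d
D = a*b*c*d

t = a + b - c - d
p = 0.5*B - 0.125*A**2 + 0.125*t**2
y = a*b - p                                   # since ab = p + y

u = t * t                                     # a root of the resolvent cubic
resolvent = (u**3 - (3*A**2 - 8*B)*u**2
             + (3*A**4 - 16*A**2*B + 16*B**2 + 16*A*C - 64*D)*u
             - (A**3 - 4*A*B + 8*C)**2)
print(resolvent)                              # vanishes
print(c*d - (p - y), A*p - t*y - C, p*p - y*y - D)   # all vanish
```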

If we wish to follow the process of Louis Ferrari, we may assume \( p, t, y \), so as to render the expression

\[ \left(x^2 - \frac{1}{2}Ax + p\right)^2 - \left(\frac{1}{2}tx - y\right)^2 = 0 \]

identical with the given equation; and as this expression is no more than the product of the two quadratic factors of the last method, the quantities to be determined will be found by the formulae already given.

The theory of permutations, which is successful in solving cubic and biquadratic equations, applies likewise to those of the fifth and higher orders. But, to use the words of Lagrange, "beyond the fourth degree, the method, although applicable in general, no longer leads to anything but resolvent equations of degrees higher than that of the equation proposed." Thus, in the case of equations of the fifth degree, the theory leads to a biquadratic equation, of which the co-efficients are to be found by resolving an equation of the sixth order.

There is, however, no doubt that the doctrine of permutations contains the principles from which we are to expect the resolution of equations of the higher orders, if the problem be possible. It may be alleged, with great probability, that the theory succeeds in the less complicated cases, because when the number of the roots is small, their permutations are soon exhausted, and we speedily arrive at those combinations of them which remain invariable, whatever be the order of the quantities combined. But when the number of the roots is greater than four, their permutations are very numerous, and at the same time the functions produced by combining them are very complicated; on which accounts it is difficult to conduct the investigation so as to arrive at a satisfactory conclusion, either accomplishing the intended purpose, or proving that the undertaking is impossible.

In the twelfth volume of the Memoirs of the Italian Society, and in a work published at Modena in 1813, M. Paolo Ruffini has proved that no function of five letters can exist that is susceptible of only three or four different values when the letters are interchanged among one another in all possible ways. M. Cauchy, in the sixteenth volume of the Journal de l'École Polytechnique, has demonstrated that a function of \( n \) letters, unless it have no more than two different values, cannot have a number of different values less than the prime number next below \( n \). On these grounds it has been inferred that the resolution of equations of the fifth degree is in reality an impossible problem. (Lacroix, Complément des Élémens d'Algèbre, p. 61.) And if it be admitted that, in the process of resolution, no equations can occur except such as have symmetrical functions of the five letters for their co-efficients, the inference founded on the labours of the eminent mathematicians we have mentioned would be indisputable. But it is not impossible that the resolution of equations of a high order must be effected by gradually depressing an equation at first of great dimensions; and in this procedure we may arrive at equations, the co-efficients of which, although functions of the roots of the proposed equation, are not symmetrical functions, but partial expressions, susceptible of several values, according as the order of the letters that denote the roots is made to vary. On this supposition, the resolution of equations above the fourth order, by means of equations inferior in degree, would not be inconsistent with what has been proved.

17. A method for solving equations of one order may be generalized so as to extend to a certain class in all the higher orders. Thus De Moivre has found a species of equations of every degree that have their roots similar to those of cubics, and which are solved by the formula

\[ x = \left( q + \sqrt{q^2 - p^n} \right)^{\frac{1}{n}} + \left( q - \sqrt{q^2 - p^n} \right)^{\frac{1}{n}}, \]

differing in no respect from the expression for resolving cubics, except that \( n \) is written in place of 3.
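For \(n = 5\) the corresponding equation of De Moivre's class is \(x^5 - 5px^3 + 5p^2x - 2q = 0\) (this explicit quintic form is supplied here for illustration, not taken from the text); the formula may be verified numerically:

```python
import math

# De Moivre's class for n = 5: the equation x^5 - 5p*x^3 + 5p^2*x - 2q = 0
# (explicit form supplied for illustration) is solved by
# x = (q + sqrt(q^2 - p^5))**(1/5) + (q - sqrt(q^2 - p^5))**(1/5).
p, q = 1.0, 3.0                       # q^2 - p^5 = 8 > 0, so the radicals are real
s = math.sqrt(q * q - p ** 5)
x = (q + s) ** 0.2 + (q - s) ** 0.2
residual = x**5 - 5*p*x**3 + 5*p**2*x - 2*q
print(x, residual)                    # the residual vanishes
```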

An equation may be depressed to a lower order when it is known that the roots have a given relation to one another. An instance of this has already occurred in the case of equal roots; for, the equal roots having been first found, the equation can be lowered by division. Reciprocal equations furnish another example of depression to a lower order, on account of a relation subsisting among the roots. A reciprocal equation is one of even dimensions, such that half the roots are respectively the reciprocals of the other half, in which case no alteration is produced in the equation when \( \frac{1}{x} \) is substituted for \( x \). In equations of this kind, the same co-efficients occur in the same order, and with the same signs, reckoning from either end; a description that likewise applies to some equations of odd dimensions, which, however, do not constitute a new class, being merely reciprocal equations, as defined above, multiplied by the factor \( x + 1 \). A reciprocal equation may always be depressed to half the dimensions, by transforming it so that the new unknown quantity shall be equal to \( x + \frac{1}{x} \). It is sufficient to have mentioned these cases, which are fully treated of in all the elementary books.
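The depression of a reciprocal equation may be sketched on the biquadratic \(x^4 - 3x^3 + 4x^2 - 3x + 1 = 0\) (an example chosen for illustration): dividing by \(x^2\) and putting \(y = x + \frac{1}{x}\) reduces it to \(y^2 - 3y + 2 = 0\), since \(x^2 + \frac{1}{x^2} = y^2 - 2\).

```python
import cmath

# Depression of a reciprocal equation to half its dimensions:
# x^4 - 3x^3 + 4x^2 - 3x + 1 = 0 (illustrative example). Putting
# y = x + 1/x gives y^2 - 3y + 2 = 0, whose roots are 1 and 2.
roots = []
for y in (1.0, 2.0):                     # the roots of y^2 - 3y + 2 = 0
    disc = cmath.sqrt(y * y - 4)         # then solve x^2 - y*x + 1 = 0
    roots += [(y + disc) / 2, (y - disc) / 2]
for x in roots:
    assert abs(x**4 - 3*x**3 + 4*x**2 - 3*x + 1) < 1e-9
print(roots)
```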

Equations with only two terms, as \( x^p - 1 = 0 \), are the most extensive class that have been resolved by a general method. The successful application of analysis to this class of equations is extremely interesting, both in itself, and likewise because it is connected with the division of the circle into equal parts, and has occasioned the discovery of some curious and unexpected results respecting that problem. For these reasons, it appears proper to lay before our readers a short view of this branch of the doctrine of algebraic equations.

We have already shown, that, admitting the theory of angular sections, every equation with only two terms, as \( x^p - 1 = 0 \), may be completely resolved into its binomial and trinomial factors; and hence all its roots, possible and impossible, may be computed by means of the trigonometrical tables in common use. If we put \( \varphi = \frac{360^\circ}{p} \), and denote by \( k \) any number less than \( \frac{1}{2}p \), we have found that the equation \( x^p - 1 = 0 \) is divisible by the quadratic factor \( x^2 - 2x \cos k\varphi + 1 \), and, consequently, that it has the two impossible roots,

\[ x = \cos k\varphi + \sin k\varphi \cdot \sqrt{-1}, \] \[ x = \cos k\varphi - \sin k\varphi \cdot \sqrt{-1}; \]

and, because \( \cos k\varphi = \cos (p-k)\varphi \), and \( -\sin k\varphi = \sin (p-k)\varphi \), the same two roots may be otherwise more symmetrically represented thus,

\[ x = \cos k\varphi + \sin k\varphi \cdot \sqrt{-1}, \] \[ x = \cos (p-k)\varphi + \sin (p-k)\varphi \cdot \sqrt{-1}. \]

Therefore, when \( p \) is odd, the equation \( x^p - 1 = 0 \) has one real root equal to 1; and when \( p \) is even, it has two real roots equal to \( \pm 1 \); and in both cases the remaining roots are all impossible, and are found from the formula

\[ x = \cos k\varphi + \sin k\varphi \cdot \sqrt{-1}, \]

by making \( k \) equal to all the integral numbers less than \( p \) in the one case, and less than \( p-1 \) in the other. Nothing, therefore, can be more simple than the computation of the roots of such equations by means of the trigonometrical tables. But in seeking a general solution, it is required to investigate the roots without resorting to the properties of the circle, unless in so far as this may be necessary for solving similar equations inferior in degree to the one proposed. In this view the resolution of the equation \( x^p - 1 = 0 \), is equivalent to the division of the circle into \( p \) equal parts, granting the like division for all numbers less than \( p \). And in order to render the investigation of the problem as simple as possible, it may be further observed, that it will be sufficient to consider the case when the exponent is a prime number; because, from this case, the other, when it is a composite number, can be readily deduced.
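The computation by the tables is easily imitated (\(p = 7\) chosen for illustration): with \(\varphi = \frac{360^\circ}{p}\), the roots are \(\cos k\varphi + \sin k\varphi \cdot \sqrt{-1}\).

```python
import math

# Computation of the roots of x^p - 1 = 0 by the tables: with
# phi = 360/p degrees, the roots are cos(k*phi) + sin(k*phi)*sqrt(-1),
# k = 0, 1, ..., p-1 (p = 7 for illustration).
p = 7
phi = 2 * math.pi / p
roots = [complex(math.cos(k * phi), math.sin(k * phi)) for k in range(p)]
for x in roots:
    assert abs(x ** p - 1) < 1e-9
real_roots = [x for x in roots if abs(x.imag) < 1e-12]
print(real_roots)   # p being odd, the only real root is 1
```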

It will be proper to premise here a property of the roots of equations with only two terms, to which we shall have occasion continually to refer. The property in question depends upon this theorem, namely, when \( k \) is any number not a multiple of the prime number \( p \), the remainders of the terms of the series

\[ 1 \times k, 2 \times k, 3 \times k, \ldots (p-1) \times k, \]

when each is divided by \( p \), are all different from one another; and, consequently, without regard to the order, they will coincide with the numbers 1, 2, 3, &c., less than \( p \). If, therefore, we take any one of the impossible roots of the equation \( x^p - 1 = 0 \), viz.

\[ r = \cos k\varphi + \sin k\varphi \cdot \sqrt{-1}, \]

all its powers with indices less than \( p \), viz.

\[ r^2 = \cos 2k\varphi + \sin 2k\varphi \cdot \sqrt{-1}, \] \[ r^3 = \cos 3k\varphi + \sin 3k\varphi \cdot \sqrt{-1}, \] \[ \text{&c.} \]

will be different from one another; and likewise they will coincide, without regard to the order, with the like powers of any other impossible root of the same equation; because, whatever number \( k \) stands for, the arcs are all different from one another, and, neglecting whole circumferences, constitute the same series of terms, although in different orders. Therefore, \( p \) being a prime number, if \( r \) be one of the impossible roots of the equation \( x^p - 1 = 0 \), all the roots will be represented by the terms of the geometrical progression

\[ r^0, r^1, r^2, r^3, \ldots r^{p-1}; \]

for every one of these terms satisfies the given equation, and it has been shown that they are all different from one another.
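Both the theorem on the remainders and the geometrical progression of the roots admit a direct trial (\(p = 7\) for illustration):

```python
import cmath, math

# Sketch: p being prime and r an imaginary root of x^p - 1 = 0, the
# remainders of k, 2k, ..., (p-1)k divided by p are all different, and
# the powers r^0, r^1, ..., r^(p-1) exhaust the roots of the equation.
p = 7
for k in range(1, p):
    assert {(i * k) % p for i in range(1, p)} == set(range(1, p))

r = cmath.exp(2j * math.pi / p)                    # cos(phi) + sin(phi)*sqrt(-1)
powers = [r ** j for j in range(p)]
for i in range(p):
    for j in range(i + 1, p):
        assert abs(powers[i] - powers[j]) > 1e-9   # all different from one another
for w in powers:
    assert abs(w ** p - 1) < 1e-9                  # each satisfies the equation
print(len(powers))
```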

When \( p \) is a composite number, the same property does not belong to all the roots of the equation \( x^p - 1 = 0 \), but only to some of them. It belongs generally to the root

\[ r = \cos k\varphi + \sin k\varphi \cdot \sqrt{-1}, \]

when \( k \) is either equal to unit, or to any number that has no common divisor with \( p \); in which cases all the powers of \( r \) are roots of the equation \( x^p - 1 = 0 \), and all different from one another, when the exponents are different and less than \( p \).

If the equation \( x^p - 1 = 0 \) be divided by the binomial factor \( x-1 \), we shall get

\[ x^{p-1} + x^{p-2} + x^{p-3} \ldots + x + 1 = 0; \]

and this being a reciprocal equation, it can be farther depressed to half the dimensions. In this manner we obtain the solution of \(x^7 - 1 = 0\), which is reduced to a cubic; but, by the same procedure, the equation next in order, viz. \(x^{11} - 1 = 0\), can be lowered only to the fifth degree, for equations of which class there is no rule. Nevertheless this last equation has been solved by Vandermonde, to whom, and to Lagrange, we are mainly indebted for disengaging the resolution of equations from the complicated operations of algebra, and for substituting, in their place, reasonings founded on the doctrine of combinations. The author has not explained particularly the process by which his solution was obtained; he gives it as a result of his theory, which, although it fails in general for equations above the fourth degree, succeeds in this instance on account of particular relations between the roots. Similar relations subsist between the roots of any other binomial equation when the exponent is a prime number; and, in consequence, a like mode of investigation will apply, as indeed the author has expressly said. But this procedure would unavoidably be attended in every new instance with very long calculations; and it appears hardly possible to arrive in this way at any general method that would apply to all equations of the class in a regular manner, and without considerations drawn from each particular case.

M. Gauss, in a work entitled *Disquisitiones Arithmeticae*, replete with original and important matter, applied a property of prime numbers to the solution of binomial equations, which removed every difficulty, and led to a theory that unites simplicity and generality. If we suppose that $p$ is a prime number, and resolve $p - 1$ into its component factors, so that $p - 1 = a^\alpha \cdot b^\beta \cdot c^\gamma \cdot \ldots$, &c., $a, b, c, \ldots$ being prime numbers, M. Gauss has proved that the solution of the equation $x^p - 1 = 0$, or, which is the same thing, the division of the circle into $p$ equal parts, can be effected by solving successively $\alpha$ equations of $a$ dimensions, $\beta$ equations of $b$ dimensions, $\gamma$ equations of $c$ dimensions, &c. Thus, if $p = 13$, then because $13 - 1 = 3 \times 2^2$, the roots of $x^{13} - 1 = 0$ can be found, or a polygon of 13 sides can be inscribed in a circle, by solving a cubic and two quadratic equations in succession. In certain cases, when a prime number comes under the form $2^n + 1$, as 17, 257, &c., the division of the circle will require the solution of equations no higher than the second order; whence this unexpected consequence has resulted from the theory of M. Gauss, that the inscription of a polygon of 17, or 257 sides in a circle, which are problems that have always been understood to transcend the limits of the elementary geometry, can, nevertheless, be effected by the operations admitted in that science.
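M. Gauss's decomposition may be illustrated for \(p = 13\) (the grouping by powers of the primitive root 2 of 13 is supplied here for illustration): the twelve imaginary roots fall into three periods of four terms each, and the three periods are the roots of a cubic with integer co-efficients; what remains is then quadratic.

```python
import cmath, math

# Gauss's decomposition for p = 13, where 13 - 1 = 3 * 2^2: ordering the
# exponents by powers of the primitive root 2 of 13, the twelve imaginary
# roots form three periods of four terms; the elementary symmetric
# functions of the periods come out as whole numbers.
p, g = 13, 2
zeta = cmath.exp(2j * math.pi / p)
exps = [pow(g, n, p) for n in range(p - 1)]          # 2^0, 2^1, ... mod 13
periods = [sum(zeta ** exps[n] for n in range(i, p - 1, 3)) for i in range(3)]
s1 = sum(periods)
s2 = periods[0]*periods[1] + periods[1]*periods[2] + periods[2]*periods[0]
s3 = periods[0] * periods[1] * periods[2]
print([round(s.real) for s in (s1, s2, s3)])   # co-efficients of the cubic
```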

A work replete with so many interesting discoveries as the *Disquisitiones Arithmeticae* could not fail to excite the attention of mathematicians. Legendre, in republishing his *Essay on the Theory of Numbers*, has added to it an exposition of M. Gauss's theory of binomial equations; and the same theory is the subject of the 14th note in the second edition of Lagrange's *Treatise on Numerical Equations*. No part of the mathematics could pass through the hands of men of so much ability without receiving great improvement. Lagrange has shown that it is not necessary to go through the several intermediate equations that make so essential a part in the investigation of M. Gauss; and, by this means, he has reduced the solution of equations with two terms to the utmost simplicity of which it is capable. But, in one respect, it must be admitted that the procedure of the illustrious geometer is imperfect. Although it arrives, by a short investigation, at the partial quantities that by their additions form the expressions of the roots sought, it leaves indeterminate the order in which they are to be combined. M. Gauss has avoided ambiguity in this respect by deducing from one of the quantities all the other parts of the same expression; but, amidst a multiplicity of different systems of values that may be deduced from the partial quantities, Lagrange has given no clue to guide to the true one.

In laying before our reader some account of this interesting branch of the theory of algebraic equations, we shall view the subject in a light somewhat different from that in which it has hitherto been placed. Instead of seeking directly the roots of binomial equations, we shall apply the principles of M. Gauss's theory immediately to the division of the circle into equal parts, by taking the arcs of the circumference in that order to which the method owes all its success. This procedure is attended with some advantages. In the first place, the algebraic expressions of the quantities sought, represented by $\cos \frac{h}{p} \times 360^\circ$, are more simple than those of the imaginary roots of the corresponding binomial equation; and, in the second place, the same expressions, having always real values, are better fitted for application than the roots of binomial equations, which require to be further reduced to prepare them for calculation.

Before entering on the principal problem, it is necessary to say something of that property of numbers on which the whole theory depends. Supposing $p$ to be any prime number, Euler has distinguished by the name of a Primitive Root any number less than $p$, such that, if we take the series of all its powers with indices less than $p$, and in each power reject the multiples of $p$ it contains, the several remainders are all different from one another, and, consequently, paying no regard to the order, they will coincide with the numbers 1, 2, 3, &c., less than $p$. It has been proved that, for every prime number, there are as many primitive roots as there are numbers less than $p - 1$ which have no common divisor with it. The existence of such numbers in every case is therefore demonstrated; but no direct method of finding them has yet been published with which we are acquainted.

We gladly seize the present occasion of laying down a rule for finding the primitive roots of a prime number. But first we must premise, that when any proposed number is said to satisfy the equation $x^n + 1 = 0$, it is always understood that the multiples of the prime number $p$ are rejected; and the meaning is, that, when the given number is substituted for $x$, the whole result is divisible by $p$ without any remainder.

Now, let $p$ be a prime number, and $a, b, c, \ldots$ the odd prime divisors of $p - 1$, so that $p - 1 = 2^{\alpha} a^{\beta} b^{\gamma} \cdots$, &c.: then every primitive root will satisfy the first of the following equations without satisfying any of the rest, viz.

\[ x^{\frac{p-1}{2}} + 1 = 0, \quad x^{\frac{p-1}{2a}} + 1 = 0, \quad x^{\frac{p-1}{2b}} + 1 = 0, \quad x^{\frac{p-1}{2c}} + 1 = 0, \quad \text{&c.} \]

And, on the other hand, every number not a primitive root, which satisfies the first equation, will at the same time satisfy one, or more, or all, of the other equations.

But the numbers which satisfy the first equation are exclusively those which are not found among the remainders of the series of square numbers divided by $p$. Wherefore, setting aside the first equation, if we seek among the non-residual numbers for such as satisfy none of the remaining equations, the numbers so found will be the primitive roots sought.

When one primitive root is found by this method, all the rest may be directly obtained from it. For, if \(1, w', w'', w''', \ldots\) represent all the numbers less than \(p - 1\) and prime to it; then \(a\) being one of the primitive roots, all the roots will be equal to the series of powers,

\[ a, \ a^{w'}, \ a^{w''}, \ a^{w'''}, \ \ldots, \]

rejecting always the multiples of \(p\).

The demonstration of these properties would lead us aside from our present purpose; and we shall be content with adding some examples for the sake of illustration.
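For the modern reader, the rule just laid down is a small computation; the following Python sketch (the function name `primitive_roots` is ours, not the text's) follows the procedure of the text exactly: the non-residues are read off from the remainders of the squares, and the equations of exclusion \(x^{\frac{p-1}{2q}} + 1 = 0\), one for each odd prime divisor \(q\) of \(p - 1\), strike out the non-residues that are not primitive roots.

```python
def primitive_roots(p):
    # remainders of the series of square numbers divided by p (the residues)
    residues = {pow(x, 2, p) for x in range(1, p)}
    non_residues = [x for x in range(1, p) if x not in residues]
    # odd prime divisors of p - 1
    odd_primes = []
    m = (p - 1) // 2
    while m % 2 == 0:
        m //= 2
    q = 3
    while q * q <= m:
        if m % q == 0:
            odd_primes.append(q)
            while m % q == 0:
                m //= q
        q += 2
    if m > 1:
        odd_primes.append(m)
    # a non-residue x fails to be a primitive root exactly when it satisfies
    # an equation of exclusion x^((p-1)/2q) + 1 ≡ 0 (mod p)
    return [x for x in non_residues
            if all(pow(x, (p - 1) // (2 * q), p) != p - 1 for q in odd_primes)]
```

The examples that follow in the text may all be reproduced with this sketch.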

Let \(p = 11\); then \(\frac{p-1}{2} = 5\), and \(\frac{p-1}{2 \cdot 5} = 1\); so that in this case, the only equation of exclusion is \(x + 1 = 0\), which admits only one solution, viz. \(x = p - 1 = 10\). Therefore all the non-residual numbers except 10 are the primitive roots; namely, 2, 6, 7, 8. We may extend this conclusion to every case when \(\frac{p-1}{2}\) is a prime number, as 7, 23, 47, &c.; in all which instances all the non-residuals, except \(p - 1\), are the primitive roots.

Next, let \(p = 17\); then \(\frac{p-1}{2} = 8 = 2^3\); and there are no equations of exclusion. In this case, therefore, all the non-residuals, without exception, are primitive roots; and the same thing is true of every prime number of the form \(2^n + 1\), such as 5, 257, &c.

Let \(p = 13\); then \(\frac{p-1}{2} = 2 \times 3\); and the only equation of exclusion is

\[ x^2 + 1 = 0, \]

which admits only two solutions, viz. \(x = 5\) and \(x = 8\). In this instance, therefore, all the non-residual numbers, except 5 and 8, are the primitive roots.

Let \(p = 31\); then \(\frac{p-1}{2} = 3 \times 5\); and we have two equations of exclusion, viz.

\[ x^3 + 1 = 0, \qquad x^5 + 1 = 0. \]

The non-residual numbers are

3, 6, 11, 12, 13, 15, 17, 21, 22, 23, 24, 26, 27, 29, 30.

Of these numbers the first, viz. 3, is a primitive root, since it satisfies neither of the two equations; and as the numbers less than 30, and prime to it, are 1, 7, 11, 13, 17, 19, 23, 29, all the primitive roots of 31 are as follows: viz. \(3^1 = 3\), \(3^7 = 17\), \(3^{11} = 13\), \(3^{13} = 24\), \(3^{17} = 22\), \(3^{19} = 12\), \(3^{23} = 11\), \(3^{29} = 21\). With respect to the other non-residual numbers, it will be found on trial that the first equation is satisfied by 6 and 26; the second by 15, 23, 27, 29; and both equations by 30.
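The derivation of all the primitive roots of 31 from the single root 3 may be checked in the same modern form (the variable names are ours): raise 3 to the powers 1, 7, 11, 13, 17, 19, 23, 29, the numbers below 30 and prime to it, rejecting multiples of 31.

```python
from math import gcd

p = 31
# the numbers less than p - 1 = 30 and prime to it
exponents = [w for w in range(1, p - 1) if gcd(w, p - 1) == 1]
# the corresponding powers of the primitive root 3, multiples of 31 rejected
roots = [pow(3, w, p) for w in exponents]
```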

We are now prepared to enter upon the solution of the problem for dividing the circle into as many equal parts as there are units in the prime number \(p = 2n + 1\). If we conceive a polygon of \(p\) sides, to be inscribed in a circle, it will be admitted that the centre of gravity of the polygon coincides with the centre of the circle. Wherefore, if perpendiculars be drawn to any diameter of the circle from all the angles of the polygon, it follows, from the nature of the centre of gravity, that the sum of the cosines lying on one side of the centre of the circle will be equal to the sum of the cosines lying on the other side. Let \(\varphi = \frac{360^\circ}{p}\); and put \(u\) for the arc intercepted between the diameter and any angle of the polygon, then we shall have this equation, viz.

\[ 0 = \cos u + \cos (\varphi + u) + \cos (2\varphi + u) + \cdots + \cos (2n\varphi + u), \]

which is no more than the analytical expression of the geometrical property just mentioned. Now, suppose that the diameter passes through one of the angles of the polygon; then \(u = 0\), and the equation becomes

\[ 0 = 1 + \cos \varphi + \cos 2\varphi + \cos 3\varphi + \cdots + \cos 2n\varphi. \]

Let \(a\) be one of the primitive roots of the prime number \(p\); then rejecting multiples of \(p\), and paying no regard to the order, the terms of the geometrical progression,

\[ a, a^2, a^3, a^4, \ldots, a^{2n}, \]

will be equal to the several numbers less than \(p\). Wherefore, in the two series of arcs,

\[ \varphi, \ 2\varphi, \ 3\varphi, \ 4\varphi, \ \ldots, \ 2n\varphi, \]
\[ a\varphi, \ a^2\varphi, \ a^3\varphi, \ a^4\varphi, \ \ldots, \ a^{2n}\varphi, \]

every arc in the geometrical progression will either be equal to some one in the arithmetical progression, or will differ from it by a whole circumference, or circumferences. Hence the cosines of the first series of arcs may be substituted in the last equation for the cosines of the other series; and thus we have

\[ -1 = \cos a\varphi + \cos a^2\varphi + \cos a^3\varphi + \cdots + \cos a^{2n}\varphi. \]

Again, by Fermat's theorem, \(a^{2n} - 1 = (a^n + 1)(a^n - 1)\) is a multiple of \(p\); and because no primitive root of a prime number is the remainder of a square divided by that number, \(a^n - 1\) is not a multiple of \(p\); whence \(a^n + 1\) is a multiple of \(p\); and consequently \(a^{n+\lambda} + a^{\lambda} = a^{\lambda}(a^n + 1)\) is a multiple of \(p\). It follows, therefore, that \(a^{n+\lambda}\varphi + a^{\lambda}\varphi\) is equal to a multiple of the circumference of the circle; and hence,

\[ \cos a^{n+\lambda}\varphi = \cos a^{\lambda}\varphi. \quad (A) \]

From this it appears that the cosines in the last equation may be distributed into two equal sums; one containing the cosines of all arcs from \(a\varphi\) to \(a^n\varphi\) inclusively, and the other the remaining cosines; consequently

\[ -\tfrac{1}{2} = \cos a\varphi + \cos a^2\varphi + \cos a^3\varphi + \cdots + \cos a^{n}\varphi; \]

and because \(\cos a^n\varphi = \cos \varphi\),

\[ -\tfrac{1}{2} = \cos \varphi + \cos a\varphi + \cos a^2\varphi + \cdots + \cos a^{n-1}\varphi. \quad (1) \]
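Equation (1) admits of an immediate numerical check; the sketch below takes \(p = 11\) with the primitive root \(a = 2\), so that \(n = 5\), the arcs \(a^k\varphi\) being computed in radians:

```python
from math import cos, pi

p, a = 11, 2                 # prime p and a primitive root a of p
n = (p - 1) // 2
phi = 2 * pi / p             # the arc 360°/p, in radians
# cos φ + cos aφ + cos a²φ + ... + cos a^(n-1) φ, multiples of p rejected
total = sum(cos(pow(a, k, p) * phi) for k in range(n))
```
The sum is found equal to \(-\tfrac{1}{2}\) to the precision of the arithmetic.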

Let \(\tau = \frac{360^\circ}{n}\); and put

\[ e = \cos \tau + \sin \tau \sqrt{-1}; \]

then all the powers of \(e\) with indices less than \(n\) will be different from one another, and all of them roots of the equation \(x^n - 1 = 0\), the solution of which requires the division of the circle into only \(n\), or \(\frac{p-1}{2}\), equal parts.

In what follows, we shall have continual occasion to consider the expression

\[ \cos a^{\lambda}\varphi + e^{m}\cos a^{\lambda+1}\varphi + e^{2m}\cos a^{\lambda+2}\varphi + \cdots + e^{(n-1)m}\cos a^{\lambda+n-1}\varphi; \]

and it will therefore be convenient to adopt an abridged mode of writing it. Now, the expression will be wholly known, and can be constructed when the two indices \( \lambda \) and \( m \) are given; and we may therefore denote it by the symbol \( f(\lambda, m) \), placing always the index of \( a \) before the other. We shall invariably make the index of \( a \) positive, and suppose it reduced below \( n \) by means of the formula (A). In like manner we shall suppose that the index of \( e \) is always reduced below \( n \) by suppressing the multiples of \( n \); and we shall write it sometimes positive and sometimes negative, observing that the negative indices may be always rendered positive by supplying the proper multiples of \( n \); thus,

\[ e^{-m} = e^{n-m}, \quad e^{-2m} = e^{n-2m}, \ \text{&c.} \]

According to the notation just explained, we have

\[ f(o, m) = \cos \varphi + e^{m} \cos a\varphi + e^{2m} \cos a^2\varphi + \ldots + e^{(n-1)m} \cos a^{n-1}\varphi; \]

\[ f(o, -m) = \cos \varphi + e^{-m} \cos a\varphi + e^{-2m} \cos a^2\varphi + \ldots + e^{-(n-1)m} \cos a^{n-1}\varphi. \]

And because \( e^{o} = e^{n} = e^{-n} = 1 \), the symbols \( f(o, o) \), \( f(o, n) \), \( f(o, -n) \) all represent the series of cosines in the equation (1); so that we have

\[ -\tfrac{1}{2} = f(o, o) = f(o, n) = f(o, -n). \]

The following formula is no more than a corollary from the preceding notation, viz.

\[ e^{-\lambda m} \times f(o, m) = f(\lambda, m). \quad (B) \]
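The notation \(f(\lambda, m)\) and the formula (B) translate directly into complex arithmetic; a sketch for \(p = 11\), \(a = 2\), where \(e = \cos\tau + \sin\tau\sqrt{-1}\) is represented by the complex exponential (the function name `f` merely mirrors the text's symbol):

```python
import cmath
from math import cos, pi

p, a = 11, 2
n = (p - 1) // 2
phi = 2 * pi / p
e = cmath.exp(2j * pi / n)        # e = cos τ + √-1 sin τ, with τ = 360°/n

def f(lam, m):
    # f(λ, m) = Σ e^(km) · cos a^(λ+k) φ, for k = 0 ... n-1
    return sum(e**(k * m) * cos(pow(a, lam + k, p) * phi) for k in range(n))

# formula (B): e^(-λm) · f(0, m) = f(λ, m), tried with λ = 2, m = 3
lhs = e**(-2 * 3) * f(0, 3)
rhs = f(2, 3)
```
The two sides agree to the precision of the arithmetic, as does \(f(o, o) = -\tfrac{1}{2}\).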

By means of the trigonometrical formula in common use, any powers and products of the cosines of the arc \( \varphi \) and its multiples may be reduced to a series of terms containing the like cosines multiplied by given co-efficients. Therefore, because \( \cos p\varphi = 1 \), and likewise because the cosines of all arcs greater than \( p\varphi, 2p\varphi, 3p\varphi, \text{ &c.} \) may be reduced to the cosines of arcs less than \( p\varphi \), it follows that every rational and integral function of \( \cos \varphi, \cos 2\varphi, \cos 3\varphi, \text{ &c.} \) may be brought under this form of expression, viz.

\[ A + B \cos \varphi + C \cos 2\varphi + D \cos 3\varphi + \ldots + N \cos 2n\varphi. \]

Now, if we suppose the function we are considering to be such, that it retains the same value when any of the multiple arcs \( 2\varphi, 3\varphi, \text{&c.} \) is substituted for \( \varphi \), the transformed expression will be possessed of the same property. But if we actually substitute the arcs \( 2\varphi, 3\varphi, \text{&c.} \) for \( \varphi \) in the foregoing expression, it will become successively

\[ A + B \cos 2\varphi + C \cos 4\varphi + D \cos 6\varphi + \ldots + N \cos 4n\varphi, \quad \text{&c.,} \]

each line containing the same cosines, although in a different order, because the series of arcs is the same when whole circumferences, or the multiples of \( p\varphi \), are rejected; and all these expressions cannot have the same value unless \( B = C = D = \text{ &c.} \); that is, unless the expression be of this form, viz.

\[ A + B (\cos \varphi + \cos 2\varphi + \cos 3\varphi + \ldots + \cos 2n\varphi), \]

which, in consequence of what was before proved, is equal to \( A - B \). It is therefore demonstrated that every rational and integral function of \( \cos \varphi, \cos 2\varphi, \cos 3\varphi, \text{&c.} \), which remains unchanged when any of the multiple arcs \( 2\varphi, 3\varphi, \text{&c.} \) is substituted for \( \varphi \), has for its value an expression without cosines, and depending only upon the nature of the function.

If we introduce the arcs in geometrical instead of those in arithmetical progression, it is obvious that the substitution of the multiple arcs \( 2\varphi, 3\varphi, \text{&c.} \) for \( \varphi \), is equivalent to the changing of \( \varphi \) into \( a\varphi, a^2\varphi, a^3\varphi, \text{&c.} \); and hence any rational and integral function of the cosines of \( \varphi \) and its multiples, which remains invariable when \( \varphi \) is changed into \( a\varphi, a^2\varphi, a^3\varphi, \text{&c.} \), is a quantity independent of the cosines, or has its value expressed by a function from which the cosines are eliminated.

What has now been proved will enable us to appreciate the advantage arising from the introduction of the arcs in geometrical progression in place of those in arithmetical progression, in which principally consists the improvement that this theory owes to M. Gauss. The solution of the problem turns upon finding those functions of \( \cos \varphi, \cos 2\varphi, \cos 3\varphi, \text{&c.} \) which have determinate values independent of the cosines; which functions, it has been proved, remain invariable when any of the multiple arcs \( 2\varphi, 3\varphi, \text{&c.} \) is substituted for \( \varphi \). Now, although the substitution of any multiple arc, in place of the arc itself, always reproduces the same series of cosines, yet the order is irregular, and varies with every different multiple arc; and this circumstance makes it difficult to investigate what change the substitution will effect in a given function. On the other hand, by introducing the arcs in geometrical progression, the same order is still preserved, whatever substitution be made; and by this means every facility possible is obtained for investigating the functions sought.

The following properties are deducible from what has been proved. First, if \( m, m', m'', \text{ &c.} \) be any numbers, none of which is equal to zero, or a multiple of \( n \), and such that their sum is equal to \( n \), or to a multiple of \( n \), the product

\[ f(o, m) \times f(o, m') \times f(o, m''), \text{ &c.} \]

will be independent of the cosines of \( \varphi \) and its multiples, or will be an expression containing only the powers of \( e \) multiplied by numeral co-efficients.

For by the formula (B) we have

\[ e^{-\lambda m} \times f(o, m) = f(\lambda, m) \]

\[ e^{-\lambda m'} \times f(o, m') = f(\lambda, m') \]

\[ e^{-\lambda m''} \times f(o, m'') = f(\lambda, m'') \]

&c. &c.

Therefore, by multiplying and observing that \( e^{-\lambda m} \times e^{-\lambda m'} \times e^{-\lambda m''} \times \text{ &c.} = 1 \), because \( \lambda \times (m + m' + m'' + \text{ &c.}) \) is a multiple of \( n \), we get

\[ f(o, m) \times f(o, m') \times f(o, m'') = f(\lambda, m) \times f(\lambda, m') \times f(\lambda, m''), \text{ &c.} \]

which shows that the product in question is not altered when \( \varphi \) is changed into \( a\varphi \). Consequently, according to what was before proved, the product is independent of the cosines.

It follows, as a corollary, that the product

\[ f(o, m) \times f(o, -m) \]

is independent of the cosines.

Next, if \( m, m', m'', \text{ &c.} \) be any numbers, and \( s = m + m' + m'' + \text{ &c.} \); and if neither \( s \) nor any of the numbers \( m, m', m'', \text{ &c.} \) be a multiple of \( n \), we shall have

\[ f(o, m) \times f(o, m') \times f(o, m'') = M \times f(o, s), \]

the quantity \( M \) being independent of the cosines, and containing only the powers of \( e \) multiplied by numeral co-efficients.

For, by the property already demonstrated, and its corollary, we have

\[ f(o, m) \times f(o, m') \times f(o, m'') \times f(o, -s) = A \]

\[ f(o, s) \times f(o, -s) = A'; \]

\( A \) and \( A' \) being quantities independent of the cosines. Therefore, by exterminating \( f(o, -s) \), we get

\[ f(o, m) \times f(o, m') \times f(o, m'') \times \text{ &c.} = \frac{A}{A'} \cdot f(o, s). \]

The foregoing properties are the foundations of the theory. But it is not enough to establish the principles by a general demonstration; it is also necessary to be able to compute the numerical values that occur in the application to particular problems. Therefore, supposing that \( m \) and \( m' \) are two numbers, and \( s = m + m' \), none of the three numbers \( s, m, m' \) being a multiple of \( n \), it is proposed to find the value of \( A \) in the equation

\[ f(o, m) \times f(o, m') = A \times f(o, s). \quad (2) \]

For this purpose, set down the several terms of \( f(o, m') \) in their order; and below them write the terms of \( f(o, m) \), placing first any term, as \( e^{\lambda m} \cos a^{\lambda} \varphi \), and the rest in their order, in this manner:

\[ \begin{align*} &\cos \varphi + e^{m'} \cos a\varphi + e^{2m'} \cos a^2\varphi + \ldots + e^{(n-1)m'} \cos a^{n-1}\varphi, \\ &e^{\lambda m} \cos a^{\lambda}\varphi + e^{(\lambda+1)m} \cos a^{\lambda+1}\varphi + e^{(\lambda+2)m} \cos a^{\lambda+2}\varphi + \ldots + e^{(\lambda+n-1)m} \cos a^{\lambda+n-1}\varphi. \end{align*} \]

Now, let every term in the lower line be multiplied into that which stands above it; and, separating the factor \( e^{\lambda m} \), which is common to each product, let the symbol \( e^{\lambda m} \times \Psi(\lambda) \) represent the sum of all the products; then

\[ \Psi(\lambda) = \cos \varphi \cos a^{\lambda}\varphi + e^{s} \cos a\varphi \cos a^{\lambda+1}\varphi + e^{2s} \cos a^2\varphi \cos a^{\lambda+2}\varphi + \ldots + e^{(n-1)s} \cos a^{n-1}\varphi \cos a^{\lambda+n-1}\varphi. \]

If we repeat this operation, so as to make every term of the lower line stand first in succession, it is evident that, by this means, every term of \( f(o, m') \) will be multiplied by all the terms of \( f(o, m) \); so that the sum of all the results will be the product sought. We therefore obtain

\[ f(o, m) \times f(o, m') = \Psi(o) + e^{m} \Psi(1) + e^{2m} \Psi(2) + \ldots + e^{(n-1)m} \Psi(n-1). \]

Let \( a^{\lambda} + 1 = w \), and \( a^{\lambda} - 1 = w' \); then, because the product of the cosines of two arcs is equal to half the sum of the cosines of the sum and difference of the two arcs, we shall have

\[ \Psi(\lambda) = \tfrac{1}{2} \left\{ \cos w\varphi + e^{s} \cos aw\varphi + \ldots + e^{(n-1)s} \cos a^{n-1}w\varphi \right\} + \tfrac{1}{2} \left\{ \cos w'\varphi + e^{s} \cos aw'\varphi + \ldots + e^{(n-1)s} \cos a^{n-1}w'\varphi \right\}. \]

In the first place, when \( \lambda = 0, w = 2, w' = 0 \); therefore

\[ \Psi(o) = \tfrac{1}{2} \left\{ \cos 2\varphi + e^{s} \cos 2a\varphi + \ldots + e^{(n-1)s} \cos 2a^{n-1}\varphi \right\} + \tfrac{1}{2} \left\{ 1 + e^{s} + e^{2s} + \ldots + e^{(n-1)s} \right\}. \]

But \( e^{n} = 1 \); and hence \( e^{ns} - 1 = 0 \); or

\[ 0 = (1 - e^s) \left\{ 1 + e^s + e^{2s} + \ldots + e^{(n-1)s} \right\}; \]

and, according to the value assumed for \( e \), the equation

\[ 1 - e^s = 0 \text{ cannot take place when } s \text{ is not a multiple of } n; \]

therefore

\[ 0 = 1 + e^s + e^{2s} + \ldots + e^{(n-1)s}. \]

Now, if we put \( a^{t} = 2 \), the multiples of \( p \) being rejected, we shall get

\[ \Psi(o) = \tfrac{1}{2} f(t, s) = \]

\[ \tfrac{1}{2} \left\{ \cos a^{t}\varphi + e^{s} \cos a^{t+1}\varphi + e^{2s} \cos a^{t+2}\varphi + \ldots + e^{(n-1)s} \cos a^{t+n-1}\varphi \right\}. \]

Therefore, on account of the formula (B), we finally get

\[ \Psi(o) = \tfrac{1}{2} e^{-ts} \times f(o, s). \]

Next, when \( \lambda \) is not equal to zero, let \( h(\lambda) \) and \( h'(\lambda) \) denote the numbers derived from \( \lambda \) by means of the equations

\[ a^{\lambda} + 1 = a^{h(\lambda)}, \]

\[ a^{\lambda} - 1 = a^{h'(\lambda)}, \]

the multiples of \( p \) being rejected as before;

then, by substituting \( a^{h(\lambda)} \) and \( a^{h'(\lambda)} \) for \( w \) and \( w' \), we shall get

\[ \Psi(\lambda) = \tfrac{1}{2} f(h(\lambda), s) + \tfrac{1}{2} f(h'(\lambda), s); \]

and, on account of the formula (B),

\[ \Psi(\lambda) = \left\{ \tfrac{1}{2} e^{-h(\lambda)s} + \tfrac{1}{2} e^{-h'(\lambda)s} \right\} f(o, s). \]

Now, collecting all the parts in the expression of \( f(o, m) \times f(o, m') \), we shall get these formulas, viz.:

\[ \begin{align*} f(o, m) \times f(o, m') &= A \times f(o, s), \\ A = \tfrac{1}{2} e^{-ts} &+ e^{m} \left\{ \tfrac{1}{2} e^{-h(1)s} + \tfrac{1}{2} e^{-h'(1)s} \right\} \\ &+ e^{2m} \left\{ \tfrac{1}{2} e^{-h(2)s} + \tfrac{1}{2} e^{-h'(2)s} \right\} \\ &+ \ldots + e^{(n-1)m} \left\{ \tfrac{1}{2} e^{-h(n-1)s} + \tfrac{1}{2} e^{-h'(n-1)s} \right\}. \end{align*} \]

As nothing changes in the expression of \( A \) except the indices \( m \) and \( s \), it may be denoted by the abridged symbol \( (m, s) \), in which it is obvious that \( m' \) may be substituted for \( m \); so that

\[ A = (m, s) = (m', s). \]
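The whole computation of the coefficient \( A = (m, s) \) may be assembled and tried against equation (2); in the sketch below (for \(p = 11\), \(a = 2\); the helper names `index` and `coeff` are ours) the index \(t\) and the numbers \(h(\lambda), h'(\lambda)\) are found by direct search:

```python
import cmath
from math import cos, pi

p, a = 11, 2                      # prime p, primitive root a
n = (p - 1) // 2
phi = 2 * pi / p                  # the arc 360°/p, in radians
e = cmath.exp(2j * pi / n)        # e = cos τ + √-1 sin τ, τ = 360°/n

def f(lam, m):
    # the period f(λ, m) of the text
    return sum(e**(k * m) * cos(pow(a, lam + k, p) * phi) for k in range(n))

def index(x):
    # the index j with a^j ≡ x (mod p), reduced below n by formula (A)
    j = next(j for j in range(p - 1) if pow(a, j, p) == x % p)
    return j % n

def coeff(m, s):
    # the coefficient A = (m, s) of equation (2)
    t = index(2)                          # a^t ≡ 2 (mod p)
    total = 0.5 * e**(-t * s)
    for lam in range(1, n):
        h = index(pow(a, lam, p) + 1)     # a^h(λ)  ≡ a^λ + 1
        h2 = index(pow(a, lam, p) - 1)    # a^h'(λ) ≡ a^λ - 1
        total += e**(lam * m) * 0.5 * (e**(-h * s) + e**(-h2 * s))
    return total

m1, m2 = 1, 2
lhs = f(0, m1) * f(0, m2)
rhs = coeff(m1, m1 + m2) * f(0, m1 + m2)
```

The two sides agree, and \( (m, s) = (m', s) \) is likewise confirmed numerically.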

When \( s \) is equal to \( n \), and \( m' = n - m \), the product in question becomes \( f(o, m) \times f(o, -m) \), which has been proved to be a quantity independent of the cosines. In this case, therefore, we shall have

\[ f(o, m) \times f(o, -m) = B; \]

\( B \) being a quantity from which the cosines are eliminated, and which is now to be investigated.

If, in the foregoing case, we suppose \( m' = n - m \) and \( s = n \), we shall get

\[ f(o, m) \times f(o, -m) = \]

\[ \Psi(o) + e^{m} \Psi(1) + e^{2m} \Psi(2) + \ldots + e^{(n-1)m} \Psi(n-1); \]

but here, because \( e^s = e^n = 1 \), \( e \) and its powers disappear from the expression of \( \Psi(\lambda) \), and we have

\[ \Psi(\lambda) = \cos \varphi \cos a^{\lambda}\varphi + \cos a\varphi \cos a^{\lambda+1}\varphi + \cos a^2\varphi \cos a^{\lambda+2}\varphi + \ldots + \cos a^{n-1}\varphi \cos a^{\lambda+n-1}\varphi; \]

and, by expanding the products of the cosines, as before,

\[ \Psi(\lambda) = \tfrac{1}{2} \left\{ \cos w\varphi + \cos aw\varphi + \ldots + \cos a^{n-1}w\varphi \right\} + \tfrac{1}{2} \left\{ \cos w'\varphi + \cos aw'\varphi + \ldots + \cos a^{n-1}w'\varphi \right\}. \]

When \( \lambda = 0, w = 2, w' = 0 \); therefore

\[ \Psi(o) = \tfrac{1}{2} \left\{ \cos 2\varphi + \cos 2a\varphi + \cos 2a^2\varphi + \ldots + \cos 2a^{n-1}\varphi \right\} + \tfrac{1}{2} \left\{ 1 + 1 + 1 + \ldots + 1 \right\}. \]

But no alteration is made in equation (1) when we substitute, instead of the arc \( \varphi \), any one of its multiples, or, which is the same thing, change \( \varphi \) into \( 2\varphi, 3\varphi, \text{&c.} \); because such substitution or change continually reproduces the same cosines. Thus it appears that the sum of the \( n \) cosines in \( \Psi(o) \) is equal to \( -\tfrac{1}{2} \); and we have

\[ \Psi(o) = \frac{n}{2} - \frac{1}{4}. \]

For every other value of \( \lambda, w \) and \( w' \) are neither of them equal to zero, nor to a multiple of \( n \); therefore, according to what has just been said, the sum of the \( n \) cosines in each of the two parts of \( \Psi(\lambda) \) is equal to \( -\frac{1}{2} \); and thus, when \( \lambda \) is not equal to zero, we have

\[ \Psi(\lambda) = \tfrac{1}{2} \times \left( -\tfrac{1}{2} \right) + \tfrac{1}{2} \times \left( -\tfrac{1}{2} \right) = -\tfrac{1}{2}. \]

By substituting the values of \( \Psi(o) \) and \( \Psi(\lambda) \), we get

\[ f(o, m) \times f(o, -m) = \frac{n}{2} - \frac{1}{4} - \frac{1}{2} \left( e^m + e^{2m} + e^{3m} + \ldots + e^{(n-1)m} \right). \]

But, as was already proved,

\[ -1 = e^m + e^{2m} + e^{3m} + \ldots + e^{(n-1)m}; \]

wherefore

\[ f(o, m) \times f(o, -m) = \frac{n}{2} - \frac{1}{4} + \frac{1}{2} = \frac{2n + 1}{4} = \frac{1}{4} p. \]

Now, if we put \( k^2 = \frac{1}{4} p \), we have finally

\[ f(o, m) \times f(o, -m) = k^2 \ldots (3). \]
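Equation (3) admits of an immediate numerical check, here for \(p = 11\) and \(m = 1\), where \(k^2 = \tfrac{1}{4}p = 2.75\):

```python
import cmath
from math import cos, pi

p, a = 11, 2
n = (p - 1) // 2
phi = 2 * pi / p
e = cmath.exp(2j * pi / n)

def f(lam, m):
    # the period f(λ, m) of the text
    return sum(e**(k * m) * cos(pow(a, lam + k, p) * phi) for k in range(n))

# equation (3): f(0, m) · f(0, -m) = k² = p/4, a real number
prod = f(0, 1) * f(0, -1)
```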

When \( n \) is an even number, it is obvious that \( f(o, \frac{n}{2}) = f(o, -\frac{n}{2}) \); therefore it follows as a corollary, that in this case

\[ f(o, \frac{n}{2}) = f(o, -\frac{n}{2}) = \pm k = \pm \frac{1}{2} \sqrt{p}. \]

By applying the equation (2), first to the indices \( m \) and \( m' \), and then to the indices \( n-m \) and \( n-m' \), or to \( m \) and \( m' \) taken negatively, we deduce

\[ f(o, m) \times f(o, m') = (m, s) \times f(o, s), \] \[ f(o, -m) \times f(o, -m') = (-m, -s) \times f(o, -s); \]

and by multiplying we shall get, on account of equation (3), this remarkable formula, viz.

\[ (m, s) \times (-m, -s) = k^2 \ldots (4). \]

By successive applications of the equation (2), we get

\[ f(o, 1) \times f(o, 1) = (1, 2) \times f(o, 2), \] \[ f(o, 1) \times f(o, 2) = (1, 3) \times f(o, 3), \] \[ f(o, 1) \times f(o, 3) = (1, 4) \times f(o, 4), \] \[ \text{etc.} \]

By combining these equations, and writing \( P \) for \( f(o, 1) \), we deduce

\[ P^2 = (1, 2) \cdot f(o, 2), \] \[ P^3 = (1, 2) \cdot (1, 3) \cdot f(o, 3), \] \[ P^4 = (1, 2) \cdot (1, 3) \cdot (1, 4) \cdot f(o, 4), \] \[ \text{etc.} \]

Wherefore, when \( n \) is an even number,

\[ P^{\frac{n}{2}} = (1, 2) \cdot (1, 3) \cdot (1, 4) \ldots (1, \frac{n}{2}) \cdot f(o, \frac{n}{2}); \]

and, by squaring, and observing that, by the corollary to equation (3), \( f(o, \frac{n}{2})^2 = k^2 \), we get

\[ P^n = (1, 2)^2 \cdot (1, 3)^2 \cdot (1, 4)^2 \ldots (1, \frac{n}{2})^2 \cdot k^2. \quad (5). \]

When \( n \) is an odd number, we have, in like manner,

\[ P^{\frac{n-1}{2}} = (1, 2) \cdot (1, 3) \cdot (1, 4) \ldots (1, \frac{n-1}{2}) \times \] \[ f(o, \frac{n-1}{2}), \] \[ P^{\frac{n+1}{2}} = (1, 2) \cdot (1, 3) \cdot (1, 4) \ldots (1, \frac{n+1}{2}) \times \] \[ f(o, \frac{n+1}{2}); \]

but, by equation (3), \( f(o, \frac{n-1}{2}) \times f(o, \frac{n+1}{2}) = k^2; \)

wherefore \[ P^n = (1, 2)^2 \cdot (1, 3)^2 \cdot (1, 4)^2 \cdots \left(1, \tfrac{n-1}{2}\right)^2 \cdot \left(1, \tfrac{n+1}{2}\right) \cdot k^2. \quad (6) \]

Again, from the preceding expressions we get

\[ f(o, 2) = \frac{1}{(1, 2)} \cdot P^2; \]

and, by equation (4),

\[ f(o, 2) = \frac{(-1, -2)}{k^2} \cdot P^2. \]

In like manner,

\[ f(o, 3) = \frac{(-1, -2)}{k^2} \cdot \frac{(-1, -3)}{k^2} \cdot P^3, \] \[ f(o, 4) = \frac{(-1, -2)}{k^2} \cdot \frac{(-1, -3)}{k^2} \cdot \frac{(-1, -4)}{k^2} \cdot P^4, \] \[ \text{&c.} \]

These formulæ need only be continued till we obtain the value of the function \( f(o, \frac{n-2}{2}) \) when \( n \) is even, and of \( f(o, \frac{n-1}{2}) \) when \( n \) is odd; the remaining functions \( f(o, n-2), f(o, n-3), \ldots \), etc., or, which is the same thing, \( f(o, -2), f(o, -3), \ldots \), etc., being derived from the preceding values merely by changing the signs of the different indices of \( e \). Thus, if we write \( P' \) for \( f(o, -1) \), we shall have

\[ f(o, -2) = \frac{(1, 2)}{k^2} \cdot P'^2, \] \[ f(o, -3) = \frac{(1, 2)}{k^2} \cdot \frac{(1, 3)}{k^2} \cdot P'^3, \] \[ f(o, -4) = \frac{(1, 2)}{k^2} \cdot \frac{(1, 3)}{k^2} \cdot \frac{(1, 4)}{k^2} \cdot P'^4, \] \[ \text{&c.} \]

Now, \( g \) being any number less than \( n \), it has been shown that

\[ o = 1 + e^g + e^{2g} + e^{3g} + \ldots + e^{(n-1)g}; \]

and hence, if we attend to the nature of the functions \( f(o, o), f(o, 1), f(o, 2), \text{&c.} \), we shall readily get

\[ \cos a^{t}\varphi = \frac{f(o, o)}{n} + \frac{1}{n} \left\{ e^{-t} f(o, 1) + e^{-2t} f(o, 2) + e^{-3t} f(o, 3) + \ldots + e^{-(n-1)t} f(o, n-1) \right\}; \]

or, by arranging the terms differently, and because

\[ f(o, o) = -\tfrac{1}{2}, \]

\[ \begin{align*} \cos a^{t}\varphi = -\frac{1}{2n} &+ \frac{1}{n} \left\{ e^{-t} f(o, 1) + e^{t} f(o, -1) \right\} \\ &+ \frac{1}{n} \left\{ e^{-2t} f(o, 2) + e^{2t} f(o, -2) \right\} \\ &+ \frac{1}{n} \left\{ e^{-3t} f(o, 3) + e^{3t} f(o, -3) \right\} \\ &+ \text{&c.;} \end{align*} \]

and it is to be observed that, when \( n \) is even, the last term is the single quantity \( \frac{1}{n} \times e^{-\frac{n}{2}t} \times f(o, \frac{n}{2}) \), which has no corresponding part. Now, this quantity is entirely known. For, since \( e^n = 1 \), we have \( e^{\frac{n}{2}} = \pm 1 \); but \( e \) has been so assumed, that none of its powers with indices less than \( n \) are equal to unit; and, therefore,

\[ e^{\frac{n}{2}} = -1, \quad \text{and} \quad e^{-\frac{n}{2}t} = (-1)^{t}. \]

Again, by the corollary to equation (3), \( f(o, \frac{n}{2}) = \pm k \); wherefore we have

\[ \frac{1}{n} \times e^{-\frac{n}{2}t} \times f(o, \tfrac{n}{2}) = \frac{1}{n} \times (-1)^{t} \times (\pm k). \]

On the whole, the preceding analysis brings us to the following formulæ, which contain the solution of the problem, viz.

when \( n \) is even, by equation (5),

\[ P^{\frac{n}{2}} = (1, 2) \cdot (1, 3) \cdot (1, 4) \cdots \left(1, \tfrac{n}{2}\right) \times (\pm k); \]

when \( n \) is odd, by equation (6),

\[ P^{n} = (1, 2)^2 \cdot (1, 3)^2 \cdot (1, 4)^2 \cdots \left(1, \tfrac{n-1}{2}\right)^2 \cdot \left(1, \tfrac{n+1}{2}\right) \cdot k^2; \]

and, by equation (3), \( PP' = k^2 \).

Finally, by substituting the values of \( f(o, 2), f(o, 3), \text{&c.} \) in the expression of \( \cos a^{t}\varphi \), we get

\[ \begin{align*} \cos a^{t}\varphi = -\frac{1}{2n} &+ \frac{1}{n} \left\{ e^{-t} P + e^{t} P' \right\} \\ &+ \frac{1}{n} \left\{ \frac{(-1, -2)}{k^2} (e^{-t} P)^2 + \frac{(1, 2)}{k^2} (e^{t} P')^2 \right\} \\ &+ \frac{1}{n} \left\{ \frac{(-1, -2)(-1, -3)}{k^4} (e^{-t} P)^3 + \frac{(1, 2)(1, 3)}{k^4} (e^{t} P')^3 \right\} \\ &+ \text{&c.;} \end{align*} \]

the series of terms must be continued till the last index of \( e^{-t} P \) and \( e^{t} P' \) is \( \frac{n-1}{2} \) when \( n \) is odd, and \( \frac{n-2}{2} \) when \( n \) is even; and, in this last case, the quantity \( \frac{1}{n} \times (-1)^{t} \times (\pm k) \) must be added, prefixing to \( k \) the same sign that is given to it in the value of \( P^{\frac{n}{2}} \).

The solution of the problem is thus reduced to the computation of the functions \( (1, 2), (1, 3), \text{&c.} \), which requires no more than the substitution of \( 1 \) for \( m \), and of \( 2, 3, 4, \text{&c.} \) successively for \( s \), in the expression of \( A \), equation (2).

The half of these functions that have negative indices are deduced from the other half, merely by changing the signs of the several indices of \( e \), or by means of equation (4).

All the cosines sought are found by substituting \( 0, 1, 2, 3, \text{&c.} \) successively for \( t \). Although the function \( P \) is susceptible of \( n \) different values, represented by \( x, ex, e^2x, \ldots, e^{n-1}x \); yet the same cosines are deduced from any one of these values. By this means all ambiguity is avoided with regard to the system of values that represent the cosines; but the numerical value that must be attached to each particular cosine remains quite indeterminate, because \( \varphi \) may equally stand for \( \frac{360^\circ}{p}, 2 \times \frac{360^\circ}{p}, 3 \times \frac{360^\circ}{p}, \text{&c.} \)

The adaptation of the numerical quantities to the geometrical cosines must be made out by means of their relative magnitudes; the largest number answering to the greatest cosine. But when the value of one cosine is fixed, the rest are unambiguously determined by means of their indices.

In the formula for \( \cos a^{t}\varphi \) all the terms in which two quantities are combined have real values, although their forms are imaginary. But it is not difficult to transform them into equivalent quantities without the imaginary sign.

It is manifest that the functions \( (1,2) \) and \( (-1,-2) \) are of this form, viz.

\[ (1,2) = A + Be + Ce^2 + De^3 \ldots + Ne^{n-1}, \]

\[ (-1,-2) = A + Be^{-1} + Ce^{-2} + De^{-3} \ldots + Ne^{-(n-1)}, \]

\( A, B, C, \ldots \) denoting given co-efficients.

But we have generally

\[ e^{\lambda} = \cos \lambda\tau + \sin \lambda\tau \sqrt{-1}, \]

\[ e^{-\lambda} = \cos \lambda\tau - \sin \lambda\tau \sqrt{-1}; \]

wherefore, by combining the two expressions of \( (1,2) \) and \( (-1,-2) \), we shall readily get

\[ \frac{(1,2) + (-1,-2)}{2} = A + B \cos \tau + C \cos 2\tau + \ldots, \]

\[ \frac{(1,2) - (-1,-2)}{2\sqrt{-1}} = B \sin \tau + C \sin 2\tau + \ldots. \]

But, on account of equation (4), we may assume

\[ (1,2) = k(\cos \beta + \sin \beta \sqrt{-1}), \]

\[ (-1,-2) = k(\cos \beta - \sin \beta \sqrt{-1}); \]

and, by substituting these values in the last expressions, we get

\[ k \cos \beta = A + B \cos \tau + C \cos 2\tau + \ldots, \]

\[ k \sin \beta = B \sin \tau + C \sin 2\tau + \ldots, \]

by which means the arc \( \beta \) is determined without ambiguity, since both its sine and cosine are ascertained. In like manner are determined the several arcs in the formulæ

\[ (1,3) = k(\cos \beta' + \sin \beta' \sqrt{-1}), \]

\[ (-1,-3) = k(\cos \beta' - \sin \beta' \sqrt{-1}), \]

\[ (1,4) = k(\cos \beta'' + \sin \beta'' \sqrt{-1}), \]

\[ (-1,-4) = k(\cos \beta'' - \sin \beta'' \sqrt{-1}), \]

\[ \text{&c.} \]

Again, because \( PP' = k^2 \), we may assume

\[ P = k(\cos w + \sin w \sqrt{-1}), \]

\[ P' = k(\cos w - \sin w \sqrt{-1}). \]

And if these values, and the similar values of the functions \( (1,2), (1,3), \text{&c.} \), be substituted in the value of \( P^{\frac{n}{2}} \), we shall readily deduce, when \( n \) is an even number,

\[ \tfrac{n}{2} w = \beta + \beta' + \beta'' + \ldots \]

When \( n \) is an odd number, we must separate the function \( \left(1, \frac{n+1}{2}\right) \) from the rest, by supposing

\[ \left(1, \tfrac{n+1}{2}\right) = k(\cos \gamma + \sin \gamma \sqrt{-1}); \]

and then, by means of equation (6), we shall easily obtain

\[ n w = 2(\beta + \beta' + \beta'' + \ldots) + \gamma. \]

The two last formulae determine the arc \( w \); and we likewise have

\[ \frac{e^{-t}P}{k} = \cos(w - tr) + \sin(w - tr) \sqrt{-1}, \]

\[ \frac{e^{t}P'}{k} = \cos(w - tr) - \sin(w - tr) \sqrt{-1}; \]

and, by putting \( w(t) = w - tr \),

\[ \frac{e^{-t}P}{k} = \cos w(t) + \sin w(t) \sqrt{-1}, \]

\[ \frac{e^{t}P'}{k} = \cos w(t) - \sin w(t) \sqrt{-1}. \]

Finally, by substituting the different values exhibited above in the formula for \( \cos a^t \varphi \), we shall get

\[ \cos a^t \varphi = -\frac{1}{2n} + \frac{2k}{n} \cos w(t) \]

\[ + \frac{2k}{n} \cos(2w(t) - \beta) \]

\[ + \frac{2k}{n} \cos(3w(t) - \beta - \beta') \]

\[ + \frac{2k}{n} \cos(4w(t) - \beta - \beta' - \beta'') + \ldots \]

the series of terms being continued till all the arcs \( \beta, \beta', \beta'', \ldots \) are taken in when \( n \) is odd; and till they are all taken in except the last when \( n \) is even, in which case also the quantity \( (-1)^t \cdot \frac{k}{n} \) must be added.

By the preceding analysis the division of the circle into \( p \) equal parts is accomplished, when \( p \) is a prime number, by dividing a given arc into \( n \) or \( \frac{p-1}{2} \) equal parts. And this conclusion agrees with the general proposition of M. Gauss. For the \( n \)th part of a given arc is found by bisecting as often as \( n \) is divisible by 2, trisecting as often as it is divisible by 3, and so on. When \( n \) is a power of two, as in the case of the polygon of 17 sides, the solution is effected by repeated bisections, and thus comes under the elementary geometry. Supposing the division of the circle to be accomplished, we must further resolve the quadratic equation
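The reduction described above may be stated mechanically; a sketch in Python (our own restatement, counting how often \( n = \frac{p-1}{2} \) is divisible by 2, answering to bisections, by 3, answering to trisections, and so on):

```python
# For a prime p, dividing the circle into p equal parts is reduced to
# dividing an arc into n = (p - 1)/2 equal parts; each factor 2 of n answers
# to a bisection, each factor 3 to a trisection, and so on for the
# remaining small primes.
def arc_divisions(p):
    n = (p - 1) // 2
    counts, m = {}, n
    for d in (2, 3, 5, 7):
        while m % d == 0:
            counts[d] = counts.get(d, 0) + 1
            m //= d
    return n, counts

print(arc_divisions(11))   # (5, {5: 1}) -> one quinquesection
print(arc_divisions(17))   # (8, {2: 3}) -> three bisections only
```

When \( n \) is a power of two, as for the polygon of 17 sides, the dictionary contains only the factor 2, and the construction falls under the elementary geometry, as the text observes.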

\[ x + \frac{1}{x} = 2 \cos \frac{\lambda \times 360^\circ}{p}, \]

in order to find the roots of the binomial equation \( x^p - 1 = 0 \).
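The final quadratic may be exhibited in modern form; a sketch in Python, assuming \( \lambda = 1 \) and \( p = 11 \):

```python
import math

# x + 1/x = 2cos(theta) is the quadratic x^2 - 2cos(theta)x + 1 = 0, whose
# roots are cos(theta) +/- sqrt(-1) sin(theta); either root is then a p-th
# root of unity.
p, lam = 11, 1
c = math.cos(2 * math.pi * lam / p)
x = complex(c, math.sqrt(1 - c * c))   # root of x^2 - 2c x + 1 = 0
print(abs(x**p - 1))                   # vanishes (up to rounding): x^11 = 1
```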

The following examples are subjoined for the sake of illustrating the method of calculation. And, in the first place, we may take the case of \( p = 11 \), equivalent to finding the roots of the equation \( x^{11} - 1 = 0 \), which was first solved by Vandermonde, and has been considered both by Lagrange and Legendre. Here, \( n = 5; \ k = \frac{1}{2} \sqrt{11}; \)

\[ r = \frac{360^\circ}{5} = 72^\circ, \quad \text{and} \quad e = \cos r + \sin r \sqrt{-1}; \]

and, as 2 is a primitive root of 11, we may suppose \( a = 2 \). In order to find the numbers \( h(\lambda) \) and \( h'(\lambda) \), write down the series 1, 2, 3, &c. as far as \( n \) or 5; and, above each number, write the power of \( a \) equal to it when the multiples of 11 are rejected, taking always the least remainder, whether positive or negative: thus,

\[ \begin{array}{ccccc} a^0 & a^1 & a^3 & a^2 & a^4 \\ 1 & 2 & 3 & 4 & 5. \end{array} \]

In this arrangement of the powers of \( a \), it is evident that, \( \lambda \) denoting any index, \( h(\lambda) \) is the index next on the right hand, and \( h'(\lambda) \) the next on the left hand: we have, therefore,

\[ \begin{array}{cc} h(1) = 3, & h'(1) = 0, \\ h(2) = 4, & h'(2) = 3, \\ h(3) = 2, & h'(3) = 1, \\ h(4) = 4, & h'(4) = 2. \end{array} \]
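This table may be generated mechanically; a sketch in Python (our own restatement of the rule, not the author's notation):

```python
# index[x] is the exponent lambda for which a^lambda = x or -x (mod p),
# i.e. the power standing above x when least remainders, positive or
# negative, are taken.  h(lambda) and h'(lambda) are then the indices above
# the numbers immediately to the right and to the left of x.
p, a, n = 11, 2, 5
index = {}
for lam in range(n):
    r = pow(a, lam, p)
    index[min(r, p - r)] = lam

h, hp = {}, {}
for x in range(2, n + 1):
    h[index[x]]  = index[min(x + 1, p - (x + 1))]
    hp[index[x]] = index[min(x - 1, p - (x - 1))]

print(sorted(h.items()))    # [(1, 3), (2, 4), (3, 2), (4, 4)]
print(sorted(hp.items()))   # [(1, 0), (2, 3), (3, 1), (4, 2)]
```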

Now, substitute these numbers in the expression of \( A \), equation (2), and likewise put \( m = 1 \); then

\[ A = \tfrac{1}{2} e + \tfrac{1}{2} e^{3-s} + \tfrac{1}{2} e^{-s} + \tfrac{1}{2} e^{4-2s} + \tfrac{1}{2} e^{3-2s} + \tfrac{1}{2} e^{2-3s} + \tfrac{1}{2} e^{1-3s} + \tfrac{1}{2} e^{4-4s} + \tfrac{1}{2} e^{2-4s}. \]

In order to find (1, 2) and (1, 3), we have only to substitute 2 and 3 for \( s \) in the expression of \( A \); hence

\[ (1, 2) = 1 + 2e + \tfrac{1}{2} e^3 + e^4, \]

\[ (1, 3) = 1 + \tfrac{1}{2} e + 2e^2 + e^3; \]

which values will, in this case, be rendered somewhat more simple by combining them with the equation \( 0 = 1 + e + e^2 + e^3 + e^4 \); and thus we get

\[ (1, 2) = e - e^2 - \tfrac{1}{2} e^3 = m, \]

\[ (1, 3) = e^2 - e^4 - \tfrac{1}{2} e = \mu. \]

The functions \((-1, -2)\) and \((-1, -3)\) are found by subtracting the indices of \( e \) in the values of (1, 2) and (1, 3) from 5, which is equivalent to changing the signs of the indices; therefore

\[ (-1, -2) = e^4 - e^3 - \tfrac{1}{2} e^2 = m', \]

\[ (-1, -3) = e^3 - e - \tfrac{1}{2} e^4 = \mu'. \]

And it will be found, by actually multiplying, that

\[ mm' = \tfrac{11}{4} \quad \text{and} \quad \mu\mu' = \tfrac{11}{4}. \]
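These products may be checked numerically; a sketch in Python, with \( e \) the primitive fifth root of unity:

```python
import cmath

# m and mu as in the text; the accented quantities have the indices of e
# negated (equivalently, subtracted from 5), and so are the complex
# conjugates of m and mu.
e = cmath.exp(2j * cmath.pi / 5)
m,  mu  = e - e**2 - e**3 / 2,  e**2 - e**4 - e / 2
mp, mup = e**4 - e**3 - e**2 / 2,  e**3 - e - e**4 / 2

print((m * mp).real)    # about 2.75 = 11/4
print((mu * mup).real)  # about 2.75 = 11/4
```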

These values being found, we have, according to the foregoing method,

\[ P^5 = (1, 2)^2 \cdot (1, 3) \cdot k^2 = m^2 \mu k^2, \]

\[ P'^5 = \frac{k^{10}}{P^5} = m'^2 \mu' k^2; \]

and hence

\[ \frac{P}{k} = \frac{1}{k}\left(m^2 \mu k^2\right)^{\frac{1}{5}}, \]

\[ \frac{P'}{k} = \frac{1}{k}\left(m'^2 \mu' k^2\right)^{\frac{1}{5}}. \]

Therefore we have

\[ \cos a^t \varphi = -\frac{1}{10} + \frac{e^{-t}}{5} \left(m^2 \mu k^2\right)^{\frac{1}{5}} + \frac{e^{t}}{5} \left(m'^2 \mu' k^2\right)^{\frac{1}{5}} + \frac{e^{-2t}}{5} \left(\mu^2 m' k^2\right)^{\frac{1}{5}} + \frac{e^{2t}}{5} \left(\mu'^2 m k^2\right)^{\frac{1}{5}}. \]

If, in this expression, we make \( t = 0 \), and substitute the numerical values of \( k^2 \), and of \( e \) and its powers, in the quantities under the radical sign, the result will coincide with the formula of Vandermonde, and with the calculation of Lagrange.

The expression just found being imaginary, if it be required to reduce it to a form fit for calculation, we must begin with substituting the values of \( e \) and its powers in \( m \) and \( \mu \); then

\[ m = (\cos r - \cos 2r - \frac{1}{2} \cos 3r) + (\sin r - \sin 2r - \frac{1}{2} \sin 3r) \cdot \sqrt{-1}, \]

\[ \mu = (\cos 2r - \cos 4r - \frac{1}{2} \cos r) + (\sin 2r - \sin 4r - \frac{1}{2} \sin r) \cdot \sqrt{-1}. \]

Now, \( \cos r = \cos 4r = -\frac{1}{4} + \frac{1}{4} \sqrt{5} \), and \( \cos 2r = \cos 3r = -\frac{1}{4} - \frac{1}{4} \sqrt{5} \); also \( \sin r = -\sin 4r \), and \( \sin 2r = -\sin 3r \); therefore

\[ m = (\cos r - \frac{3}{2} \cos 2r) + (\sin r - \frac{1}{2} \sin 2r) \cdot \sqrt{-1}, \]

\[ \mu = (\cos 2r - \frac{3}{2} \cos r) + (\sin 2r + \frac{1}{2} \sin r) \cdot \sqrt{-1}. \]

Again,

\[ m = k(\cos \beta + \sin \beta \cdot \sqrt{-1}), \]

\[ \mu = k(\cos \gamma + \sin \gamma \cdot \sqrt{-1}); \]

consequently

\[ \cos \beta = \frac{1}{k} \left(\cos r - \tfrac{3}{2} \cos 2r\right) = \frac{1 + 5 \sqrt{5}}{4 \sqrt{11}}, \]

\[ \sin \beta = \frac{1}{k} \left(\sin r - \tfrac{1}{2} \sin 2r\right), \]

\[ \cos \gamma = \frac{1}{k} \left(\cos 2r - \tfrac{3}{2} \cos r\right) = \frac{1 - 5 \sqrt{5}}{4 \sqrt{11}}, \]

\[ \sin \gamma = \frac{1}{k} \left(\sin 2r + \tfrac{1}{2} \sin r\right). \]
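The arcs may be recovered numerically from these sines and cosines jointly; a sketch in Python:

```python
import math

# beta and gamma from their sines and cosines; the common positive divisor
# k = sqrt(11)/2 does not affect the arc, and so is omitted in atan2.
r = math.radians(72)

beta = math.degrees(math.atan2(math.sin(r) - math.sin(2 * r) / 2,
                               math.cos(r) - 1.5 * math.cos(2 * r)))
gamma = math.degrees(math.atan2(math.sin(2 * r) + math.sin(r) / 2,
                                math.cos(2 * r) - 1.5 * math.cos(r)))
print(beta, gamma)   # about 23.346 and 140.118 degrees
```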

Hence

\[ \beta = 23^\circ 20' 46'', \]

\[ \gamma = 140^\circ 7' 6'', \]

\[ 5w = 2\beta + \gamma = 186^\circ 48' 38'', \]

\[ w = 37^\circ 21' 44'', \]

\[ w(t) = w - t \cdot 72^\circ; \]

\[ \cos a^t \varphi = -\frac{1}{10} + \frac{\sqrt{11}}{5} \left( \cos w(t) + \cos(2w(t) - \beta) \right). \]

By making \( t \) successively equal to 0, 1, 2, 3, 4, the formula will give all the ten cosines of a polygon of eleven sides inscribed in a circle; because \( \cos \frac{360^\circ}{11} = \cos \frac{10 \times 360^\circ}{11}, \ \cos \frac{2 \times 360^\circ}{11} = \cos \frac{9 \times 360^\circ}{11} \), &c. It determines also the order of the arcs to which the numerical quantities belong; so that when the value of one cosine is fixed, the values of all the rest are likewise ascertained.
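The whole of this last formula may be verified numerically against the cosines computed directly; a sketch in Python, using the arcs found in the text:

```python
import math

# cos(a^t * phi) = -1/10 + (sqrt(11)/5)(cos w(t) + cos(2 w(t) - beta)),
# with w(t) = w - t*72 deg, a = 2, phi = 360/11 deg.
beta = math.radians(23 + 20 / 60 + 46 / 3600)
w    = math.radians(37 + 21 / 60 + 44 / 3600)
phi  = 2 * math.pi / 11

rows = []
for t in range(5):
    wt = w - t * math.radians(72)
    approx = -0.1 + math.sqrt(11) / 5 * (math.cos(wt) + math.cos(2 * wt - beta))
    exact  = math.cos(pow(2, t, 11) * phi)   # cos(a^t * phi) directly
    rows.append((t, approx, exact))
    print(t, round(approx, 6), round(exact, 6))
```

The small residual differences arise only from the arcs being rounded to the nearest second in the text.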

This last formula coincides with the calculation of Legendre.

The next example shall be the case of \( p = 17 \). Then,

\[ n = 8, \quad k = \frac{1}{2} \sqrt{17}, \quad r = \frac{360^\circ}{8} = 45^\circ, \quad \text{and} \quad e = \cos r + \sin r \cdot \sqrt{-1}; \]

and, 3 being one of the primitive roots of 17, we may take \( a = 3 \). Now, arranging the powers of \( a \) as in the last example, we have

\[ \begin{array}{cccccccc} a^0 & a^6 & a^1 & a^4 & a^5 & a^7 & a^3 & a^2 \\ 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8. \end{array} \]

and hence,

\[ \begin{array}{cc} h(1) = 4, & h'(1) = 6, \\ h(2) = 2, & h'(2) = 3, \\ h(3) = 2, & h'(3) = 7, \\ h(4) = 5, & h'(4) = 1, \\ h(5) = 7, & h'(5) = 4, \\ h(6) = 1, & h'(6) = 0, \\ h(7) = 3, & h'(7) = 5. \end{array} \]
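The same mechanical rule reproduces this table; a sketch in Python (our restatement of the rule, packaged as a function):

```python
# index[x] is the exponent of the power of a standing above x, least
# remainders, positive or negative, being taken; h and h' read off the
# indices above the numbers to the right and to the left.
def h_tables(p, a):
    n = (p - 1) // 2
    index = {}
    for lam in range(n):
        r = pow(a, lam, p)
        index[min(r, p - r)] = lam
    h, hp = {}, {}
    for x in range(2, n + 1):
        h[index[x]]  = index[min(x + 1, p - (x + 1))]
        hp[index[x]] = index[min(x - 1, p - (x - 1))]
    return h, hp

h, hp = h_tables(17, 3)
print(sorted(h.items()))    # [(1, 4), (2, 2), (3, 2), (4, 5), (5, 7), (6, 1), (7, 3)]
print(sorted(hp.items()))   # [(1, 6), (2, 3), (3, 7), (4, 1), (5, 4), (6, 0), (7, 5)]
```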

By substituting these numbers in the expression of \( A \), and likewise by putting \( m = 1 \), we get

\[ A = \frac{1}{2} e^{-6s} + \frac{1}{2} e^{1-s} + \frac{1}{2} e^{1-6s} + \frac{1}{2} e^{2-2s} + \frac{1}{2} e^{2-3s} + \frac{1}{2} e^{3-2s} + \frac{1}{2} e^{3-7s} + \frac{1}{2} e^{4-5s} + \frac{1}{2} e^{4-4s} + \frac{1}{2} e^{5-7s} + \frac{1}{2} e^{5-4s} + \frac{1}{2} e^{6-5s} + \frac{1}{2} e^{6-6s} + \frac{1}{2} e^{7-3s} + \frac{1}{2} e^{7-6s}. \]

In order to have the functions (1, 2), (1, 3), (1, 4), nothing more is necessary than to substitute 2, 3, 4 for \( s \) in the expression of \( A \); then, observing that \( 1 + e^4 = 0 \), \( e + e^5 = 0 \), \( e^2 + e^6 = 0 \), \( e^3 + e^7 = 0 \), we readily get

\[ (1, 2) = -\tfrac{3}{2} - e - e^3 = -\tfrac{3}{2} - \sqrt{-2} = -m, \]

\[ (1, 3) = \tfrac{1}{2} - 2e^2 = \tfrac{1}{2} - 2\sqrt{-1} = n, \]

\[ (1, 4) = \tfrac{3}{2} + e + e^3 = \tfrac{3}{2} + \sqrt{-2} = m; \]

and hence

\[ (-1, -2) = -\tfrac{3}{2} - e^7 - e^5 = -\tfrac{3}{2} + \sqrt{-2} = -m', \]

\[ (-1, -3) = \tfrac{1}{2} - 2e^6 = \tfrac{1}{2} + 2\sqrt{-1} = n', \]

\[ (-1, -4) = \tfrac{3}{2} + e^7 + e^5 = \tfrac{3}{2} - \sqrt{-2} = m'. \]

These values being found, we next have

\[ P^4 = (1, 2) \cdot (1, 3) \cdot (1, 4) \cdot (0, 4); \]

but \( (0, 4) = -k \); therefore, making this substitution,

\[ P^4 = m^2nk; \]

\[ P'^4 = \frac{k^8}{P^4} = m'^2 n' k; \]

and hence

\[ \frac{P}{k} = \frac{1}{k}\left(m^2 n k\right)^{\frac{1}{4}}, \quad \frac{P'}{k} = \frac{1}{k}\left(m'^2 n' k\right)^{\frac{1}{4}}. \]