%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Project Gutenberg's Number-System Of Algebra, Treated Theoretically %% %% and Historically, by Henry B. Fine. %% %% %% %% This eBook is for the use of anyone anywhere at no cost and with %% %% almost no restrictions whatsoever. You may copy it, give it away or %% %% re-use it under the terms of the Project Gutenberg License included %% %% with this eBook or online at www.gutenberg.org %% %% %% %% %% %% Packages and substitutions: %% %% %% %% book: Basic book class. Required. %% %% amsmath: Basic AMS math package. Required. %% %% amssymb: Basic AMS symbols package. Required. %% %% graphicx: Basic graphics package. Required. %% %% makeidx: Basic index creation package. Required. %% %% %% %% %% %% Producer's Comments: %% %% %% %% The page numbers in the static table of contents are gathered %% %% with LaTeX page references, hence the file should be compiled %% %% twice to get them right. %% %% %% %% latex (to generate dvi, and from this with appropriate tools %% %% other formats) should work. The book contains 7 illustrations, %% %% all in eps format. %% %% %% %% %% %% Things to Check: %% %% %% %% Spellcheck: OK %% %% LaCheck: OK %% %% Lprep/gutcheck: OK %% %% PDF pages, excl. Gutenberg boilerplate: 95 %% %% PDF pages, incl. Gutenberg boilerplate: 104 %% %% ToC page numbers: OK %% %% Images: 7 image files (one is used twice in text) %% %% Fonts: OK %% %% %% %% %% %% Compile History: %% %% %% %% Feb 06: ss. Compiled with latex and dvipdf %% %% Latex Version 3.14159 (te-latex 2.0.2-198.4) %% %% latex numbersystem.tex %% %% latex numbersystem.tex %% %% dvipdf numbersystem.tex %% %% %% %% Mar 06: jt. Compiled with latex and dvipdfm. 
%% MiKTeX (XP)
%% latex 17920-t
%% latex 17920-t
%% latex 17920-t
%% dvipdfm 17920-t
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\documentclass{book}
\listfiles
\usepackage{amsfonts, amsmath, amssymb, graphicx, makeidx}
\usepackage[latin1]{inputenc}
\pagestyle{plain}

\renewcommand\chapter{\secdef\mychaptera\mychapterb}
\newcommand\mychaptera[2][default]{%
\clearpage%
\newpage%
\refstepcounter{chapter}%
\label{chapter:\thechapter}%
\markboth{\S~\thechapter. #1}{\S~\thechapter. #1}%
\addcontentsline{toc}{chapter}{%
\protect\numberline{\thechapter.}\hspace{2em}#1}%
\vspace{2em}%
\begin{center}\begin{Large}%
\textbf{\thechapter. #2}%
\end{Large}\end{center}%
\vspace{1em}%
}
\newcommand\mychapterb[1]{%
\clearpage%
\newpage%
\markboth{#1}{#1}%
\vspace{2em}%
\begin{center}\begin{Large}%
\textbf{#1}%
\end{Large}\end{center}%
\vspace{1em}%
}
\renewcommand\thechapter{\arabic{chapter}}

\begin{document}

\thispagestyle{empty}

\small
\begin{verbatim}
Project Gutenberg's Number-System of Algebra, by Henry Fine

This eBook is for the use of anyone anywhere at no cost and with
almost no restrictions whatsoever.  You may copy it, give it away or
re-use it under the terms of the Project Gutenberg License included
with this eBook or online at www.gutenberg.org

Title: The Number-System of Algebra (2nd edition)
       Treated Theoretically and Historically

Author: Henry Fine

Release Date: March 4, 2006 [EBook #17920]

Language: English

Character set encoding: TeX

*** START OF THIS PROJECT GUTENBERG EBOOK NUMBER-SYSTEM OF ALGEBRA ***

Produced by Jonathan Ingram, Susan Skinner and the Online
Distributed Proofreading Team at https://www.pgdp.net
\end{verbatim}
\normalsize

\newpage

\begin{titlepage}
\begin{center}
THE\\[2.5cm]
{\LARGE\bfseries NUMBER-SYSTEM OF ALGEBRA}\\[2.5cm]
TREATED THEORETICALLY AND HISTORICALLY\\[2.5cm]
BY\\[2.5cm]
{\large HENRY B.
FINE, PH.~D.}\\ {\small PROFESSOR OF MATHEMATICS IN PRINCETON UNIVERSITY}\\[2.5cm] {\small \textit{SECOND EDITION, WITH CORRECTIONS}}\\[2.5cm] BOSTON, U.~S.~A.\\ D.~C.~HEATH \& CO., PUBLISHERS\\ 1907 \end{center} \end{titlepage} \frontmatter \begin{center} COPYRIGHT, 1890,\\ BY HENRY B.~FINE. \end{center} \newpage \section*{PREFACE.} \small{The theoretical part of this little book is an elementary exposition of the nature of the number concept, of the positive integer, and of the four artificial forms of number which, with the positive integer, constitute the ``number-system'' of algebra, viz.\ the negative, the fraction, the irrational, and the imaginary. The discussion of the artificial numbers follows, in general, the same lines as my pamphlet: \textit{On the Forms of Number arising in Common Algebra}, but it is much more exhaustive and thorough-going. The point of view is the one first suggested by Peacock and Gregory, and accepted by mathematicians generally since the discovery of quaternions and the Ausdehnungslehre of Grassmann, that algebra is completely defined formally by the laws of combination to which its fundamental operations are subject; that, speaking generally, these laws alone define the operations, and the operations the various artificial numbers, as their formal or symbolic results. This doctrine was fully developed for the negative, the fraction, and the imaginary by Hankel, in his \textit{Complexe Zahlensystemen}, in 1867, and made complete by Cantor's beautiful theory of the irrational in 1871, but it has not as yet received adequate treatment in English. Any large degree of originality in work of this kind is naturally out of the question. I have borrowed from a great many sources, especially from Peacock, Grassmann, Hankel, Weierstrass, Cantor, and Thomae (\textit{Theorie der analytischen Functionen einer complexen Veränderlichen}). 
I may mention, however, as more or less distinctive features of my discussion, the treatment of number, counting (§§~1--5), and the equation (§§~4,~12), and the prominence given the laws of the determinateness of subtraction and division. Much care and labor have been expended on the historical chapters of the book. These were meant at the outset to contain only a brief account of the origin and history of the artificial numbers. But I could not bring myself to ignore primitive counting and the development of numeral notation, and I soon found that a clear and connected account of the origin of the negative and imaginary is possible only when embodied in a sketch of the early history of the equation. I have thus been led to write a \textit{résumé} of the history of the most important parts of elementary arithmetic and algebra. Moritz Cantor's \textit{Vorlesungen über die Geschichte der Mathematik}, Vol. I, has been my principal authority for the entire period which it covers, \textit{i.~e.} to 1200 \textsc{a.~d.~} For the little I have to say on the period 1200 to 1600, I have depended chiefly, though by no means absolutely, on Hankel: \textit{Zur Geschichte der Mathematik in Altertum und Mittelalter}. The remainder of my sketch is for the most part based on the original sources. \begin{flushright} HENRY B. FINE. \end{flushright} \textsc{Princeton}, April, 1891. \begin{center} \rule{5.0cm}{0.1mm} \end{center} In this second edition a number of important corrections have been made. But there has been no attempt at a complete revision of the book. \begin{flushright} HENRY B. FINE. 
\end{flushright} \textsc{Princeton}, September, 1902.} \tableofcontents \bigskip \begin{center} \begin{tabular}{lr} \multicolumn{2}{l} {\textbf{PRINCIPAL FOOTNOTES}} \\ Instances of quinary and vigesimal systems of notation \dotfill& \pageref{Instances of quinary and vigesimal systems of notation}\\ Instances of digit numerals \dotfill& \pageref{Instances of digit numerals}\\ Summary of the history of Greek mathematics \dotfill &\pageref{Summary of the history of Greek mathematics}\\ Old Greek demonstration that the side and diagonal of a square are incommensurable \dotfill & \pageref{Old Greek demonstration that the side and diagonal of a square are incommensurable}\\ Greek methods of approximation \dotfill &\pageref{Greek methods of approximation}\\ Diophantine equations\dotfill & \pageref{Diophantine equations}\\ Alchayyâmî's method of solving cubics by the intersections of conics\dotfill&\pageref{Alchayyami method of solving cubics by the intersections of conics}\\ Jordanus Nemorarius \dotfill & \pageref{Jordanus Nemorarius}\\ The \textit{summa} of Luca Pacioli \dotfill & \pageref{The summa of Luca Pacioli}\\ Regiomontanus \dotfill & \pageref{Regiomontanus}\\ Algebraic symbolism \dotfill & \pageref{Jordanus Nemorarius}, \pageref{Algebraic symbolism}\\ The irrationality of $e$ and $\pi$. Lindemann \dotfill & \pageref{irrationality} \end{tabular} \end{center} \mainmatter \part{THEORETICAL} \chapter{THE POSITIVE INTEGER,\\ AND THE LAWS WHICH REGULATE THE ADDITION AND MULTIPLICATION OF POSITIVE INTEGERS.} \addcontentsline{toc}{section}{\numberline{}The number concept} \textbf{1. Number}. We say of certain distinct things that they form a group\footnote{By group we mean \textit{finite} group, that is, one which cannot be brought into one-to-one correspondence (§~2) with any part of itself.} when we make them collectively a single object of our attention. 
The \textit{number of things} in a group is that property of the group which remains unchanged during every change in the group which does not destroy the separateness of the things from one another or their common separateness from all other things. Such changes may be changes in the characteristics of the things or in their arrangement within the group. Again, changes of arrangement may be changes either in the order of the things or in the manner in which they are associated with one another in smaller groups. We may therefore say: \textit{The number of things in any group of distinct things is independent of the characters of these things, of the order in which they may be arranged in the group, and of the manner in which they may be associated with one another in smaller groups.} \addcontentsline{toc}{section}{\numberline{}Numerical equality } \textbf{2. Numerical Equality}. The number of things in any two groups of distinct things is the same, when for each thing in the first group there is one in the second, and reciprocally, for each thing in the second group, one in the first. Thus, the number of letters in the two groups, $A$, $B$, $C$; $D$, $E$, $F$, is the same. In the second group there is a letter which may be assigned to each of the letters in the first: as $D$ to $A$, $E$ to $B$, $F$ to $C$; and reciprocally, a letter in the first which may be assigned to each in the second: as $A$ to $D$, $B$ to $E$, $C$ to $F$. Two groups thus related are said to be in \textit{one-to-one} (1--1) \textit{correspondence}. Underlying the statement just made is the assumption that if the two groups correspond in the manner described for one order of the things in each, they will correspond if the things be taken in any other order also; thus, in the example given, that if $E$ instead of $D$ be assigned to $A$, there will again be a letter in the group $D$, $E$, $F$, viz. $D$ or $F$, for each of the remaining letters $B$ and $C$, and reciprocally. 
This is an immediate consequence of §~1, foot-note. The number of things in the first group is \textit{greater than} that in the second, or the number of things in the second \textit{less than} that in the first, when there is one thing in the first group for each thing in the second, but \textit{not} reciprocally one in the second for each in the first. \addcontentsline{toc}{section}{\numberline{}Numeral symbols } \textbf{3. Numeral Symbols}. As regards the number of things which it contains, therefore, a group may be represented by any other group, \textit{e.~g.} of the fingers or of simple marks, $|$'s, which stands to it in the relation of correspondence described in §~2. This is the primitive method of representing the number of things in a group and, like the modern method, makes it possible to compare numerically groups which are separated in time or space. The modern method of representing the number of things in a group differs from the primitive only in the substitution of symbols, as 1, 2, 3, etc., or numeral words, as \textit{one, two, three}, etc., for the various groups of marks $|$, $||$, $|||$, etc. These symbols are the positive integers of arithmetic. \textit{A positive integer is a symbol for the number of things in a group of distinct things}. For convenience we shall call the positive integer which represents the number of things in any group its numeral symbol, or when not likely to cause confusion, its number simply,---this being, in fact, the primary use of the word ``number'' in arithmetic. In the following discussion, for the sake of giving our statements a general form, we shall represent these numeral symbols by letters, $a$, $b$, $c$, etc. \addcontentsline{toc}{section}{\numberline{}The numerical equation} \textbf{4. 
The Equation.} The numeral symbols of two groups being $a$ and $b$; when the number of things in the groups is the same, this relation is expressed by the \textit{equation} \[ a = b; \] when the first group is greater than the second, by the \textit{inequality} \[ a > b; \] when the first group is less than the second, by the \textit{inequality} \[ a < b. \] \textit{A numerical equation is thus a declaration in terms of the numeral symbols of two groups and the symbol} = \textit{that these groups are in one-to-one correspondence} (§2). \addcontentsline{toc}{section}{\numberline{}Counting} \textbf{5. Counting.} The fundamental operation of arithmetic is counting. To count a group is to set up a one-to-one correspondence between the individuals of this group and the individuals of some representative group. Counting leads to an expression for the number of things in any group in terms of the representative group: if the representative group be the fingers, to a group of fingers; if marks, to a group of marks; if the numeral words or symbols in common use, to one of these words or symbols. There is a difference between counting with numeral words and the earlier methods of counting, due to the fact that the numeral words have a certain recognized order. As in finger-counting one finger is attached to each thing counted, so here one word; but that word represents numerically not the thing to which it is attached, but the entire group of which this is the last. The same sort of counting may be done on the fingers when there is an agreement as to the order in which the fingers are to be used; thus if it were understood that the fingers were always to be taken in normal order from thumb to little finger, the little finger would be as good a symbol for 5 as the entire hand. \addcontentsline{toc}{section}{\numberline{}Addition and its laws} \textbf{6. 
Addition.} If two or more groups of things be brought together so as to form a single group, the numeral symbol of this group is called the \textit{sum} of the numbers of the separate groups. If the sum be $s$, and the numbers of the separate groups $a$, $b$, $c$, etc., respectively, the relation between them is symbolically expressed by the equation \[ s = a + b + c + \textrm{etc.,} \] where the sum-group is supposed to be formed by joining the second group---to which $b$ belongs---to the first, the third group---to which $c$ belongs---to the resulting group, and so on. The operation of finding $s$ when $a$, $b$, $c$, etc., are known, is \textit{addition}. Addition is abbreviated counting. Addition is subject to the two following laws, called the \textit{commutative} and \textit{associative} laws respectively, viz.: \[ \begin{array}{rl} \textrm{I.} & a + b = b + a.\\ \textrm{II.} &a + (b + c) = a + b + c. \end{array} \] Or, \begin{tabular}{rl} I. & To add $b$ to $a$ is the same as to add $a$ to $b$.\\ II. & To add the sum of $b$ and $c$ to $a$ is the same as to add $c$ to the sum of $a$ and $b$. \end{tabular} Both these laws are immediate consequences of the fact that the sum-group will consist of the same individual things, and the number of things in it therefore be the same, whatever the order or the combinations in which the separate groups are brought together (§1). \addcontentsline{toc}{section}{\numberline{}Multiplication and its laws} \textbf{7. Multiplication.} The sum of $b$ numbers each of which is $a$ is called the \textit{product} of $a$ by $b$, and is written $a \times b$, or $a \cdot b$, or simply $ab$. The operation by which the product of $a$ by $b$ is found, when $a$ and $b$ are known, is called \textit{multiplication}. Multiplication is an abbreviated addition. 
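For instance, the product of $4$ by $3$ is the sum of $3$ numbers each of which is $4$,
\[
4 \times 3 = 4 + 4 + 4 = 12,
\]
an abbreviated expression of the count which unites three groups of four things each.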
Multiplication is subject to the three following laws, called respectively the \textit{commutative, associative}, and \textit{distributive} laws for multiplication, viz.:
\begin{tabular}{rl}
III. & $ab = ba$.\\
IV. & $a(bc) = abc$.\\
V. & $a(b + c) = ab + ac$.
\end{tabular}
Or,
\begin{tabular}{rl}
III. & The product of $a$ by $b$ is the same as the product of $b$ by $a$.\\
IV. & The product of $a$ by $bc$ is the same as the product of $ab$ by $c$.\\
V. & The product of $a$ by the sum of $b$ and $c$ is the same as the sum of the product of $a$ by $b$ and of $a$ by $c$.
\end{tabular}
These laws are consequences of the commutative and associative laws for addition. Thus,
III. \textit{The Commutative Law}. The units of the group which corresponds to the sum of $b$ numbers each equal to $a$ may be arranged in $b$ rows containing $a$ units each. But in such an arrangement there are $a$ columns containing $b$ units each; so that if this same set of units be grouped by columns instead of rows, the sum becomes that of $a$ numbers each equal to $b$, or $ba$. Therefore $ab = ba$, by the commutative and associative laws for addition.
IV. \textit{The Associative Law}.
\begin{align*}
abc & = c \ \textrm{sums such as} \ (a + a + \cdots \ \textrm{to} \ b \ \textrm{terms}) \\
& = a + a + a + \cdots \ \textrm{to} \ bc \ \textrm{terms (by the associative law for addition)} \\
& = a(bc).
\end{align*}
V. \textit{The Distributive Law}.
\begin{align*}
a(b + c) & = a + a + a + \cdots \ \textrm{to} \ (b + c) \ \textrm{terms} \\
& = (a + a + \cdots \ \textrm{to} \ b \ \textrm{terms}) + (a + a + \cdots \ \textrm{to} \ c \ \textrm{terms}) \\
& \qquad \textrm{(by the associative law for addition),} \\
& = ab + ac.
\end{align*}
The commutative, associative, and distributive laws for sums of any number of terms and products of any number of factors follow immediately from I--V.
Thus the product of the factors $a$, $b$, $c$, $d$, taken in any two orders, is the same, since the one order can be transformed into the other by successive interchanges of consecutive letters. \chapter{SUBTRACTION AND THE NEGATIVE INTEGER.} \addcontentsline{toc}{section}{\numberline{}Numerical subtraction} \textbf{8. Numerical Subtraction.} Corresponding to every mathematical operation there is another, commonly called its \textit{inverse}, which exactly undoes what the operation itself does. Subtraction stands in this relation to addition, and division to multiplication. To \textit{subtract b} from $a$ is to find a number to which if $b$ be added, the sum will be $a$. The result is written $a - b$; by definition, it identically satisfies the equation VI. $(a - b) + b = a$; that is to say, $a - b$ is the number belonging to the group which with the $b$-group makes up the $a$-group. Obviously subtraction is always possible when $b$ is less than $a$, but then only. Unlike addition, in each application of this operation regard must be had to the relative size of the two numbers concerned. \addcontentsline{toc}{section}{\numberline{}Determinateness of numerical subtraction} \textbf{9. Determinateness of Numerical Subtraction}. Subtraction, when possible, is a \textit{determinate} operation. There is but \textit{one} number which will satisfy the equation $x + b = a$, but one number the sum of which and $b$ is $a$. In other words, $a - b$ is one-valued. For if $c$ and $d$ both satisfy the equation $x + b = a$, since then $c + b = a$ and $d + b = a$, $c + b = d + b$; that is, a one-to-one correspondence may be set up between the individuals of the $(c + b)$ and $(d + b)$ groups (§4). The same sort of correspondence, however, exists between any $b$ individuals of the first group and any $b$ individuals of the second; it must, therefore, exist between the remaining $c$ of the first and the remaining $d$ of the second, or $c = d$. 
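Thus, for instance, but one number satisfies the equation
\[
x + 3 = 8,
\]
namely $5$; the difference $8 - 3$ is one-valued.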
This characteristic of subtraction is of the same order of importance as the commutative and associative laws, and we shall add to the group of laws I--V and definition VI---as being, like them, a fundamental principle in the following discussion---the theorem
VII. $ \qquad \left\{
\begin{array}{rcl}
\textrm{If} \ a + c & = & b + c \\
a & = & b,
\end{array}
\right.$
which may also be stated in the form: If one term of a sum changes while the other remains constant, the sum changes.
The same reasoning proves, also, that
VII$^\prime$. $ \qquad \left\{
\begin{array}{rcl}
\textrm{As} \ a + c > & \textrm{or} & < b + c \\
a > & \textrm{or} & < b,
\end{array}
\right.$
\addcontentsline{toc}{section}{\numberline{}Formal rules of subtraction}
\textbf{10. Formal Rules of Subtraction.} All the rules of subtraction are purely \textit{formal} consequences of the fundamental laws I--V, VII, and definition VI\@. They must follow, whatever the meaning of the symbols $a$, $b$, $c$, $+$, $-$, $=$; a fact which has an important bearing on the following discussion. It will be sufficient to consider the equations which follow. For, properly combined, they determine the result of any series of subtractions or of any complex operation made up of additions, subtractions, and multiplications.
\begin{enumerate}
\item $a - (b + c) = a - b - c = a - c - b$.
\item $a - (b - c) = a - b + c$.
\item $a + b - b = a$.
\item $a + (b - c) = a + b - c = a - c + b$.
\item $a(b - c) = ab - ac$.
\end{enumerate}
For
\begin{enumerate}
\item $a - b - c$ is the form to which if first $c$ and then $b$ be added; or, what is the same thing (by I), first $b$ and then $c$; or, what is again the same thing (by II), $b + c$ at once,---the sum produced is $a$ (by VI). $a - b - c$ is therefore the same as $a - c - b$, which is as it stands the form to which if $b$, then $c$, be added the sum is $a$; also the same as $a - (b + c)$, which is the form to which if $b + c$ be added the sum is $a$.
\item \[
\begin{array}{rlr}
a - (b - c) &= a - (b - c) - c + c, &\textrm{Def. VI.}\\
&= a - (b - c + c) + c, & \textrm{ Eq. 1.}\\
&= a - b + c. & \textrm{Def. VI.}\\
\end{array}
\]
\item \[
\begin{array}{rlr}
a + b - b + b &= a + b.& \textrm{Def. VI.}\\
\textrm{ But }\quad a + b &= a + b.\\
\therefore a + b - b &= a. & \textrm{Law VII.}
\end{array}
\]
\item \[
\begin{array}{rlr}
a + b - c &= a + (b - c + c) - c,& \textrm{Def. VI.}\\
&= a + (b - c). & \textrm{Law II, Eq. 3.}
\end{array}
\]
\item \[
\begin{array}{rlr}
ab - ac &= a(b - c + c)- ac, &\textrm{Def. VI.}\\
&= a(b - c) + ac - ac,& \textrm{Law V.}\\
&= a(b - c).& \textrm{Eq. 3.}
\end{array}
\]
\end{enumerate}
Equation 3 is particularly interesting in that it defines addition as the inverse of subtraction. Equation 1 declares that two consecutive subtractions may change places, are commutative. Equations 1, 2, 4 together supplement law II, constituting with it a complete associative law of addition and subtraction; and equation 5 in like manner supplements law V.
\addcontentsline{toc}{section}{\numberline{}Limitations of numerical subtraction}
\textbf{11. Limitations of Numerical Subtraction}. Judged by the equations 1--5, subtraction is the exact counterpart of addition. It conforms to the same general laws as that operation, and the two could with fairness be made to interchange their rôles of direct and inverse operation. But this equality proves to be only apparent when we attempt to interpret these equations. The requirement that subtrahend be less than minuend then becomes a serious restriction. It makes the range of subtraction much narrower than that of addition. It renders the equations 1--5 available for special classes of values of $a$, $b$, $c$ only.
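Thus, for instance, $3 - (5 - 4)$, which is the number $2$, cannot be reckoned by equation 2 in the form $3 - 5 + 4$; for the intermediate result $3 - 5$ is numerically meaningless, $5$ being greater than $3$.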
If it must be insisted on, even so simple an inference as that $a - (a + b) + 2b$ is equal to $b$ cannot be drawn, and the use of subtraction in any reckoning with symbols whose relative values are not at all times known must be pronounced unwarranted. One is thus naturally led to ask whether to be valid an algebraic reckoning must be interpretable numerically and, if not, to seek to free subtraction and the rules of reckoning with the results of subtraction from a restriction which we have found to be so serious. \addcontentsline{toc}{section}{\numberline{}Symbolic equations} \addcontentsline{toc}{section}{\numberline{}Principle of permanence. Symbolic subtraction} \textbf{12. Symbolic Equations. Principle of Permanence. Symbolic Subtraction.} In pursuance of this inquiry one turns first to the equation $(a - b) + b = a$, which serves as a definition of subtraction when $b$ is less than $a$. This is an equation in the primary sense (§~4) only when $a - b$ is a number. But in the broader sense, that \textit{An equation is any declaration of the equivalence of definite combinations of symbols---equivalence in the sense that one may be substituted for the other,---} $(a - b) + b = a$ may be an equation, whatever the values of $a$ and $b$. And if no different meaning has been attached to $a - b$, and it is declared that $a - b$ is the symbol which associated with $b$ in the combination $(a - b) + b$ is equivalent to $a$, this declaration, or the \textit{equation} \[ (a - b) + b = a, \] is a \textit{definition}\footnote{A definition in terms of symbolic, not numerical addition. The sign + can, of course, indicate numerical addition only when both the symbols which it connects are numbers.} of this symbol. By the assumption of the \textit{permanence of form} of the numerical equation in which the definition of subtraction resulted, one is thus put immediately in possession of a \textit{symbolic} definition of subtraction which is general. 
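So, for example, though $2 - 5$ is not a number, the equation
\[
(2 - 5) + 5 = 2
\]
assigns the symbol $2 - 5$ a perfectly definite meaning: it is that symbol which, added to $5$, yields $2$.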
The numerical definition is subordinate to the symbolic definition, being the interpretation of which it admits when $b$ is less than $a$. But from the standpoint of the symbolic definition, interpretability---the question whether $a - b$ is a number or not---is irrelevant; only such properties may be attached to $a - b$, by itself considered, as flow immediately from the generalized equation \[ (a - b) + b = a. \] In like manner each of the fundamental laws I--V, VII, on the assumption of the \textit{permanence of its form} after it has ceased to be interpretable numerically, becomes a declaration of the equivalence of certain definite combinations of symbols, and the formal consequences of these laws---the equations 1--5 of §~10---become definitions of addition, subtraction, multiplication, and their mutual relations---definitions which are purely symbolic, it may be, but which are unrestricted in their application. These definitions are legitimate from a logical point of view. For they are merely the laws I--VII, and we may assume that these laws are \textit{mutually consistent} since we have proved that they hold good for positive integers. Hence, if \textit{used correctly}, there is no more possibility of their leading to false results than there is of the more tangible numerical definitions leading to false results. The laws of correct thinking are as applicable to mere symbols as to numbers. What the value of these symbolic definitions is, to what extent they add to the power to draw inferences concerning numbers, the elementary algebra abundantly illustrates. One of their immediate consequences is the introduction into algebra of two new symbols, \textit{zero} and the \textit{negative}, which contribute greatly to increase the simplicity, comprehensiveness, and power of its operations. \addcontentsline{toc}{section}{\numberline{}Zero} \textbf{13. 
Zero.} When $b$ is set equal to $a$ in the general equation \[ (a - b) + b = a, \] it takes one of the forms \[ (a - a) + a = a, \] \[ (b - b) + b = b. \] It may be proved that \[ \begin{array}{rlr} a - a &= b - b.\\ \textrm{ For} \quad (a - a) + (a + b) &= (a - a) + a + b, & \textrm{Law II.}\\ &= a + b,\\ \textrm{since} \quad (a - a) + a &= a.\\ \textrm{And}\quad (b - b) + (a + b) &= (b - b) + b + a, &\textrm{ Laws I, II.}\\ & = b + a,\\ \textrm{since}\quad (b - b) + b &= b.\\ \textrm{Therefore}\quad a - a &= b - b. & \textrm{Law VII.} \end{array} \] $a - a$ is therefore altogether independent of $a$ and may properly be represented by a symbol unrelated to a. The symbol which has been chosen for it is 0, called \textit{zero}. \textit{Addition} is defined for this symbol by the equations \begin{enumerate} \item \[ \begin{array}{rlr} 0 + a &= a, & \textrm{definition of 0.}\\ a + 0 &= a. & \textrm{Law I.} \end{array} \] \textit{Subtraction} (partially), by the equation \item \[ \begin{array}{rlr} a - 0 &= a.\\ \textrm{For}\quad (a - 0) + 0 &= a. & \textrm{Def. VI.} \end{array} \] \textit{Multiplication} (partially), by the equations \item \[ \begin{array}{rlr} a \times 0 &= 0 \times a = 0.\\ \textrm{For}\quad a \times 0 &= a (b - b), & \textrm{definition of 0.}\\ &= ab - ab, & \textrm{§~10, 5.}\\ &= 0. & \textrm{definition of 0.} \end{array} \] \end{enumerate} \addcontentsline{toc}{section}{\numberline{}The negative} \textbf{14. The Negative.} When $b$ is greater than $a$, equal say to $a + d$, so that $b - a = d$, then \[ \begin{array}{rlr} a - b &= a - (a + d),\\ &= a - a - d, & \textrm{§~10, 1.}\\ &= 0 - d. & \textrm{definition of 0.} \end{array} \] For $0 - d$ the briefer symbol $-d$ has been substituted; with propriety, certainly, in view of the lack of significance of 0 in relation to addition and subtraction. The equation $0 - d = -d$, moreover, supplies the missing rule of subtraction for 0. (Compare §~13, 2.) 
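For example, when $a = 3$, $b = 5$, so that $d = 2$,
\[
\begin{array}{rlr}
3 - 5 &= 3 - (3 + 2),\\
&= 3 - 3 - 2, & \textrm{§~10, 1.}\\
&= 0 - 2. & \textrm{definition of 0.}
\end{array}
\]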
The symbol $-d$ is called the \textit{negative}, and in opposition to it, the number $d$ is called \textit{positive}. Though in its origin a sign of operation (subtraction from 0), the sign $-$ is here to be regarded merely as part of the symbol $-d$. $-d$ is as serviceable a substitute for $a - b$ when $a < b$, as is a single numeral symbol when $a > b$. The rules for reckoning with the new symbol---definitions of its addition, subtraction, multiplication---are readily deduced from the laws I--V, VII, definition VI, and the equations 1--5 of §~10, as follows: \begin{enumerate} \item \[ \begin{array}{rlr} b + (-b) &= -b + b = 0.\\ \textrm{For} \quad -b + b &= (0 - b) + b,& \textrm{definition of negative.}\\ &= 0. & \textrm{Def. VI.} \end{array} \] $-b$ may therefore be defined as the symbol the sum of which and $b$ is 0. \item \[ \begin{array}{rlr} a + (-b) &= -b + a = a -b.\\ \textrm {For} \quad a + (-b) &= a + (0-b),& \textrm{definition of negative.}\\ &= a + 0 - b, & \textrm{ §~10, 4.}\\ &= a - b. & \textrm {§~13, 1.} \end{array} \] \item \[ \begin{array}{rlr} -a + (-b) &= - (a + b).\\ \textrm{For} \quad -a+ (-b) &= 0 -a-b, & \textrm{by the reasoning in §~14, 2.}\\ &= 0 - (a + b), & \textrm{ §10,1.}\\ &= -(a + b).& \textrm{ definition of negative.} \end{array} \] \item \[ \begin{array}{rlr} a-(-b) &= a + b.\\ \textrm{ For} \quad a-(-b) &= a - (0-b), & \textrm{definition of negative.}\\ & = a -0 + b, &\textrm{ §~10, 2.}\\ & = a + b. & \textrm{§13, 2.} \end{array} \] \item \[ \begin{array}{rlr} (-a) - (-b) &= b - a.\\ \textrm{ For} \quad -a - (-b) &= -a + b,& \textrm{ by the reasoning in §~14, 4.}\\ & = b - a. & \textrm{ §14, 2.}\\ \textrm{COR.} \quad (-a) - (-a) &= 0. \end{array} \] \item \[ \begin{array}{rlr} a(-b) &= (-b)a = -ab.\\ \textrm{For} \quad 0 &= a(b - b), &\textrm{ §13, 3.}\\ &= ab + a(-b).& \textrm{ Law V.}\\ \therefore a(-b) &= -ab. & \textrm { §~14, 1; Law VII.} \end{array} \] \item \[ \begin{array}{lrlr} &(-a)\times0 & = 0\times(-a)=0. 
\\ \text{For} &(-a)\times0 & =(-a)(b-b), & \text{definition of 0}. \\
&& =(-a)b-(-a)b, & \S~10,5. \\
&& =0. & \S~14, 6, \text{and} \; 5, \text{Cor}.\\
\end{array}
\]
\item \[
\begin{array}{lrlr}
& (-a)(-b) & =ab. \\
\text{For} & 0 & =(-a)(b-b), & \S~14, 7.\\
&& = (-a)b+(-a)(-b), & \text{Law V}.\\
&& = -ab+(-a)(-b). & \S~14, 6.\\
&\therefore (-a)(-b) & = ab. & \S~14, 1; \text{Law VII.}
\end{array}
\]
By this method one is led, also, to definitions of \textit{equality} and greater or lesser \textit{inequality} of negatives. Thus
\item \[
\begin{array}{lrlr}
& -a >, & = \; \text{or} \; < -b,\\
\text{according as } & b >, & = \; \text{or} \; < a.\footnotemark[1]\\
\text{For as} & b >, & = \; \text{or} \; < a,\\
& -a+a+b >, & = \; \text{or} \; < -b+b+a, & \S~14, 1; \S~13, 1.\\
\text{or} & -a >, & = \; \text{or} \; < -b. & \text{Law VII or VII$^\prime$}.
\end{array}
\]
In like manner, $-a < 0 < a$.
\end{enumerate}
\addcontentsline{toc}{section}{\numberline{}Formal rules of division }
\textbf{18. Formal Rules of Division.} The fundamental laws of the multiplication of numbers are
\[
\begin{array}{lrl}
\textrm{III.} & ab&=ba,\\
\textrm{IV.} & a(bc)&=abc,\\
\textrm{V.} & a(b+c)&=ab+ac.
\end{array}
\]
Of these laws, of the definition
\[
\textrm{VIII.} \qquad \left(\frac{a}{b}\right)b=a,
\]
of the theorem
\[
\textrm{IX.} \qquad \left\{
\begin{array}{rlr}
\textrm{If } ac&=bc,\\
a&=b, &\textrm{unless } c=0,
\end{array}
\right.
\]
and of the corresponding laws of addition and subtraction, the rules of division are purely \textit{formal} consequences, deducible precisely as the rules of subtraction 1--5 of §10 in the preceding chapter. They follow without regard to the meaning of the symbols $a$, $b$, $c$, $=$, $+$, $-$, $ab$, $\frac{a}{b}$. Thus:
\begin{enumerate}
\item \[
\begin{array}{rlr}
\dfrac{a}{b} \cdot \dfrac{c}{d}& = \dfrac{ac}{bd}.\\
\textrm{ For} \quad \dfrac{a}{b} \cdot \dfrac{c}{d} \cdot bd &= \dfrac{a}{b}b \cdot \dfrac{c}{d}d, & \textrm{ Laws IV, III.}\\
&=ac, &\textrm{Def.
VIII.}\\ \textrm{and} \quad \dfrac{ac}{bd} \cdot bd &=ac. & \textrm{Def. VIII.} \end{array} \] The theorem follows by law IX. \item \[ \begin{array}{rlr} \dfrac{\frac{a}{b}}{\frac{c}{d}}&=\dfrac{ad}{bc}.\\ \textrm{For} \quad \dfrac{\frac{a}{b}}{\frac{c}{d}} \cdot \dfrac{c}{d}&=\dfrac{a}{b}, & \textrm{Def. VIII.}\\ \textrm{and} \quad \dfrac{ad}{bc} \cdot \dfrac{c}{d} &= \dfrac{a}{b} \cdot \dfrac{dc}{cd}, & \textrm{\S~18, 1; Law IV.}\\ &=\dfrac{a}{b},\\ \textrm{since} \quad \dfrac{dc}{cd}&=1, \textrm{ for } dc = 1 \times cd. &\textrm{Def. VIII, Law IX.} \end{array} \] The theorem follows by law IX. \item \[ \begin{array}{rlr} \dfrac{a}{b}\pm \dfrac{c}{d}&=\dfrac{ad\pm bc}{bd}.\\ \textrm{For} \quad \left(\dfrac{a}{b}\pm\dfrac{c}{d}\right)bd &=\dfrac{a}{b}b \cdot d\pm \dfrac{c}{d}d \cdot b, & \textrm{ Laws III--V; \S~10, 5.}\\ &=ad\pm bc, & \textrm{Def. VIII.}\\ \textrm{and} \quad \left(\dfrac{ad\pm bc}{bd}\right)bd& =ad\pm bc. &\textrm{Def. VIII.} \end{array} \] The theorem follows by law IX. By the same method it may be inferred that \item \[ \begin{array}{rlr} \dfrac{a}{b} > , &= , < \dfrac{c}{d},\\ \textrm{as} \quad ad > , &= , < bc.& \textrm{Def. VIII, Laws III, IV, IX, IX'.} \end{array} \] \end{enumerate} \addcontentsline{toc}{section}{\numberline{}Limitations of numerical division } \addcontentsline{toc}{section}{\numberline{}Symbolic division. The fraction} \textbf{19. Limitations of Numerical Division. Symbolic Division. The Fraction.} General as is the form of the preceding equations, they are capable of numerical interpretation only when $\frac{a}{b}$, $\frac{c}{d}$ are numbers, a case of comparatively rare occurrence. The narrow limits set to the quotient in the numerical definition render division an unimportant operation as compared with addition, multiplication, or the generalized subtraction discussed in the preceding chapter. But the way which led to an unrestricted subtraction lies open also to the removal of this restriction; and the reasons for following it there are even more cogent here.
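The formal rules 1--4 of \S~18 admit of mechanical verification with exact rational arithmetic. The following sketch is a modern editorial aside, no part of Fine's text; it uses Python's \texttt{fractions} module, and the sample values of $a$, $b$, $c$, $d$ are arbitrary:

```python
from fractions import Fraction

# Arbitrary sample values; any integers with b and d nonzero would serve.
a, b, c, d = 3, 4, 5, 7
ab, cd = Fraction(a, b), Fraction(c, d)

# Rule 1: (a/b)(c/d) = ac/bd
assert ab * cd == Fraction(a * c, b * d)

# Rule 2: (a/b) / (c/d) = ad/bc
assert ab / cd == Fraction(a * d, b * c)

# Rule 3: a/b +- c/d = (ad +- bc)/bd
assert ab + cd == Fraction(a * d + b * c, b * d)
assert ab - cd == Fraction(a * d - b * c, b * d)

# Test 4: a/b >, =, < c/d according as ad >, =, < bc
assert (ab > cd) == (a * d > b * c)
assert (ab == cd) == (a * d == b * c)
```

Since \texttt{Fraction} always reduces to lowest terms, the equalities above hold exactly, not merely approximately.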
We accept as the quotient of $a$ divided by any number $b$, which is not 0, the symbol $\frac{a}{b}$ defined by the equation \[ \left(\frac{a}{b}\right) b = a, \] regarding this equation merely as a declaration of the equivalence of the symbols $(\frac{a}{b}) b$ and $a$, of the right to substitute one for the other in any reckoning. Whether $\frac{a}{b}$ be a number or not is to this definition irrelevant. When a mere symbol, $\frac{a}{b}$ is called a \textit{fraction}, and in opposition to this a number is called an \textit{integer}. We then put ourselves in immediate possession of definitions of the addition, subtraction, multiplication, and division of this symbol, as well as of the relations of equality and greater and lesser inequality---definitions which are consistent with the corresponding numerical definitions and with one another---by assuming the permanence of form of the equations 1, 2, 3 and of the test 4 of §~18 as symbolic statements, when they cease to be interpretable as numerical statements. The purely symbolic character of $\frac{a}{b}$ and its operations detracts nothing from their legitimacy, and they establish division on a footing of at least formal equality with the other three fundamental operations of arithmetic.\footnote{The doctrine of symbolic division admits of being presented in the very same form as that of symbolic subtraction. The equations of Chapter II immediately pass over into theorems respecting division when the signs of multiplication and division are substituted for those of addition and subtraction; so, for instance, \[ a - (b + c) = a - b - c = a - c - b \text{ gives } \frac{a}{bc} = \frac{(\frac{a}{b})}{c}=\frac{(\frac{a}{c})}{b}. \] In particular, to $(a - a) + a = a$ corresponds $\frac{a}{a}a = a$. Thus a purely symbolic definition may be given of 1. It plays the same rôle in multiplication as 0 in addition.
Again, it has the same exceptional character in involution---an operation related to multiplication quite as multiplication to addition---as 0 in multiplication; for $1^m = 1^n$, whatever the values of $m$ and $n$. Similarly, to the equation $(- a) + a = 0$, or $(0 - a) + a = 0$, corresponds $(\frac{1}{a})a = 1$, which answers as a definition of the unit fraction $\frac{1}{a}$; and in terms of these unit fractions and integers all other fractions may be expressed.} \addcontentsline{toc}{section}{\numberline{}Negative fractions} \textbf{20. Negative Fractions.} Inasmuch as negatives conform to the laws and definitions I--IX, the equations 1, 2, 3 and the test 4 of §18 are valid when any of the numbers $a$, $b$, $c$, $d$ are replaced by negatives. In particular, it follows from the definition of quotient and its determinateness, that \[ \dfrac{a}{-b} = -\dfrac{a}{b}; \dfrac{-a}{b} = -\dfrac{a}{b}; \dfrac{-a}{-b} = \dfrac{a}{b}. \] It ought, perhaps, to be said that the determinateness of division of negatives has not been formally demonstrated. The theorem, however, that if $(\pm a)(\pm c) = (\pm b)(\pm c), \pm a = \pm b$, follows for every selection of the signs $\pm$ from the one selection $+$, $+$, $+$, $+$ by §14, 6, 8. \addcontentsline{toc}{section}{\numberline{}General test of the equality or inequality of fractions} \textbf{21. General Test of the Equality or Inequality of Fractions.} Given any two fractions $\pm \frac{a}{b},\pm \frac{c}{d}$. \[ \begin{array}{rlr} \pm \frac{a}{b} > , &= \text{ or } < \pm \frac{c}{d},\\ \text{ according as } \pm ad > , &= \text{ or } < \pm bc. \\ &\text{Laws IX, IX'.} & \text{ Compare §4, §14, 9.} \end{array} \] \addcontentsline{toc}{section}{\numberline{}Indeterminateness of division by zero} \textbf{22. Indeterminateness of Division by Zero.} Division by 0 does not conform to the law of determinateness; the equations 1, 2, 3 and the test 4 of \S \, 18 are, therefore, not valid when 0 is one of the divisors. 
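The identities of \S~20 and the general test of \S~21 likewise admit of exact verification. A brief sketch in the same modern vein (an editorial aside, not part of the original text), with arbitrary sample values:

```python
from fractions import Fraction

a, b, c, d = 3, 5, 2, 7  # arbitrary positive integers

# Section 20: a/(-b) = -(a/b), (-a)/b = -(a/b), (-a)/(-b) = a/b
assert Fraction(a, -b) == -Fraction(a, b)
assert Fraction(-a, b) == -Fraction(a, b)
assert Fraction(-a, -b) == Fraction(a, b)

# Section 21: +-(a/b) >, =, < +-(c/d) according as +-ad >, =, < +-bc,
# for every selection of the two signs.
for s in (1, -1):
    for t in (1, -1):
        lhs, rhs = Fraction(s * a, b), Fraction(t * c, d)
        assert (lhs > rhs) == (s * a * d > t * b * c)
        assert (lhs < rhs) == (s * a * d < t * b * c)
```

The cross-multiplied comparison is valid here because the denominators $b$, $d$ are positive.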
The symbols $\displaystyle\frac{0}{0},\, \frac{a}{0},$ of which some use is made in mathematics, are indeterminate.\footnote{In this connection see \S \, 32.} 1. $\displaystyle\frac{0}{0}$ is indeterminate. For $\displaystyle\frac{0}{0} $ is completely defined by the equation $\displaystyle\left(\frac{0}{0}\right)0 = 0$; but since $x \times 0 = 0$, whatever the value of $x$, any number whatsoever will satisfy this equation. 2. $\displaystyle\frac{a}{0}$ is indeterminate. For, by definition, $\displaystyle\left(\frac{a}{0}\right)0 = a$. Were $\displaystyle\frac{a}{0}$ determinate, therefore,---since then $\displaystyle \left(\frac{a}{0}\right)0 $ would, by \S \,18, 1, be equal to $\displaystyle\frac{a \times 0}{0},$ or to $\displaystyle\frac{0}{0}$,---the number $a$ would be equal to $\displaystyle\frac{0}{0}$, or indeterminate. \emph{Division by 0 is not an admissible operation.} \addcontentsline{toc}{section}{\numberline{}Determinateness of symbolic division} \textbf{23. Determinateness of Symbolic Division.} This exception to the determinateness of division may seem to raise an objection to the legitimacy of assuming---as is done when the demonstrations 1--4 of \S \, 18 are made to apply to symbolic quotients---that symbolic division is determinate. It must be observed, however, that $\displaystyle\frac{0}{0}$, $\displaystyle\frac{a}{0}$ are indeterminate in the \textit{numerical} sense, whereas by the determinateness of symbolic division is, of course, not meant actual numerical determinateness, but ``symbolic determinateness,'' conformity to law IX, taken merely as a symbolic statement. For, as has been already frequently said, from the present standpoint the \emph{fraction} $\displaystyle\frac{a}{b}$ is a mere symbol, altogether without numerical meaning apart from the equation $\displaystyle\left(\frac{a}{b}\right)b=a$, with which, therefore, the property of numerical determinateness has no possible connection.
The same is true of the product, sum or difference of two fractions, and of the quotient of one fraction by another. As for symbolic determinateness, it needs no justification when assumed, as in the case of the fraction and the demonstrations 1--4, of symbols whose definitions do not preclude it. The inference, for instance, that because \begin{align*} \left( \frac{a}{b}\frac{c}{d} \right)bd & = \left(\frac{ac}{bd}\right)bd, \\ \frac{a}{b} \frac{c}{d} & = \frac{ac}{bd}, \end{align*} \noindent which depends on this principle of symbolic determinateness, is of precisely the same character as the inference that \[ \left(\frac{a}{b}\frac{c}{d}\right)bd=\frac{a}{b}b \cdot \frac{c}{d}d, \] \noindent which depends on the associative and commutative laws. Both are pure assumptions made of the \textit{undefined} symbol $\displaystyle \frac{a}{b} \frac{c}{d}$ for the sake of securing it a definition identical in form with that of the product of two numerical quotients.\footnote{These remarks, \textit{mutatis mutandis}, apply with equal force to subtraction.} \addcontentsline{toc}{section}{\numberline{}The vanishing of a product} \textbf{24. The Vanishing of a Product.} It has already been shown (\S~13, 3, \S~14, 7, \S~18, 1) that the sufficient condition for the vanishing of a product is the vanishing of one of its factors. From the determinateness of division it follows that this is also the necessary condition, that is to say: \textit{If a product vanish, one of its factors must vanish.} Let $xy = 0$, where $x$, $y$ may represent numbers or any of the symbols we have been considering. \begin{flalign*} &\text{\indent Since }& xy &= 0, && \\ && xy + xz &= xz, &\text{ \S 13, 1.}& \\ &\text{or }& x(y + z) &= xz, &\text{ Law V.}& \\ &\text{whence, if $x$ be not $0$, }& y + z &= z, &\text{ Law IX.}& \\ &\text{or }& y &= 0. &\text{ Law VII.}& \end{flalign*} \addcontentsline{toc}{section}{\numberline{}The system of rational numbers } \textbf{25.
The System of Rational Numbers.} Three symbols, $0$, $-d$, $\frac{a}{b}$, have thus been found which can be reckoned with by the same rules as numbers, and in terms of which it is possible to express the result of every addition, subtraction, multiplication or division, whether performed on numbers or on these symbols themselves; therefore, also, the result of any complex operation which can be resolved into a finite combination of these four operations. Inasmuch as these symbols play the same r\^ole as numbers in relation to the fundamental operations of arithmetic, it is natural to class them with numbers. The word ``number,'' originally applicable to the positive integer only, has come to apply to zero, the negative integer, the positive and negative fraction also, this entire group of symbols being called the system of \emph{rational numbers}.\footnote{It hardly need be said that the fraction, zero, and the negative actually made their way into the number-system for quite a different reason from this;---because they admitted of certain ``real'' interpretations, the fraction in measurements of lines, the negative in debit where the corresponding positive meant credit or in a length measured to the left where the corresponding positive meant a length measured to the right. Such interpretations, or correspondences to existing things which lie entirely outside of pure arithmetic, are ignored in the present discussion as being irrelevant to a pure arithmetical doctrine of the artificial forms of number.} This involves, of course, a radical change of the number concept, in consequence of which numbers become merely part of the symbolic equipment of certain operations, admitting, for the most part, of only such definitions as these operations lend them. In accepting these symbols as its numbers, arithmetic ceases to be occupied exclusively or even principally with the properties of numbers in the strict sense. 
It becomes an \emph{algebra}, whose immediate concern is with certain operations defined, as addition by the equations $a + b = b + a$, $a + (b + c) = a + b + c$, formally only, without reference to the meaning of the symbols operated on.\footnote{The word ``algebra'' is here used in the general sense, the sense in which \emph{quaternions} and the \textit{Ausdehnungslehre} (see \S\S~127, 128) are algebras. Inasmuch as elementary arithmetic, as actually constituted, accepts the fraction, there is no essential difference between it and elementary algebra with respect to the kinds of number with which it deals; algebra merely goes further in the use of artificial numbers. The elementary algebra differs from arithmetic in employing literal symbols for numbers, but chiefly in making the equation an object of investigation.} \chapter{THE IRRATIONAL.} \addcontentsline{toc}{section}{\numberline{}Inadequateness of the system of rational numbers } \textbf{26. The System of Rational Numbers Inadequate.} The system of rational numbers, while it suffices for the four fundamental operations of arithmetic and finite combinations of these operations, does not fully meet the needs of algebra. The great central problem of algebra is the equation, and that only is an adequate number-system for algebra which supplies the means of expressing the roots of all possible equations. The system of rational numbers, however, is equal to the requirements of equations of the first degree only; it contains symbols not even for the roots of such elementary equations of higher degrees as $x^2 = 2$, $x^2 = -1$. But how is the system of rational numbers to be enlarged into an algebraic system which shall be adequate and at the same time sufficiently simple?
The roots of the equation \[ x^{n} + p_{1}x^{n-1} + p_{2}x^{n-2} + \dotsb + p_{n-1}x + p_{n} = 0 \] are not the results of single elementary operations, as are the negative of subtraction and the fraction of division; for though the roots of the quadratic are results of ``evolution,'' and the same operation often enough repeated yields the roots of the cubic and biquadratic also, it fails to yield the roots of higher equations. A system built up as the rational system was built, by accepting indiscriminately every new symbol which could show cause for recognition, would, therefore, fall in pieces of its own weight. The most general characteristics of the roots must be discovered and defined and embodied in symbols---by a method which does not depend on processes for solving equations. These symbols, of course, however characterized otherwise, must stand in consistent relations with the system of rational numbers and their operations. An investigation shows that the forms of number necessary to complete the algebraic system may be reduced to two: the symbol $\displaystyle\sqrt{-1}$, called the \textit{imaginary} (an indicated root of the equation $x^2 + 1 = 0$), and the class of symbols called \textit{irrational}, to which the roots of the equation $x^2-2=0$ belong. \addcontentsline{toc}{section}{\numberline{}Numbers defined by ``regular sequences.'' The irrational} \textbf{27. Numbers Defined by Regular Sequences. The Irrational.} On applying to 2 the ordinary method for extracting the square root of a number, there is obtained the following sequence of numbers, the results of carrying the reckoning out to 0, 1, 2, 3, 4, \ldots places of decimals, viz.: \[ 1, 1.4, 1.41, 1.414, 1.4142,\; \ldots \] These numbers are rational; the first of them differs from each that follows it by less than 1, the second by less than $\displaystyle\frac{1}{10}$, the third by less than $\displaystyle\frac{1}{100}$, \ldots the $n$th by less than $\displaystyle\frac{1}{10^{n-1}}$. 
And $\displaystyle\frac{1}{10^{n-1}}$ is a fraction which may be made less than any assignable number whatsoever by taking $n$ great enough. This sequence may be regarded as a definition of the square root of 2. It is such in the sense that a term may be found in it the square of which, as well as of each following term, differs from 2 by less than any assignable number. \textit{Any sequence of rational numbers} \[\alpha_1,\alpha_2,\alpha_3,\cdots,\alpha_{\mu},\alpha_{\mu+1},\cdots,\alpha_{\mu+\nu},\cdots\] \textit{in which, as in the above sequence, the term $\alpha_{\mu}$ may, by taking $\mu$ great enough, be made to differ numerically from each term that follows it by less than any assignable number $\delta$, so that, for all values of $\nu$, the difference, $\alpha_{\mu+\nu}-\alpha_{\mu}$, is numerically less than $\delta$, however small $\delta$ be taken, is called a regular sequence.} The entire class of operations which lead to regular sequences may be called \textit{regular sequence-building}. Evolution is only one of many operations belonging to this class. \textit{Any regular sequence is said to ``define a number,''}---this ``number'' being merely the symbolic, ideal, result of the operation which led to the sequence. It will sometimes be convenient to represent numbers thus defined by the single letters $a$, $b$, $c$, etc., which have heretofore represented positive integers only. After some particular term all terms of the sequence $\alpha_1$, $\alpha_2,\cdots$ may be the same, say $\alpha$. The number defined by the sequence is then $\alpha$ itself. A place is thus provided for rational numbers in the general scheme of numbers which the definition contemplates. When not a rational, the number defined by a regular sequence is called \textit{irrational}.
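The sequence 1, 1.4, 1.41, 1.414, \ldots\ and its regularity can be exhibited concretely. In the following sketch (a modern editorial aside, no part of Fine's text) each term is the truncation of the square root of 2 to $n$ decimal places, computed as an exact rational:

```python
from fractions import Fraction
from math import isqrt

def sqrt2_term(n):
    """sqrt(2) truncated to n decimal places, as an exact rational."""
    return Fraction(isqrt(2 * 10 ** (2 * n)), 10 ** n)

terms = [sqrt2_term(n) for n in range(8)]  # 1, 1.4, 1.41, 1.414, ...

# Regularity: the (n+1)th term differs from every later term
# by less than 1/10^n.
for n in range(len(terms)):
    for m in range(n, len(terms)):
        assert abs(terms[m] - terms[n]) < Fraction(1, 10 ** n)

# The squares of the terms differ from 2 by less than any assignable
# number from some term on (here bounded by 4/10^n).
for n in range(len(terms)):
    assert 0 <= 2 - terms[n] ** 2 < Fraction(4, 10 ** n)
```

Because each term is a truncation from below, every term lies within $10^{-n}$ of the next and of all that follow, which is exactly the condition for a regular sequence.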
The regular sequence .3, .33, \ldots, has a \textit{limiting value}, viz., $\displaystyle\frac{1}{3}$; which is to say that a term can be found in this sequence which itself, as well as each term which follows it, differs from $\displaystyle\frac{1}{3}$ by less than any assignable number. In other words, the difference between $\displaystyle\frac{1}{3}$ and the $\mu$th term of the sequence may be made less than any assignable number whatsoever by taking $\mu$ great enough. It will be shown presently that the number defined by any regular sequence, $\alpha_{1}$, $\alpha_{2},\cdots$ stands in this same relation to its term $\alpha_{\mu}$. \addcontentsline{toc}{section}{\numberline{}Generalized definitions of zero, positive, negative } \textbf{28. Zero, Positive, Negative.} In any regular sequence $\alpha_{1}, \alpha_{2}, \cdots$ a term $\alpha_{\mu}$ may always be found which itself, as well as each term which follows it, is either (1) numerically less than any assignable number,\\ or (2) greater than some definite positive rational number,\\ or (3) less than some definite negative rational number.\\ In the first case the number $a$, which the sequence defines, is said to be \emph{zero}, in the second \emph{positive}, in the third \emph{negative}. \addcontentsline{toc}{section}{\numberline{}Of the four fundamental operations} \textbf{29. 
The Four Fundamental Operations.} \textit{Of the numbers defined by the two sequences:} \begin{align*} &\alpha_{1},\alpha_{2},\alpha_{3},\cdots,\alpha_{\mu}, \alpha_{\mu+1},\cdots,\alpha_{\mu+\nu},\cdots, \\ &\beta_{1},\beta_{2},\beta_{3},\cdots,\beta_{\mu}, \beta_{\mu+1},\cdots,\beta_{\mu+\nu},\cdots \end{align*} (1) \textit{The sum is the number defined by the sequence:} \[\alpha_{1}+\beta_{1},\alpha_{2}+\beta_{2},\cdots \alpha_{\mu}+\beta_{\mu},\alpha_{\mu+1}+\beta_{\mu+1},\cdots \alpha_{\mu+\nu}+\beta_{\mu+\nu},\cdots\] (2) \textit{The difference is the number defined by the sequence:} \[\alpha_{1}-\beta_{1},\alpha_{2}-\beta_{2},\cdots \alpha_{\mu}-\beta_{\mu},\alpha_{\mu+1}-\beta_{\mu+1},\cdots \alpha_{\mu+\nu}-\beta_{\mu+\nu},\cdots\] (3) \textit{The product is the number defined by the sequence:} \[\alpha_{1}\beta_{1},\alpha_{2}\beta_{2},\cdots \alpha_{\mu}\beta_{\mu},\alpha_{\mu+1}\beta_{\mu+1},\cdots \alpha_{\mu+\nu}\beta_{\mu+\nu},\cdots\] (4) \textit{The quotient is the number defined by the sequence:} \[\frac{\alpha_{1}}{\beta_{1}}, \frac{\alpha_{2}}{\beta_{2}},\cdots \frac{\alpha_{\mu}}{\beta_{\mu}}, \frac{\alpha_{\mu+1}}{\beta_{\mu+1}},\cdots \frac{\alpha_{\mu+\nu}}{\beta_{\mu+\nu}},\cdots\] For these definitions are consistent with the corresponding definitions for rational numbers; they reduce to these elementary definitions, in fact, whenever the sequences $\alpha_1$, $\alpha_2, \ldots$; $\beta_1$, $\beta_2, \ldots$ either reduce to the forms $\alpha$, $\alpha,\ldots$; $\beta$, $\beta, \ldots$ or have rational limiting values. They conform to the fundamental laws I--IX\@. This is immediately obvious with respect to the commutative, associative, and distributive laws, the corresponding terms of the two sequences $\alpha_1\beta_1$, $\alpha_2\beta_2,\ldots$; $\beta_1\alpha_1$, $\beta_2\alpha_2, \ldots$, for instance, being identically equal, by the commutative law for rationals. But again division as just defined is determinate. 
For division can be indeterminate only when a product may vanish without either factor vanishing (cf. \S~24); whereas $\alpha_1\beta_1$, $\alpha_2\beta_2,\ldots$ can define 0, or its terms after the $n$th fall below any assignable number whatsoever, only when the same is true of one of the sequences $\alpha_1$, $\alpha_2, \ldots$; $\beta_1$, $\beta_2, \ldots$\footnote{It is worth noticing that the determinateness of division is here not an independent assumption, but a consequence of the definition of multiplication and the determinateness of the division of rationals. The same thing is true of the other fundamental laws I--V, VII. } It only remains to prove, therefore, that the sequences (1), (2), (3), (4) are qualified to define numbers (\S~27). (1) and (2) Since the sequences $\alpha_1$, $\alpha_2,\ldots$; $\beta_1$, $\beta_2,\ldots$ are, by hypothesis, such as define numbers, corresponding terms in the two, $\alpha_\mu$, $\beta_\mu$ may be found, such that \begin{tabular}{ll} & $\alpha_{\mu+\nu}-\alpha_\mu$ \; is numerically \; $< \delta$, \\ and & $ \; \beta_{\mu+\nu}-\beta_\mu$ \; is numerically \; $ < \delta$, \\ and, therefore, & $ \; (\alpha_{\mu+\nu} \pm \beta_{\mu+\nu})-(\alpha_\mu \pm \beta_\mu)$ \; is numerically \; $< 2\delta$,\\ \end{tabular} \noindent for all values of $\nu$, and that however small $\delta$ may be. Therefore each of the sequences $\alpha_1+\beta_1$, $\alpha_2+\beta_2,\ldots$; $\alpha_1-\beta_1$, $\alpha_2-\beta_2,\ldots$ is regular. (3) Let $\alpha_\mu$ and $\beta_\mu$ be chosen as before. Then $\alpha_{\mu+\nu}\beta_{\mu+\nu} - \alpha_\mu \beta_\mu$, since it is identically equal to \[ \alpha_{\mu+\nu}(\beta_{\mu+\nu}-\beta_\mu) + \beta_\mu(\alpha_{\mu+\nu}-\alpha_\mu), \] is numerically less than $\alpha_{\mu+\nu}\delta+\beta_\mu\delta$, and may, therefore, be made less than any assignable number by taking $\delta$ small enough; and that for all values of $\nu$. Therefore the sequence $\alpha_1\beta_1, \alpha_2\beta_2,\ldots$ is regular.
\[ (4) \qquad \frac{\alpha_{\mu+\nu}}{\beta_{\mu+\nu}}-\frac{\alpha_\mu}{\beta_\mu} = \frac{\alpha_{\mu+\nu}\beta_\mu-\beta_{\mu+\nu}\alpha_\mu}{\beta_{\mu+\nu}\beta_\mu}, \] which is identically equal to \[ \frac{\beta_{\mu+\nu}(\alpha_{\mu+\nu}-\alpha_\mu)-\alpha_{\mu+\nu}(\beta_{\mu+\nu}-\beta_\mu)}{\beta_{\mu+\nu}\beta_\mu}. \] By choosing $\alpha_\mu$ and $\beta_\mu$ as before the numerator of this fraction, and therefore the fraction itself, may be made less than any assignable number; and that for all values of $\nu$. Therefore the sequence $\displaystyle\frac{\alpha_1}{\beta_1}, \frac{\alpha_2}{\beta_2}, \ldots$ is regular. \addcontentsline{toc}{section}{\numberline{}Of equality and greater and lesser inequality } \textbf{30. Equality. Greater and Lesser Inequality.} \textit{Of two numbers, $a$ and $b$, defined by regular sequences $\alpha_1, \alpha_2,\ldots$; $\beta_1,\beta_2, \ldots$, the first is greater than, equal to or less than the second, according as the number defined by $\alpha_1-\beta_1, \alpha_2-\beta_2,\ldots$ is greater than, equal to or less than $0$.} This definition is to be justified exactly as the definitions of the fundamental operations on numbers defined by regular sequences were justified in \S~29. From this definition, and the definition of $0$ in \S~28, it immediately follows that COR. \textit{Two numbers which differ by less than any assignable number are equal.} \addcontentsline{toc}{section}{\numberline{}The number defined by a regular sequence its limiting value } \textbf{31. The Number Defined by a Regular Sequence is its Limiting Value.} The difference between a number $a$ and the term $\alpha_{\mu}$ of the sequence by which it is defined may be made less than any assignable number by taking $\mu$ great enough.
For it is only a restatement of the definition of a regular sequence $\alpha_1,\alpha_2,\ldots$ to say that the sequence \[ \alpha_1-\alpha_{\mu},\alpha_2-\alpha_{\mu},\ldots,\alpha_{\mu+\nu}-\alpha_\mu,\ldots, \] which defines the difference $a-\alpha_{\mu}$ (\S~29, 2), is one whose terms after the $\mu$th can be made less than any assignable number by choosing $\mu$ great enough, and which, therefore, becomes, as $\mu$ is indefinitely increased, a sequence which defines 0 (\S~28). In other words, the \textit{limit} of $a-\alpha_{\mu}$ as $\mu$ is indefinitely increased is 0, or $a=\text{limit}\,(\alpha_{\mu})$. Hence \textit{The number defined by a regular sequence is the limit to which the $\mu$th term of this sequence approaches as $\mu$ is indefinitely increased.}\footnote{What the above demonstration proves is that $a$ stands in the same relation to $\alpha_{\mu}$ when irrational as when rational. The principle of permanence (cf. \S~12), therefore, justifies one in regarding $a$ as the ideal limit in the former case since it is the actual limit in the latter (\S~27). $a$, when irrational, is limit $(\alpha_{\mu})$ in precisely the same sense that $\displaystyle\frac{c}{d}$ is the quotient of $c$ by $d$, when $c$ is a positive integer not containing $d$. It follows from the demonstration that if there be a reality corresponding to $a$, as in geometry we assume there is (\S~40), that reality will be the actual limit of the reality of the same kind corresponding to $\alpha_{\mu}$. 
The notion of irrational limiting values was not immediately available because, prior to \S\S~28, 29, 30, the meaning of difference and greater and lesser inequality had not been determined for numbers defined by sequences.} The definitions (1), (2), (3), (4) of \S~29 may, therefore, be stated in the form: \begin{equation*} \begin{aligned} & \text{limit}\,(\alpha_{\mu}) \pm \text{limit}\,(\beta_{\mu}) &= &\text{limit}\,(\alpha_{\mu}\pm\beta_{\mu}),\\ & \text{limit}\,(\alpha_{\mu})\cdot\text{limit}\,(\beta_{\mu}) &= & \text{limit}\,(\alpha_{\mu}\beta_{\mu}),\\ & \frac{\text{limit}\,(\alpha_{\mu})}{\text{limit}\,(\beta_{\mu})} &=& \text{limit}\,\left(\frac{\alpha_{\mu}}{\beta_{\mu}}\right).\\ \end{aligned} \end{equation*} For limit ($\alpha_{\mu}$) the more complete symbol $\displaystyle\lim_{\mu\doteq\infty}(\alpha_{\mu})$ is also used, read ``the limit which $\alpha_{\mu}$ approaches as $\mu$ approaches infinity''; the phrase ``approaches infinity'' meaning only, ``becomes greater than any assignable number.'' \addcontentsline{toc}{section}{\numberline{}Division by zero } \textbf{32. Division by Zero.} (1) The sequence $\displaystyle\frac{\alpha_1}{\beta_1},\frac{\alpha_2}{\beta_2},\ldots$ cannot define a number when the number defined by $\beta_1,\beta_2,\ldots$ is 0, unless the number defined by $\alpha_1,\alpha_2,\ldots$ be also 0. In this case it may; $\displaystyle\frac{\alpha_{\mu}}{\beta_{\mu}}$ may approach a definite limit as $\mu$ increases, however small $\alpha_{\mu}$ and $\beta_{\mu}$ become. But this number is not to be regarded as the mere quotient $\displaystyle\frac{0}{0}$. 
Its value is not at all determined by the fact that the numbers defined by $\alpha_1,\alpha_2,\ldots$; $\beta_1,\beta_2,\ldots$ are 0; for there is an indefinite number of different sequences which define 0, and by properly choosing $\alpha_1,\alpha_2,\ldots$; $\beta_1,\beta_2,\ldots$ from among them, the terms of the sequence $\displaystyle\frac{\alpha_1}{\beta_1},\frac{\alpha_2}{\beta_2},\ldots$ may be made to take any value whatsoever. (2) The sequence $\displaystyle\frac{\alpha_1}{\beta_1},\frac{\alpha_2}{\beta_2},\ldots$ is not regular when $\beta_1,\beta_2,\ldots$ defines 0 and $\alpha_1,\alpha_2,\ldots$ defines a number different from 0. No term $\displaystyle\frac{\alpha_{\mu}}{\beta_{\mu}}$ can be found which differs from the terms following it by less than any assignable number; but rather, by taking $\mu$ great enough, $\displaystyle\frac{\alpha_{\mu}}{\beta_{\mu}}$ can be made greater than any assignable number whatsoever. Though not regular and though they do not define numbers, such sequences are found useful in the higher mathematics. They may be said to define \textit{infinity}. Their usefulness is due to their determinate form, which makes it possible to bring them into combination with other sequences of like character or even with regular sequences. Thus the quotient of any regular sequence $\gamma_1,\gamma_2,\ldots$ by $\displaystyle\frac{\alpha_1}{\beta_1}, \frac{\alpha_2}{\beta_2}, \ldots$ is a regular sequence and defines 0; and the quotient of $\displaystyle\frac{\alpha_1}{\beta_1}, \frac{\alpha_2}{\beta_2},\ldots$ by a similar sequence $\displaystyle\frac{\gamma_1}{\delta_1}, \frac{\gamma_2}{\delta_2}, \ldots$ may also be regular and serve---if $\alpha_i$, $\beta_i$, $\gamma_i$, $\delta_i$ ($i = 1, 2,\ldots$) be properly chosen---to define any number whatsoever. 
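That such sequences behave as described is easily exhibited. In the following sketch (a modern editorial aside, not part of the original text), two sequences defining 0 are chosen so that their term-by-term quotient defines an arbitrary target value, and a sequence ``defining infinity'' is then divided into a regular one; the target value 22/7 is of course arbitrary:

```python
from fractions import Fraction

target = Fraction(22, 7)  # an arbitrary value the "0/0" sequence will define

# Two regular sequences that both define 0 ...
alpha = [target / n for n in range(1, 50)]
beta = [Fraction(1, n) for n in range(1, 50)]

# ... whose term-by-term quotient is constantly 22/7: by suitable choice
# of the sequences, "0/0" may be made to take any value whatsoever.
assert all(a / b == target for a, b in zip(alpha, beta))

# A sequence defining a number != 0, divided by one defining 0,
# "defines infinity": its terms exceed any assignable number.
grows = [Fraction(1) / b for b in beta]  # 1, 2, 3, ...
assert grows[-1] == 49

# Dividing a regular sequence (here constantly 5) by it gives 5/n,
# a regular sequence defining 0.
quot = [Fraction(5) / g for g in grows]  # 5, 5/2, 5/3, ...
assert quot[-1] == Fraction(5, 49)
```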
The term $\displaystyle\frac{\alpha_\mu}{\beta_\mu}$ ``approaches infinity'' (\textit{i.~e.} increases without limit) as $\mu$ is indefinitely increased, in a definite or determinate manner; so that the infinity which $\displaystyle\frac{\alpha_1}{\beta_1},\frac{\alpha_2}{\beta_2}, \ldots$ defines is not indeterminate like the mere symbol $\displaystyle\frac{a}{0}$ of \S~22. But here again it is to be said that this determinateness is not due to the mere fact that $\beta_1, \beta_2 \ldots$ defines 0, which is all that the unqualified symbol $\displaystyle\frac{a}{0}$ expresses. For there is an indefinite number of different sequences which like $\beta_1, \beta_2, \ldots$ define 0, and $\displaystyle\frac{a}{0}$ is a symbol for the quotient of $a$ by any one of them. \addcontentsline{toc}{section}{\numberline{}The number-system defined by regular sequences of rationals a closed and continuous system } \textbf{33. The System defined by Regular Sequences of Rationals, Closed and Continuous.} \textit{A regular sequence of irrationals \[ a_1, a_{2},\ldots a_m, a_{m+1},\ldots a_{m+n}, \ldots \] (in which the differences $a_{m+n}-a_{m}$ may be made numerically less than any assignable number by taking $m$ great enough) defines a number, but never a number which may not also be defined by a sequence of rational numbers.} For $\beta_1, \beta_2, \ldots$ being any sequence of rationals which defines 0, construct a sequence of rationals $\alpha_1, \alpha_2,\ldots$ such that $a_1-\alpha_1$ is numerically less than $\beta_1$ (\S~30), and in the same sense $a_2-\alpha_2<\beta_2$, $a_3-\alpha_3<\beta_3$ etc. Then limit $(a_m-\alpha_m) = 0$ (\S\S~28, 31), or limit $(a_m) = \text{limit}(\alpha_m)$. This theorem justifies the use of regular sequences of irrationals for defining numbers, and so makes possible a simple expression of the results of some very complex operations. 
Thus $a^m$, where $m$ is irrational, is a number; the number, namely, which the sequence $a^{\alpha_1},a^{\alpha_2},\ldots$ defines, when $\alpha_1,\alpha_2,\ldots$ is any sequence of rationals defining $m$. But the importance of the theorem in the present discussion lies in its declaration that the number-system defined by regular sequences of rationals contains all numbers which result from the operations of regular sequence-building in general. It is a \textit{closed} system with respect to the four fundamental operations and this new operation, exactly as the rational numbers constitute a closed system with respect to the four fundamental operations only (cf. \S~25). The system of numbers defined by regular sequences of rationals---\textit{real} numbers, as they are called---therefore possesses the following two properties: (1) between every two unequal, real numbers there are other real numbers; (2) a variable which runs through any regular sequence of real numbers, rational or irrational, will approach a real number as limit. We indicate all this by saying that the system of real numbers is \textbf{continuous}. \chapter{THE IMAGINARY\@. COMPLEX NUMBERS.} \addcontentsline{toc}{section}{\numberline{}The pure imaginary } \textbf{34. The Pure Imaginary.} The other symbol which is needed to complete the number-system of algebra, unlike the irrational but like the negative and the fraction, admits of definition by a single equation of a very simple form, viz., \[ x^2+1=0 \] It is the symbol whose square is $-1$, the symbol $\sqrt{-1}$, now commonly written $i$.\footnote{Gauss introduced the use of $i$ to represent $\sqrt{-1}$.} It is called the \textit{unit of imaginaries}. In contradistinction to $i$ all the forms of number hitherto considered are called \textit{real}. These names, ``real'' and ``imaginary,'' \, are unfortunate, for they suggest an opposition which does not exist. 
Judged by the only standards which are admissible in a pure doctrine of numbers $i$ is imaginary in the same sense as the negative, the fraction, and the irrational, but in no other sense; all are alike mere symbols devised for the sake of representing the results of operations even when these results are not numbers (positive integers). $i$ got the name imaginary from the difficulty once found in discovering some extra-arithmetical reality to correspond to it. As the only property attached to $i$ by definition is that its square is $-1$, nothing stands in the way of its being ``multiplied'' \, by any real number $a$; the product, $ia$, is called a \textit{pure imaginary}. An entire new system of numbers is thus created, coextensive with the system of real numbers, but distinct from it. Except $0$, there is no number in the one which is at the same time contained in the other.\footnote{Throughout this discussion $\infty$ is not regarded as belonging to the number-system, but as a limit of the system, lying without it, a symbol for something greater than any number of the system.} Numbers in either system may be compared with each other by the definitions of equality and greater and lesser inequality (\S~30), $ia$ being called $\displaystyle\gtreqqless ib$, as $\displaystyle a \gtreqqless b$; but a number in one system cannot be said to be either greater than, equal to or less than a number in the other system. \addcontentsline{toc}{section}{\numberline{}Complex numbers} \textbf{35. Complex Numbers.} The sum $a + ib$ is called a \textit{complex number}. Its terms belong to two distinct systems, of which the fundamental units are $1$ and $i$. The \textit{general} complex number $a + ib$ is defined by a \textit{complex sequence} \[ \alpha_1+i\beta_1, \, \alpha_2+i\beta_2, \ldots, \alpha_\mu+i\beta_\mu, \ldots, \] where $\alpha_1, \alpha_2, \ldots $; $\beta_1, \beta_2, \ldots $ are regular sequences. Since $a=a+i0$ (\S~36, 3, Cor.) 
and $ib=0+ib$, all real numbers, $a$, and pure imaginaries, $ib$, are contained in the system of complex numbers $a+ib$. $a+ib$ can vanish only when both $a=0$ and $b=0$. \addcontentsline{toc}{section}{\numberline{}The fundamental operations on complex numbers} \textbf{36. The Four Fundamental Operations on Complex Numbers.} The assumption of the permanence of the fundamental laws leads immediately to the following definitions of the addition, subtraction, multiplication, and division of complex numbers. \begin{equation*} \begin{aligned} 1. \qquad (a+ib)+(a'+ib') = \, & a+a'+i(b+b'). \\ \text{For} \quad (a+ib)+(a'+ib') = \, & a+ib+a'+ib', \qquad \text{Law II}.\\ = \, & a+a'+ib+ib', \qquad \text{Law I}.\\ = \, & a+a'+i(b+b'). \qquad \text{Laws II, V}.\\ 2. \qquad (a+ib)-(a'+ib') = \, & a-a'+i(b-b').\\ \end{aligned} \end{equation*} By definition of subtraction (VI) and \S~36, 1. COR. \textit{The necessary as well as the sufficient condition for the equality of two complex numbers $a+ib$, $a'+ib'$ is that $a=a'$ and $b=b'$.} \begin{equation*} \begin{aligned} \text{For if} \quad (a+ib)-(a'+ib')= \, & a-a'+i(b-b')=0,\\ a-a'=0, b-b'= \, & 0 \; (\S~35), \; \text{or} \; a=a', b=b'.\\ 3. \qquad (a+ib)(a'+ib')= \, & aa'-bb'+i(ab'+ba').\\ \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \text{For} \quad (a+ib)(a'+ib')= \, & (a+ib)a'+(a+ib)ib', \qquad \qquad \text{Law V}.\\ = \, & aa'+ib\cdot a'+a\cdot ib'+ib\cdot ib', \qquad \text{Law V}.\\ =\, & (aa'-bb')+i(ab'+ba'). \qquad \qquad \text{Laws I--V}.\\ \end{aligned} \end{equation*} COR. \textit{If either factor of a product vanish, the product vanishes.} \[ \text{For} \quad i\times 0=i(b-b)=ib-ib \; (\S~10, 5), =0 \; (\S~14, 1). \] \[ \text{Hence} \quad (a+ib)0=a\times 0+ib\times 0=a\times 0+i(b\times 0)=0.\] \begin{flushright} Laws V, IV, \S~28, \S~29, 3. \end{flushright} \[4. \qquad \frac{a+ib}{a'+ib'}=\frac{aa'+bb'}{a'^2+b'^2}+i\frac{ba'-ab'}{a'^2+b'^2}.\] For let the quotient of $a+ib$ by $a'+ib'$ be $x+iy$. 
By the definition of division (VIII), \begin{align*} & (x+iy)(a'+ib')=a+ib. \\ \therefore \quad & xa'-yb'+i(xb'+ya')=a+ib. \qquad \S~36, 3\\ \therefore \quad & xa'-yb'=a, \; xb'+ya'=b. \qquad \S~36, 2, Cor. \\ \end{align*} Hence, solving for $x$ and $y$ between these two equations, \[ x=\frac{aa'+bb'}{a'^2+b'^2}, \quad y=\frac{ba'-ab'}{a'^2+b'^2}.\] Therefore, as in the case of real numbers, division is a determinate operation, except when the divisor is 0; it is then indeterminate. For $x$ and $y$ are determinate (by IX) unless $a'^2+b'^2=0$, that is, unless $a'=b'=0$, or $a'+ib'=0$; for $a'$ and $b'$ being real, $a'^2$ and $b'^2$ are both positive, and one cannot destroy the other.\footnote{What is here proven is that in the system of complex numbers formed from the fundamental units 1 and $i$ there is one, and but one, number which is the quotient of $a+ib$ by $a'+ib'$; this being a consequence of the determinateness of the division of real numbers and the peculiar relation ($i^2=-1$) holding between the fundamental units. For the sake of the permanence of IX we make the assumption, otherwise irrelevant, that this is the only value of the quotient whether within or without the system formed from the units 1 and $i$.} Hence, by the reasoning in \S~24, COR. \textit{If a product of two complex numbers vanish, one of the factors must vanish.} \addcontentsline{toc}{section}{\numberline{}Numerical comparison of complex numbers} \textbf{37. Numerical Comparison of Complex Numbers.} Two complex numbers, $a+ib$, $a'+ib'$, do not, generally speaking, admit of direct comparison with each other, as do two real numbers or two pure imaginaries; for $a$ may be greater than $a'$, while $b$ is less than $b'$. 
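The four definitions of \S~36 can be checked against the built-in complex arithmetic of a modern language. The following Python sketch is an editorial addition; the values of $a$, $b$, $a'$, $b'$ are arbitrary.

```python
# Editorial check of formulas 1-4 of section 36 against Python's
# complex arithmetic (sample values chosen arbitrarily).
a, b = 2.0, -3.0
ap, bp = 1.5, 4.0
z, zp = complex(a, b), complex(ap, bp)

# 1. Addition:       (a+ib) + (a'+ib') = a+a' + i(b+b')
assert z + zp == complex(a + ap, b + bp)
# 2. Subtraction:    (a+ib) - (a'+ib') = a-a' + i(b-b')
assert z - zp == complex(a - ap, b - bp)
# 3. Multiplication: (a+ib)(a'+ib') = aa'-bb' + i(ab'+ba')
assert z * zp == complex(a * ap - b * bp, a * bp + b * ap)
# 4. Division: (aa'+bb')/(a'^2+b'^2) + i(ba'-ab')/(a'^2+b'^2)
d = ap**2 + bp**2
assert abs(z / zp - complex((a * ap + b * bp) / d, (b * ap - a * bp) / d)) < 1e-12
```

Division is checked to a small tolerance only because floating-point division rounds; the formula itself is exact, and the determinateness proved in \S~36, 4 fails only for the divisor $0+i0$.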
They are compared \textit{numerically}, however, by means of their \textit{moduli} $\sqrt{a^2+b^2}$, $\sqrt{a'^2+b'^2}$; $a+ib$ being said to be numerically greater than, equal to or less than $a'+ib'$ according as $\sqrt{a^2+b^2}$ is greater than, equal to or less than $\sqrt{a'^2+b'^2}$. Compare \S~47. \addcontentsline{toc}{section}{\numberline{}Adequateness of the system of complex numbers} \textbf{38. The Complex System Adequate.} The system $a+ib$ is an adequate number-system for algebra. For, as will be shown (Chapter VII), all roots of algebraic equations are contained in this system. But more than this, the system $a+ib$ is a closed system with respect to all existing mathematical operations, as are the rational system with respect to all finite combinations of the four fundamental operations and the real system with respect to these operations and regular sequence-building. For the results of the four fundamental operations on complex numbers are complex numbers (\S~36, 1, 2, 3, 4). Any other operation may be resolved into either a finite combination of additions, subtractions, multiplications, divisions or such combinations indefinitely repeated. In either case the result, if determinate, is a complex number, as follows from the definitions 1, 2, 3, 4 of \S~36, and the nature of the real number-system as developed in the preceding chapter (see Chapter VIII). The most important class of these higher operations, and the class to which the rest may be reduced, consists of those operations which result in infinite series (Chapter VIII); among which are involution, evolution, and the taking of logarithms (Chapter IX), sometimes included among the fundamental operations of algebra. \addcontentsline{toc}{section}{\numberline{}Fundamental characteristics of the algebra of number} \textbf{39.
Fundamental Characteristics of the Algebra of Number.} The algebra of number is completely characterized, formally considered, by the laws and definitions I--IX and the fact that its numbers are expressible linearly in terms of two fundamental units.\footnote{That is, in terms of the first powers of these units.} It is a linear, associative, distributive, commutative algebra. Moreover, the most general linear, associative, distributive, commutative algebra, whose numbers are complex numbers of the form $x_1e_1+x_2e_2+\cdots+x_ne_n$, built from $n$ fundamental units $e_1, e_2,\ldots, e_n$, is reducible to the algebra of the complex number $a+ib$. For Weierstrass\footnote{Zur Theorie der aus $n$ Haupteinheiten gebildeten complexen Gr\"{o}ssen. G\"{o}ttinger Nachrichten Nr. 10, 1884. Weierstrass finds that these general complex numbers differ in only one important respect from the complex number $a+ib$. If the number of fundamental units be greater than 2, there always exist numbers, different from 0, the product of which by certain other numbers is 0. Weierstrass calls them divisors of 0. 
The number of exceptions to the determinateness of division is infinite instead of one.} has shown that any two complex numbers $a$ and $b$ of the form $x_1e_1+x_2e_2+ \cdots +x_ne_n$, whose sum, difference, product, and quotient are numbers of this same form, and for which the laws and definitions I--IX hold good, may by suitable transformations be resolved into components $a_1, a_2,\ldots a_r$; $b_1, b_2,\ldots b_r$, such that \begin{align*} a= \, & a_1+a_2+ \cdots +a_r,\\ b= \, & b_1+b_2+\cdots+b_r,\\ a \pm b= \, & a_1 \pm b_1 + a_2 \pm b_2+\cdots+a_r \pm b_r,\\ ab= \, & a_1b_1+a_2b_2+\cdots+a_rb_r,\\ \frac{a}{b}= \, & \frac{a_1}{b_1}+\frac{a_2}{b_2}+\cdots+\frac{a_r}{b_r}.\\ \end{align*} \noindent The components $a_i$, $b_i$ are constructed either from one fundamental unit $g_i$ or from two fundamental units $g_i$, $k_i$.\footnote{These units are, generally speaking, not $e_1, e_2,\ldots, e_n$, but linear combinations of them, as $\gamma_1e_1+\gamma_2e_2+\cdots+\gamma_ne_n$, $\kappa_1e_1+\kappa_2e_2+\cdots+\kappa_ne_n$. Any set of $n$ independent linear combinations of the units $e_1, e_2,\ldots, e_n$ may be regarded as constituting a set of fundamental units, since all numbers of the form $\alpha_1e_1+\alpha_2e_2+\cdots+\alpha_ne_n$ may be expressed linearly in terms of them.} For components of the first kind the multiplication formula is \[(\alpha g_i)(\beta g_i)=(\alpha\beta)g_i.\] For components of the second kind the multiplication formula is \[ (\alpha g_i+\beta k_i)(\alpha'g_i+\beta'k_i) =(\alpha\alpha'-\beta\beta')g_i+(\alpha\beta'+\beta\alpha')k_i.\] And these formulas are evidently identical with the multiplication formulas \begin{align*} (\alpha1)(\beta1)= \, & (\alpha\beta)1,\\ (\alpha1+\beta i)(\alpha'1+\beta'i) = \, & (\alpha\alpha'-\beta\beta')1+(\alpha\beta'+\beta\alpha')i\\ \end{align*} \noindent of common algebra. \chapter{GRAPHICAL REPRESENTATION OF NUMBERS\@. 
THE VARIABLE.} \addcontentsline{toc}{section}{\numberline{}Correspondence between the real number-system and the points of a line } \textbf{40. Correspondence between the Real Number-System and the Points of a Line.} Let a right line be chosen, and on it a fixed point, to be called the null-point; also a fixed unit for the measurement of lengths. Lengths may be measured on this line either from left to right or from right to left, and equal lengths measured in opposite directions, when added, annul each other; opposite algebraic signs may, therefore, be properly attached to them. Let the sign {\Large $+$} be attached to lengths measured to the right, the sign {\Large $-$} to lengths measured to the left. \textit{The entire system of real numbers may be represented by the points of the line}, by taking to correspond to each number that point whose distance from the null-point is represented by the number. For, as we proceed to demonstrate, the distance of every point of the line from the null-point, measured in terms of the fixed unit, is a real number; and we may assume that for each real number there is such a point. 1. \textit{The distance of any point on the line from the null-point is a real number.} Let any point on the line be taken, and suppose the segment of the line lying between this point and the null-point to contain the unit line $\alpha$ times, with a remainder $d_1$, this remainder to contain the tenth part of the unit line $\beta$ times, with a remainder $d_2$, $d_2$ to contain the hundredth part of the unit line $\gamma$ times, with a remainder $d_3$, etc. 
The sequence of rational numbers thus constructed, viz., $\alpha,\alpha.\beta,\alpha.\beta\gamma,\ldots$ (adopting the decimal notation) is regular; for the difference between its $\mu$th term and each succeeding term is less than $\displaystyle\frac{1}{10^{\mu-1}}$, a fraction which may be made less than any assignable number by taking $\mu$ great enough; and, by construction, this number represents the distance of the point under consideration from the null-point. By the convention made respecting the algebraic signs of lengths this number will be positive when the point lies to the right of the null-point, negative when it lies to the left. 2. \textit{Corresponding to every real number there is a point on the line, whose distance and direction from the null-point are indicated by the number.} ($a$) If the number is rational, we can construct the point. For every rational number can be reduced to the form of a simple fraction. And if $\displaystyle\frac{\alpha}{\beta}$ denote the given number, when thus expressed, to find the corresponding point we have only to lay off the $\beta$th part of the unit segment $\alpha$ times along the line, from the null-point to the right, if $\displaystyle\frac{\alpha}{\beta}$ is positive, from the null-point to the left, if $\displaystyle\frac{\alpha}{\beta}$ is negative. ($b$) If the number is irrational, we usually cannot construct the point, or even prove that it exists. But let \textbf{a} denote the number, and $\alpha_1,\alpha_2,\ldots,\alpha_n,\ldots$ any regular sequence of rationals which defines it, so that $\alpha_n$ will approach \textbf{a} as limit when $n$ is indefinitely increased. Then, by ($a$), there is a sequence of points on the line corresponding to this sequence of rationals. Call this sequence of points $A_1, A_2,\cdots, A_n,\cdots$. It has the property that the length of the segment $A_nA_{n+m}$ will approach 0 as limit when $n$ is indefinitely increased. 
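The construction of \S~40, 1 can be sketched computationally. The following Python fragment is an editorial illustration (the sample distance $d=22/7$ is our choice): it extracts how often the unit, the tenth, the hundredth, $\ldots$ are contained in the segment, and checks that the resulting sequence $\alpha,\alpha.\beta,\alpha.\beta\gamma,\ldots$ is regular and defines $d$.

```python
from fractions import Fraction
from math import floor

d = Fraction(22, 7)   # distance of a sample point from the null-point

def term(mu):
    """mu-th term: whole units, tenths, ..., 10^-(mu-1)-ths contained in d."""
    return Fraction(floor(d * 10**(mu - 1)), 10**(mu - 1))

seq = [term(mu) for mu in range(1, 10)]   # 3, 3.1, 3.14, 3.142, ...

# Regularity: the mu-th term differs from every later term by < 10^-(mu-1).
mu = 4
assert all(seq[mu - 1 + k] - seq[mu - 1] < Fraction(1, 10**(mu - 1))
           for k in range(1, 5))
# The sequence defines the distance d itself.
assert abs(seq[-1] - d) < Fraction(1, 10**8)
```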
When $\alpha_n$ is made to run through the sequence of values $\alpha_1,\alpha_2,\ldots$, the corresponding point $A_n$ will run through the sequence of positions $A_1, A_2,\cdots$. And we \textit{assume} that just as there is in the real system a definite number \textbf{a} which $\alpha_n$ is approaching as a limit, so also is there on the line a definite point \textbf{A} which $A_n$ approaches as limit. It is this point \textbf{A} which we make correspond to \textbf{a}. Of course there are infinitely many regular sequences of rationals $\alpha_1,\alpha_2,\ldots$ defining \textbf{a}, and as many sequences of corresponding points $A_1, A_2,\cdots$. We assume that the limit point \textbf{A} is the same for all these sequences. \addcontentsline{toc}{section}{\numberline{}The continuous variable} \textbf{41. The Continuous Variable.} The relation of one-to-one correspondence between the system of real numbers and the points of a line is of great importance both to geometry and to algebra. It enables us, on the one hand, to express geometrical relations numerically, on the other, to picture complicated numerical relations geometrically. In particular, algebra is indebted to it for the very useful notion of the continuous variable. One of our most familiar intuitions is that of continuous motion. \begin{figure*}[htbp] \centering \includegraphics[scale=0.75]{images/figa.eps}\\ \end{figure*} Suppose the point $P$ to be moving continuously from $A$ to $B$ along the line $OAB$; and let \textbf{a}, \textbf{b}, and \textbf{x} denote the lengths of the segments $OA$, $OB$, and $OP$ respectively, $O$ being the null-point. It will then follow from our assumption that the segment $AB$ contains a point for every number between \textbf{a} and \textbf{b}, that as $P$ moves continuously from $A$ to $B$, \textbf{x} may be regarded as increasing from the value \textbf{a} to the value \textbf{b} through all intermediate values. 
To indicate this we call \textbf{x} a \textit{continuous variable}. \addcontentsline{toc}{section}{\numberline{}Correspondence between the complex number-system and the points of a plane} \textbf{42. Correspondence between the Complex Number-System and the Points of a Plane.} The entire system of complex numbers may be represented by the points of a plane, as follows: In the plane let two right lines $X'OX$ and $Y'OY$ be drawn intersecting at right angles at the point $O$. \begin{figure}[htbp] \centering \includegraphics[scale=0.75]{images/fig1.eps}\\ \textsc{Fig. 1.} \end{figure} Make $X'OX$ the ``axis'' of real numbers, using its points to represent real numbers, after the manner described in \S~40, and make $Y'OY$ the axis of pure imaginaries, representing $ib$ by the point of $OY$ whose distance from $O$ is $b$ when $b$ is positive, and by the corresponding point of $OY'$ when $b$ is negative. The point taken to represent the complex number $a+ib$ is $P$, constructed by drawing through $A$ and $B$, the points which represent $a$ and $ib$, parallels to $Y'OY$ and $X'OX$, respectively. The correspondence between the complex numbers and the points of the plane is a one-to-one correspondence. To every point of the plane there is a complex number corresponding, and but one, while to each number there corresponds a single point of the plane.\footnote{A reality has thus been found to correspond to the hitherto uninterpreted symbol $a+ib$. But this reality has no connection with the reality which gave rise to arithmetic, the number of things in a group of distinct things, and does not at all lessen the purely symbolic character of $a+ib$ when regarded from the standpoint of that reality, the standpoint which must be taken in a purely arithmetical study of the origin and nature of the number concept. The connection between the numbers $a+ib$ and the points of a plane is purely artificial. 
The tangible geometrical pictures of the relations among complex numbers to which it leads are nevertheless a valuable aid in the study of these relations.} \addcontentsline{toc}{section}{\numberline{}The complex variable} If the point $P$ be made to move along any curve in its plane, the corresponding number $x$ may be regarded as changing through a continuous system of complex values, and is called a \emph{continuous complex variable}. (Compare \S~41.) \addcontentsline{toc}{section}{\numberline{}Definitions of modulus and argument of a complex number and of sine, cosine, and circular measure of an angle} \textbf{43. Modulus.} The length of the line $OP$ (Fig.~1), \textit{i.~e.}\ $\sqrt{a^2+b^2}$, is called the \emph{modulus} of $a+ib$. Let it be represented by $\rho$. \textbf{44. Argument.} The angle $XOP$ made by $OP$ with the positive half of the axis of real numbers is called the \emph{angle} of $a+ib$, or its \emph{argument}. Let its numerical measure be represented by $\theta$. The angle is always to be measured ``counter-clockwise'' from the positive half of the axis of real numbers to the modulus line. \textbf{45. Sine.} The ratio of $PA$, the perpendicular from $P$ to the axis of real numbers, to $OP$, \textit{i.~e.} $\frac{b}{\rho}$, is called the \emph{sine} of $\theta$, written $\sin\theta$. $\sin\theta$ is by this definition positive when $P$ lies above the axis of real numbers, negative when $P$ lies below this line. \textbf{46. Cosine.} The ratio of $PB$, the perpendicular from $P$ to the axis of imaginaries, to $OP$, \textit{i.~e.}\ $\frac{a}{\rho}$, is called the \emph{cosine} of $\theta$, written $\cos\theta$. $\cos\theta$ is positive or negative according as $P$ lies to the right or the left of the axis of imaginaries. \addcontentsline{toc}{section}{\numberline{}Demonstration that $a + ib = \rho (\cos \theta + i \sin \theta) = \rho e^{i\theta}$} \textbf{47.
Theorem.} \emph{The expression of $a+ib$ in terms of its modulus and angle is $\rho(\cos\theta+i\sin\theta)$.} \begin{flalign*} &\text{\indent For by \S~46 }& \frac{a}{\rho} &= \cos\theta, \therefore a = \rho\cos\theta; && \\ &\text{and by \S~45, }& \frac{b}{\rho} &= \sin\theta, \therefore b = \rho\sin\theta. && \\ &\text{\indent Therefore }& a+ib &= \rho(\cos\theta+i\sin\theta). && \end{flalign*} The factor $\cos\theta + i\sin\theta$ has the same sort of geometrical meaning as the algebraic signs $+$ and $-$, which are indeed but particular cases of it: it indicates the \emph{direction} of the point which represents the number from the null-point. It is the other factor, the modulus $\rho$, the distance from the null-point of the point which corresponds to the number, which indicates the ``absolute value'' of the number, and may represent it when compared numerically with other numbers (\S~37),---that one of two numbers being numerically the greater whose corresponding point is the more distant from the null-point. \addcontentsline{toc}{section}{\numberline{}Construction of the points which represent the sum, difference, product, and quotient of two complex numbers} \textbf{48. Problem I.} \textit{Given the points $P$ and $P'$, representing $a + ib$ and $a' + ib'$ respectively; required the point representing $a + a' + i(b + b')$.} The point required is $P''$, the intersection of the parallel to $OP$ through $P'$ with the parallel to $OP'$ through $P$. For completing the construction indicated by the figure, we have $OD' = PE = DD''$, and therefore $OD'' = OD + OD'$; and similarly $P''D'' = PD + P'D'$. \textsc{Cor.}~I. To get the point corresponding to $a-a' + i(b-b')$, produce $OP'$ to $P'''$, making $OP''' = OP'$, and complete the parallelogram $OP$, $OP'''$. \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{images/fig2.eps}\\ \textsc{Fig. 2.} \end{figure} \textsc{Cor.}~II. 
\textit{The modulus of the sum or difference of two complex numbers is less than (at greatest equal to) the sum of their moduli.} For $OP''$ is less than $OP + PP''$ and, therefore, than $OP + OP'$, unless $O$, $P$, $P'$ are in the same straight line, when $OP'' = OP + OP'$. Similarly, $PP'$, which is equal to the modulus of the difference of the numbers represented by $P$ and $P'$, is less than, at greatest equal to, $OP + OP'$. \textbf{49. Problem II.} \textit{Given $P$ and $P'$, representing $a+ib$ and $a'+ib'$ respectively; required the point representing $(a+ib)(a'+ib')$.} \[ \begin{array}{rlr} \text{\indent Let } \quad a+ib &= \rho(\cos\theta + i\sin\theta), &\S~47\\ \text{and } \quad a'+ib' &= \rho'(\cos\theta' + i\sin\theta');\\ \text{then }\quad (a+ib)&(a'+ib')\\ &= \rho\rho'(\cos\theta+i\sin\theta) (\cos\theta'+i\sin\theta') \\ &= \rho\rho'[(\cos\theta\cos\theta' - \sin\theta\sin\theta') \\ &\mspace{80mu} +i(\sin\theta\cos\theta' + \cos\theta\sin\theta')].\\ \text{\indent But } \quad \cos\theta\cos\theta' & -\sin\theta\sin\theta' = \cos(\theta+\theta'),\footnotemark[1] \\ \text{and } \quad \sin\theta\cos\theta' &+ \cos\theta\sin\theta' = \sin(\theta+\theta').\footnotemark[1] \end{array} \] \footnotetext[1]{For the demonstration of these, the so-called addition theorems of trigonometry, see Wells' Trigonometry, \S~65, or any other text-book of trigonometry.} Therefore $(a+ib)(a'+ib') = \rho\rho'[\cos(\theta+\theta')+i\sin(\theta+\theta')]$; or, \emph{The modulus of the product of two complex numbers is the product of their moduli, its argument the sum of their arguments}. The required construction is, therefore, made by drawing through $O$ a line making an angle $\theta+\theta'$ with $OX$, and laying off on this line the length $\rho\rho'$. \textsc{Cor.}~I.
Similarly the product of $n$ numbers having moduli $\rho$, $\rho'$, $\rho''$, $\dotsc$ $\rho^{(n)}$ respectively, and arguments $\theta$, $\theta'$, $\theta''$, $\dotsc$ $\theta^{(n)}$, is the number \[ \begin{split} \rho\rho'\rho''\dotsm\rho^{(n)} [\cos(\theta+\theta'+\theta''+\dotsb+\theta^{(n)}) \\ + i\sin(\theta+\theta'+\theta''+\dotsb+\theta^{(n)})]. \end{split} \] In particular, therefore, by supposing the $n$ numbers equal, we may infer the theorem \[ [ \rho(\cos\theta + i\sin\theta) ]^n = \rho^n (\cos n\theta + i\sin n\theta), \] which is known as Demoivre's Theorem. \textsc{Cor.}~II\@. From the definition of division and the preceding demonstration it follows that \[ \frac{a+ib}{a'+ib'} = \frac{\rho}{\rho'} [\cos(\theta-\theta') + i\sin(\theta-\theta')]; \] the construction for the point representing $\dfrac{a+ib}{a'+ib'}$ is, therefore, obvious. \textbf{50. Circular Measure of Angle.} Let a circle of unit radius be constructed with the vertex of any angle for centre. The length of the arc of this circle which is intercepted between the legs of the angle is called the \emph{circular measure} of the angle. \textbf{51. Theorem.} \textit{Any complex number may be expressed in the form $\rho e^{i\theta}$; where $\rho$ is its modulus and $\theta$ the circular measure of its angle.} It has already been proven that a complex number may be written in the form $\rho(\cos\theta+i\sin\theta)$, where $\rho$ and $\theta$ have the meanings just given them. The theorem will be demonstrated, therefore, when it shall have been shown that \[ e^{i\theta}=\cos\theta+i\sin\theta.
\] If $n$ be any positive integer, we have, by \S~36 and the binomial theorem, \begin{align*} \left( 1 + \frac{i\theta}{n} \right)^n &= 1 + n\frac{i\theta}{n} + \frac{n(n-1)}{2!}\frac{(i\theta)^2}{n^2} \\ &\phantom{= 1 + n\frac{i\theta}{n}} + \frac{n(n-1)(n-2)}{3!}\frac{(i\theta)^3}{n^3} + \dotsb \\ &= 1 + i\theta + \frac{1-\frac{1}{n}}{2!}(i\theta)^2 \\ &\phantom{= 1 + i\theta} + \frac{\left(1-\frac{1}{n}\right) \left(1-\frac{2}{n}\right)}{3!} (i\theta)^3 + \dotsb. \end{align*} Let $n$ be indefinitely increased; the limit of the right side of this equation will be the same as that of the left. But the limit of the right side is \[ 1+i\theta+\frac{(i\theta)^2}{2!}+\frac{(i\theta)^3}{3!}+\ldots; \; \text{i.~e.} \; e^{i\theta}.\footnote{This use of the symbol $\displaystyle e^{i\theta}$ will be fully justified in \S~73.}\] Therefore $\displaystyle e^{i\theta}$ is the limit of $\displaystyle\left(1+\frac{i\theta}{n}\right)^n$ as $n$ approaches $\infty$. To construct the point representing $\displaystyle\left(1+\frac{i\theta}{n}\right)^n$: \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{images/fig3.eps}\\ \textsc{Fig. 3.} \end{figure} On the axis of real numbers lay off $OA=1$. Draw $AP$ equal to $\theta$ and parallel to $OB$, and divide it into $n$ equal parts. Let $AA_1$ be one of these parts. Then $A_1$ is the point $\displaystyle 1+\frac{i\theta}{n}$. Through $A_1$ draw $A_1A_2$ at right angles to $OA_1$ and construct the triangle $OA_1A_2$ similar to $OAA_1$. $A_2$ is then the point $\displaystyle\left(1+\frac{i\theta}{n}\right)^2$. \begin{align*} \text{For} \qquad & AOA_2=2AOA_1;\\ \text{and since} \quad & OA_2:OA_1::OA_1:OA, \; \text{and} \; OA=1,\\ \text{the length} \quad & OA_2= \; \text{the square of length} \; OA_1. 
\qquad (see \S~49)\\ \end{align*} In like manner construct $A_3$ to represent $\displaystyle\left(1+\frac{i\theta}{n}\right)^3$, $A_4$ for $\displaystyle\left(1+\frac{i\theta}{n}\right)^4, \;\\ \cdots A_n \; \text{for} \; \left(1+\frac{i\theta}{n}\right)^n$. Let $n$ be indefinitely increased. The broken line $AA_1A_2 \cdots A_n$ will approach as limit an arc of length $\theta$ of the circle of radius $OA$ and, therefore, its extremity, $A_n$, will approach as limit the point representing $\cos\theta+i\sin\theta$ (\S~47). Therefore the limit of $\displaystyle\left(1 + \frac{i\theta}{n}\right)^n$ as $n$ is indefinitely increased is $\cos\theta + i\sin\theta$. But this same limit has already been proved to be $e^{i\theta}$. \[\text{Hence } \qquad e^{i\theta} = \cos\theta + i\sin\theta.\footnote{Dr. F. Franklin, American Journal of Mathematics, Vol. VII, p.~376. Also M\"obius, Collected Works, Vol. IV, p.~726.}\] \chapter{THE FUNDAMENTAL THEOREM OF ALGEBRA.} \addcontentsline{toc}{section}{\numberline{}Definitions of the algebraic equation and its roots} \textbf{52. The General Theorem.} If \[w = a_0z^n + a_1z^{n-1} + a_2z^{n-2} + \cdots + a_{n-1}z + a_n,\] where $n$ is a positive integer, and $a_0, a_1, \ldots, a_n$ any numbers, real or complex, independent of $z$, to each value of $z$ corresponds a single value of $w$. We proceed to demonstrate that conversely to each value of $w$ corresponds a set of $n$ values of $z$, \textit{i.~e.} that there are $n$ numbers which, substituted for $z$ in the polynomial $\displaystyle a_0z^n + a_1z^{n-1} +\cdots + a_n$, will give this polynomial any value, $w_0$, which may be assigned. 
It will be sufficient to prove that there are $n$ values of $z$ which render $\displaystyle a_0z^n + a_1z^{n-1} +\cdots + a_n$ equal to 0, inasmuch as from this it would immediately follow that the polynomial takes any other value, $w_0$, for $n$ values of $z$; viz., for the values which render the polynomial of the same degree, $\displaystyle a_0z^n + a_1z^{n-1} +\cdots + (a_n - w_0)$, equal to 0. \textbf{53. Root of an Equation.} A value of $z$ for which $\displaystyle a_0z^n + a_1z^{n-1} +\cdots + a_n$ is 0 is called a root of this polynomial, or more commonly a root of the \textit{algebraic equation} \[ a_0z^n + a_1z^{n-1} +\cdots + a_n = 0.\] \textbf{54. Theorem.} \textit{Every algebraic equation has a root.} Given $w=a_0z^n+a_1z^{n-1}+\dotsb+a_n$. Let $\lvert w\rvert$ denote the modulus of $w$. We shall assume, though this can be proved, that among the values of $\lvert w\rvert$ corresponding to all possible values of $z$ there is a \emph{least} value, and that this least value corresponds to a finite value of $z$. \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{images/fig4.eps}\\ \textsc{Fig. 4.} \end{figure} Let $\lvert w_0 \rvert$ denote this least value of $\lvert w \rvert$, and $z_0$ the value of $z$ to which it corresponds. Then $\lvert w_0 \rvert = 0$. For if not, $w_0$ will be represented in the plane of complex numbers by some point $P$ distinct from the null-point $O$. Through $P$ draw a circle having its centre in the null-point $O$. Then, by the hypothesis made, no value can be given $z$ which will bring the corresponding $w$-point within this circle. But the $w$-point \emph{can be brought within this circle}. For, $z_0$ and $w_0$ being the values of $z$ and $w$ which correspond to $P$, change $z$ by adding to $z_0$ a small increment $\delta$, and let $\Delta$ represent the consequent change in $w$. 
Then $\Delta$ is defined by the equation \[ \begin{split} (w_0 &+ \Delta) = a_0(z_0+\delta)^n + a_1(z_0+\delta)^{n-1} \\ &+ a_2(z_0+\delta)^{n-2} + \dotsb + a_{n-1}(z_0+\delta) + a_n. \end{split} \] On applying the binomial theorem and arranging the terms with reference to powers of $\delta$, the right member of this equation becomes \[ \begin{split} a_0z_0^n &+ a_1z_0^{n-1} + \dotsb + a_{n-1}z_0 + a_n \\ &+ [na_0z_0^{n-1} + (n-1)a_1z_0^{n-2} + \dotsb + a_{n-1}]\delta \\ &+ \text{ terms involving $\delta^2$, $\delta^3$, etc.} \end{split} \] \begin{flalign*} &\text{\indent But }& w_0 &= a_0z_0^n + a_1z_0^{n-1} + \dotsb + a_{n-1}z_0 + a_n. && \\ && \therefore \Delta &= [na_0z_0^{n-1} + (n-1)a_1z_0^{n-2} + \dotsb + a_{n-1}]\delta && \\ && &\quad + \text{ terms involving $\delta^2$, $\delta^3$, etc.} && \end{flalign*} Let $\rho'(\cos\theta'+i\sin\theta')$ be the complex number \[na_0z_0^{n-1}+(n-1)a_1z_0^{n-2}+ \dotsb +a_{n-1},\] expressed in terms of its modulus and angle, and \[\rho(\cos\theta+i\sin\theta)\] the corresponding expression for $\delta$. Then \begin{align*} \Delta &= \rho'(\cos\theta'+i\sin\theta') \times \rho (\cos\theta +i\sin\theta ) \\ &\phantom{= \rho'(\cos\theta'} + \text{ terms involving $\rho^2$, $\rho^3$, etc.} \\ &= \rho\rho'[\cos(\theta+\theta') + i\sin(\theta+\theta')] \\ &\phantom{= \rho'(\cos\theta'} + \text{ terms involving $\rho^2$, $\rho^3$, etc. \qquad \S~49.} \end{align*} The point which represents $\rho\rho'[\cos(\theta+\theta')+i\sin(\theta+\theta')]$ for any particular value of $\rho$ can be made to describe a circle of radius $\rho\rho'$ about the null-point by causing $\theta$ to increase continuously from 0 to 4 right angles. In the same circumstances the point representing \[w_0+\rho\rho'[\cos(\theta+\theta')+i\sin(\theta+\theta')]\] will describe an equal circle about the point $P$ and, therefore, come within the circle $OP$.
But by taking $\rho$ small enough, $\Delta$ may be made to differ as little as we please from $\rho\rho'[\cos(\theta+\theta') + i\sin(\theta+\theta')]$,\footnotemark[1] and, therefore, the curve traced out by $P'$ (which represents $w_0+\Delta$, as $\theta$ runs through its cycle of values), to differ as little as we please from the circle of centre $P$ and radius $\rho\rho'$. Therefore by assigning proper values to $\rho$ and $\theta$, the $w$-point ($P'$) may be brought within the circle $OP$. \footnotetext[1]{ In the series $A\rho+B\rho^2+C\rho^3+$ etc., the ratio of all the terms following the first to the first, \textit{i.~e.} \[ \frac{B\rho^2+C\rho^3+\text{ etc.}}{A\rho} = \rho\times \frac{B+C\rho+\text{ etc.}}{A}; \] which by taking $\rho$ small enough may evidently be made as small as we please.} The $w$-point nearest the null-point must therefore be the null-point itself.\footnotemark[2] \footnotetext[2]{ In the above demonstration it is assumed that the coefficient of $\delta$ is not 0. If it be 0, let $A\delta^r$ denote the first term of $\Delta$ which is not 0. If $A=\rho''(\cos\theta''+i\sin\theta'')$, we then have \[ \Delta = \rho''\rho^r[ \cos(r\theta+\theta'') + i\sin(r\theta+\theta'') ] + \text{ terms in } \rho^{r+1}, \text{ etc.}, \] from which the same conclusion follows as above.} \textbf{55. Theorem.} \textit{If $\alpha$ be a root of $a_0z^n+a_1z^{n-1}+\dotsb+a_n$, this polynomial is divisible by $z-\alpha$.} For divide $a_0z^n+a_1z^{n-1}+\dotsb+a_n$ by $z-\alpha$, continuing the division until $z$ disappears from the remainder, and call this remainder $R$, the quotient $Q$, and, for convenience, the polynomial $f(z)$. Then we have immediately \[f(z)=(z-\alpha)Q+R,\] holding for all values of $z$. Let $z$ take the value $\alpha$; then $f(z)$ vanishes, as also the product $(z-\alpha)Q$. Therefore when $z=\alpha$, $R=0$, and being independent of $z$ it is hence always 0.
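Section 55 is, in modern terms, the remainder theorem: dividing $f(z)$ by $z-\alpha$ leaves a constant remainder $R=f(\alpha)$, which vanishes exactly when $\alpha$ is a root. The division itself can be carried out by Horner's scheme. A minimal sketch in Python (the function name and example polynomial are illustrative, not from the text):

```python
def divide_by_linear(coeffs, alpha):
    """Divide a0*z^n + a1*z^(n-1) + ... + an by (z - alpha)
    using synthetic division (Horner's scheme).
    Returns (quotient coefficients, remainder R)."""
    values = []
    r = 0
    for a in coeffs:
        r = r * alpha + a          # Horner step
        values.append(r)
    # the last accumulated value is the remainder R = f(alpha);
    # the earlier values are the coefficients of the quotient Q
    return values[:-1], values[-1]

# f(z) = z^2 - 3z + 2 = (z - 1)(z - 2); alpha = 1 is a root
q, r = divide_by_linear([1, -3, 2], 1)
# q == [1, -2]  (i.e. Q = z - 2) and r == 0, so z - 1 divides f exactly
```

Run on $f(z)=z^2-3z+2$ with $\alpha=1$, the sketch returns the quotient $z-2$ and remainder 0, in agreement with \S~55; for a non-root such as $z=5$ the remainder is $f(5)=12$.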
\addcontentsline{toc}{section}{\numberline{}Demonstration that an algebraic equation of the $n$th degree has $n$ roots} \textbf{56. The Fundamental Theorem.} \textit{The number of the roots of the polynomial $a_0z^n+a_1z^{n-1}+\dotsb+a_n$ is $n$.} For, by \S~54, it has at least one root; call this $\alpha$; then, by \S~55, it is divisible by $z-\alpha$, the degree of the quotient being $n-1$. Therefore we have \[ a_0z^n + a_1z^{n-1} + \dotsb + a_n = (z-\alpha) (a_0z^{n-1} + b_1z^{n-2} + \dotsb + b_{n-1}). \] Again, by \S~54, the polynomial $a_0z^{n-1}+b_1z^{n-2}+\dotsb+b_{n-1}$ has a root; call this $\beta$, and dividing as before, we have \[ a_0z^n + a_1z^{n-1} + \dotsb + a_n = (z-\alpha)(z-\beta)(a_0z^{n-2} + c_1z^{n-3} + \dotsb +c_{n-2}). \] Since the degree of the quotient is lowered by 1 by each repetition of this process, $n-1$ repetitions reduce it to the first degree, or we have \[ a_0z^n + a_1z^{n-1} + \cdots + a_n = a_0(z-\alpha)(z-\beta)(z-\gamma) \cdots (z-\nu), \] \noindent a product of $n$ factors, each of the first degree. Now a product vanishes when one of its factors vanishes (\S~36, 3, Cor.), and the factor $z-\alpha$ vanishes when $z=\alpha$, $z-\beta$ when $z=\beta, \ldots , z-\nu$ when $z=\nu$. Therefore $a_0z^n + a_1z^{n-1} + \cdots + a_n$ vanishes for the $n$ values, $\alpha, \beta, \gamma, \cdots \nu$, of $z$. Furthermore, a product cannot vanish unless one of its factors vanishes (\S~36, 4, Cor.), and not one of the factors $z-\alpha, z-\beta, \ldots , z-\nu$, vanishes unless $z$ equals one of the numbers $\alpha, \beta, \cdots \nu$. The polynomial has therefore $n$ and but $n$ roots. The theorem that the number of roots of an algebraic equation is the same as its degree is called the fundamental theorem of algebra. \chapter{INFINITE SERIES.} \textbf{57. Definition.} Any operation which is the limit of additions indefinitely repeated produces an infinite series.
We are to determine the conditions which an infinite series must fulfil to represent a number. If the terms of a series are real numbers, it is called a \textit{real series}; if complex, a \textit{complex series.} \section{REAL SERIES.} \addcontentsline{toc}{section}{\numberline{}Definitions of sum, convergence, and divergence} \textbf{58. Sum. Convergence. Divergence.} An infinite series \[ a_1 + a_2 + a_3 + \cdots +a_n + \cdots \] \noindent represents a number or not, according as the sequence \[ s_1, s_2, s_3, \ldots s_m, s_{m+1}, \ldots s_{m+n}, \ldots , \] \[ \text{where } \qquad s_1=a_1, s_2=a_1 + a_2, \cdots , s_i=a_1 + a_2 + \cdots a_i, \] is \textit{regular} or not. If $s_{1}, s_{2}, \cdots,$ be a regular sequence, the number which it defines, or $\lim_{n \doteq \infty}(s_{n})$, is called the \textit{sum} of the infinite series \[a_{1}+a_{2}+a_{3}+\cdots+a_{n}+\cdots,\] \noindent and the series is said to be \textit{convergent}. If $s_{1}, s_{2}, \cdots$ be not a regular sequence, $s_{n}$ either transcends any finite value whatsoever, as $n$ is indefinitely increased, or while remaining finite becomes altogether indeterminate. The infinite series then has no sum, and is said to be \textit{divergent}. The series $1+1+1+\cdots$ and $1-1+1-1+\cdots$ are examples of these two classes of divergent series. A divergent series cannot represent a number. \addcontentsline{toc}{section}{\numberline{}General test of convergence} \textbf{59. General Test of Convergence.} From these definitions and \S~27 it immediately follows that: \textit{The infinite series $a_{1}+a_{2}+\cdots+a_{m}+\cdots$ is convergent when $m$ may be so taken that the differences $s_{m+n}-s_{m}$ are numerically less than any assignable number $\delta$ for all values of $n$,} where $s_{m}$ and $s_{m+n}$ are the sum of the first $m$ and of the first $m+n$ terms of the series respectively.
\textit{If these conditions be not fulfilled, the series is divergent.} The limit of the $n$th term of a convergent series is 0; for the condition of convergence requires that by taking $m$ great enough, $s_{m+1}-s_{m}$, \textit{i.~e.} $a_{m+1},$ may be found less than any assignable number. But it is not to be assumed conversely that a series is convergent, if the limit of its $n$th term is 0; other conditions have also to be fulfilled: $s_{m+n}-s_{m}$ must be less than $\delta$ for \textit{all} values of $n$. Thus the limit of the $n$th term of the series $\displaystyle 1+\frac{1}{2}+\frac{1}{3}+\cdots$ is 0; but, as will presently be shown, this is a divergent series. \addcontentsline{toc}{section}{\numberline{}Absolute and conditional convergence} \textbf{60. Absolute Convergence.} It is important to distinguish between convergent series which remain convergent when all the terms are given the same algebraic signs and convergent series which become divergent on this change of signs. Series of the first class are said to be \textit{absolutely} convergent; those of the second class, only \textit{conditionally} convergent. \textit{Absolutely convergent series have the character of ordinary sums; i.~e.\ the order of the terms may be changed without altering the sum of the series.} For consider the series $a_1 + a_{2} + a_{3} +\cdots$ supposed to be absolutely convergent and to have the sum $S$, when the terms are in the normal order of the indices. It is immediately obvious that no change can be made in the sum of the series by interchanging terms with finite indices; for $n$ may be taken greater than the index of any of the interchanged terms. Then $S_{n}$ has not been affected by the change, since it is a finite sum and it is immaterial in what order the terms of a finite sum are added; and as for the rest of the series, no change has been made in the order of its terms.
But $a_{1} + a_{2} + a_{3} +\cdots$ may be separated into a number of infinite series, as, for instance, into the series $a_1 + a_3 + a_{5} +\cdots$ and $a_{2} + a_{4} + a_{6} +\cdots$, and these series summed separately. Let it be separated into $l$ such series, the sums of which---they must all be absolutely convergent, as being parts of an absolutely convergent series---are $S^{(1)}, S^{(2)},\cdots S^{(l)}$, respectively; it is to be proven that \[ S=S^{(1)}+S^{(2)}+S^{(3)}+\cdots+S^{(l)}. \] Let $S_m^{(1)},S_m^{(2)},\cdots $ be the sums of the first $m$ terms of the series $S^{(1)}, S^{(2)}, \cdots $, respectively. Then, by the hypothesis that the series $a_{1} + a_{2}+\cdots $ is absolutely convergent, $m$ may be taken so large that the sum \[ {S_{m+n}}^{(1)}+{S_{m+n}}^{(2)}+\cdots+{S_{m+n}}^{(l)} \] shall differ from $S$ by less than any assignable number $\delta$ for all values of $n$; therefore the limit of this sum is $S$. But again, $n$ may be so taken that ${S_{m+n}}^{(1)}$ shall differ from $S^{(1)}$ by less than $\displaystyle \frac{\delta}{l}$, ${S_{m+n}}^{(2)}$ from $S^{(2)}$ by less than $\displaystyle\frac{\delta}{l}, \ldots$; and therefore the sum ${S_{m+n}}^{(1)}+{S_{m+n}}^{(2)}+\cdots+{S_{m+n}}^{(l)}$ from $S^{(1)}+S^{(2)}+\cdots+S^{(l)}$ by less than $\displaystyle\left(\frac{\delta}{l}\right)l$; \textit{i.~e.} by less than $\delta$. Hence the limit of this sum is $S^{(1)}+S^{(2)}+\cdots+S^{(l)}$. Therefore $S$ and $S^{(1)}+S^{(2)}+\cdots+S^{(l)}$ are limits of the same finite sum and hence equal. (We omit the proof for the case $l$ infinite.) \textbf{61. Conditional Convergence.} On the other hand, \textit{the terms of a conditionally convergent series can be so arranged that the sum of the series may take any real value whatsoever.} In a conditionally convergent series the positive and the negative terms each constitute a divergent series having 0 for the limit of its last term. 
If, therefore, $C$ be any positive number, and $S_{n}$ be constructed by first adding positive terms (beginning with the first) until their sum is greater than $C$, to these negative terms until their sum is again less than $C$, then positive terms till the sum is again greater than $C$, and so on indefinitely; the limit of $S_{n}$, as $n$ is indefinitely increased, is $C$. \addcontentsline{toc}{section}{\numberline{}Special tests of convergence} \textbf{62. Special Tests of Convergence.} 1. \textit{If each of the terms of a series $a_{1} + a_{2} + \cdots$ be numerically less than (at greatest equal to) the corresponding term of an absolutely convergent series, or if the ratio of each term of $a_{1} + a_{2} + \cdots$ to the corresponding term of an absolutely convergent series never exceed some finite number $C$, the series $a_{1} + a_{2} + \cdots $ is absolutely convergent.} \textit {If, on the other hand, each term of $a_{1} + a_{2} + \cdots $ be numerically greater than (at the lowest equal to) the corresponding term of a divergent series, or if the ratio of each term of $a_{1} + a_{2} + \cdots $ to the corresponding term of a divergent series be never numerically less than some finite number $C'$, different from 0, the series $a_{1} + a_{2} + \cdots$ is divergent.} 2. 
\textit{The series $a_{1} - a_{2} + a_{3} - a_{4} + \cdots $, the terms of which are alternately positive and negative, is convergent, if after some term $a_{i}$ each term be numerically less or, at least, not greater than the term which immediately precedes it, and the limit of $a_{n}$, as $n$ is indefinitely increased, be 0.} For here \[ s_{m+n} - s_{m} = (-1)^{m}[a_{m+1} - a_{m+2} + \cdots (-1)^{n-1}a_{m+n}] \] The expression within brackets may be written in either of the forms \begin{align*} &(a_{m+1} - a_{m+2}) + (a_{m+3} - a_{m+4}) + \cdots \tag{1}\\ \text{or} \qquad & a_{m+1} - (a_{m+2} - a_{m+3}) - \cdots \tag{2} \end{align*} It is therefore positive, (1), and less than $a_{m+1}$, (2); and hence by taking $m$ large enough, may be made numerically less than any assignable number whatsoever. The series $\displaystyle 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots$ is, by this theorem, convergent. 3. \textit{The series $\displaystyle 1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots$ is divergent.} For the first $2^{\lambda}$ terms after the first may be written \begin{align*} \frac{1}{2}+\left(\frac{1}{2+1}+\frac{1}{2+2}\right) & +\left(\frac{1}{2^2+1}+\frac{1}{2^2+2}+\frac{1}{2^2+3}+\frac{1}{2^2+2^2}\right)+\cdots \\ & +\left(\frac{1}{2^{\lambda-1}+1}+\frac{1}{2^{\lambda-1}+2}+\cdots \frac{1}{2^{\lambda-1}+2^{\lambda-1}}\right), \end{align*} \noindent where, obviously, each of the expressions within parentheses is greater than $\displaystyle \frac{1}{2}$. The sum of the first $2^{\lambda}$ terms after the first is therefore greater than $\displaystyle\frac{\lambda}{2}$, and may be made to exceed any finite quantity whatsoever by taking $\lambda$ great enough. This series is commonly called the harmonic series. By a similar method of proof it may be shown that the series $\displaystyle 1+\frac{1}{2^p}+\frac{1}{3^p}+\cdots$ is convergent if $p>1$. 
\[\text{Here,} \qquad \frac{1}{2^p}+\frac{1}{3^p}<\frac{2}{2^p}, \; \frac{1}{4^p}+\frac{1}{5^p}+\frac{1}{6^p}+\frac{1}{7^p}<\frac{4}{4^p}, \; \textit{i.~e.} \; <\left(\frac{2}{2^p}\right)^2 \cdots, \] and the sum of the series is, therefore, less than that of the decreasing geometric series $\displaystyle 1+\frac{2}{2^p}+\left(\frac{2}{2^p}\right)^2+\cdots$. The series $\displaystyle 1+\frac{1}{2^p}+\frac{1}{3^p}+ \cdots $ is divergent if $p<1$, the terms being then greater than the corresponding terms of \[1+\frac{1}{2}+\frac{1}{3}+ \cdots. \] 4. \textit{The series $a_1+a_2+a_3+\cdots$ is absolutely convergent if after some term of finite index, $a_i$, the ratio of each term to that which immediately precedes it be numerically less than 1 and, as the index of the term is indefinitely increased, approach a limit which is less than $1$; but divergent, if this ratio and its limit be greater than $1$.} For---to consider the first hypothesis---suppose that after the term $a_i$ this ratio is always less than $\alpha$, where $\alpha$ denotes a certain positive number less than 1. \begin{align*} \text{Then,} \qquad \frac{a_{i+1}}{a_i} & \leqq \alpha, \; \therefore \; a_{i+1}\leqq a_i\alpha;\\ \frac{a_{i+2}}{a_{i+1}} & \leqq \alpha, \; \therefore \; a_{i+2}\leqq a_{i+1}\alpha\leqq a_i\alpha^2.\\ \cdot \qquad & \cdot \qquad \cdot \qquad \cdot \qquad \cdot \qquad \cdot \qquad \cdot\\ \frac{a_{i+k}}{a_{i+(k-1)}} & \leqq \alpha, \; \therefore \; a_{i+k}\leqq a_{i+(k-1)}\alpha\leqq \cdots \leqq a_i\alpha^k.\\ \cdot \qquad & \cdot \qquad \cdot \qquad \cdot \qquad \cdot \qquad \cdot \qquad \cdot \\ \end{align*} The given series is therefore $\leqq$ \[s_i+a_i[\alpha+\alpha^2+\alpha^3+\cdots \alpha^k+\cdots].\] And this is an absolutely convergent series. 
\begin{align*} \text{For} \qquad \alpha+\alpha^2+ \cdots \alpha^k+ \cdots & =\lim_{n\doteq\infty}(\alpha+\alpha^2+ \cdots +\alpha^n) \\ & =\lim_{n\doteq\infty}\left(\frac{\alpha-\alpha^{n+1}}{1-\alpha}\right)\\ & =\frac{\alpha}{1-\alpha}, \; \text{since $\alpha$ is a fraction.} \end{align*} The given series is therefore absolutely convergent, \S~62, 1. The same course of reasoning would prove that the series is divergent when after some term $a_i$ the ratio of each term to that which precedes it is never less than some quantity, $\alpha$, which is itself greater than 1. When the limit of the ratio of each term of the series to the term immediately preceding it is 1, the series is sometimes convergent, sometimes divergent. The series considered in \S~62, 3 are illustrations of this statement. \addcontentsline{toc}{section}{\numberline{}Limits of convergence} \textbf{63. Limits of Convergence.} An important application of the theorem just demonstrated is in determining what are called the limits of convergence of infinite series of the form \[a_0+a_1x+a_2x^2+a_3x^3+\cdots ,\] where $x$ is supposed variable, but the coefficients $a_0$, $a_1$, etc., constants as in the preceding discussion. Such a series will be convergent for very small values of $x$, if the coefficients be all finite, as will be supposed, and generally divergent for very great values of $x$; and by the limits of convergence of the series are meant the values of $x$ for which it ceases to be convergent and becomes divergent.
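The limits of convergence just described can be observed numerically. A small sketch in Python (illustrative only; the series and values chosen are not from the text), using the geometric series $1+x+x^2+\cdots$, for which $\displaystyle\lim_{n\doteq\infty}\left(\frac{a_n}{a_{n+1}}\right)=1$, so that the series converges for $x$ numerically less than 1 and diverges for $x$ numerically greater than 1:

```python
def partial_sums(x, terms=60):
    """Partial sums s_1, s_2, ... of the geometric series 1 + x + x^2 + ..."""
    s, power, out = 0.0, 1.0, []
    for _ in range(terms):
        s += power
        power *= x
        out.append(s)
    return out

inside = partial_sums(0.5)    # |x| < 1: the sums settle near 1/(1-x) = 2
outside = partial_sums(2.0)   # |x| > 1: the sums grow without bound
```

For $x=\frac{1}{2}$ the partial sums form a regular sequence defining $\frac{1}{1-x}=2$; for $x=2$ they transcend any finite value, exactly as the general discussion predicts.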
By the preceding theorem the series will be \textit{convergent} if the limit of the ratio of any term to that which precedes it be numerically less than 1; \textit{i.~e.} if \[ \lim_{n\doteq\infty}\left(\frac{a_{n+1}x^{n+1}}{a_nx^n}\right), \; \text{ or} \lim_{n\doteq\infty}\left(\frac{a_{n+1}}{a_n}x\right), \; <1; \] that is, \textit{if $x$ be numerically } $\displaystyle <\lim_{n\doteq\infty}\left(\frac{a_n}{a_{n+1}}\right)$; and \textit{divergent, if $x$ be numerically} $\displaystyle >\lim_{n\doteq\infty}\left(\frac{a_n}{a_{n+1}}\right)$. 1. Thus the infinite series \[a^m+ma^{m-1}x+\frac{m(m-1)}{2!}a^{m-2}x^2+\cdots,\] which is the expansion, by the binomial theorem, of $(a+x)^m$ for other than positive integral values of $m$, is convergent for values of $x$ numerically less than $a$, divergent for values of $x$ numerically greater than $a$. For in this case \begin{align*} \lim_{n\doteq\infty}\left(\frac{a_n}{a_{n+1}}\right) & =\lim_{n\doteq\infty} \left[a\times\frac{\frac{m(m-1)\cdots(m-n+1)}{(n)!}}{\frac{m(m-1)\cdots(m-n)}{(n+1)!}}\right] \\ & =\lim_{n\doteq\infty}\left(a\times\frac{n+1}{m-n}\right)\\ & =\lim_{n\doteq\infty}\left(\frac{a\left(1+\frac{1}{n}\right)}{-1+\frac{m}{n}}\right)=-a.\\ \end{align*} 2. Again, the expansion of $e^{x}$, i.~e. $\displaystyle 1+x+\frac{x^{2}}{2!}+\cdots$, is convergent for all finite values of $x$. \[ \text{For here} \quad \lim_{n\doteq\infty}\left(\frac{a_{n}}{a_{n+1}}\right)= \lim_{n\doteq\infty}\left(\frac{\frac{1}{(n)!}}{\frac{1}{(n+1)!}}\right)= \lim_{n\doteq\infty}(n+1)=\infty. \] The same is true for the series which is the expansion of $a^{x}$. \addcontentsline{toc}{section}{\numberline{}The fundamental operations on infinite series} \textbf{64. Operations on Infinite Series.} 1. 
\textit{The sum of two convergent series, $a_{1}+a_{2}+\cdots$ and $b_{1}+b_{2}+\cdots$, is the series $(a_{1}+b_{1})+(a_{2}+b_{2})+\cdots$; and their difference is the series $(a_{1}-b_{1})+(a_{2}-b_{2})+\cdots$.} The sum of the series $a_{1}+a_{2}+\cdots$ is the number defined by $s_{1},s_{2},\cdots$, and the sum of the series $b_{1}+b_{2}+\cdots$ is the number defined by $t_{1},t_{2},\cdots$, where $s_{i}=a_{1}+a_{2}+\cdots+a_{i}$ and $t_{i}=b_{1}+b_{2}+\cdots+b_{i}$. The sum of the two series is therefore the number defined by $s_{1}+t_{1},s_{2}+t_{2},\cdots$, \S~29, (1). But if $S_{i}=(a_{1}+b_{1})+(a_{2}+b_{2})+\cdots+(a_{i}+b_{i})$, we have $S_{i}=s_{i}+t_{i}$ for all values of $i$. This is immediately obvious for finite values of $i$, and there can be no difference between $S_{i}$ and $s_{i}+t_{i}$ as $i$ approaches $\infty$, since it would be a difference having 0 for its limit. Therefore the number defined by $s_{1}+t_{1},s_{2}+t_{2},\cdots $, is the sum of the series $(a_{1}+b_{1})+(a_{2}+b_{2})+\cdots$. 2. \textit{The product of two absolutely convergent series} \begin{align*} & a_{1}+a_{2}+\cdots \; \textit{and} \; b_{1}+b_{2}+\cdots \\ \textit{is the series} \quad a_{1}b_{1} & +(a_{1}b_{2}+a_{2}b_{1})+(a_{1}b_{3}+a_{2}b_{2}+a_{3}b_{1})+\cdots \\ & +(a_{1}b_{n}+a_{2}b_{n-1}+\cdots+a_{n-1}b_{2}+a_{n}b_{1})+\cdots. \end{align*} Each set of terms within parentheses is to be regarded as constituting a single term of the product; and it will be noticed that the first of them consists of the one partial product in which the sum of the indices is 2, the second of all in which the sum of the indices is 3, etc. By \S~29, (3), the product of $a_{1}+a_{2}+\cdots$ by $b_{1}+b_{2}+\cdots$ is $\displaystyle \lim_{n\doteq\infty}(s_{n}t_{n})$, where $s_{n}$ and $t_{n}$ represent the sums of the first $n$ terms of $a_{1}+a_{2}+\cdots$, $b_{1}+b_{2}+\cdots$, respectively. Suppose first that the terms of $a_{1}+a_{2}+\cdots$ and $b_{1}+b_{2}+\cdots$ are all positive. 
Then if $S_{n}$ be the sum of the first $n$ terms of $a_1b_1 + (a_1b_2 + a_2b_1) + \cdots$, and $m$ represent $\displaystyle \frac{n}{2}$ when $n$ is even and $\displaystyle \frac{n-1}{2}$ when $n$ is odd, \begin{align*} \text{evidently} \qquad s_nt_n > & S_n > s_mt_m.\\ \text{But} \qquad \lim_{n \doteq \infty}(s_nt_n) & = \lim_{n \doteq \infty}(s_mt_m).\\ \text{Therefore} \qquad \lim_{n \doteq \infty}(S_n) & = \lim_{n \doteq \infty}(s_nt_n). \end{align*} If the terms of $a_1 + a_2 + \cdots$, $b_1 + b_2 + \cdots$ be not all of the same sign, call the sums of the first $n$ terms of the series got by making all the signs plus, $s_n'$ and $t_n'$ respectively; also $S_n'$, the sum of the first $n$ terms of the series which is their product. Then by the demonstration just given \[ \lim_{n \doteq \infty}(S'_n) = \lim_{n \doteq \infty}(s'_nt'_n); \] but $S_n$ always differs from $s_nt_n$ by less than (at greatest by as much as) $S'_n$ from $s'_nt'_n$; therefore, as before, \[ \lim_{n \doteq \infty}(S_n) = \lim_{n \doteq \infty}(s_nt_n). \] 3. The \textit{quotient} of the series $a_0 + a_1x + \cdots$ by the series $b_0 + b_1x + \cdots$ ($b_0$ not 0) is a series of a similar form, as $c_0 + c_1x + \cdots$, which converges when $a_0 + a_1x + \cdots$ is absolutely convergent and $b_1x + \cdots$ is numerically less than $b_0$. \section{COMPLEX SERIES.} The terms \textit{sum, convergent, divergent}, have the same meanings in connection with complex as in connection with real series. \addcontentsline{toc}{section}{\numberline{}General test of convergence} \textbf{65. General Test of Convergence.} \textit{A complex series, $a_1 + a_2 + \cdots$, is convergent when the modulus of $s_{m+n} - s_m$ may be made less than any assignable number $\delta$ by taking $m$ great enough, and that for all values of $n$; divergent, when this condition is not satisfied.} See \S~48, Cor. II; \S~59. \addcontentsline{toc}{section}{\numberline{}Absolute and conditional convergence} \textbf{66. 
Of Absolute Convergence.} Let \begin{align*} & a_1 + a_2 + \cdots \; \text{ be a complex series,}\\ \text{and} \qquad & A_1 + A_2 + \cdots, \; \text{ the series of the moduli of its terms.} \end{align*} \textit{If the series $A_{1}+A_{2}+\cdots$, be convergent, the series $a_{1}+a_{2}+\cdots$ will be convergent also.} For the modulus of the sum of a set of complex numbers is less than (at greatest equal to) the sum of their moduli (\S~48, Cor. II). By hypothesis, $S_{m+n}-S_{m}$ is less than any assignable number $\delta$, where $S_{m}=A_{1}+A_{2}+\cdots+A_{m}$, etc.; much more must the modulus of $s_{m+n}-s_{m}$ be less than $\delta$. The converse of this theorem is not necessarily true; and a convergent series, $a_{1}+a_{2}+\cdots$, is said to be \textit{absolutely} or only \textit{conditionally} convergent, according as the series $A_{1}+A_{2}+\cdots$ is convergent or divergent. \addcontentsline{toc}{section}{\numberline{}The region of convergence} \textbf{67. The Region of Convergence of a Complex Series.} \textit{If the complex series $a_{0}+a_{1}z+a_{2}z^{2}+\cdots$ is convergent when $z=Z$,
%[*Transcriber's note: corrected a_{1}z^{2} to a_{2}z^{2}*]
it is absolutely convergent for every value of $z$ which is numerically less than $Z$, that is, it converges absolutely at every point within that circle in the plane of complex numbers which has the null-point for centre and passes through the point $Z$.} For since the series $a_{0}+a_{1}Z+a_{2}Z^{2}+\cdots$ is convergent, its term $a_{n}Z^{n}$ approaches 0 as limit when $n$ is indefinitely increased. It is therefore possible to find a real number $M$ which is numerically greater than every term of this series. Assign to $z$ any value which is numerically less than $Z$, whose corresponding point, therefore, lies within the circle through the point $Z$.
For this value of $z$ the terms of the series $a_{0}+a_{1}z+a_{2}z^{2}+\cdots$ will be numerically less than the corresponding terms of the series \[ M+M\frac{z}{Z}+M\left(\frac{z}{Z}\right)^{2}+\cdots. \tag{1} \] \noindent For, since $a_{n}Z^{n}$ is numerically less than $M$, $a_{n}z^{n}$, which equals $a_{n}Z^{n}\left(\frac{z}{Z}\right)^{n}$, is numerically less than $M\left(\frac{z}{Z}\right)^{n}$. And the series (1) is convergent, being a decreasing geometric series; therefore the series $a_{0}+a_{1}z+a_{2}z^{2}+\cdots$ is absolutely convergent. \S~62, 1; \S~66. \textbf{68. Theorem.} \textit{By taking $z$ small enough, the sum of the series $a_{0}+a_{1}z+a_{2}z^{2}+\cdots$ may be made to differ as little as one may please from its first term $a_{0}$.} For let $z_{1}$, $z_{2}$, $z_{3}, \dotsc$ be a regular sequence of values of $z$ defining 0, $z_{1}$ being a value for which the series converges absolutely, and suppose $z_{1}>z_{2}>z_{3}$, etc.; i.~e.\ that each is greater than the one following it. \begin{flalign*} &\text{\indent Since }& & a_{0} + a_{1}z + a_{2}z^{2} + \dotsb + a_{n}z^{n} + \dotsb &&\\ \intertext{converges absolutely for $z=z_{1}$, so also does} && & a_{1}z + a_{2}z^{2} + \dotsb + a_{n}z^{n} + \dotsb, &&\\ &\text{and, therefore, }& & a_{1} + a_{2}z + \dotsb + a_{n}z^{n-1} + \dotsb. &&\\
%[*Transcriber's note:
%The last term has been corrected from a_{n}z^{n} to a_{n}z^{n-1} and similarly in the following 2 infinite sums.]
&\text{\indent Hence }& & A_{1} + A_{2}z_{1} + \dotsb + A_{n}z_{1}^{n-1} + \dotsb \end{flalign*} (where $A_{i}= \text{ modulus } a_{i}$) is convergent, and a number $M$ can be found greater than its sum. And since for $z=z_{2}$, $z_{3}$, $\dotsc$ the individual terms of \[ A_{1}+A_{2}z+ \dotsb +A_{n}z^{n-1}+ \dotsb \] are less than the corresponding terms of $A_{1}+A_{2}z_{1}+ \dotsb +A_{n}z_{1}^{n-1}+ \dotsb$, this series and, therefore, $\text{modulus}\,(a_{1}+a_{2}z+ \dotsb)$ remain always less than $M$ as $z$ runs through the sequence of values $z_{2}$, $z_{3}$, $\dotsc$. Hence the values of $\text{modulus}\,(a_{1}z+a_{2}z^{2}+ \dotsb)$ which correspond to $z=z_{1}$, $z_{2}, \dotsc$ constitute a regular sequence defining $0$, each term being numerically less than the corresponding term of the regular sequence $z_{1}M$, $z_{2}M$, $\dotsc$ which defines $0$. \textsc{Cor.} The same argument proves that if \begin{flalign*} && & a_{m}z^{m} + a_{m+1}z^{m+1} + \cdots, &&\\ &\text{or }& & z^{m} (a_{m} + a_{m+1}z + \cdots), && \end{flalign*} be the sum of all terms of the series from the $(m+1)$th on, the series $a_{m}+a_{m+1}z+\cdots$ can be made to differ as little as one may please from its first term $a_{m}$.
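The behavior inside the circle of convergence can likewise be checked numerically. A brief sketch in Python (an assumed example series, not from the text): the series $z+\frac{z^2}{2}+\frac{z^3}{3}+\cdots$ converges for every complex $z$ of modulus less than 1, where its sum is known from later analysis to be $-\log(1-z)$:

```python
import cmath

def partial_sums(z, terms=200):
    """Partial sums of z + z^2/2 + z^3/3 + ... at a complex point z."""
    s, out = 0j, []
    for n in range(1, terms + 1):
        s += z ** n / n
        out.append(s)
    return out

# a point inside the circle of convergence |z| < 1
z = 0.5 * cmath.exp(0.7j)        # modulus 0.5, arbitrary angle
sums = partial_sums(z)
# successive partial sums form a regular sequence: the series converges,
# and the sums agree with the closed form -log(1 - z)
error = abs(sums[-1] - (-cmath.log(1 - z)))
```

At a point of modulus $\frac{1}{2}$ the terms are dominated by the geometric series with ratio $\frac{1}{2}$, exactly as in the proof of \S~67, so the partial sums settle rapidly; at points of modulus greater than 1 the terms themselves do not approach 0 and the sums diverge.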
\addcontentsline{toc}{section}{\numberline{}The fundamental operations on complex series} \textbf{69. Operations on Complex Series.} The definitions of \emph{sum}, \emph{difference}, and \emph{product} of two convergent complex series are the same as those already given for real series, viz.: 1. \emph{The sum of two convergent series, $a_{1}+a_{2}+\cdots$ and $b_{1}+b_{2}+\cdots$, is the series $(a_{1}+b_{1}) + (a_{2}+b_{2}) + \cdots$; their difference, the series $(a_{1}-b_{1}) + (a_{2}-b_{2}) + \cdots$.} \begin{flalign*} &\text{\indent For if }& & s_{i}=a_{1}+a_{2}+\cdots+a_{i} \text{ and } t_{i}=b_{1}+b_{2}+\cdots+b_{i}, &&\\ && & \text{ modulus } [(s_{m+n}\pm t_{m+n}) - (s_{m}\pm t_{m})] &&\\ && &\mspace{60mu} \leq \text{ modulus } (s_{m+n}-s_{m}) + \text{ modulus } (t_{m+n}-t_{m}), && \end{flalign*} and may, therefore, be made less than any assignable number by taking $m$ great enough. The theorem therefore follows by the reasoning of \S~64,~1. 2. \emph{The product of two absolutely convergent series,} \[ a_{1} + a_{2} + a_{3} + \cdots \text{ \emph{ and }} b_{1} + b_{2} + b_{3} + \cdots, \] \emph{is the series} $a_{1}b_{1} + (a_{1}b_{2}+a_{2}b_{1}) + (a_{1}b_{3}+a_{2}b_{2}+a_{3}b_{1}) \cdots$. For, letting $S_{i}=A_{1}+A_{2}+\cdots+A_{i}$ and $T_{i}=B_{1}+B_{2}+\cdots+B_{i}$, where $A_{i}$, $B_{i}$, are the moduli of $a_{i}$, $b_{i}$, respectively, and representing by $\sigma_{n}$ the sum of the first $n$ terms of the series \begin{flalign*} && & a_{1}b_{1} + (a_{1}b_{2}+a_{2}b_{1}) + \cdots &&\\ \intertext{and by $\Sigma_{n}$ the sum of the first $n$ terms of the series } && & A_{1}B_{1} + (A_{1}B_{2}+A_{2}B_{1}) + \cdots, &&\\ &\text{we have }& & \text{ modulus } (s_{n}t_{n}-\sigma_{n})\leq S_{n}T_{n}-\Sigma_{n}. &&\\ \intertext{\indent But the limit of the right member of this inequality (or equation) is 0 (\S~64,~2); therefore } && & \lim_{n\doteq\infty}(\sigma_{n}) = \lim_{n\doteq\infty}(s_{n}t_{n}).
&& \end{flalign*} \chapter{THE EXPONENTIAL AND LOGARITHMIC FUNCTIONS\@. UNDETERMINED COEFFICIENTS\@. INVOLUTION AND EVOLUTION\@. THE BINOMIAL THEOREM.} \addcontentsline{toc}{section}{\numberline{}Definition of function} \textbf{70. Function.} A variable $w$ is said to be a \textit{function} of a second variable $z$ for the area $A$ of the $z$-plane (§42), when to the $z$ belonging to every point of $A$ there corresponds a determinate value or set of values of $w$. Thus if $w=2z$, $w$ is a function of $z$. For when $z=1$, $w=2$; when $z=2$, $w=4$; and there is in like manner a determinate value of $w$ for every value of $z$. In this case $A$ is coextensive with the entire $z$-plane. Similarly $w$ is a function of $z$, if \[w=a_0+a_1 z+a_2 z^2+\ldots+a_n z^n+\ldots,\] so long as this infinite series is convergent, \textit{i.~e.} for the portion of the $z$-plane bounded by a circle having the null-point for centre, and for radius the modulus of the smallest value of $z$ for which the series diverges. It is customary to use for $w$ when a function of $z$ the symbol $f(z)$, read ``function $z$.'' \addcontentsline{toc}{section}{\numberline{}Functional equation of the exponential function} \textbf{71. Functional Equation of the Exponential Function.} For positive integral values of $z$ and $t$, $a^z\cdot a^t=a^{z+t}$. The question naturally suggests itself, is there a function of $z$ which will satisfy the condition expressed by this equation, or the ``functional equation'' $f(z)f(t)=f(z+t)$, for \textit{all} values of $z$ and $t$? 
We proceed to the investigation of this question and another which it suggests, not only because they lead to definitions of the important functions $a^z$ and $\log_az$ for complex values of $a$ and $z$, and so give the operations of involution, evolution, and the taking of logarithms the perfectly general character already secured to the four fundamental operations,---but because they afford simple examples of a large class of mathematical investigations.\footnote{An application of the principle of permanence (§12) is involved in the use of functional equations to define functions. The equation $a^za^t=a^{z+t}$, for instance, only becomes a functional equation when its \textit{permanence is assumed} for other values of $z$ and $t$ than those for which it has been actually demonstrated. In this respect the methods of definition of the negative and the fraction on the one hand, and the functions $a^z$, $\log_az$, on the other, are identical; but, while the equation $(a-b)+b=a$ itself served as definition of $a-b$, there being no simpler symbols in terms of which $a-b$ could be expressed, from the equation $a^za^t=a^{z + t}$ a series (\S~73, (4)) may be deduced which defines $a^z$ in terms of numbers of the system $a+ib$.} \addcontentsline{toc}{section}{\numberline{}Undetermined coefficients} \textbf{72. Undetermined Coefficients.} In investigations of this sort, the method commonly used in one form or another is that of \textit{undetermined coefficients}. This method consists in assuming for the function sought an expression involving a series of unknown but constant quantities---coefficients,---in substituting this expression in the equation or equations which embody the conditions which the function must satisfy, and in so determining these unknown constants that these equations shall be \textit{identically} satisfied, that is to say, satisfied for all values of the variable or variables. 
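The method of undetermined coefficients lends itself directly to computation. As a hypothetical illustration (the example, a reciprocal power series, is chosen for its simplicity and is not the case treated below): to find the series $c_0+c_1z+c_2z^2+\cdots$ whose product with a given series $b_0+b_1z+b_2z^2+\cdots$ ($b_0$ not 0) is identically 1, substitute, expand, and require the coefficient of every positive power of $z$ in the product to vanish; each condition then determines one new coefficient. A minimal sketch in Python, with illustrative names:

```python
def reciprocal_series(b, n_terms):
    """Coefficients c_0, c_1, ... of the series satisfying
    (b_0 + b_1 z + ...)(c_0 + c_1 z + ...) = 1 identically,
    found by equating coefficients of each power of z (requires b_0 != 0)."""
    c = [1.0 / b[0]]
    for n in range(1, n_terms):
        # coefficient of z^n in the product must vanish:
        # sum_{k=0}^{n} b_k * c_{n-k} = 0  ->  solve for c_n
        s = sum(b[k] * c[n - k] for k in range(1, min(n, len(b) - 1) + 1))
        c.append(-s / b[0])
    return c

# 1 / (1 - z) = 1 + z + z^2 + ...
print(reciprocal_series([1.0, -1.0], 5))   # [1.0, 1.0, 1.0, 1.0, 1.0]
```

Applied to $1-z$ this reproduces the geometric series $1+z+z^2+\cdots$, in agreement with the quotient of series described in \S~64, 3.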
The method is based on the following theorem, called ``the theorem of undetermined coefficients,'' \; viz.: \textit{If the series $A+Bz+Cz^2+\cdots$ be equal to the series $A'+B'z+C'z^2+\cdots$ for all values of $z$ which make both convergent, and the coefficients be independent of $z$, the coefficients of like powers of $z$ in the two are equal.} For, since \[A+Bz+Cz^2+\cdots =A'+B'z+C'z^2+\cdots,\] \[A-A'+(B-B')z+(C-C')z^2+\cdots=0\] throughout the circle of convergence common to the two given series (\S\S~67, 69, 1). And being convergent within this circle, the series \[A-A'+(B-B')z+(C-C')z^2+\cdots\] can be made to differ as little as we please from its first term, $A - A'$ (\S~68). \[ \therefore A - A' = 0 \; \text{(\S~30, Cor.), or} \; A = A'. \] Therefore \[ (B - B')z + (C - C')z^2 + \cdots = 0 \] throughout the common circle of convergence, and hence (at least, for values of $z$ different from 0) \[ B - B' + (C - C')z + \cdots = 0 \] Therefore by the reasoning which proved that \[ A - A' = 0, \; B - B' = 0, \; \text{or} \; B = B'. \] In like manner it may be proved that $C = C'$, $D = D'$, etc. \begin{align*} \text{COR. \textit{If}} & \quad A + Bz + Ct+Dz^2 + Ezt + Ft^2 + \cdots \\ & = A' + B'z + C't +D'z^2 + E'zt + F't^2 + \cdots \end{align*} \noindent \textit{for all values of z and t which make both series convergent, and z be independent of t, and the coefficients independent of both z and t, the coefficients of like powers of z and t in the two series are equal.} For, arrange both series with reference to the powers of either variable. The coefficients of like powers of this variable are then equal, by the preceding theorem. These coefficients are series in the other variable, and by applying the theorem to each equation between them the corollary is demonstrated. \addcontentsline{toc}{section}{\numberline{}The exponential function} \textbf{73. 
The Exponential Function.} To apply this method to the case in hand, assume \[ f(z) = A_0 + A_1z + A_2z^2 + \cdots + A_nz^n + \cdots, \] and determine whether values of the coefficients $A_i$ can be found capable of satisfying the ``functional equation,'' \[ f(z)f(t) = f(z + t), \tag{1} \] for all values of $z$ and $t$. On substituting in this equation, we have, for all values of $z$ and $t$ for which the series converge, \[ \begin{split} (A_{0} + A_{1}z + A_{2}z^{2} + \cdots + A_{n}z^{n} + \cdots) (A_{0} + A_{1}t + A_{2}t^{2} + \cdots + A_{n}t^{n} + \cdots) \\ = A_{0} + A_{1}(z+t) + A_{2}(z+t)^{2} + \cdots + A_{n}(z+t)^{n} + \cdots; \end{split} \] or, expanding and arranging the terms with reference to the powers of $z$ and $t$, \begin{align*} A_{0}A_{0} & + A_{1}A_{0}z + A_{0}A_{1}t + A_{2}A_{0}z^{2} + A_{1}A_{1}zt + A_{0}A_{2}t^{2} + \cdots\\ & + A_{n}A_{0}z^{n} + A_{n-1}A_{1}z^{n-1}t + \cdots + A_{n-k}A_{k}z^{n-k}t^{k} + \cdots + A_{0}A_{n}t^{n} \\ & + \cdots \\ & = A_{0} + A_{1}z + A_{1}t + A_{2}z^{2} + 2A_{2}zt + A_{2}t^{2} + \cdots \\ & + A_{n}z^{n} + A_{n}nz^{n-1}t + \cdots + A_{n}n_{k}z^{n-k}t^{k} + \cdots + A_{n}t^{n} + \cdots,\\ \text{where} & \qquad n_{k} = \frac{n(n-1) \cdots (n-k+1)}{k!}. \end{align*} Equating the coefficients of like powers of $z$ and $t$ in the two members of this equation, we get \begin{align*} & A_{n-k}A_{k} \; \text{ equal always to} \; A_{n}n_{k}.\\ \text{In particular} \; & A_{0}A_{0} = A_{0}, \; \text{therefore} \; A_{0} = 1. \quad \text{Also}\\ & A_{1}A_{1} = 2A_{2}, \quad A_{2}A_{1} = 3A_{3}, \\ & A_{3}A_{1} = 4A_{4}, \; \cdots , \; A_{n-1}A_{1} = nA_{n}; \end{align*} or, multiplying these equations together member by member, \[ A_{1}^{n} = A_{n}n!, \; \text{or} \; A_{n} = \frac{A_{1}^{n}}{n!}. \] A part of the equations among the coefficients are, therefore, sufficient to determine the values of all of them in terms of the one coefficient $A_{1}$.
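The determination just made may be tested by machine: with $A_n = A_1^n/n!$, every one of the coefficient equations $A_{n-k}A_k = A_n n_k$ is found to hold exactly. The following is a minimal sketch in Python (the names are illustrative only), using exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def n_k(n, k):
    # the coefficient n_k = n(n-1)...(n-k+1)/k!
    num = 1
    for i in range(k):
        num *= n - i
    return Fraction(num, factorial(k))

A1 = Fraction(3, 2)                               # the one free coefficient A_1
A = [A1 ** n / factorial(n) for n in range(12)]   # A_n = A_1^n / n!

# every equation A_{n-k} A_k = A_n n_k holds identically
for n in range(12):
    for k in range(n + 1):
        assert A[n - k] * A[k] == A[n] * n_k(n, k)
```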
But these values will satisfy the remaining equations; for substituting them in the general equation \begin{align*} & A_{n-k}A_{k} = A_{n}n_{k},\\ \text{we get} \qquad & \frac{A_{1}^{n-k}}{(n-k)!} \times \frac{A_{1}^{k}}{k!} = \frac{A_{1}^{n}}{n!} \times \frac{n(n-1) \cdots (n-k+1)}{k!}, \end{align*} which is obviously an identical equation. The coefficient $A_{1}$ or, more simply written, $A$, remains undetermined. It has been demonstrated, therefore, that to satisfy equation (1), it is only necessary that $f(z)$ be the sum of an infinite series of the form \[ 1 + Az + \frac{A^{2}}{2!}z^{2} + \frac{A^{3}}{3!}z^{3} + \cdots, \tag{2} \] where $A$ is undetermined; a series which has a sum, i.~e.\ is convergent, for all finite values of $z$ and $A$. (\S~63, 2, \S~66.) By properly determining $A$, $f(z)$ may be identified with $a^{z}$, for any particular value of $a$. If $a^{z}$ is to be identically equal to the series (2), $A$ must have such a value that \begin{align*} & a = 1 + A + \frac{A^{2}}{2!} + \frac{A^{3}}{3!} + \cdots. \\ \text{Let} \qquad & e^{z} = 1 + z + \frac{z^{2}}{2!} + \frac{z^{3}}{3!} + \cdots, \tag{3} \\ \text{where} \qquad & e = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots;\footnotemark \\ \text{Then} \qquad & e^{A} = 1 + A + \frac{A^{2}}{2!} + \frac{A^{3}}{3!} + \cdots. \\ \text{Therefore} \qquad & a = e^{A}; \end{align*} or, calling any number which satisfies the equation \[e^{z} = a\] \noindent the \textit{logarithm} of $a$ to the base $e$ and writing it $\log_{e}a$, \[ A = \log_{e}a.\] \footnotetext{\label{irrationality}This number $e$, the base of the Naperian system of logarithms, is a ``transcendental'' \, irrational, transcendental in the sense that there is no algebraic equation with integral coefficients of which it can be a root (see Hermite, Comptes Rendus, LXXVII).
$\pi$ has the same character, as Lindemann proved in 1882, deducing at the same time the first actual demonstration of the impossibility of the famous old problem of squaring the circle by aid of the straight edge and compasses only (see Mathematische Annalen, XX).} Whence finally, \[ a^z = 1 + (\log_{e}a)z + \frac{(\log_{e}a)^2z^2}{2!} + \frac{(\log_{e}a)^3z^3}{3!} + \cdots, \tag{4} \] a definition of $a^z$, valid for all finite complex values of $a$ and $z$, if it may be assumed that $\log_e a$ is a number, whatever the value of $a$. The series (3) is commonly called the \emph{exponential series}, and its sum $e^z$ the \emph{exponential function}. It is much more useful than the more general series (2), or (4), because of its greater simplicity; its coefficients do not involve the logarithm, a function not yet fully justified and, as will be shown, to a certain extent indeterminate. Inasmuch, however, as $e^z$ is a particular function of the class $a^z$, $a^z$ is sometimes called the general exponential function, and series (4) the general exponential series. \addcontentsline{toc}{section}{\numberline{}The functions sine and cosine} \textbf{74. The Functions Sine and Cosine.} It was shown in \S~51 that when $\theta$ is a real number, \begin{align*} e^{i\theta} & = \cos\theta + i\sin\theta. \\ \text{But} \qquad e^{i\theta} & = 1 + i\theta + \frac{(i\theta)^2}{2!} + \frac{(i\theta)^3}{3!} + \frac{(i\theta)^4}{4!} + \cdots \\ &= 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots \\ & + i\left(\theta - \frac{\theta^3}{3!} + \cdots\right). \end{align*} Therefore (by \S~36, 2, Cor.), for real values of $\theta$ \[ \cos\theta = 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots, \tag{5} \] and \[ \sin\theta = \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots, \tag{6} \] series which both converge for all finite values of $\theta$.
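Series (5) and (6) lend themselves to numerical verification against the circular functions. A sketch (Python; the names and the truncation at 20 terms are illustrative choices) checking the agreement for several real values of $\theta$:

```python
import math

def cos_series(theta, terms=20):
    # series (5): 1 - theta^2/2! + theta^4/4! - ...
    return sum((-1) ** k * theta ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

def sin_series(theta, terms=20):
    # series (6): theta - theta^3/3! + theta^5/5! - ...
    return sum((-1) ** k * theta ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

for theta in (0.0, 1.0, math.pi / 3, 2.5):
    assert abs(cos_series(theta) - math.cos(theta)) < 1e-12
    assert abs(sin_series(theta) - math.sin(theta)) < 1e-12
```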
Though $\cos\theta$ and $\sin\theta$ only admit of geometrical interpretation when $\theta$ is real, it is convenient to continue to use these names for the sums of the series (5) and (6) when $\theta$ is complex. \addcontentsline{toc}{section}{\numberline{}Periodicity of these functions} \textbf{75. Periodicity.} When $\theta$ is real, evidently neither its sine nor its cosine will be changed if it be increased or diminished by any multiple of four right angles, or $2\pi$; or, if $n$ be any positive integer, \[ \cos (\theta \pm 2n\pi) = \cos \theta, \; \sin (\theta \pm 2n\pi) = \sin \theta, \] and hence \[e^{i(\theta \pm 2n\pi)} = e^{i\theta}.\] The functions $e^{i\theta}$, $\cos \theta$, $\sin \theta$, are on this account called \emph{periodic} functions, with the \emph{modulus of periodicity $2\pi$}. \addcontentsline{toc}{section}{\numberline{}The logarithmic function} \textbf{76. The Logarithmic Function.} If $z = e^Z$ and $t = e^T$, \[ zt = e^Z e^T = e^{Z + T}, \qquad \qquad \text{\S~73}\] or \[ \log_e zt = \log_e z + \log_e t. \tag{7} \] The question again is whether a function exists capable of satisfying this equation, or, more generally, the ``functional equation,'' \[ f(zt) = f(z) + f(t), \tag{8} \] for complex values of $z$ and $t$. When $z = 0$, (7) becomes \[ \log_e 0 = \log_e 0 + \log_e t, \] an equation which cannot hold for any value of $t$ for which $\log_e t$ is not zero unless $\log_e 0$ is numerically greater than any finite number whatever. Therefore $\log_e 0$ is infinite. On the other hand, when $z = 1$, (7) becomes \[ \log_e t = \log_e 1 + \log_e t, \] so that $\log_e 1$ is zero. Instead, therefore, of assuming a series with undetermined coefficients for $f(z)$ itself, we assume one for $f(1 +z)$, setting \[ f(1 + z) = A_1 z + A_2 z^2 + \cdots + A_n z^n + \cdots, \] and inquire whether the coefficients $A_i$ admit of values which satisfy the functional equation (8) for complex values of $z$ and $t$.
Now \[1+z+t=(1+z)\left(1+\frac{t}{1+z}\right), \; \text{ identically}.\] \[\therefore f\left[1+(z+t)\right]=f(1+z)+f\left(1+\frac{t}{1+z}\right),\] \noindent or \begin{align*} &A_1(z+t)+A_2(z+t)^2+\cdots +A_n(z+t)^n+\cdots\\ =&A_1z+A_2z^2+\cdots +A_nz^n+\cdots\\ +&A_1(1+z)^{-1}t+A_2(1+z)^{-2}t^2+\cdots +A_n(1+z)^{-n}t^n+\cdots \end{align*} Equating the coefficients of the first power of $t$ (\S~72) in the two members of this equation, \begin{align*} & A_1+2A_2z+3A_3z^2+\cdots +(n+1)A_{n+1}z^n+\cdots\\ = \, & A_1(1-z+z^2-z^3+\cdots +(-1)^nz^n+\cdots ); \end{align*} whence, equating the coefficients of like powers of $z$, \begin{align*} & A_1=A_1, \; 2A_2=-A_1,\cdots,nA_n=(-1)^{n-1}A_1,\cdots,\\ \text{or} \qquad & A_2=-\frac{A_1}{2},\cdots, A_n=(-1)^{n-1}\frac{A_1}{n},\cdots. \end{align*} As in the case of the exponential function, a part of the equations among the coefficients are sufficient to determine them all in terms of the one coefficient $A_1$. But as in that case (by assuming the truth of the binomial theorem for negative integral values of the exponent) it can be readily shown that these values will satisfy the remaining equations also. The series $\displaystyle \qquad z-\frac{z^2}{2}+\frac{z^3}{3}-\cdots +(-1)^{n-1}\frac{z^n}{n}+\cdots$ \noindent converges for all values of $z$ whose moduli are less than 1 (\S~62, 3). For such values, therefore, the function \[ A\left(z-\frac{z^2}{2}+\cdots +(-1)^{n-1}\frac{z^n}{n}+\cdots \right) \tag{9} \] satisfies the functional equation \[ f\left[(1+z)(1+t)\right]=f(1+z)+f(1+t). \] \begin{align*} \text{And since} \qquad & z\equiv 1-(1-z) \; \text{and} \; t\equiv1-(1-t),\\ \text{the function} \qquad & -A\left((1-z)+\frac{(1-z)^2}{2}+\cdots +\frac{(1-z)^n}{n}+\cdots \right) \end{align*} \noindent satisfies this equation when written in the simpler form \begin{equation*} f(zt)=f(z)+f(t), \end{equation*} for values of $1-z$ and $1-t$ whose moduli are both less than 1. 1. $\log_e b$.
To identify the general function $f(1+z)$ with the particular function $\log_e(1+z)$ it is only necessary to give the undetermined coefficient $A$ the value 1. For since $\log_e(1+z)$ belongs to the class of functions which satisfy the equation (8), \begin{equation*} \log_e(1+z)=A\left(z-\frac{z^2}{2}+\cdots\right). \end{equation*} Therefore \begin{align*} e^{\log_e(1+z)}&=e^{A\left(z-\frac{z^2}{2}+\cdots \right)}\\ &=1+A\left(z-\frac{z^2}{2}+\cdots\right)+\frac{1}{2!}A^2\left(z-\frac{z^2}{2}+\cdots \right)^2+\cdots.\\ \text{But} \qquad e^{\log_e(1+z)}&=1+z. \end{align*} Hence \begin{equation*} 1+z=1+A\left(z-\frac{z^2}{2}+\cdots \right)+\frac{1}{2!}A^2\left(z-\frac{z^2}{2}+\cdots \right)^2+\cdots ; \end{equation*} or, equating the coefficients of the first power of $z$, $A=1$. The coefficients of the higher powers of $z$ in the right member are then identically 0. It has thus been demonstrated that $\log_e b$ is a number (real or complex), if, when $b$ is written in the form $1+z$, the absolute value of $z$ is less than 1. To prove that it is a number for other than such values of $b$, let $b=\rho e^{i\theta}$ (\S~51), where $\rho$, as being the modulus of $b$, is positive. \[\text{Then} \qquad \log_eb=\log_e\rho +i\theta,\] \noindent and it only remains to prove that $\log_e\rho$ is a number. Let $\rho$ be written in the form $\displaystyle e^n-(e^n-\rho )$, where $e^n$ is the first integral power of $e$ greater than $\rho$. \begin{align*} \text{Then since} \qquad & e^n - (e^n - \rho) \equiv e^n \left(1 - \frac{e^n-\rho}{e^n}\right),\\ & \log_e\rho = \log_e e^n + \log_e \left(1 - \frac{e^n - \rho}{e^n} \right) \\ & \qquad \quad = n + \log_e\left(1 - \frac{e^n - \rho}{e^n}\right), \end{align*} and $\displaystyle \log_e\left(1 - \frac{e^n - \rho}{e^n}\right)$ is a number since $\displaystyle \frac{e^n - \rho}{e^n}$ is less than 1. 2. $\log_a b$.
It having now been fully demonstrated that $a^z$ is a number satisfying the equation $a^Z a^T = a^{Z+T}$ for all finite values of $a$, $Z$, $T$; let $a^Z = z$, $a^T = t$, and call $Z$ the \textit{logarithm of $z$ to the base $a$}, or $\log_a z$, and in like manner $T$, $\log_a t$. \begin{align*} \text{Then, since} \quad & zt = a^Z a^T = a^{Z+T},\\ & \log_a(zt) = \log_a z + \log_a t, \end{align*} or $\log_a z$ belongs, like $\log_e z$, to the class of functions which satisfy the functional equation (8). Pursuing the method followed in the case of $\log_e b$, it will be found that $\displaystyle \log_a(1+z)$ is equal to the series $\displaystyle A\left(z - \frac{z^2}{2} + \cdots\right)$ when $\displaystyle A= \frac{1}{\log_e a}$. This number is called the \textit{modulus} of the system of logarithms of which $a$ is base. \addcontentsline{toc}{section}{\numberline{}Indeterminateness of logarithms} \textbf{77. Indeterminateness of $\mathbf{\log a}$.} Since any complex number $a$ may be thrown into the form $\rho e^{i\theta}$, \[ \log_e a = \log_e \rho + i\theta. \tag{10} \] This, however, is only one of an infinite series of possible values of $\log_e a$. For, since $\displaystyle e^{i\theta} = e^{i(\theta \pm 2n\pi)}$ (\S~75), \[ \log_e a = \log_e \rho e^{i(\theta \pm 2n\pi)} = \log_e \rho + i(\theta \pm 2n\pi), \] where $n$ may be any positive integer. Log$_e a$ is, therefore, to a certain extent indeterminate; a fact which must be carefully regarded in using and studying this function.\footnote{For instance $\log_e(zt)$ is not equal to $\log_ez + \log_et$ for arbitrarily chosen values of these logarithms, but to $\log_ez + \log_et \pm i2n\pi$, where $n$ is some positive integer.} The value given it in (10), for which $n=0$, is called its principal value.
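The indeterminateness just described may be illustrated numerically: every value $\log_e\rho + i(\theta \pm 2n\pi)$ satisfies $e^w = a$, while a modern library such as Python's \texttt{cmath} returns the principal value. A sketch (the variable names are illustrative only):

```python
import cmath, math

a = complex(-2.0, 1.5)                  # an arbitrary complex number
rho, theta = abs(a), cmath.phase(a)     # a = rho * e^{i*theta}

principal = math.log(rho) + 1j * theta  # the value with n = 0
assert abs(cmath.log(a) - principal) < 1e-12   # the library's principal value

# every value log rho + i(theta +- 2n*pi) equally satisfies e^w = a
for n in range(-3, 4):
    w = principal + 2j * n * math.pi
    assert abs(cmath.exp(w) - a) < 1e-10
```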
When $a$ is a positive real number, $\theta=0$, so that the principal value of $\log_{e}a$ is real; on the other hand, when $a$ is a negative real number, $\theta=\pi$, or the principal value of $\log_{e}a$ is the logarithm of the positive number corresponding to $a$, plus $i\pi$. \addcontentsline{toc}{section}{\numberline{}Permanence of the laws of exponents} \textbf{78. Permanence of the Remaining Laws of Exponents.} Besides the law $a^z a^t = a^{z+t}$ which led to its definition, the function $a^z$ is subject to the laws: \begin{align*} 1. \qquad \qquad (a^z)^t &= a^{zt}.\\ 2. \qquad \qquad (a b)^z &= a^z b^z.\footnotemark[1]\\ 1. \qquad \qquad (a^z)^t &= a^{zt}.\\ \text{For} \quad a^z = \left(e^{\log_{e}a}\right)^z &= 1+(\log_{e}a)z+\frac{(\log_{e}a)^2z^2}{2!}+\cdots & \S~73,\ (4)\\ &= 1+z\log_{e}a + \frac{(z\log_{e}a)^2}{2!}+\cdots\\ &= e^{z\log_{e}a}. & \S~73,\ (3)\\ \therefore (e^{\log_{e}a})^z &= e^{z\log_{e}a}, \; \text{and} \; \log_{e}a^z = z\log_{e}a. \end{align*} From these results it follows that \begin{align*} (a^z)^t &= e^{\log_{e}(a^z)^t}\\ &= e^{t\log_{e}a^z}\\ &= e^{tz\log_{e}a}\\ &= a^{zt}. \\ 2. \qquad (ab)^z &= a^z b^z.\\ \text{For} \qquad (ab)^z &= e^{\log_{e}(ab)^z}\\ &= e^{z\log_{e}ab}\\ &= e^{z\log_{e}a+z\log_{e}b} \qquad \qquad \qquad &\S~76,\ (7)\\ &= e^{z\log_{e}a}\cdot e^{z\log_{e}b} &\S~73,\ (1)\\ \nonumber &= a^z \cdot b^z. \end{align*} \footnotetext[1]{$\displaystyle \frac{a^z}{a^t}=a^{z-t}$, which is sometimes included among the fundamental laws to which $a^z$ is subject, follows immediately from $a^z a^t = a^{z+t}$ by the definition of division.} \addcontentsline{toc}{section}{\numberline{}Permanence of the laws of logarithms} \textbf{79. Permanence of the Remaining Law of Logarithms.} In like manner, the function $\log_a z$ is subject not only to the law \begin{flalign*} && \log_a(zt) &= \log_az + \log_at, &&\\ \intertext{but also to the law } && \log_a z^t &= t\log_a z. 
&& \\ &\text{\indent For }& z &= a^{\log_a z}, &&\\ &\text{and hence }& z^t &= (a^{\log_az})^t &&\\ && &= a^{t\log_a z}. &\text{ \S~78, 1}& \end{flalign*} \addcontentsline{toc}{section}{\numberline{}Involution and evolution} \textbf{80. Evolution.} Consider three complex numbers $\zeta$, $z$, $Z$, connected by the equation $\zeta^Z=z$. This equation gives rise to three problems, each of which is the inverse of the other two. For $Z$ and $\zeta$ may be given and $z$ sought; or $\zeta$ and $z$ may be given and $Z$ sought; or, finally, $z$ and $Z$ may be given and $\zeta$ sought. The exponential function is the general solution of the first problem (\emph{involution}), and the logarithmic function of the second. For the third (\emph{evolution}) the symbol $\sqrt[Z]{z}$ has been devised. This symbol does not represent a new function; for it is defined by the equation $(\sqrt[Z]{z})^Z=z$, an equation which is satisfied by the exponential function $z^{\frac{1}{Z}}$. Like the logarithmic function, $\sqrt[Z]{z}$ is indeterminate, though not always to the same extent. When $Z$ is a positive integer, $\zeta^Z=z$ is an algebraic equation, and by \S~56 has $Z$ roots for any one of which $\sqrt[Z]{z}$ is, by definition, a symbol. From the mere fact that $z=t$, therefore, it cannot be inferred that $\sqrt[Z]{z}=\sqrt[Z]{t}$, but only that one of the values of $\sqrt[Z]{z}$ is equal to one of the values of $\sqrt[Z]{t}$. The same remark, of course, applies to the equivalent symbols $z^{\frac{1}{Z}}$, $t^{\frac{1}{Z}}$. \addcontentsline{toc}{section}{\numberline{}The binomial theorem for complex exponents} \textbf{81. Permanence of the Binomial Theorem.} By aid of the results just obtained, it may readily be demonstrated that the binomial theorem is valid for general complex as well as for rational values of the exponent. 
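Before the demonstration, the claim admits a numerical check: for a complex exponent $b$ and $|z| < 1$, the series whose coefficients have the form $\frac{b(b-1)\cdots(b-n+1)}{n!}$ should agree with $e^{b\log_e(1+z)}$. A sketch (Python; the truncation at 200 terms is an arbitrary choice):

```python
import cmath

def binomial_series(b, z, terms=200):
    # sum of b(b-1)...(b-n+1)/n! * z^n for n = 0, 1, 2, ...
    total, coeff = 1 + 0j, 1 + 0j
    for n in range(1, terms):
        coeff *= (b - n + 1) / n   # update b(b-1)...(b-n+1)/n!
        total += coeff * z ** n
    return total

b = complex(0.5, 1.0)        # an arbitrary complex exponent
z = complex(0.3, -0.2)       # modulus of z less than 1, as required
direct = cmath.exp(b * cmath.log(1 + z))   # (1 + z)^b as e^{b log(1+z)}
assert abs(binomial_series(b, z) - direct) < 1e-10
```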
For $b$ being any complex number whatsoever, and the absolute value of $z$ being supposed less than 1, \begin{align*} (1 + z)^b &= e^{b\log_e(1+z)} \\ &= e^{b\left( z - \frac{z^2}{2} + \cdots \right)} \\ &= 1 + bz + \text{ terms involving higher powers of } z. \end{align*} Therefore let \[ (1 + z)^b = 1 + bz + A_2z^2 + \cdots + A_nz^n + \cdots. \tag{11} \] Since, then, $(a + z)^b = a^b\left( 1 + \frac{z}{a} \right)^b,$ \qquad \qquad \qquad \S~78,~2\\ if $\frac{z}{a}$ be substituted for $z$ in (11), and the equation be multiplied throughout by $a^b$, \[ (a + z)^b = a^b + ba^{b-1}z + A_2a^{b-2}z^2 + \cdots + A_na^{b-n}z^n + \cdots. \tag{12} \] Starting with the identity \[ (1 + \underline{z + t})^b = (\underline{1 + z} + t)^b, \] developing $(1 + \underline{z + t})^b$ by (11) and $(\underline{1 + z} + t)^b$ by (12), equating the coefficients of the first power of $t$ in these developments, multiplying the resultant equation by $1 + z$, and equating the coefficients of like powers of $z$ in this product, equations are obtained from which values may be derived for the coefficients $A_i$ identical in form with those occurring in the development for $(1 + z)^b$ when $b$ is a positive integer. It may also be shown that these values of the coefficients satisfy the equations which result from equating the coefficients of higher powers of $t$. \part{HISTORICAL.} \chapter{PRIMITIVE NUMERALS.} \setcounter{subsection}{0} \addcontentsline{toc}{section}{\numberline{}Gesture symbols} \textbf{82. Gesture Symbols.} There is little doubt that primitive counting was done on the fingers, that the earliest numeral symbols were groups of the fingers formed by associating a single finger with each individual thing in the group of things whose number it was desired to represent. Of course the most immediate method of representing the number of things in a group---and doubtless the method first used---is by the presentation of the things themselves or the recital of their names.
But to present the things themselves or to recite their names is not in a proper sense to count them; for either the things or their names represent all the properties of the group and not simply the number of things in it. Counting was first done when a group was used to represent the number of things in some other group; of that group it would represent the number only and, therefore, be a true numeral symbol, which it is the sole object of counting to reach. Counting ignores all the properties of a group except the distinctness or separateness of the things in it and presupposes whatever intelligence is required consciously or unconsciously to abstract this from its remaining properties. On this account, that group serves best to represent numbers, in which the individual differences of the members are least obtrusive. The naturalness of finger-counting, therefore, lies not only in the accessibility of the fingers, in their being always present to the counter, but in this: that the fingers are so similar in form and function that it is almost easier to ignore than to take account of their differences. But there is other evidence than its intrinsic probability for the priority of finger-counting over any other. Nearly every system of numeral notation of which we have any knowledge is either quinary, decimal, vigesimal, or a mixture of these;\footnote{\label{Instances of quinary and vigesimal systems of notation}Pure quinary and vigesimal systems are rare, if indeed they occur at all. As an example of the former, Tylor (Primitive Culture, I, p.~261) instances a Polynesian number series which runs 1, 2, 3, 4, 5, $5\cdot 1$, $5\cdot 2$,\ldots; and as an example of the latter, Cantor (Geschichte der Mathematik, p.~8), following Pott, cites the notation of the Mayas of Yucatan who have special words for 20, 400, 8000, 160,000. The Hebrew notation, like the Indo-Arabic, affords an example of a pure decimal notation. Mixed systems are common. 
Thus the Roman is mixed decimal and quinary, the Aztec mixed vigesimal and quinary. Speaking generally, the quinary and vigesimal systems are more frequent among the lower races, the decimal among the higher. (Primitive Culture, I, p.~262.)} that is to say, expresses numbers which are greater than 5 in terms of 5 and lesser numbers, or makes a similar use of 10 or 20. These systems point to primitive methods of reckoning with the fingers of one hand, the fingers of both hands, all the fingers and toes, respectively. Finger-counting, furthermore, is universal among uncivilized tribes of the present day, even those not far enough developed to have numeral words beyond 2 or 3, representing higher numbers by holding up the appropriate number of fingers.\footnote{So, for instance, the aborigines of Victoria and the Bororos of Brazil (Primitive Culture, I, p.~244).} \addcontentsline{toc}{section}{\numberline{}Spoken symbols} \textbf{83. Spoken Symbols.} Numeral words---spoken symbols---would naturally arise much later than gesture symbols. Wherever the origin of such a word can be traced, it is found to be either descriptive of the corresponding finger symbol or---when there is nothing characteristic enough about the finger symbol to suggest a word, as is particularly the case with the smaller numbers---the name of some familiar group of things. Thus in the languages of numerous tribes the numeral 5 is simply the word for hand, 10 for both hands, 20 for ``an entire man'' \, (hands and feet); while 2 is the word for the eyes, the ears, or wings.\footnote{\label{Instances of digit numerals}In the language of the Tamanacs on the Orinoco the word for 5 means ``a whole hand,'' \, the word for 6, ``one of the other hand,'' \, and so on up to 9; the word for 10 means ``both hands,'' \, 11, ``one to the foot,'' \, and so on up to 14; 15 is ``a whole foot,'' \, 16, ``one to the other foot,'' \, and so on up to 19; 20 is ``one Indian,'' \, 40, ``two Indians,'' \, etc.
Other languages rich in digit numerals are the Cayriri, Tupi, Abipone, and Carib of South America; the Eskimo, Aztec, and Zulu (Primitive Culture, I, p.~247). ``Two'' \, in Chinese is a word meaning ``ears,'' \, in Thibet ``wing,'' \, in Hottentot ``hand.'' \, (Gow, Short History of Greek Mathematics, p.~7.) See also Primitive Culture, I, pp.~252--259.} As its original meaning is a distinct encumbrance to such a word in its use as a numeral, it is not surprising that the numeral words of the highly developed languages have been so modified that it is for the most part impossible to trace their origin. The practice of counting with numeral words probably arose much later than the words themselves. There is an artificial element in this sort of counting which does not appertain to primitive counting\footnote{Were there any reason for supposing that primitive counting was done with numeral words, it would be probable that the ordinals, not the cardinals, were the earliest numerals. For the normal order of the cardinals must have been fully recognized before they could be used in counting. In this connection, see Kronecker, Ueber den Zahlbegriff; Journal f\"ur die reine und angewandte Mathematik, Vol. 101, p.~337. Kronecker goes so far as to declare that he finds in the ordinal numbers the natural point of departure for the development of the number concept.} (see \S~5). One fact is worth reiterating with reference to both the primitive gesture symbols and word symbols for numbers. There is nothing in either symbol to represent the individual characteristics of the things counted or their arrangement. The use of such symbols, therefore, presupposes a conviction that the number of things in a group does not depend on the character of the things themselves or on their collocation, but solely on their maintaining their separateness and integrity. \addcontentsline{toc}{section}{\numberline{}Written symbols } \textbf{84. 
Written Symbols.} The earliest \textit{written} symbols for number would naturally be mere groups of strokes---$|$, $||$, $|||$, etc. Such symbols have a double advantage over gesture symbols: they can be made permanent, and are capable of indefinite extension---there being, of course, no limit to the numbers of strokes which may be drawn. \chapter{HISTORIC SYSTEMS OF NOTATION.} \addcontentsline{toc}{section}{\numberline{}Egyptian and Ph\oe nician} \textbf{85. Egyptian and Ph\oe nician.} This written symbolism did not assume the complicated character it might have had, had counting with written strokes and not with the fingers been the primitive method. Perhaps the written strokes were employed in connection with counting numbers higher than 10 on the fingers to indicate how often all the fingers had been used; or if each stroke corresponded to an individual in the group counted, they were arranged as they were drawn in groups of 10, so that the number was represented by the number of these complete groups and the strokes in a remaining group of less than 10. At all events, the decimal idea very early found expression in special symbols for 10, 100, and, if need be, for higher powers of 10. Such signs are already at hand in the earliest known writings of the Egyptians and Ph\oe nicians, in which numbers are represented by unit strokes and the signs for 10, 100, 1000, 10,000, and even 100,000, each repeated up to 9 times. \addcontentsline{toc}{section}{\numberline{}Greek} \textbf{86. Greek.} In two of the best known notations of antiquity, the old Greek notation---called sometimes the Herodianic, sometimes the Attic---and the Roman, a primitive system of counting on the fingers of a single hand has left its impress in special symbols for 5.
In the Herodianic notation the only symbols---apart from certain abbreviations for products of 5 by the powers of 10---are $\mathsf{I}$, $\Gamma$ ($\pi\acute{\epsilon}\nu\tau\epsilon$, 5), $\Delta$ ($\delta\acute{\epsilon}\kappa\alpha$, 10), $\mathsf{H}$ ($\grave{\epsilon}\kappa\alpha\tau\acute{o}\nu$, 100), $\mathsf{X}$ ($\chi\acute{\iota}\lambda\iota o\iota$, 1000), $\mathsf{M}$ ($\mu\upsilon\rho\acute{\iota}o\iota$, 10,000); all of them, except $\mathsf{I}$, it will be noticed, initial letters of numeral words. This is the only notation, it may be added, found in any Attic inscription of a date before Christ. The later and, for the purposes of arithmetic, much inferior notation, in which the 24 letters of the Greek alphabet with three inserted strange letters represent in order the numbers 1, 2, \ldots 10, 20, \ldots 100, 200, \ldots 900, was apparently first employed in Alexandria early in the 3d century B.~C., and probably originated in that city. \addcontentsline{toc}{section}{\numberline{}Roman} \textbf{87. Roman.} The Roman notation is probably of Etruscan origin. It has one very distinctive peculiarity: the subtractive meaning of a symbol of lesser value when it precedes one of greater value, as in $\mathsf{IV} = 4$ and in early inscriptions $\mathsf{IIX} = 8$. In nearly every other known system of notation the principle is recognized that the symbol of lesser value shall follow that of greater value and be added to it. In this connection it is worth noticing that two of the four fundamental operations of arithmetic---addition and multiplication---are involved in the very use of special symbols for 10 and 100, for the one is but a symbol for the \textit{sum} of 10 units, the other a symbol for 10 sums of 10 units each, or for the \textit{product} 10 $\times$ 10. Indeed, addition is primarily only abbreviated counting; multiplication, abbreviated addition.
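The additive and subtractive principles described above are readily mechanized. A sketch of a reader for such numerals (illustrative only; the suffix comparison is one simple rule that also accommodates early forms such as $\mathsf{IIX}$):

```python
VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s):
    # a symbol is subtracted when any later symbol has greater value
    # (which also covers early forms such as IIX); otherwise it is added
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        if any(VALUES[c] > v for c in s[i + 1:]):
            total -= v
        else:
            total += v
    return total

assert roman_to_int('IV') == 4
assert roman_to_int('IIX') == 8     # the early-inscription form cited above
assert roman_to_int('MDCCCLXXXII') == 1882
```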
The representation of a number in terms of tens and units, moreover, involves the expression of the result of a division (by 10) in the number of its tens and the result of a subtraction in the number of its units. It does not follow, of course, that the inventors of the notation had any such notion of its meaning or that these inverse operations are, like addition and multiplication, as old as the symbolism itself. Yet the Etrusco-Roman notation testifies to the very respectable antiquity of one of them, subtraction. \addcontentsline{toc}{section}{\numberline{}Indo-Arabic} \textbf{88. Indo-Arabic.} Associated thus intimately with the four fundamental operations of arithmetic, the character of the numeral notation determines the simplicity or complexity of all reckonings with numbers. An unusual interest, therefore, attaches to the origin of the beautifully clear and simple notation which we are fortunate enough to possess. What a boon that notation is will be appreciated by one who attempts an exercise in division with the Roman or, worst of all, with the later Greek numerals. The system of notation in current use to-day may be characterized as the positional decimal system. A number is resolved into the sum: \[ a_{n}10^{n} + a_{n-1}10^{n-1} + \cdots + a_{1}10 + a_{0}, \] where $10^{n}$ is the highest power of 10 which it contains, and $a_{n}$, $a_{n-1}$, $\ldots$ $a_{0}$ are all numbers less than 10; and then represented by the mere sequence of numbers $a_{n}a_{n-1} \cdots a_{0}$---it being left to the \emph{position} of any number $a_i$ in this sequence to indicate the power of 10 with which it is to be associated. For a system of this sort to be complete---to be capable of representing all numbers unambiguously---a symbol (0), which will indicate the absence of any particular power of 10 from the sum $a_{n}10^{n} + a_{n-1}10^{n-1} + \cdots + a_{1}10 + a_{0}$, is indispensable. Thus without 0, 101 and 11 must both be written 11. 
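The resolution of a number into the sum $a_{n}10^{n} + a_{n-1}10^{n-1} + \cdots + a_{1}10 + a_{0}$ is effected by repeated division by 10; a short sketch (the function name is illustrative):

```python
def digits_base10(n):
    # resolve n into a_k*10^k + ... + a_1*10 + a_0 and return [a_k, ..., a_0]
    if n == 0:
        return [0]
    out = []
    while n > 0:
        out.append(n % 10)   # a_0 is the remainder on division by 10
        n //= 10
    return out[::-1]

# 0 marks the absence of a power of 10: without it, 101 and 11 collide
assert digits_base10(101) == [1, 0, 1]
assert digits_base10(11) == [1, 1]
# the positional sequence reconstructs the number
assert sum(d * 10 ** i
           for i, d in enumerate(reversed(digits_base10(4096)))) == 4096
```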
But this symbol at hand, any number may be expressed unambiguously in terms of it and symbols for 1, 2, $\ldots$ 9. The positional idea is very old. The ancient Babylonians commonly employed a decimal notation similar to that of the Egyptians; but their astronomers had besides this a very remarkable notation, a \emph{sexagesimal} positional system. In 1854 a brick tablet was found near Senkereh on the Euphrates, certainly older than 1600 \textsc{b.~c.}, on one face of which is impressed a table of the squares, on the other, a table of the cubes of the numbers from 1 to 60. The squares of $1$, $2$, $\ldots$ $7$ are written in the ordinary decimal notation, but $8^2$, or $64$, the first number in the table greater than $60$, is written $1$, $4$ ($1 \times 60 + 4$); similarly $9^2$, and so on to $59^2$, which is written $58$, $1$ ($58 \times 60 +1$); while $60^2$ is written $1$. The same notation is followed in the table of cubes, and on other tablets which have since been found. This is a positional system, and it only lacks a symbol for $0$ of being a perfect positional system. The inventors of the $0$-symbol and the modern complete decimal positional system of notation were the Indians, a race of the finest arithmetical gifts. The earlier Indian notation is decimal but not positional. It has characters for 10, 100, etc., as well as for $1$, $2$, $\ldots$ $9$, and, on the other hand, no $0$. Most of the Indian characters have been traced back to an old alphabet\footnote{Dr.~Isaac Taylor, in his book ``The Alphabet,'' names this alphabet the Indo-Bactrian. Its earliest and most important monument is the version of the edicts of King Asoka at Kapur-di-giri. 
In this inscription, it may be added, numerals are denoted by strokes, as $|, ||, |||, ||||, |||||$.} in use in Northern India 200 \textsc{b.~c.} The original of each numeral symbol 4, 5, 6, 7, 8 (?), 9, is the initial letter in this alphabet of the corresponding numeral word (see table on page 89,\footnote{Columns 1--5, 7, 8 of the table on page~89 are taken from Taylor's Alphabet, II, p.~266; column~6, from Cantor's Geschichte der Mathematik.} column~1). The characters first occur as numeral signs in certain inscriptions which are assigned to the 1st and 2d centuries \textsc{a.~d.~} (column~2 of table). Later they took the forms given in column~3 of the table. When 0 was invented and the positional notation replaced the old notation cannot be exactly determined. It was certainly later than 400 \textsc{a.~d.}, and there is no evidence that it was earlier than 500 \textsc{a.~d.} The earliest known instance of a date written in the new notation is 738 \textsc{a.~d.} By the time that 0 came in, the other characters had developed into the so-called Devanagari numerals (table, column 4), the classical numerals of the Indians. The perfected Indian system probably passed over to the Arabians in 773 \textsc{a.~d.}, along with certain astronomical writings. However that may be, it was expounded in the early part of the 9th century by Alkhwarizm\^{i}, and from that time on spread gradually throughout the Arabian world, the numerals taking different forms in the East and in the West. Europe in turn derived the system from the Arabians in the 12th century, the ``Gobar'' \, numerals (table, column 5) of the Arabians of Spain being the pattern forms of the European numerals (table, column 7). The arithmetic founded on the new system was at first called \textit{algorithm} (after Alkhwarizm\^{i}), to distinguish it from the arithmetic of the abacus which it came to replace. A word must be said with reference to this arithmetic on the abacus. 
In the primitive abacus, or reckoning table, unit counters were used, and a number represented by the appropriate number of these counters in the appropriate columns of the instrument; \textit{e.~g.} 321 by 3 counters in the column of 100's, 2 in the column of 10's, and 1 in the column of units. The Romans employed such an abacus in all but the most elementary reckonings; it was in use in Greece, and is in use to-day in China. Before the introduction of \textit{algorithm}, however, reckoning on the abacus had been improved by the use in its columns of separate characters (called \textit{apices}) for each of the numbers 1, 2, \ldots, 9, instead of the primitive unit counters. This improved abacus reckoning was probably invented by Gerbert (Pope Sylvester II.), and certainly used by him at Rheims about 970--980, and became generally known in the following century. \begin{figure}[htbp] \centering \includegraphics[scale=0.45]{images/fig5.eps}\\ \end{figure} Now these apices are not Roman numerals, but symbols which do not differ greatly from the Gobar numerals and are clearly, like them, of Indian origin. In the absence of positive evidence a great controversy has sprung up among historians of mathematics over the immediate origin of the apices. The only earlier mention of them occurs in a passage of the geometry of Boetius, which, if genuine, was written about 500 \textsc{a.~d.} Basing his argument on this passage, the historian Cantor urges that the earlier Indian numerals found their way to Alexandria before her intercourse with the East was broken off, that is, before the end of the 4th century, and were transformed by Boetius into the apices. On the other hand, the passage in Boetius is quite generally believed to be spurious, and it is maintained that Gerbert got his apices directly or indirectly from the Arabians of Spain, not taking the 0, either because he did not learn of it, or because, being an abacist, he did not appreciate its value.
At all events, it is certain that the Indo-Arabic numerals, 1, 2, \ldots 9 (not 0), appeared in Christian Europe more than a century before the complete positional system and \textit{algorithm}. The Indians are the inventors not only of the positional decimal system itself, but of most of the processes involved in elementary reckoning with the system. Addition and subtraction they performed quite as they are performed nowadays; multiplication they effected in many ways, ours among them, but division cumbrously. \chapter{THE FRACTION.} \addcontentsline{toc}{section}{\numberline{}Primitive fractions} \textbf{89. Primitive Fractions.} Of the artificial forms of number---as we may call the fraction, the irrational, the negative, and the imaginary in contradistinction to the positive integer---all but the fraction are creations of the mathematicians. They were devised to meet purely mathematical rather than practical needs. The fraction, on the other hand, is already present in the oldest numerical records---those of Egypt and Babylonia---was reckoned with by the Romans, who were no mathematicians, and by Greek merchants long before Greek mathematicians would tolerate it in arithmetic. The primitive fraction was a concrete thing, merely an aliquot part of some larger thing. When a unit of measure was found too large for certain uses, it was subdivided, and one of these subdivisions, generally with a name of its own, made a new unit. Thus there arose fractional units of measure, and in like manner fractional coins. In time the relation of the sub-unit to the corresponding principal unit came to be abstracted with greater or less completeness from the particular kind of things to which the units belonged, and was recognized when existing between things of other kinds. The relation was generalized, and a pure numerical expression found for it. \addcontentsline{toc}{section}{\numberline{}Roman fractions} \textbf{90. 
Roman Fractions.} Sometimes, however, the relation was never completely enough separated from the sub-units in which it was first recognized to be generalized. The Romans, for instance, never got beyond expressing all their fractions in terms of the \textit{uncia}, \textit{sicilicus}, etc., names originally of subdivisions of the old unit coin, the \textit{as}. \addcontentsline{toc}{section}{\numberline{}Egyptian (the Book of Ahmes)} \textbf{91. Egyptian Fractions.} Races of better mathematical endowments than the Romans, however, had sufficient appreciation of the fractional relation to generalize it and give it an arithmetical symbolism. The ancient Egyptians had a very complete symbolism of this sort. They represented any fraction whose numerator is 1 by the denominator simply, written as an integer with a dot over it, and resolved all other fractions into sums of such unit fractions. The oldest mathematical treatise known,---a papyrus\footnote{The Rhind papyrus of the British Museum; translated by A. Eisenlohr, Leipzig, 1877.} roll entitled ``Directions for Attaining to the Knowledge of All Dark Things,'' \, written by a scribe named Ahmes in the reign of Ra-\"{a}-us (therefore before 1700 \textsc{b.~c.}), after the model, as he says, of a more ancient work,---opens with a table which expresses in this manner the quotient of 2 by each odd number from 5 to 99. Thus the quotient of 2 by 5 is written $\dot{3}$ $\dot{15}$, by which is meant $\displaystyle \frac{1}{3} + \frac{1}{15}$; and the quotient of 2 by 13, $\dot{8}$ $ \dot{52}$ $\dot{104}$. Only $\displaystyle \frac{2}{3}$, among the fractions having numerators which differ from 1, gets recognition as a distinct fraction and receives a symbol of its own. \addcontentsline{toc}{section}{\numberline{}Babylonian or sexagesimal} \textbf{92. Babylonian or Sexagesimal Fractions.} The fractional notation of the Babylonian astronomers is of great interest intrinsically and historically. 
Like their notation of integers it is a sexagesimal positional notation. The denominator is always 60 or some power of 60 indicated by the position of the numerator, which alone is written. The fraction $\displaystyle \frac{3}{8}$, for instance, which is equal to $\displaystyle \frac{22}{60} + \frac{30}{60^2}$, would in this notation be written 22 30. Thus the ability to represent fractions by a single integer or a sequence of integers, which the Egyptians secured by the use of fractions having a common numerator, 1, the Babylonians found in fractions having common denominators and the principle of position. The Egyptian system is superior in that it gives an exact expression of every quotient, which the Babylonian can in general do only approximately. As regards practical usefulness, however, the Babylonian is beyond comparison the better system. Supply the 0-symbol and substitute 10 for 60, and this notation becomes that of the modern decimal fraction, in whose distinctive merits it thus shares. As in their origin, so also in their subsequent history, the sexagesimal fractions are intimately associated with astronomy. The astronomers of Greece, India, and Arabia all employ them in reckonings of any complexity, in those involving the lengths of lines as well as in those involving the measures of angles. So the Greek astronomer, Ptolemy (150 \textsc{a.~d.}), in the \textit{Almagest} ($\mu\epsilon\gamma\Acute{\alpha}\lambda\eta$ $\sigma\Acute{\upsilon}\nu\tau\alpha\xi\iota\varsigma$) measures chords as well as arcs in degrees, minutes, and seconds---the degree of chord being the 60th part of the radius as the degree of arc is the 60th part of the arc subtended by a chord equal to the radius. The sexagesimal fraction held its own as the fraction \textit{par excellence} for scientific computation until the 16th century, when it was displaced by the decimal fraction in all uses except the measurement of angles. 
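Both the Senkereh tablet's notation for integers and the astronomers' fractional notation are easy to reproduce (a modern sketch, with function names of my own choosing):

```python
from fractions import Fraction

def sexagesimal_int(n):
    """Resolve a positive integer into base-60 places, most significant first."""
    places = []
    while True:
        n, r = divmod(n, 60)
        places.append(r)
        if n == 0:
            break
    return places[::-1]

def sexagesimal_frac(f, max_places=4):
    """Successive numerators over 60, 60^2, ... for a fraction 0 <= f < 1."""
    out = []
    for _ in range(max_places):
        f *= 60
        digit = int(f)        # numerator belonging to this power of 60
        out.append(digit)
        f -= digit
        if f == 0:
            break
    return out

print(sexagesimal_int(8 * 8))      # [1, 4]   -- 64 written "1, 4" on the tablet
print(sexagesimal_int(59 * 59))    # [58, 1]  -- 59^2 written "58, 1"
print(sexagesimal_frac(Fraction(3, 8)))   # [22, 30] -- 3/8 = 22/60 + 30/60^2
```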
\addcontentsline{toc}{section}{\numberline{}Greek} \textbf{93. Greek Fractions.} Fractions occur in Greek writings---both mathematical and non-mathematical---much earlier than Ptolemy, but not in arithmetic.\footnote{The usual method of expressing fractions was to write the numerator with an accent, and after it the denominator twice with a double accent: \textit{e.~g.} $\displaystyle \iota\zeta^\prime~\kappa\alpha^{\prime\prime}~\kappa\alpha^{\prime\prime} =\frac{17}{21}$. Before sexagesimal fractions came into vogue actual reckonings with fractions were effected by unit fractions, of which only the denominators (doubly accented) were written.} The Greeks drew as sharp a distinction between pure arithmetic, $\grave{\alpha}\rho\iota\theta\mu\eta\tau\iota\kappa\Acute{\eta}$, and the art of reckoning, $\lambda o \gamma\iota\sigma\tau\iota\kappa\Acute{\eta}$, as between pure and metrical geometry. The fraction was relegated to $\lambda o \gamma\iota\sigma\tau\iota\kappa\Acute{\eta}$. There is no place in a pure science for artificial concepts, no place, therefore, for the fraction in $\Acute{\alpha}\rho\iota\theta\mu\eta\tau\iota\kappa\Acute{\eta}$; such was the Greek position. Thus, while the metrical geometers---as Archimedes (250 \textsc{b.~c.}), in his ``Measure of the Circle'' \, ($\kappa \Acute{\upsilon}\kappa\lambda o \upsilon$ $\mu\Acute{\epsilon}\tau\rho\eta\sigma\iota\varsigma$), and Hero (120 \textsc{b.~c.})---employ fractions, neither of the treatises on Greek arithmetic before Diophantus (300 \textsc{a.~d.~}) which have come down to us---the 7th, 8th, 9th books of Euclid's ``Elements'' (300~\textsc{b.~c.}), and the ``Introduction to Arithmetic'' ([$\epsilon\iota\sigma\alpha\gamma\omega\gamma\acute{\eta}\ \alpha\rho\iota\theta\mu\eta\tau\iota\kappa\grave{\eta}$]) of Nicomachus (100~\textsc{a.~d.})---recognizes the fraction. They do, it is true, recognize the fractional relation. 
Euclid, for instance, expressly declares that any number is either a multiple, a part, or parts ([$\mu\grave{\epsilon}\rho\eta$]), \textit{i.~e.}\ multiple of a part, of every other number (Euc.~VII,~4), and he demonstrates such theorems as these: \emph{If $A$ be the same parts of $B$ that $C$ is of $D$, then the sum or difference of $A$ and $C$ is the same parts of the sum or difference of $B$ and $D$ that $A$ is of $B$} (VII,~6 and~8). \emph{If $A$ be the same parts of $B$ that $C$ is of $D$, then, alternately, $A$ is the same parts of $C$ that $B$ is of $D$} (VII,~10). But the relation is expressed by two integers, that which indicates the part and that which indicates the multiple. It is a ratio, and Euclid has no more thought of expressing it except by \emph{two} numbers than he has of expressing the ratio of two geometric magnitudes except by two magnitudes. There is no conception of a single number, the fraction proper, the quotient of one of these integers by the other. In the $\alpha\rho\iota\theta\mu\eta\tau\iota\kappa\grave{\alpha}$ of Diophantus, on the other hand, the last and transcendently the greatest achievement of the Greeks in the science of number, the fraction is granted the position in elementary arithmetic which it has held ever since. \chapter{ORIGIN OF THE IRRATIONAL.} \addcontentsline{toc}{section}{\numberline{}Discovery of irrational lines. Pythagoras} \textbf{94. 
The Discovery of Irrational Lines.} The Greeks attributed the discovery of the Irrational to the mathematician and philosopher Pythagoras\footnote{\label{Summary of the history of Greek mathematics}This is the explicit declaration of the most reliable document extant on the history of geometry before Euclid, a chronicle of the ancient geometers which Proclus (\textsc{a.~d.~} 450) gives in his commentary on Euclid, deriving it from a history written by Eudemus about 330 \textsc{b.~c.} This chronicle credits the Egyptians with the discovery of geometry and Thales (600 \textsc{b.~c.~}) with having first introduced this study into Greece. Thales and Pythagoras are the founders of the Greek mathematics. But while Thales should doubtless be credited with the first conception of an abstract deductive geometry in contradistinction to the practical empirical geometry of Egypt, the glory of realizing this conception belongs chiefly to Pythagoras and his disciples in the Greek cities of Italy (Magna Gr\ae cia); for they established the principal theorems respecting rectilineal figures. To the Pythagoreans the discovery of many of the elementary properties of numbers is due, as well as the geometric form which characterized the Greek theory of numbers throughout its history. In the middle of the fifth century before Christ Athens became the principal centre of mathematical activity. There Hippocrates of Chios (430 \textsc{b.~c.~}) made his contributions to the geometry of the circle, Plato (380 \textsc{b.~c.}) to geometric method, The\ae tetus (380 \textsc{b.~c.}) to the doctrine of incommensurable magnitudes, and Eudoxus (360 \textsc{b.~c.}) to the theory of proportion. There also was begun the study of the conics. About 300 \textsc{b.~c.~} the mathematical centre of the Greeks shifted to Alexandria, where it remained. The third century before Christ is the most brilliant period in Greek mathematics. 
At its beginning---in Alexandria---Euclid lived and taught and wrote his Elements, collecting, systematizing, and perfecting the work of his predecessors. Later (about 250) Archimedes of Syracuse flourished, the greatest mathematician of antiquity and founder of the science of mechanics; and later still (about 230) Apollonius of Perga, ``the great geometer,'' \, whose Conics marks the culmination of Greek geometry. Of the later Greek mathematicians, besides Hero and Diophantus, of whom an account is given in the text, and the great summarizer of the ancient mathematics, Pappus (300 \textsc{a.~d.}), only the famous astronomers Hipparchus (130 \textsc{b.~c.}) and Ptolemy (150 \textsc{a.~d.~}) call for mention here. To them belongs the invention of trigonometry and the first trigonometric tables, tables of chords. The dates in this summary are from Gow's Hist.\ of Greek Math.} (525~\textsc{b.~c.}). If, as is altogether probable,\footnote{Compare Cantor, Geschichte der Mathematik, p.~153.} the most famous theorem of Pythagoras---that \textit{the square on the hypothenuse of a right triangle is equal to the sum of the squares on the other two sides}---was suggested to him by the fact that $\displaystyle 3^2 +4^2 = 5^2$, in connection with the fact that the triangle whose sides are 3, 4, 5, is right-angled,---for both almost certainly fell within the knowledge of the Egyptians,---he would naturally have sought, after he had succeeded in demonstrating the geometric theorem generally, for number triplets corresponding to the sides of any right triangle as do 3, 4, 5 to the sides of the particular triangle. The search of course proved fruitless, fruitless even in the case which is geometrically the simplest, that of the isosceles right triangle. 
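That the search must fail in the isosceles case can be exhibited, though of course not proved, by a modern brute-force check (wholly foreign to Pythagoras's methods, and offered only as an illustration):

```python
# Triplets of integers forming the sides of a right triangle, as 3, 4, 5 do,
# are plentiful ...
triplets = [(a, b, c)
            for c in range(1, 30)
            for b in range(1, c)
            for a in range(1, b + 1)
            if a * a + b * b == c * c]
print(triplets[:3])   # [(3, 4, 5), (6, 8, 10), (5, 12, 13)]

# ... but for the isosceles right triangle (side a, hypothenuse b) the search
# is necessarily fruitless: no integers satisfy b^2 = 2a^2, as the proof in
# the footnote below shows.
assert not [(a, b) for a in range(1, 500)
            for b in range(1, 1000) if b * b == 2 * a * a]
print("no pair found with b^2 = 2 a^2")
```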
To discover that it was \textit{necessarily} fruitless; in the face of preconceived ideas and the apparent testimony of the senses, to conceive that lines may exist which have no common unit of measure, however small that unit be taken; to demonstrate that the hypothenuse and side of the isosceles right triangle actually are such a pair of lines, was the great achievement of Pythagoras.\footnote{\label{Old Greek demonstration that the side and diagonal of a square are incommensurable}His demonstration may easily have been the following, which was old enough in Aristotle's time (340 \textsc{b.~c.}) to be made the subject of a popular reference, and which is to be found at the end of the 10th book in all old editions of Euclid's Elements: If there be any line which the side and diagonal of a square both contain an exact number of times, let their lengths in terms of this line be $a$ and $b$ respectively; then $b^2=2a^2$. The numbers $a$ and $b$ may have a common factor, $\gamma$; so that $a=\alpha\gamma$ and $b=\beta\gamma$, where $\alpha$ and $\beta$ are prime to each other. The equation $b^2=2a^2$ then reduces, on the removal of the factor $\gamma^2$ common to both its members, to $\beta^2=2\alpha^2$. From this equation it follows that $\beta^2$, and therefore $\beta$, is an even number, and hence that $\alpha$ which is prime to $\beta$ is odd. But set $\beta=2\beta'$, where $\beta'$ is integral, in the equation $\beta^2=2\alpha^2$; it becomes $4\beta'^2=2\alpha^2$, or $2\beta'^2=\alpha^2$, whence $\alpha^2$, and therefore $\alpha$, is even. $\alpha$ has thus been proven to be both odd and even, and is therefore not a number.} \addcontentsline{toc}{section}{\numberline{}Consequences of this discovery in Greek mathematics} \textbf{95. Consequences of this Discovery in Greek Mathematics.} One must know the antecedents and follow the consequences of this discovery to realize its great significance. 
It was the first recognition of the fundamental difference between the geometric magnitudes and number, which Aristotle formulated brilliantly 200 years later in his famous distinction between the continuous and the discrete, and as such was potent in bringing about that complete banishment of numerical reckoning from geometry which is so characteristic of this department of Greek mathematics in its best, its creative period. No one before Pythagoras had questioned the possibility of expressing all size relations among lines and surfaces in terms of number,---rational number of course. Indeed, except that it recorded a few facts regarding congruence of figures gathered by observation, the Egyptian geometry was nothing else than a meagre collection of formulas for computing areas. The earliest geometry was metrical. But to the severely logical Greek no alternative seemed possible, when once it was known that lines exist whose lengths---whatever unit be chosen for measuring them---cannot both be integers, than to have done with number and measurement in geometry altogether. Congruence became not only the final but the sole test of equality. For the study of size relations among unequal magnitudes a pure geometric theory of proportion was created, in which proportion, not ratio, was the primary idea, the method of exhaustions making the theory available for figures bounded by curved lines and surfaces. The outcome was the system of geometry which Euclid expounds in his Elements and of which Apollonius makes splendid use in his Conics, a system absolutely free from extraneous concepts or methods, yet, within its limits, of great power. It need hardly be added that it never occurred to the Greeks to meet the difficulty which Pythagoras' discovery had brought to light by inventing an \textit{irrational number}, itself incommensurable with rational numbers. For artificial concepts such as that they had neither talent nor liking. 
On the other hand, they did develop the theory of irrational magnitudes as a department of their geometry, the irrational line, surface, or solid being one incommensurable with some chosen (rational) line, surface, solid. Such a theory forms the content of the most elaborate book of Euclid's Elements, the 10th. \addcontentsline{toc}{section}{\numberline{}Greek approximate values of irrationals} \textbf{96. Approximate Values of Irrationals.} In the practical or metrical geometry which grew up after the pure geometry had reached its culmination, and which attained in the works of Hero the Surveyor almost the proportions of our modern elementary mensuration,\footnote{The formula $\displaystyle \sqrt{s(s-a)(s-b)(s-c)}$ for the area of a triangle in terms of its sides is due to Hero.} \textit{approximate values} of irrational numbers played a very important rôle. Nor do such approximations appear for the first time in Hero. In Archimedes' ``Measure of the Circle'' \, a number of excellent approximations occur, among them the famous approximation $\displaystyle \frac{22}{7}$ for $\pi$, the ratio of the circumference of a circle to its diameter. The approximation $\displaystyle \frac{7}{5}$ for $\displaystyle \sqrt{2}$ is reputed to be as old as Plato. It is not certain how these approximations were effected.\footnote{\label{Greek methods of approximation}Many attempts have been made to discover the methods of approximation used by Archimedes and Hero from an examination of their results, but with little success. The formula $\displaystyle \sqrt{a^2\pm b}=a\pm\frac{b}{2a}$ will account for some of the simpler approximations, but no single method or set of methods has been found which will account for the more difficult. See G\"{u}nther: Die quadratischen Irrationalit\"{a}ten der Alten und deren Entwicklungsmethoden. Leipzig, 1882. Also in Handbuch der klassischen Altertums-Wissenschaft, 11ter. Halbband.} They involve the use of some method for extracting square roots.
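The conjectural rule $\sqrt{a^2 \pm b} = a \pm \dfrac{b}{2a}$ cited in the note is easily tried numerically (a modern sketch; the function name is mine, and, as the note says, it is uncertain that the ancients actually proceeded so):

```python
import math

def approx_sqrt(a, b, sign=1):
    """The conjectured ancient rule: sqrt(a^2 +/- b) is taken as a +/- b/(2a)."""
    return a + sign * b / (2 * a)

# sqrt(50) = sqrt(7^2 + 1), estimated as 7 + 1/14.
print(approx_sqrt(7, 1), math.sqrt(50))

# sqrt(2) = sqrt(1^2 + 1), estimated as 3/2; applying the rule once more,
# sqrt(2) = sqrt((3/2)^2 - 1/4), estimated as 3/2 - (1/4)/3 = 17/12.
print(approx_sqrt(1.5, 0.25, sign=-1), math.sqrt(2))
```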
The earliest explicit statement of the method in common use to-day for extracting square roots of numbers (whether exactly or approximately) occurs in the commentary of Theon of Alexandria (380 \textsc{a.~d.~}) on Ptolemy's \textit{Almagest}. Theon, who like Ptolemy employs sexagesimal fractions, thus finds the length of the side of a square containing $4500^\circ$ to be $67^\circ 4' 55^{\prime\prime}$. \textbf{97. The Later History of the Irrational} is deferred to the chapters which follow (\S\S~106, 108, 112, 121, 129). It will be found that the Indians permitted the simplest forms of irrational numbers, surds, in their algebra, and that they were followed in this by the Arabians and the mathematicians of the Renaissance, but that the general irrational did not make its way into algebra until after Descartes. \chapter{ORIGIN OF THE NEGATIVE AND THE IMAGINARY\@. THE EQUATION.} \addcontentsline{toc}{section}{\numberline{}The equation in Egyptian mathematics} \textbf{98. The Equation in Egyptian Mathematics.} While the irrational originated in geometry, the negative and the imaginary are of purely algebraic origin. They sprang directly from the algebraic equation. The authentic history of the equation, like that of geometry and arithmetic, begins in the book of the old Egyptian scribe Ahmes. For Ahmes, quite after the present method, solves numerical problems which admit of statement in an equation of the first degree involving one unknown quantity.\footnote{His symbol for the unknown quantity is the word \textit{hau}, meaning heap.} \addcontentsline{toc}{section}{\numberline{}In the earlier Greek mathematics} \textbf{99. In the Earlier Greek Mathematics.} The equation was slow in arousing the interest of Greek mathematicians. They were absorbed in geometry, in a geometry whose methods were essentially non-algebraic. To be sure, there are occasional signs of a concealed algebra under the closely drawn geometric cloak.
Euclid solves three geometric problems which, stated algebraically, are but the three forms of the quadratic: $x^2+ax =b^2$, $x^2 = ax+b^2$, $x^2 + b^2 = ax$.\footnote{Elements, VI, 29, 28; Data, 84, 85.} And the Conics of Apollonius, so astonishing if regarded as a product of the pure geometric method used in its demonstrations, when stated in the language of algebra, as recently it has been stated by Zeuthen,\footnote{Die Lehre von den Kegelschnitten im Altertum. Copenhagen, 1886.} almost convicts its author of the use of algebra as his instrument of investigation. \addcontentsline{toc}{section}{\numberline{}Hero of Alexandria} \textbf{100. Hero.} But in the writings of Hero of Alexandria (120 \textsc{b.~c.}) the equation first comes clearly into the light again. Hero was a man of practical genius whose aim was to make the rich pure geometry of his predecessors available for the surveyor. With him the rigor of the old geometric method is relaxed; proportions, even equations, among the \textit{measures} of magnitudes are permitted where the earlier geometers allow only proportions among the magnitudes themselves; the theorems of geometry are stated metrically, in formulas; and more than all this, the equation becomes a recognized geometric instrument.
Hero gives for the diameter of a circle in terms of $s$, the sum of diameter, circumference, and area, the formula:\footnote{See Cantor, Geschichte der Mathematik, p.~341.} \[ d=\frac{\sqrt{154s+841}-29}{11} \] He could have reached this formula only by \textit{solving a quadratic equation}, and that not geometrically,---the nature of the oddly constituted quantity $s$ precludes that supposition,---but by a purely algebraic reckoning like the following: The area of a circle in terms of its diameter being $\displaystyle \frac{\pi d^2}{4}$, the length of its circumference $\pi d$, and $\pi$ according to Archimedes' approximation $\displaystyle \frac{22}{7}$, we have the equation: \[ s = d+\frac{\pi d^2}{4}+\pi d, \; \text{ or } \; \frac{11}{14}d^2+\frac{29}{7}d=s. \] Clearing of fractions, multiplying by 11, and completing the square, \[ 121 d^2 + 638d+ 841 = 154 s + 841, \] whence \[ 11 d + 29 =\sqrt{154 s + 841}, \] or \[d=\frac{\sqrt{154s+841}-29}{11}. \] Except that he lacked an algebraic symbolism, therefore, Hero was an algebraist, an algebraist of power enough to solve an affected quadratic equation. \addcontentsline{toc}{section}{\numberline{}Diophantus of Alexandria} \textbf{101. Diophantus.} (300 \textsc{a.~d.}?). The last of the Greek mathematicians, Diophantus of Alexandria, was a great algebraist. The period between him and Hero was not rich in creative mathematicians, but it must have witnessed a gradual development of algebraic ideas and of an algebraic symbolism. At all events, in the $\Grave{\alpha}\rho\iota\theta\mu\eta\tau\iota\kappa\Acute{\alpha}$ of Diophantus the algebraic equation has been supplied with a symbol for the unknown quantity, its powers and the powers of its reciprocal to the 6th, and a symbol for equality. Addition is represented by mere juxtaposition, but there is a special symbol, see Figure A, for subtraction.
On the other hand, there are no general symbols for known quantities,---symbols to serve the purpose which the first letters of the alphabet are made to serve in elementary algebra nowadays,---therefore no literal coefficients and no general formulas. \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{images/symbol.eps}\\ \textsc{Fig. A.} \end{figure} With the symbolism had grown up many of the formal rules of algebraic reckoning also. Diophantus prefaces the $\alpha\rho\iota\theta\mu\eta\tau\iota\kappa\grave{\alpha}$ with rules for the addition, subtraction, and multiplication of polynomials. He states expressly that the product of two subtractive terms is additive. The $\alpha\rho\iota\theta\mu\eta\tau\iota\kappa\grave{\alpha}$ itself is a collection of problems concerning numbers, some of which are solved by determinate algebraic equations, some by indeterminate. Determinate equations are solved which have given positive integers as coefficients, and are of any of the forms $ax^m = bx^n$, $ax^2 + bx = c$, $ax^2 +c = bx$, $ax^2 = bx + c$; also a single cubic equation, $x^3 + x = 4x^2 + 4$. In reducing equations to these forms, equal quantities in opposite members are cancelled and subtractive terms in either member are rendered additive by transposition to the other member. The indeterminate equations are of the form $y^2 = ax^2 + bx + c$, Diophantus regarding any pair of positive \textit{rational} numbers (integers or fractions) as a solution which, substituted for $y$ and $x$, satisfies the equation.\footnote{\label{Diophantine equations}The designation ``Diophantine equations,'' commonly applied to indeterminate equations of the first degree when investigated for integral solutions, is a striking misnomer.
Diophantus nowhere considers such equations, and, on the other hand, allows fractional solutions of indeterminate equations of the second degree.} These equations are handled with marvellous dexterity in the $\alpha\rho\iota\theta\mu\eta\tau\iota\kappa\grave{\alpha}$. No effort is made to develop general comprehensive methods, but each exercise is solved by some clever device suggested by its individual peculiarities. Moreover, the discussion is never exhaustive, one solution sufficing when the possible number is infinite. Yet until some trace of indeterminate equations earlier than the $\alpha\rho\iota\theta\mu\eta\tau\iota\kappa\grave{\alpha}$ is discovered, Diophantus must rank as the originator of this department of mathematics. The determinate quadratic is solved by the method which we have already seen used by Hero. The equation is first multiplied throughout by a number which renders the coefficient of $x^2$ a perfect square, the ``square is completed,'' the square root of both members of the equation taken, and the value of $x$ reckoned out from the result. Thus from $ax^2+c=bx$ is derived first the equation \begin{align*} a^2x^2+ac&=abx,\\ \mbox{then} \quad a^2x^2 - abx +\left(\frac{b}{2}\right)^2 &= \left(\frac{b}{2}\right)^2 - ac,\\ \mbox{then} \quad ax-\frac{b}{2} &= \sqrt{\left(\frac{b}{2}\right)^2-ac},\\ \mbox{and finally,} \quad x&=\frac{\frac{b}{2}+\sqrt{\left(\frac{b}{2}\right)^2-ac}}{a}. \end{align*} The solution is regarded as possible only when the number under the radical is a perfect square (it must, of course, be positive), and only one root---that belonging to the positive value of the radical---is ever recognized. Thus the number system of Diophantus contained only the positive integer and fraction; the irrational is excluded; and as for the negative, there is no evidence that a Greek mathematician ever conceived of such a thing,---certainly not Diophantus with his three classes and one root of affected quadratics. 
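Hero's formula for the diameter and the Diophantine rule just derived can both be verified by exact rational arithmetic (a modern check, no part of the original argument; the variable names are my own):

```python
from fractions import Fraction
from math import isqrt

# Hero's circle problem: s = (11/14)d^2 + (29/7)d, and the solution
# d = (sqrt(154 s + 841) - 29)/11.  Check it exactly for d = 5.
d = Fraction(5)
s = Fraction(11, 14) * d**2 + Fraction(29, 7) * d   # s = 565/14
disc = 154 * s + 841                                # 7056, a perfect square
assert disc == 84 * 84
d_back = Fraction(isqrt(int(disc)) - 29, 11)
print(d_back)                                       # 5

# Diophantus' rule for a x^2 + c = b x: admissible only when (b/2)^2 - ac
# is a perfect (rational) square; then x = (b/2 + sqrt((b/2)^2 - ac)) / a.
# Take x^2 + 2 = 3x:
a, b, c = 1, 3, 2
rad = Fraction(b, 2)**2 - a * c                     # 9/4 - 2 = 1/4
root = Fraction(isqrt(rad.numerator), isqrt(rad.denominator))
assert root * root == rad                           # sqrt(1/4) = 1/2
x = (Fraction(b, 2) + root) / a
print(x)                                            # 2
assert a * x**2 + c == b * x
```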
The position of Diophantus is the more interesting in that in the $\alpha\rho\iota\theta\mu\eta\tau\iota\kappa\grave{\alpha}$ the Greek science of number culminates. \addcontentsline{toc}{section}{\numberline{}The Indian mathematics. Âryabha\d{t}\d{t}a, Brahmagupta, Bhâskara} \textbf{102. The Indian Mathematics.} The pre-eminence in mathematics passed from the Greeks to the Indians. Three mathematicians of India stand out above the rest: \emph{Âryabha\d{t}\d{t}a} (born 476 \textsc{a.~d.}), \emph{Brahmagupta} (born 598 \textsc{a.~d.}), \emph{Bhâskara} (born 1114 \textsc{a.~d.}). While all are in the first instance astronomers, their treatises also contain full expositions of the mathematics auxiliary to astronomy, their reckoning, algebra, geometry, and trigonometry.\footnote{The mathematical chapters of Brahmagupta and Bhâskara have been translated into English by Colebrooke: ``Algebra, Arithmetic, and Mensuration, from the Sanscrit of Brahmagupta and Bhâskara,'' 1817; those of Âryabha\d{t}\d{t}a into French by L. Rodet (Journal Asiatique, 1879).} An examination of the writings of these mathematicians and of the remaining mathematical literature of India leaves little room for doubt that the Indian geometry was taken bodily from Hero, and the algebra---whatever there may have been of it before \^Aryabha\d{t}\d{t}a---at least powerfully affected by Diophantus. Nor is there occasion for surprise in this. \^Aryabha\d{t}\d{t}a lived two centuries after Diophantus and six after Hero, and during those centuries the East had frequent communication with the West through various channels. In particular, from Trajan's reign till later than 300~\textsc{a.~d.~} an active commerce was kept up between India and the east coast of Egypt by way of the Indian Ocean. Greek geometry and Greek algebra met very different fates in India. The Indians lacked the endowments of the geometer.
So far from enriching the science with new discoveries, they seem with difficulty to have kept alive even a proper understanding of Hero's metrical formulas. But algebra flourished among them wonderfully. Here the fine talent for reckoning which could create a perfect numeral notation, supported by a talent equally fine for symbolical reasoning, found a great opportunity and made great achievements. With Diophantus algebra is no more than an art by which disconnected numerical problems are solved; in India it rises to the dignity of a science, with general methods and concepts of its own. \addcontentsline{toc}{section}{\numberline{}Its algebraic symbolism} \textbf{103. Its Algebraic Symbolism.} First of all, the Indians devised a complete, and in most respects adequate, symbolism. Addition was represented, as by Diophantus, by mere juxtaposition; subtraction, exactly as addition, except that a dot was written over the coefficient of the subtrahend. The syllable \textit{bha} written after the factors indicated a product; the divisor written under the dividend, a quotient; a syllable, \textit{ka}, written before a number, its (irrational) square root; one member of an equation placed over the other, their equality. The equation was also provided with symbols for any number of unknown quantities and their powers. \addcontentsline{toc}{section}{\numberline{}Its invention of the negative} \textbf{104. Its Invention of the Negative.} The most note-worthy feature of this symbolism is its representation of subtraction. To remove the subtractive symbol from between minuend and subtrahend (where Diophantus had placed his symbol, see Figure A.) to attach it wholly to the subtrahend and then connect this modified subtrahend with the minuend additively, is, formally considered, to transform the subtraction of a positive quantity into the addition of the corresponding negative. 
It suggests what other evidence makes certain, that \textit{algebra owes to India the immensely useful concept of the absolute negative.} Thus one of these dotted numbers is allowed to stand by itself as a member of an equation. Bh\^{a}skara recognizes the double sign of the square root, as well as the impossibility of the square root of a negative number (which is very interesting, as being the first dictum regarding the imaginary), and no longer ignores either root of the quadratic. More than this, recourse is had to the same expedients for interpreting the negative, for attaching a concrete physical idea to it, as are in common use to-day. The primary meaning of the very name given the negative was \textit{debt}, as that given the positive was \textit{means}. The opposition between the two was also pictured by lines described in opposite directions. \addcontentsline{toc}{section}{\numberline{}Its use of zero} \textbf{105. Its Use of Zero.} But the contributions of the Indians to the fund of algebraic concepts did not stop with the absolute negative. They made a number of 0, and though some of their reckonings with it are childish, Bh\^{a}skara, at least, had sufficient understanding of the nature of the ``quotient'' \, $\displaystyle \frac{a}{0}$ (infinity) to say ``it suffers no change, however much it is increased or diminished.'' \, He associates it with Deity. \addcontentsline{toc}{section}{\numberline{}Its use of irrational numbers} \textbf{106. Its Use of Irrational Numbers.} Again, the Indians were the first to reckon with irrational square roots as with numbers; Bhâskara extracting square roots of binomial surds and rationalizing irrational denominators of fractions even when these are polynomial. Of course they were as little able rigorously to justify such a procedure as the Greeks; less able, in fact, since they had no equivalent of the method of exhaustions. 
But it probably never occurred to them that justification was necessary; they seem to have been unconscious of the gulf fixed between the discrete and continuous. And here, as in the case of 0 and the negative, with the confidence of apt and successful reckoners, they were ready to pass immediately from numerical to purely symbolical reasoning, ready to trust their processes even where formal demonstration of the right to apply them ceased to be attainable. Their skill was too great, their instinct too true, to allow them to go far wrong. \addcontentsline{toc}{section}{\numberline{}Its treatment of determinate and indeterminate equations} \textbf{107. Determinate and Indeterminate Equations in Indian Algebra.} As regards equations---the only changes which the Indian algebraists made in the treatment of determinate equations were such as grew out of the use of the negative. This brought the triple classification of the quadratic to an end and secured recognition for both roots of the quadratic. Brahmagupta solves the quadratic by the rule of Hero and Diophantus, of which he gives an explicit and general statement. Çrîdhara, a mathematician of some distinction belonging to the period between Brahmagupta and Bhâskara, made the improvement of this method which consists in first multiplying the equation throughout by four times the coefficient of the square of the unknown quantity and so preventing the occurrence of fractions under the radical sign.\footnote{This method still goes under the name ``Hindoo method.''} Bhâskara also solves a few cubic and biquadratic equations by special devices. The theory of indeterminate equations, on the other hand, made great progress in India. The achievements of the Indian mathematicians in this beautiful but difficult department of the science are as brilliant as those of the Greeks in geometry. 
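Çrîdhara's improvement admits of compact statement. The following display is an editorial paraphrase of the familiar derivation and does not occur in Fine's text: multiplying $ax^2 + bx = c$ throughout by $4a$ and adding $b^2$ to both members completes the square without bringing fractions under the radical sign. Thus
\begin{align*}
4a^2x^2+4abx&=4ac,\\
\mbox{then} \quad (2ax+b)^2 &= b^2+4ac,\\
\mbox{then} \quad 2ax+b &= \pm\sqrt{b^2+4ac},\\
\mbox{and finally,} \quad x&=\frac{-b\pm\sqrt{b^2+4ac}}{2a}.
\end{align*}
The radicand $b^2+4ac$ is free of fractions whatever the coefficient $a$ may be.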
They created the doctrine of the indeterminate equation of the first degree, $ax + by = c$, which they treated for integral solutions by the method of continued fractions in use to-day. They worked also with equations of the second degree of the forms $ax^2 + b = cy^2$, $xy = ax + by + c$, originating general and comprehensive methods where Diophantus had been content with clever devices. \addcontentsline{toc}{section}{\numberline{}The Arabian mathematics. Alkhwarizmî, Alkarchî, Alchayyâmî} \textbf{108. The Arabian Mathematics.} The Arabians were the instructors of modern Europe in the ancient mathematics. The service which they rendered in the case of the numeral notation and reckoning of India they rendered also in the case of the geometry, algebra, and astronomy of the Greeks and Indians. Their own contributions to mathematics are unimportant. Their receptiveness for mathematical ideas was extraordinary, but they had little originality. The history of Arabian mathematics begins with the reign of Alman\d{s}ûr (754--775),\footnote{It was Alman\d{s}ûr who transferred the throne of the caliphs from Damascus to Bagdad which immediately became not only the capital city of Islam, but its commercial and intellectual centre.} the second of the Abbasid caliphs. It is related (by Ibn-al-Adamî, about 900) that in this reign, in the year 773, an Indian brought to Bagdad certain astronomical writings of his country, which contained a method called ``Sindhind,'' for computing the motions of the stars,---probably portions of the Siddhânta of Brahmagupta,---and that Alfazârî was commissioned by the caliph to translate them into Arabic.\footnote{This translation remained the guide of the Arabian astronomers until the reign of Almamûn (813--833), for whom Alkhwarizmî prepared his famous astronomical tables (820). 
Even these were based chiefly on the ``Sindhind,'' though some of the determinations were made by methods of the Persians and Ptolemy.} Inasmuch as the Indian astronomers put full expositions of their reckoning, algebra, and geometry into their treatises, Alfazârî's translation laid open to his countrymen a rich treasure of mathematical ideas and methods. It is impossible to set a date to the entrance of Greek ideas. They must have made themselves felt at Damascus, the residence of the later Omayyad caliphs, for that city had numerous inhabitants of Greek origin and culture. But the first translations of Greek mathematical writings were made in the reign of Hârûn Arraschîd (786--809), when Euclid's Elements and Ptolemy's Almagest were put into Arabic. Later on, translations were made of Archimedes, Apollonius, Hero, and last of all, of Diophantus (by Abû'l Wafâ, 940--998). The earliest mathematical author of the Arabians is Alkhwarizmî, who flourished in the first quarter of the 9th century. Besides astronomical tables, he wrote a treatise on algebra and one on reckoning (elementary arithmetic). The latter has already been mentioned. It is an exposition of the positional reckoning of India, the reckoning which mediæval Europe named after him \textit{Algorithm}. The treatise on algebra bears a title in which the word \textit{Algebra} appears for the first time: viz., \textit{Aldjebr walmukâbala}. Aldjebr (\textit{i.~e.} reduction) signifies the making of all terms of an equation positive by transferring negative terms to the opposite member of the equation; \textit{almukâbala} (\textit{i.~e.} opposition), the cancelling of equal terms in opposite members of an equation. Alkhwarizmî's classification of equations of the 1st and 2d degrees is that to which these processes would naturally lead, viz.: \[ \begin{array}{lll} ax^2 = bx, & bx^2 = c, & bx = c,\\ x^2 + bx = c, & x^2 + c = bx, & x^2 = bx + c. 
\end{array} \] These equations he solves separately, following up the solution in each case with a geometric demonstration of its correctness. He recognizes both roots of the quadratic when they are positive. In this respect he is Indian; in all others---the avoidance of negatives, the use of geometric demonstration---he is Greek. Besides Alkhwarizmî, the most famous algebraists of the Arabians were \textit{Alkarchî} and \textit{Alchayyâmî}, both of whom lived in the 11th century. Alkarchî gave the solution of equations of the forms: \[ax^{2p}+bx^p=c, ax^{2p}+c=bx^p, bx^p+c=ax^{2p}.\] He also reckoned with irrationals, the equations \[\sqrt{8}+\sqrt{18}=\sqrt{50}, \sqrt[3]{54}-\sqrt[3]{2}=\sqrt[3]{16},\] being pretty just illustrations of his success in this field. Alchayyâmî was the first mathematician to make a systematic investigation of the cubic equation. He classified the various forms which this equation takes when all its terms are positive, and solved each form geometrically---by the intersections of conics.\footnote{\label{Alchayyami method of solving cubics by the intersections of conics}Thus suppose the equation $x^3+bx=a$, given. For $b$ substitute the quantity $p^2$, and for $a$, $p^2r$. Then $x^3=p^3(r-x)$. Now this equation is the result of eliminating $y$ from between the two equations, $x^2=py$, $y^2=x(r-x)$; the first of which is the equation of a parabola, the second, of a circle. Let these two curves be constructed; they will intersect in one real point distinct from the origin, and the abscissa of this point is a root of $x^3+bx=a$. See Hankel, Geschichte der Mathematik, p.~279. This method is of greater interest in the history of geometry than in that of algebra. It involves an anticipation of some of the most important ideas of Descartes' \textit{Géométrie} (see p.~118).} A pure algebraic solution of the cubic he believed impossible. Like Alkhwarizmî, Alkarchî and Alchayyâmî were Eastern Arabians. 
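Both Alkarchî's surd identities and the construction described in the footnote can be verified numerically. The following Python sketch is an editorial check, with illustrative values $b = 4$, $a = 16$ chosen by the editor and not taken from the text:

```python
from math import isclose, sqrt

# Alkarchi's reckonings with irrationals:
# sqrt(8) + sqrt(18) = 2*sqrt(2) + 3*sqrt(2) = 5*sqrt(2) = sqrt(50)
assert isclose(sqrt(8) + sqrt(18), sqrt(50))
# cbrt(54) - cbrt(2) = 3*cbrt(2) - cbrt(2) = 2*cbrt(2) = cbrt(16)
assert isclose(54 ** (1 / 3) - 2 ** (1 / 3), 16 ** (1 / 3))

# Alchayyami's solution of x^3 + b*x = a (see the footnote): put
# b = p^2 and a = p^2 * r; a root is the abscissa of the intersection,
# other than the origin, of the parabola x^2 = p*y with the circle
# y^2 = x*(r - x).
b, a = 4.0, 16.0          # x^3 + 4x = 16, whose real root is x = 2
p = sqrt(b)               # so that p^2 = b
r = a / b                 # so that p^2 * r = a
x = 2.0                   # the intersection abscissa (known exactly here)
y = x * x / p             # ordinate on the parabola x^2 = p*y
assert isclose(y * y, x * (r - x))  # the same point lies on the circle
assert isclose(x ** 3 + b * x, a)   # hence x is a root of the cubic
```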
But early in the 8th century the Arabians conquered a great part of Spain. An Arabian realm was established there which became independent of the Bagdad caliphate in 747, and endured for 300 years. The intercourse of these Western Arabians with the East was not frequent enough to exercise a controlling influence on their æsthetic or scientific development. Their mathematical productions are of a later date than those of the East and almost exclusively arithmetico-algebraic. They constructed a formal algebraic notation which went over into the Latin translations of their writings and rendered the path of the Europeans to a knowledge of the doctrine of equations easier than it would have been, had the Arabians of the East been their only instructors. The best known of their mathematicians are \textit{Ibn Aflah} (end of 11th century), \textit{Ibn Albannâ} (end of 13th century), \textit{Alkasâdî} (15th century). \addcontentsline{toc}{section}{\numberline{}Arabian algebra Greek rather than Indian} \textbf{109. Arabian Algebra Greek rather than Indian.} Thus, of the three greater departments of the Arabian mathematics, the Indian influence gained the mastery in reckoning only. The Arabian geometry is Greek through and through. While the algebra contains both elements, the Greek predominates. Indeed, except that both roots of the quadratic are recognized, the doctrine of the determinate equation is altogether Greek. It avoids the negative almost as carefully as Diophantus does; and in its use of the geometric method of demonstration it is actuated by a spirit less modern still---the spirit in which Euclid may have conceived of algebra when he solved his geometric quadratics. The theory of indeterminate equations seldom goes beyond Diophantus; where it does, it is Indian. The Arabian trigonometry is based on Ptolemy's, but is its superior in two important particulars. 
It employs the sine where Ptolemy employs the chord (being in this respect Indian), and has an algebraic instead of a geometric form. Some of the methods of approximation used in reckoning out trigonometric tables show great cleverness. Indeed, the Arabians make some amends for their ill-advised return to geometric algebra by this excellent achievement in algebraic geometry. The preference of the Arabians for Greek algebra was especially unfortunate in respect to the negative, which was in consequence forced to repeat in Europe the fight for recognition which it had already won in India. \addcontentsline{toc}{section}{\numberline{}Mathematics in Europe before the twelfth century} \textbf{110. Mathematics in Europe before the Twelfth Century.} The Arabian mathematics found entrance to Christian Europe in the 12th century. During this century and the first half of the next a good part of its literature was translated into Latin. Till then the plight of mathematics in Europe had been miserable enough. She had no better representatives than the Romans, the most deficient in the sense for mathematics of all cultured peoples, ancient or modern; no better literature than the collection of writings on surveying known as the \textit{Codex Arcerianus}, and the childish arithmetic and geometry of Boetius. Prior to the 10th century, however, Northern Europe had not sufficiently emerged from barbarism to call even this paltry mathematics into requisition. What learning there was was confined to the cloisters. Reckoning (\textit{computus}) was needed for the Church calendar and was taught in the cloister schools established by Alcuin (735--804) under the patronage of Charlemagne. Reckoning was commonly done on the fingers. Not even was the multiplication table generally learned. Reference would be made to a written copy of it, as nowadays reference is made to a table of logarithms. The Church did not need geometry, and geometry in any proper sense did not exist. 
\addcontentsline{toc}{section}{\numberline{}Gerbert} \textbf{111. Gerbert.} But in the 10th century there lived a man of true scientific interests and gifts, Gerbert,\footnote{See §88.} Bishop of Rheims, Archbishop of Ravenna, and finally Pope Sylvester II\@. In him are the first signs of a new life for mathematics. His achievements, it is true, do not extend beyond the revival of Roman mathematics, the authorship of a geometry based on the \textit{Codex Arcerianus}, and a method for effecting division on the abacus with apices. Yet these achievements are enough to place him far above his contemporaries. His influence gave a strong impulse to mathematical studies where interest in them had long been dead. He is the forerunner of the intellectual activity ushered in by the translations from the Arabic, for he brought to life the feeling of the need for mathematics which these translations were made to satisfy. \addcontentsline{toc}{section}{\numberline{}Entrance of the Arabian mathematics. Leonardo} \textbf{112. Entrance of the Arabian Mathematics. Leonardo.} It was the elementary branch of the Arabian mathematics which took root quickest in Christendom---reckoning with nine digits and 0. \textit{Leonardo} of Pisa---\textit{Fibonacci}, as he was also called---did great service in the diffusion of the new learning through his \textit{Liber Abaci} (1202 and 1228), a remarkable presentation of the arithmetic and algebra of the Arabians, which remained for centuries the fund from which reckoners and algebraists drew and is indeed the foundation of the modern science. The four fundamental operations on integers and fractions are taught after the Arabian method; the extraction of the square root and the doctrine of irrationals are presented in their pure algebraic form; quadratic equations are solved and applied to quite complicated problems; \textit{negatives are accepted when they admit of interpretation as debt}. 
The last fact illustrates excellently the character of the \textit{Liber Abaci}. It is not a mere translation, but an independent and masterly treatise in one department of the new mathematics. Besides the \textit{Liber Abaci}, Leonardo wrote the \textit{Practica Geometriae}, which contains much that is best of Euclid, Archimedes, Hero, and the elements of trigonometry; also the \textit{Liber Quadratorum}, a collection of original algebraic problems most skilfully handled. \addcontentsline{toc}{section}{\numberline{}Mathematics during the age of Scholasticism} \textbf{113. Mathematics during the Age of Scholasticism.} Leonardo was a great mathematician,\footnote{\label{Jordanus Nemorarius}Besides Leonardo there flourished in the first quarter of the 13th century an able German mathematician, \textit{Jordanus Nemorarius}. He was the author of a treatise entitled \textit{De numeris datis}, in which known quantities are for the first time represented by letters, and of one \textit{De triangulis} which is a rich though rather systemless collection of theorems and problems principally of Greek and Arabian origin. See Günther: Geschichte des mathematischen Unterrichts im deutschen Mittelalter, p.~156.} but fine as his work was, it bore no fruit until the end of the 15th century. In him there had been a brilliant response to the Arabian impulse. But the awakening was only momentary; it quickly yielded to the heavy lethargy of the ``dark'' ages. The age of scholasticism, the age of devotion to the forms of thought, logic and dialectics, is the age of greatest dulness and confusion in mathematical thinking.\footnote{\label{The summa of Luca Pacioli}Compare Hankel, Geschichte der Mathematik, pp.~349--352. To the unfruitfulness of these centuries the \textit{Summa} of \textit{Luca Pacioli} bears witness.
This book, which has the distinction of being the earliest book on algebra printed, appeared in 1494, and embodies the arithmetic, algebra, and geometry of the time just preceding the Renaissance. It contains not an idea or method not already presented by Leonardo. Even in respect to algebraic symbolism it surpasses the \textit{Liber Abaci} only to the extent of using abbreviations for a few frequently recurring words, as p.\ for ``plus,'' and R.\ for ``res'' (the unknown quantity). And this is not to be regarded as original with Pacioli for the Arabians of Leonardo's time made a similar use of abbreviations. In a translation made by Gerhard of Cremona (12th century) from an unknown Arabic original the letters \textit{r} (radix), $c$ (census), $d$ (dragma) are used to represent the unknown quantity, its square, and the absolute term respectively. The \textit{Summa} of Pacioli has great merits, notwithstanding its lack of originality. It satisfied the mathematical needs of the time. It is very comprehensive, containing full and excellent instruction in the art of reckoning after the methods of Leonardo, for the merchant-man, and a great variety of matter of a purely theoretical interest also---representing the elementary theory of numbers, algebra, geometry, and the application of algebra to geometry. Compare Cantor, Geschichte der Mathematik, II, p.~308. \label{Regiomontanus}It should be added that the 15th century produced a mathematician who deserves a distinguished place in the general history of mathematics on account of his contributions to trigonometry, the astronomer \textit{Regiomontanus} (1436--1476). Like Jordanus, he was a German.} Algebra owes the entire period but a single contribution; the concept of the fractional power. Its author was Nicole Oresme (died 1382), who also gave a symbol for it and the rules by which reckoning with it is governed. \addcontentsline{toc}{section}{\numberline{}The Renaissance. 
Solution of the cubic and biquadratic equations} \textbf{114. The Renaissance. Solution of the Cubic and Biquadratic Equations.} The first achievement in algebra by the mathematicians of the Renaissance was the algebraic solution of the cubic equation: a fine beginning of a new era in the history of the science. The cubic $x^3 + mx = n$ was solved by \textit{Ferro} of Bologna in 1505, and a second time and independently, in 1535, by Ferro's countryman, \textit{Tartaglia}, who by help of a transformation made his method apply to $x^3 \pm mx^2 = \pm n$ also. But \textit{Cardan} of Milan was the first to publish the solution, in his \textit{Ars Magna},\footnote{The proper title of this work is: ``Artis magnae sive de regulis Algebraicis liber unus.'' It has stolen the title of Cardan's ``Ars magna Arithmeticae,'' published at Basel, 1570.} 1545. The \textit{Ars Magna} records another brilliant discovery: the solution---after a general method---of the biquadratic $x^4 + 6x^2 + 36 = 60x$ by \textit{Ferrari}, a pupil of Cardan. Thus in Italy, within fifty years of the new birth of algebra, after a pause of sixteen centuries at the quadratic, the limits of possible attainment in the algebraic solution of equations were reached; for the algebraic solution of the general equation of a degree higher than 4 is impossible, as was first demonstrated by Abel.\footnote{Mémoire sur les Equations Algébriques: Christiania, 1826. Also in Crelle's Journal, I, p.~65.} The general solution of higher equations proving an obstinate problem, nothing was left the searchers for the roots of equations but to devise a method of working them out approximately. In this the French mathematician \textit{Vieta} (1540--1603) was successful, his method being essentially the same as that now known as Newton's. \addcontentsline{toc}{section}{\numberline{}The negative in the algebra of this period. First appearance of the imaginary} \textbf{115. The Negative in the Algebra of this Period. 
First Appearance of the Imaginary.} But the general equation presented other problems than the discovery of rules for obtaining its roots; the nature of these roots and the relations between them and the coefficients of the equation invited inquiry. We witness another phase of the struggle of the negative for recognition. The imaginary is now ready to make common cause with it. Already in the \textit{Ars Magna} Cardan distinguishes between \textit{numeri veri}---the positive integer, fraction, and irrational,---and \textit{numeri ficti}, or \textit{falsi}---the negative and the square root of the negative. Like Leonardo, he tolerates negative roots of equations when they admit of interpretation as ``debitum,'' not otherwise. While he has no thought of accepting imaginary roots, he shows that if $5 + \sqrt{-15}$ be substituted for $x$ in $x(10 - x) = 40$, that equation is satisfied; which, of course, is all that is meant nowadays when $5 + \sqrt{-15}$ is called a root. His declaration that $5 \pm \sqrt{-15}$ are ``vere sophistica'' does not detract from the significance of this, the earliest recorded instance of reckoning with the imaginary. It ought perhaps to be added that Cardan is not always so successful in these reckonings; for in another place he sets \[ \frac{1}{4}(-\sqrt{-\frac{1}{4}}) = \sqrt{\frac{1}{64}} = \frac{1}{8} \] Following Cardan, \textit{Bombelli}\footnote{L'Algebra, 1579. He also formally states rules for reckoning with $\pm \sqrt{-1}$ and $a + b \sqrt{-1}$.} reckoned with imaginaries to good purpose, explaining by their aid the irreducible case in Cardan's solution of the cubic. On the other hand, neither Vieta nor his distinguished follower, the Englishman \textit{Harriot} (1560--1621), accept even negative roots; though Harriot does not hesitate to perform algebraic reckonings on negatives, and even allows a negative to constitute one member of an equation. \addcontentsline{toc}{section}{\numberline{}Algebraic symbolism. 
Vieta and Harriot} \textbf{116. Algebraic Symbolism. Vieta and Harriot.} Vieta and Harriot, however, did distinguished service in perfecting the symbolism of algebra; Vieta, by the systematic use of letters to represent known quantities,---algebra first became ``literal'' or ``universal arithmetic'' in his hands,\footnote{\label{Algebraic symbolism}There are isolated instances of this use of letters much earlier than Vieta in the \textit{De numeris datis} of Jordanus Nemorarius, and in the \textit{Algorithmus demonstratus} of the same author. But the credit of making it the general practice of algebraists belongs to Vieta.}---Harriot, by ridding algebraic statements of every non-symbolic element, of everything but the letters which represent quantities known as well as unknown, symbols of operation, and symbols of relation. Harriot's \textit{Artis Analyticae Praxis} (1631) has quite the appearance of a modern algebra.\footnote{One has only to reflect how much of the power of algebra is due to its admirable symbolism to appreciate the importance of the \textit{Artis Analyticae Praxis}, in which this symbolism is finally established. But one addition of consequence has since been made to it, integral and fractional exponents introduced by Descartes (1637) and Wallis (1659). Harriot substituted small letters for the capitals used by Vieta, but followed Vieta in representing known quantities by consonants and unknown by vowels. The present convention of representing known quantities by the earlier letters of the alphabet, unknown by the later, is due to Descartes. Vieta's notation is unwieldy and ill adapted to purposes of algebraic reckoning. Instead of restricting itself, as Harriot's does, to the use of brief and easily apprehended conventional symbols, it also employs words subject to the rules of syntax. Thus for $A^3 - 3B^2A = Z$ (or $aaa - 3bba = z$, as Harriot would have written it), Vieta writes \textit{A cubus - B quad 3 in A aequatur Z solido}. 
In this respect Vieta is inferior not only to Harriot, but to several of his predecessors and notably to his contemporary, the Dutch mathematician Stevinus (1548--1620), who would, for instance, have written $x^2 + 3x - 8$ as $1* + 3* - 8*$. The geometric affiliations of Vieta's notation are obvious. It suggests the Greek arithmetic. It is surprising that algebraic symbolism should owe so little to the great Italian algebraists of the 16th century. Like Pacioli (see note, p.~113) they were content with a few abbreviations for words, a ``syncopated'' notation, as it has been called, and an incomplete one at that. The current symbols of operation and relation are chiefly of English and German origin, having been invented or adopted as follows: viz. $=$, by \textit{Recorde} in 1556; $\sqrt{}$, by \textit{Rudolf} in 1525; the \textit{vinculum}, by \textit{Vieta} in 1591; \textit{brackets}, by \textit{Bombelli}, 1572; $\div$, by \textit{Rahn} in 1659; $\times , >, <,$ by \textit{Harriot} in 1631. The signs $+$ and $-$ occur in a 15th century manuscript discovered by Gerhardt at Vienna. The notations $a - b$ and $\frac{a}{b}$ for the fraction were adopted from the Arabians.} \addcontentsline{toc}{section}{\numberline{}The fundamental theorem of algebra. Harriot and Girard} \textbf{117. Fundamental Theorem of Algebra. Harriot and Girard.} Harriot has been credited with the discovery of the ``fundamental theorem'' of algebra---the theorem that the number of roots of an algebraic equation is the same as its degree. The \textit{Artis Analyticae Praxis} contains no mention of this theorem---indeed, by ignoring negative and imaginary roots, leaves no place for it; yet Harriot develops systematically a method which, if carried far enough, leads to the discovery of this theorem as well as to the relations holding between the roots of an equation and its coefficients. 
By multiplying together binomial factors which involve the unknown quantity, and setting their product equal to 0, he builds ``canonical'' equations, and shows that the roots of these equations---the only roots, he says---are the positive values of the unknown quantity which render these binomial factors 0. Thus he builds $aa - ba + ca = bc$, in which $a$ is the unknown quantity, out of the factors $a - b, a + c$, and proves that $b$ is a root of this equation and the only root, the negative root $c$ being totally ignored. While no attempt is made to show that if the terms of a ``common'' equation be collected in one member, this can be separated into binomial factors, the case of canonical equations raised a strong presumption for the soundness of this view of the structure of an equation. The first statement of the fundamental theorem and of the relations between coefficients and roots occurs in a remarkably clever and modern little book, the \textit{Invention Nouvelle en l'Algebre}, of \textit{Albert Girard}, published in Amsterdam in 1629, two years earlier, therefore, than the \textit{Artis Analyticae Praxis}. Girard stands in no fear of imaginary roots, but rather insists on the wisdom of recognizing them. They never occur, he says, except when real roots are lacking, and then in number just sufficient to fill out the entire number of roots to equality with the degree of the equation. Girard also anticipated Descartes in the geometrical interpretation of negatives. But the \textit{Invention Nouvelle} does not seem to have attracted much notice, and the genius and authority of Descartes were needed to give the interpretation general currency. \chapter{ACCEPTANCE OF THE NEGATIVE, THE GENERAL IRRATIONAL, AND THE IMAGINARY AS NUMBERS.} \addcontentsline{toc}{section}{\numberline{}Descartes' \textit{Géométrie} and the negative} \textbf{118. Descartes' Géométrie and the Negative.} The \textit{Géométrie} of Descartes appeared in 1637.
This famous little treatise enriched geometry with a general and at the same time simple and natural method of investigation: the method of representing a geometric curve by an equation, which, as Descartes puts it, expresses generally the relation of its points to those of some chosen line of reference.\footnote{See Géométrie, Livre II\@. In Cousin's edition of Descartes' works, Vol. V, p.~337.} To form such equations Descartes represents line segments by letters,---the known by $a, b, c,$ etc., the unknown by $x$ and $y$. He supposes a perpendicular, $y$, to be dropped from any point of the curve to the line of reference, and then the equation to be found from the known properties of the curve which connects $y$ with $x$, the distance of $y$ from a fixed point of the line of reference. This is the equation of the curve in that it is satisfied by the $x$ and $y$ of each and every curve-point.\footnote{Descartes fails to recognize a number of the conventions of our modern Cartesian geometry. He makes no formal choice of two axes of reference, calls abscissas $y$ and ordinates $x$, and as frequently regards as positive ordinates below the axis of abscissas as ordinates above it.} To meet the difficulty that the mere length of the perpendicular ($y$) from a curve-point will not indicate to which side of the line of reference the point lies, Descartes makes the convention that perpendiculars on opposite sides of this line (and similarly intercepts ($x$) on opposite sides of the point of reference) shall have opposite algebraic signs. This convention gave the negative a new position in mathematics. Not only was a ``real'' interpretation here found for it, the lack of which had made its position so difficult hitherto, but it was made indispensable, placed on a footing of equality with the positive. The acceptance of the negative in algebra kept pace with the spread of Descartes' analytical method in geometry. 
\addcontentsline{toc}{section}{\numberline{}Descartes' geometric algebra} \textbf{119. Descartes' Geometric Algebra.} But the \textit{Géométrie} has another and perhaps more important claim on the attention of the historian of algebra. The entire method of the book rests on the assumption---made only tacitly, to be sure, and without knowledge of its significance---that two algebras are formally identical whose fundamental operations are formally the same; \textit{i.~e.} subject to the same laws of combination. For the algebra of the \textit{Géométrie} is not, as is commonly said, mere numerical algebra, but what may for want of a better name be called the algebra of line segments. Its symbolism is the same as that of numerical algebra; but symbols which there represent numbers here represent line segments. Not only is this the case with the letters $a, b, x, y,$ etc., which are mere names (\textit{noms}) of line segments, not their numerical measures, but with the algebraic combinations of these letters. $a + b$ and $a - b$ are respectively the sum and difference of the line segments $a$ and $b$; $ab$, the fourth proportional to an assumed unit line, $a$, and $b$; $\frac{a}{b}$, the fourth proportional to $b, a,$ and the unit line; and $\sqrt{a}, \sqrt[3]{a}$, etc., the first, second, etc., mean proportionals to the unit line and $a$.\footnote{Géométrie, Livre I.\ Ibid.\ pp.~313--314.} Descartes' justification of this use of the symbols of numerical algebra is that the geometric constructions of which he makes $a + b, a - b,$ etc., represent the results are ``the same'' as numerical addition, subtraction, multiplication, division, and evolution, respectively. 
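In modern proportion notation---a restatement, not Descartes' own symbolism---these definitions may be summarized thus:
\begin{align*}
1 : a &= b : ab, &&\text{the product as fourth proportional;}\\
b : a &= 1 : \tfrac{a}{b}, &&\text{the quotient as fourth proportional;}\\
1 : \sqrt{a} &= \sqrt{a} : a, &&\text{the square root as mean proportional.}
\end{align*}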
Moreover, since all geometric constructions which determine line segments may be resolved into combinations of these constructions as the operations of numerical algebra into the fundamental operations, the correspondence which holds between these fundamental constructions and operations holds equally between the more complex constructions and operations. The entire system of the geometric constructions under consideration may therefore be regarded as formally identical with the system of algebraic operations, and be represented by the same symbolism. In what sense his fundamental constructions are ``the same'' as the fundamental operations of arithmetic, Descartes does not explain. The true reason of their formal identity is that both are controlled by the commutative, associative, and distributive laws. Thus in the case of the former as of the latter, $ab = ba$, and $a(bc) = abc$; for the fourth proportional to the unit line, $a$, and $b$ is the same as the fourth proportional to the unit line, $b$, and $a$; and the fourth proportional to the unit line, $a$, and $bc$ is the same as the fourth proportional to the unit line, $ab$, and $c$. But this reason was not within the reach of Descartes, in whose day the fundamental laws of numerical algebra had not yet been discovered. \addcontentsline{toc}{section}{\numberline{}The continuous variable. Newton. Euler} \textbf{120. The Continuous Variable. Newton. Euler.} It is customary to credit the \textit{Géométrie} with having introduced the \textit{continuous variable} into mathematics, but without sufficient reason. Descartes prepared the way for this concept, but he makes no use of it in the \textit{Géométrie}. The $x$ and $y$ which enter in the equation of a curve he regards not as variables but as indeterminate quantities, a pair of whose values correspond to each curve-point.\footnote{Géométrie, Livre II. 
Ibid.\ pp.~337--338.} The real author of this concept is Newton (1642--1727), of whose great invention, the method of fluxions, continuous variation, ``flow,'' is the fundamental idea. But Newton's calculus, like Descartes' algebra, is geometric rather than purely numerical, and his followers in England, as also, to a less extent, the followers of his great rival, Leibnitz, on the continent, in employing the calculus, for the most part conceive of variables as lines, not numbers. The geometric form again threatened to become paramount in mathematics, and geometry to enchain the new ``analysis'' as it had formerly enchained the Greek arithmetic. It is the great service of \textit{Euler} (1707--1783) to have broken these fetters once for all, to have accepted the \textit{continuously variable number} in its purity, and therewith to have created the pure analysis. For the relations of continuously variable numbers constitute the field of the pure analysis; its central concept, the \textit{function}, being but a device for representing their interdependence. \addcontentsline{toc}{section}{\numberline{}The general irrational} \textbf{121. The General Irrational.} While its concern with variables puts analysis in a certain opposition to elementary algebra, concerned as this is with constants, its establishment of the continuously variable number in mathematics brought about a rich addition to the number-system of algebra---the \textit{general irrational}. Hitherto the only irrational numbers had been ``surds,'' impossible roots of rational numbers; henceforth their domain is as wide as that of all possible lines incommensurable with any assumed unit line. \addcontentsline{toc}{section}{\numberline{}The imaginary, a recognized analytical instrument} \textbf{122. The Imaginary, a Recognized Analytical Instrument.} Out of the excellent results of the use of the negative grew a spirit of toleration for the imaginary. Increased attention was paid to its properties. 
Leibnitz noticed the real sum of conjugate imaginaries (1676--7); Demoivre discovered (1730) the famous theorem \[ (\cos \theta + i \sin \theta)^{n} = \cos n\theta + i \sin n\theta; \] and Euler (1748) the equation \[ \cos \theta + i \sin \theta = e^{i\theta}, \] which plays so great a rôle in the modern theory of functions. Euler also, practising the method of expressing complex numbers in terms of modulus and angle, formed their products, quotients, powers, roots, and logarithms, and by many brilliant discoveries multiplied proofs of the power of the imaginary as an analytical instrument. \addcontentsline{toc}{section}{\numberline{}Argand's geometric representation of the imaginary} \textbf{123. Argand's Geometric Representation of the Imaginary.} But the imaginary was never regarded as anything better than an algebraic fiction---to be avoided, where possible, by the mathematician who prized purity of method---until a method was discovered for representing it geometrically. A Norwegian, \textit{Wessel},\footnote{See W.~W.~Beman in Proceedings of the American Association for the Advancement of Science, 1897.} published such a method in 1797, and a Frenchman, \textit{Argand}, the same method independently in 1806. As +1 and -1 may be represented by unit lines drawn in opposite directions from any point, $O$, and as $i$ (\textit{i.~e.} $\sqrt{-1}$) is a mean proportional to +1 and -1, it occurred to Argand to represent this symbol by the line whose direction with respect to the line +1 is the same as the direction of the line -1 with respect to it; viz., the unit perpendicular through $O$ to the 1-line. Let only the \textit{direction} of the 1-line be fixed, the position of the point $O$ in the plane is altogether indifferent. 
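Demoivre's theorem, Euler's equation, and Euler's modulus-and-angle reckoning can all be verified numerically with modern complex arithmetic; the following is a sketch using Python's standard `cmath` module, with arbitrary illustrative values:

```python
import cmath
import math

theta, n = 0.7, 5  # arbitrary angle and exponent, for illustration only

# Demoivre: (cos t + i sin t)^n = cos nt + i sin nt
lhs = (math.cos(theta) + 1j * math.sin(theta)) ** n
rhs = math.cos(n * theta) + 1j * math.sin(n * theta)
assert cmath.isclose(lhs, rhs)

# Euler: cos t + i sin t = e^{it}
assert cmath.isclose(math.cos(theta) + 1j * math.sin(theta),
                     cmath.exp(1j * theta))

# Euler's reckoning in modulus and angle: in a product,
# moduli multiply and angles add.
z, w = cmath.rect(2.0, 0.3), cmath.rect(1.5, 1.1)
p = z * w
assert math.isclose(abs(p), 2.0 * 1.5)
assert math.isclose(cmath.phase(p), 0.3 + 1.1)
```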
Between the segments of a given line, whether taken in the same or opposite directions, the equation holds: \[AB+BC=AC.\] It means nothing more, however, when the directions of $AB$ and $BC$ are opposite, than that the result of carrying a moving point from $A$ first to $B$, and thence back to $C$, is the same as carrying it from $A$ direct to $C$. But in this sense the equation holds equally when $A, B, C$ are not in the same right line. Given, therefore, a complex number, $a+ib$; choose any point $A$ in the plane; from it draw a line $AB$, of length $a$, in the direction of the 1-line, and from $B$ a line $BC$, of length $b$, in the direction of the $i$-line. The line $AC$, thus fixed in length and direction, but situated anywhere in the plane, is Argand's picture of $a+ib$. Argand's skill in the use of his new device was equal to the discovery of the demonstration given in §54, that every algebraic equation has a root. \addcontentsline{toc}{section}{\numberline{}Gauss. The complex number} \textbf{124. Gauss. The Complex Number.} The method of representing complex numbers in common use to-day, that described in §42, is due to Gauss. He was already in possession of it in 1811, though he published no account of it until 1831. To Gauss belongs the conception of $i$ as an independent unit co-ordinate with 1, and of $a+ib$ as a \textit{complex} number, a sum of multiples of the units 1 and $i$; his also is the name ``complex number'' and the concept of complex numbers in general, whereby $a + ib$ secures a footing in the theory of numbers as well as in algebra. He too, and not Argand, must be credited with really breaking down the opposition of mathematicians to the imaginary. 
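Argand's extended reading of $AB + BC = AC$, and his picture of $a + ib$ as a step $a$ along the 1-line followed by a step $b$ along the $i$-line, can be mimicked directly in modern complex arithmetic; a Python sketch, with arbitrary illustrative points:

```python
# Three arbitrary points of the plane, not in the same right line,
# represented as complex numbers.
A, B, C = 1 + 2j, 4 - 1j, -2 + 3j

# A directed segment AB is the difference B - A, and so on.
AB, BC, AC = B - A, C - B, C - A

# Argand's extended sense of AB + BC = AC holds off the line as well:
# carrying a point from A to B and thence to C is the same as A direct to C.
assert AB + BC == AC

# The picture of a + ib: length a in the direction of the 1-line,
# then length b in the direction of the i-line.
a, b = 3, 2
assert a + b * 1j == complex(a, b)
```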
Argand's \textit{Essai} was little noticed when it appeared, and soon forgotten; but there was no withstanding the great authority of Gauss, and his precise and masterly presentation of this doctrine.\footnote{See Gauss, Complete Works, II, p.~174.} \chapter{RECOGNITION OF THE PURELY SYMBOLIC CHARACTER OF ALGEBRA\@. QUATERNIONS\@. AUSDEHNUNGSLEHRE.} \addcontentsline{toc}{section}{\numberline{}The principle of permanence. Peacock } \textbf{125. The Principle of Permanence.} Thus, one after another, the fraction, irrational, negative, and imaginary, gained entrance to the number-system of algebra. Not one of them was accepted until its correspondence to some actually existing thing had been shown, the fraction and irrational, which originated in relations among actually existing things, naturally making good their position earlier than the negative and imaginary, which grew immediately out of the equation, and for which a ``real'' interpretation had to be sought. Inasmuch as this correspondence of the artificial numbers to things extra-arithmetical, though most interesting and the reason of the practical usefulness of these numbers, has not the least bearing on the nature of their position in \textit{pure} arithmetic or algebra; after all of them had been accepted as numbers, the necessity remained of justifying this acceptance by purely algebraic considerations. This was first accomplished, though incompletely, by the English mathematician, \textit{Peacock}.\footnote{Arithmetical and Symbolical Algebra, 1830 and 1845; especially the later edition. Also British Association Reports, 1833.} Peacock begins with a valuable distinction between \textit{arithmetical} and \textit{symbolical} algebra. Letters are employed in the former, but only to represent positive integers and fractions, subtraction being limited, as in ordinary arithmetic, to the case where subtrahend is less than minuend. 
In the latter, on the other hand, the symbols are left altogether general, untrammelled at the outset with any particular meanings whatsoever. It is then \textit{assumed} that the rules of operation applying to the symbols of arithmetical algebra apply without alteration in symbolical algebra; \textit{the meanings of the operations themselves and their results being derived from these rules of operation.} This assumption Peacock names the \textit{Principle of Permanence of Equivalent Forms}, and illustrates its use as follows:\footnote{Algebra, edition of 1845, §§~631, 569, 639.} In arithmetical algebra, when $a > b, c > d$, it may readily be demonstrated that \[ (a - b)(c - d) = ac - ad - bc + bd. \] By the principle of permanence, it follows that \begin{gather*} (0 - b)(0 - d) = 0 \times 0 - 0 \times d - b \times 0 + bd,\\ \text{or}\quad (-b)(-d) = bd. \end{gather*} Or again. In arithmetical algebra $a^m a^n = a^{m+n}$, when $m$ and $n$ are positive integers. Applying the principle of permanence, \begin{align*} (a^{\frac{p}{q}})^q & = a^{\frac{p}{q}} \cdot a^{\frac{p}{q}} \cdots \text{ to } q \text{ factors}\\ & = a^{\frac{p}{q} + \frac{p}{q} + \cdots \text{ to } q \text{ terms}} \\ & = a^p, \\ \text{whence} \quad a^{\frac{p}{q}} & = \sqrt[q]{a^p}. \end{align*} Here the meanings of the product $(-b)(-d)$ and of the symbol $a^{\frac{p}{q}}$ are both derived from certain rules of operation in arithmetical algebra. Peacock notices that the symbol = also has a wider meaning in symbolical than in arithmetical algebra; for in the former = means that ``the expression which exists on one side of it is the result of an operation which is indicated on the other side of it and not performed.''\footnote{Algebra, Appendix, §631.} He also points out that the terms ``real'' and ``imaginary'' or ``impossible'' are relative, depending solely on the meanings attaching to the symbols in any particular application of algebra.
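Peacock's two illustrations can be checked with modern arithmetic; a Python sketch, in which the names $b$, $d$, $a$, $p$, $q$ mirror those of the text:

```python
import math
from fractions import Fraction

# (-b)(-d) = bd, the meaning of a product of negatives being derived
# from the arithmetical identity (a - b)(c - d) = ac - ad - bc + bd
# with a = c = 0.
b, d = Fraction(3), Fraction(7)
assert (-b) * (-d) == b * d

# a^(p/q) = q-th root of a^p, derived from a^m a^n = a^(m+n);
# checked here in floating point, hence with isclose.
a, p, q = 5.0, 2, 3
assert math.isclose(a ** (p / q), (a ** p) ** (1 / q))
```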
For a quantity is real when it can be shown to correspond to any real or possible existence; otherwise it is imaginary.\footnote{Ibid.\ §557.} The solution of the problem: to divide a group of 5 men into 3 equal groups, is imaginary though a positive fraction, while in Argand's geometry the so-called imaginary is real. The principle of permanence is a fine statement of the assumption on which the reckoning with artificial numbers depends, and the statement of the nature of this dependence is excellent. Regarded as an attempt at a complete presentation of the doctrine of artificial numbers, however, Peacock's Algebra is at fault in classing the positive fraction with the positive integer and not with the negative and imaginary, where it belongs, in ignoring the most difficult of all artificial numbers, the irrational, in not defining artificial numbers as symbolic results of operations, but principally in not subjecting the operations themselves to a final analysis. \addcontentsline{toc}{section}{\numberline{}The fundamental laws of algebra. ``Symbolical algebras.'' Gregory } \textbf{126. The Fundamental Laws of Algebra. ``Symbolical Algebras.''} Of the fundamental laws to which this analysis leads, two, the commutative and distributive, had been noticed years before Peacock by the inventors of symbolic methods in the differential and integral calculus as being common to number and the operation of differentiation. In fact, one of these mathematicians, \textit{Servois},\footnote{Gergonne's Annales, 1813. One must go back to Euclid for the earliest known recognition of any of these laws. Euclid demonstrated, of integers (Elements, VII, 16), that $ab = ba$.} introduced the names \textit{commutative} and \textit{distributive}. Moreover, Peacock's contemporary, \textit{Gregory}, in a paper ``On the Real Nature of Symbolical Algebra,'' which appeared in the interim between the two editions of Peacock's Algebra,\footnote{In 1838.
See The Mathematical Writings of D.~F.~Gregory, p.~2. Among other writings of this period, which promoted a correct understanding of the artificial numbers, should be mentioned Gregory's interesting paper, ``On a Difficulty in the Theory of Algebra,'' Writings, p.~235, and De Morgan's papers ``On the Foundation of Algebra'' (1839, 1841; Cambridge Philosophical Transactions, VII).} had restated these two laws, and had made their significance very clear. To Gregory the formal identity of complex operations with the differential operator and the operations of numerical algebra suggested the comprehensive notion of algebra embodied in his fine definition: ``symbolical algebra is the science which treats of the combination of operations defined not by their nature, that is, by what they are or what they do, but by the laws of combination to which they are subject.'' This definition recognizes the possibility of an entire class of algebras, each characterized primarily not by its subject-matter, but by \textit{its operations and the formal laws to which they are subject}; and in which the algebra of the complex number $a + ib$ and the system of operations with the differential operator are included, the two (so far as their laws are identical) as one and the same particular case. So long, however, as no ``algebras'' existed whose laws differed from those of the algebra of number, this definition had only a speculative value, and the general acceptance of the dictum that the laws regulating its operations constituted the essential character of algebra might have been long delayed had not Gregory's paper been quickly followed by the discovery of two ``algebras,'' the \textit{quaternions} of \textit{Hamilton} and the \textit{Ausdehnungslehre} of \textit{Grassmann}, in which one of the laws of the algebra of number, the commutative law for multiplication, had lost its validity. \addcontentsline{toc}{section}{\numberline{}Hamilton's quaternions} \textbf{127. 
Quaternions.} According to his own account of the discovery,\footnote{Philosophical Magazine, II, Vol. 25, 1844.} Hamilton came upon \textit{quaternions} in a search for a second imaginary unit to correspond to the perpendicular which may be drawn in space to the lines 1 and $i$. In pursuance of this idea he formed the expressions, $a + ib + jc, x + iy + jz$, in which $a$, $b$, $c$, $x$, $y$, $z$ were supposed to be real numbers, and $j$ the new imaginary unit sought, and set their product \[ (a + ib + jc)(x + iy + jz) = ax - by - cz + i(ay + bx) \ + j(az + cx) + ij(bz + cy). \] The question then was, what interpretation to give $ij$. It would not do to set it equal to $a' + ib' + jc'$, for then the theorem that the modulus of a product is equal to the product of the moduli of its factors, which it seemed indispensable to maintain, would lose its validity; unless, indeed, $a' = b' = c' = 0$, and therefore $ij = 0$, a very unnatural supposition, inasmuch as $1i$ is different from 0. No course was left for destroying the $ij$ term, therefore, but to make its coefficient, $bz + cy$, vanish, which was tantamount to supposing, since $b, c, y, z$ are perfectly general, that $ji = -ij$. Accepting this hypothesis, \textit{denial of the commutative law} as it was, Hamilton was driven to the conclusion that the system upon which he had fallen contained at least three imaginary units, the third being the product $ij$. He called this $k$, took as general complex numbers of the system, $a + ib + jc + kd, x + iy + jz + kw,$ \textit{quaternions}, built their products, and assuming \begin{align*} i^2 &= j^2 = k^2 = -1 \\ ij &= -ji = k \\ jk &= -kj = i \\ ki &= -ik = j, \end{align*} found that the modulus law was fulfilled. 
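Hamilton's multiplication rules, and the modulus law which it seemed indispensable to maintain, admit a direct numerical check; the following Python sketch represents the quaternion $a + ib + jc + kd$ by the tuple $(a, b, c, d)$ (a modern convenience, not Hamilton's notation):

```python
import math

def qmul(p, q):
    """Hamilton's product of quaternions, (a, b, c, d) standing
    for a + ib + jc + kd."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def modulus(q):
    return math.sqrt(sum(x * x for x in q))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

# The defining relations: i^2 = j^2 = k^2 = -1, ij = k, ji = -ij.
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == (-1, 0, 0, 0)
assert qmul(i, j) == k
assert qmul(j, i) == (0, 0, 0, -1)

# The modulus law: the modulus of a product equals
# the product of the moduli of its factors.
p, q = (1, 2, 3, 4), (2, -1, 0.5, 3)
assert math.isclose(modulus(qmul(p, q)), modulus(p) * modulus(q))
```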
A geometrical interpretation was found for the ``\textit{imaginary triplet}'' $ib + jc + kd$, by making its coefficients, $b, c, d$, the rectangular co-ordinates of a point in space; the line drawn to this point from the origin picturing the triplet by its length and direction. Such directed lines Hamilton named \textit{vectors}. To interpret geometrically the multiplication of $i$ into $j$, it was then only necessary to conceive of the $j$ axis as rigidly connected with the $i$ axis, and \textit{turned by it} through a right angle in the $jk$ plane, into coincidence with the $k$ axis. The geometrical meanings of other operations followed readily. In a second paper, published in the same volume of the Philosophical Magazine, Hamilton compares in detail the laws of operation in \textit{quaternions} and the algebra of number, for the first time explicitly stating and naming the \textit{associative} law. \addcontentsline{toc}{section}{\numberline{}Grassmann's Ausdehnungslehre } \textbf{128. Grassmann's Ausdehnungslehre.} In the \textit{Ausdehnungslehre}, as Grassmann first presented it, the elementary magnitudes are vectors. The fact that the equation $AB + BC = AC$ always holds among the segments of a line, when account is taken of their directions as well as their lengths, suggested the probable usefulness of directed lengths in general, and led Grassmann, like Argand, to make trial of this definition of addition for the general case of three points, $A$, $B$, $C$, not in the same right line. But the outcome was not great until he added to this his definition of the product of two vectors. He took as the product $ab$, of two vectors, $a$ and $b$, the parallelogram generated by $a$ when its initial point is carried along $b$ from initial to final extremity. This definition makes a product vanish not only when one of the vector factors vanishes, but also when the two are parallel. It clearly conforms to the distributive law. 
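Grassmann's parallelogram product can be sketched in the plane, where the parallelogram generated by two vectors is represented by its signed area (a modern reduction of the idea, not Grassmann's own formulation):

```python
# A vector is an (x, y) pair; the product of two vectors is the
# signed area of the parallelogram they span.
def wedge(a, b):
    return a[0] * b[1] - a[1] * b[0]

a, b, c = (2.0, 1.0), (-1.0, 3.0), (0.5, 4.0)

# An interchange of factors changes the sign of the product: ba = -ab.
assert wedge(b, a) == -wedge(a, b)

# The product vanishes when the factors are parallel; in particular aa = 0.
assert wedge(a, a) == 0
assert wedge(a, (4.0, 2.0)) == 0  # (4, 2) is parallel to a

# It conforms to the distributive law: a(b + c) = ab + ac.
bc = (b[0] + c[0], b[1] + c[1])
assert wedge(a, bc) == wedge(a, b) + wedge(a, c)
```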
On the other hand, since \begin{align*} (a+b)(a+b) & = aa + ab + ba + bb, \\ \text{and} \qquad (a+b)(a+b) & = aa = bb = 0, \\ ab + ba & = 0, \quad\text{or}\quad ba = -ab, \end{align*} the commutative law for multiplication has lost its validity, and, as in quaternions, an interchange of factors brings about a change in the sign of the product. The opening chapter of Grassmann's first treatise on the \textit{Ausdehnungslehre} (1844) presents with admirable clearness and from the general standpoint of what he calls ``Formenlehre'' (the doctrine of forms), the fundamental laws to which operations are subject as well in the \textit{Ausdehnungslehre} as in common algebra. \addcontentsline{toc}{section}{\numberline{}The fully developed doctrine of the artificial forms of number. Hankel. Weierstrass. G. Cantor} \textbf{129. The Doctrine of the Artificial Numbers fully Developed.} The discovery of quaternions and the \textit{Ausdehnungslehre} made the algebra of number in reality what Gregory's definition had made it in theory, no longer the sole algebra, but merely one of a class of algebras. A higher standpoint was created, from which the laws of this algebra could be seen in proper perspective. Which of these laws were distinctive, and what was the significance of each, came out clearly enough when numerical algebra could be compared with other algebras whose characteristic laws were not the same as its characteristic laws. The doctrine of the artificial numbers regarded from this point of view---as symbolic results of the operations which the fundamental laws of algebra define---was fully presented for the negative, fraction, and imaginary, by \textit{Hankel}, in his \textit{Complexe Zahlensysteme} (1867). Hankel re-announced Peacock's principle of permanence. The doctrine of the irrational now accepted by mathematicians is due to \textit{Weierstrass} and \textit{G. Cantor} and \textit{Dedekind}.\footnote{See Cantor in Mathematische Annalen, V, p.~123, XXI, p.~567.
The first paper was written in 1871. In the second, Cantor compares his theory with that of Weierstrass, and also with the theory proposed by Dedekind in his \textit{Stetigkeit und irrationale Zahlen} (1872). The theory of the irrational, set forth in Chapter IV of the first part of this book, is Cantor's.} \addcontentsline{toc}{section}{\numberline{}Recent literature } A number of interesting contributions to the literature of the subject have been made recently; among them a paper\footnote{Journal für die reine und angewandte Mathematik, Vol. 101, p.~337.} by Kronecker in which methods are proposed for avoiding the artificial numbers by the use of congruences and ``indeterminates,'' and papers\footnote{Göttinger Nachrichten for 1884, p.~395; 1885, p.~141; 1889, p.~34, p.~237. Leipziger Berichte for 1889, p.~177, p.~290, p.~400. Mathematische Annalen, XXXIII, p.~49.} by Weierstrass, Dedekind, Hölder, Study, Scheffers, and Schur, all relating to the theory of general complex numbers built from $n$ fundamental units (see page 40). \textsc{SUPPLEMENTARY NOTE, 1902}. An elaborate and profound analysis of the number-concept from the ordinal point of view is made by Dedekind in his \textit{Was sind und was sollen die Zahlen?} (1887). This essay, together with that on irrational numbers cited above, has been translated by W.~W.~Beman, and published by the Open Court Company, Chicago, 1901. The same point of view is taken by Kronecker in the memoir above mentioned, and by Helmholtz in his \textit{Zählen und Messen} (Zeller-Jubiläum, 1887). G. Cantor discusses the general notion of cardinal number, and extends it to infinite groups and assemblages in his now famous Memoirs on the theory of infinite assemblages. See particularly Mathematische Annalen, XLVI, p.~489.
Very recently much attention has been given to the question: What is the \textit{simplest} system of consistent and independent laws---or ``axioms,'' as they are called---by which the fundamental operations of the ordinary algebra may be defined? A very complete \textit{résumé} of the literature may be found in a paper by O. Hölder in Leipziger Berichte, 1901. See also E.~V.~Huntington in Transactions of the American Mathematical Society, Vol. III, p.~264. \newpage \small \pagenumbering{gobble} \begin{verbatim} End of Project Gutenberg's Number-System of Algebra, by Henry Fine *** END OF THIS PROJECT GUTENBERG EBOOK NUMBER-SYSTEM OF ALGEBRA *** Produced by Jonathan Ingram, Susan Skinner and the Online Distributed Proofreading Team at https://www.pgdp.net *** This file should be named 17920-t.tex or 17920-t.zip *** *** or 17920-pdf.pdf or 17920-pdf.pdf *** This and all associated files of various formats will be found in: https://www.gutenberg.org/1/7/9/2/17920/ Updated editions will replace the previous one--the old editions will be renamed. Creating the works from public domain print editions means that no one owns a United States copyright in these works, so the Foundation (and you!) can copy and distribute it in the United States without permission and without paying copyright royalties. Special rules, set forth in the General Terms of Use part of this license, apply to copying and distributing Project Gutenberg-tm electronic works to protect the PROJECT GUTENBERG-tm concept and trademark. Project Gutenberg is a registered trademark, and may not be used if you charge for the eBooks, unless you receive specific permission. If you do not charge anything for copies of this eBook, complying with the rules is very easy. You may use this eBook for nearly any purpose such as creation of derivative works, reports, performances and research. They may be modified and printed and given away--you may do practically ANYTHING with public domain eBooks.
Redistribution is subject to the trademark license, especially commercial redistribution. *** START: FULL LICENSE *** THE FULL PROJECT GUTENBERG LICENSE PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK To protect the Project Gutenberg-tm mission of promoting the free distribution of electronic works, by using or distributing this work (or any other work associated in any way with the phrase "Project Gutenberg"), you agree to comply with all the terms of the Full Project Gutenberg-tm License (available with this file or online at https://gutenberg.org/license). Section 1. General Terms of Use and Redistributing Project Gutenberg-tm electronic works 1.A. By reading or using any part of this Project Gutenberg-tm electronic work, you indicate that you have read, understand, agree to and accept all the terms of this license and intellectual property (trademark/copyright) agreement. If you do not agree to abide by all the terms of this agreement, you must cease using and return or destroy all copies of Project Gutenberg-tm electronic works in your possession. If you paid a fee for obtaining a copy of or access to a Project Gutenberg-tm electronic work and you do not agree to be bound by the terms of this agreement, you may obtain a refund from the person or entity to whom you paid the fee as set forth in paragraph 1.E.8. 1.B. "Project Gutenberg" is a registered trademark. It may only be used on or associated in any way with an electronic work by people who agree to be bound by the terms of this agreement. There are a few things that you can do with most Project Gutenberg-tm electronic works even without complying with the full terms of this agreement. See paragraph 1.C below. There are a lot of things you can do with Project Gutenberg-tm electronic works if you follow the terms of this agreement and help preserve free future access to Project Gutenberg-tm electronic works. See paragraph 1.E below. 1.C. 
The Project Gutenberg Literary Archive Foundation ("the Foundation" or PGLAF), owns a compilation copyright in the collection of Project Gutenberg-tm electronic works. Nearly all the individual works in the collection are in the public domain in the United States. If an individual work is in the public domain in the United States and you are located in the United States, we do not claim a right to prevent you from copying, distributing, performing, displaying or creating derivative works based on the work as long as all references to Project Gutenberg are removed. Of course, we hope that you will support the Project Gutenberg-tm mission of promoting free access to electronic works by freely sharing Project Gutenberg-tm works in compliance with the terms of this agreement for keeping the Project Gutenberg-tm name associated with the work. You can easily comply with the terms of this agreement by keeping this work in the same format with its attached full Project Gutenberg-tm License when you share it without charge with others. 1.D. The copyright laws of the place where you are located also govern what you can do with this work. Copyright laws in most countries are in a constant state of change. If you are outside the United States, check the laws of your country in addition to the terms of this agreement before downloading, copying, displaying, performing, distributing or creating derivative works based on this work or any other Project Gutenberg-tm work. The Foundation makes no representations concerning the copyright status of any work in any country outside the United States. 1.E. Unless you have removed all references to Project Gutenberg: 1.E.1. 
The following sentence, with active links to, or other immediate access to, the full Project Gutenberg-tm License must appear prominently whenever any copy of a Project Gutenberg-tm work (any work on which the phrase "Project Gutenberg" appears, or with which the phrase "Project Gutenberg" is associated) is accessed, displayed, performed, viewed, copied or distributed: This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org 1.E.2. If an individual Project Gutenberg-tm electronic work is derived from the public domain (does not contain a notice indicating that it is posted with permission of the copyright holder), the work can be copied and distributed to anyone in the United States without paying any fees or charges. If you are redistributing or providing access to a work with the phrase "Project Gutenberg" associated with or appearing on the work, you must comply either with the requirements of paragraphs 1.E.1 through 1.E.7 or obtain permission for the use of the work and the Project Gutenberg-tm trademark as set forth in paragraphs 1.E.8 or 1.E.9. 1.E.3. If an individual Project Gutenberg-tm electronic work is posted with the permission of the copyright holder, your use and distribution must comply with both paragraphs 1.E.1 through 1.E.7 and any additional terms imposed by the copyright holder. Additional terms will be linked to the Project Gutenberg-tm License for all works posted with the permission of the copyright holder found at the beginning of this work. 1.E.4. Do not unlink or detach or remove the full Project Gutenberg-tm License terms from this work, or any files containing a part of this work or any other work associated with Project Gutenberg-tm. 1.E.5. 
Do not copy, display, perform, distribute or redistribute this electronic work, or any part of this electronic work, without prominently displaying the sentence set forth in paragraph 1.E.1 with active links or immediate access to the full terms of the Project Gutenberg-tm License. 1.E.6. You may convert to and distribute this work in any binary, compressed, marked up, nonproprietary or proprietary form, including any word processing or hypertext form. However, if you provide access to or distribute copies of a Project Gutenberg-tm work in a format other than "Plain Vanilla ASCII" or other format used in the official version posted on the official Project Gutenberg-tm web site (www.gutenberg.org), you must, at no additional cost, fee or expense to the user, provide a copy, a means of exporting a copy, or a means of obtaining a copy upon request, of the work in its original "Plain Vanilla ASCII" or other form. Any alternate format must include the full Project Gutenberg-tm License as specified in paragraph 1.E.1. 1.E.7. Do not charge a fee for access to, viewing, displaying, performing, copying or distributing any Project Gutenberg-tm works unless you comply with paragraph 1.E.8 or 1.E.9. 1.E.8. You may charge a reasonable fee for copies of or providing access to or distributing Project Gutenberg-tm electronic works provided that - You pay a royalty fee of 20% of the gross profits you derive from the use of Project Gutenberg-tm works calculated using the method you already use to calculate your applicable taxes. The fee is owed to the owner of the Project Gutenberg-tm trademark, but he has agreed to donate royalties under this paragraph to the Project Gutenberg Literary Archive Foundation. Royalty payments must be paid within 60 days following each date on which you prepare (or are legally required to prepare) your periodic tax returns. 
Royalty payments should be clearly marked as such and sent to the Project Gutenberg Literary Archive Foundation at the address specified in Section 4, "Information about donations to the Project Gutenberg Literary Archive Foundation." - You provide a full refund of any money paid by a user who notifies you in writing (or by e-mail) within 30 days of receipt that s/he does not agree to the terms of the full Project Gutenberg-tm License. You must require such a user to return or destroy all copies of the works possessed in a physical medium and discontinue all use of and all access to other copies of Project Gutenberg-tm works. - You provide, in accordance with paragraph 1.F.3, a full refund of any money paid for a work or a replacement copy, if a defect in the electronic work is discovered and reported to you within 90 days of receipt of the work. - You comply with all other terms of this agreement for free distribution of Project Gutenberg-tm works. 1.E.9. If you wish to charge a fee or distribute a Project Gutenberg-tm electronic work or group of works on different terms than are set forth in this agreement, you must obtain permission in writing from both the Project Gutenberg Literary Archive Foundation and Michael Hart, the owner of the Project Gutenberg-tm trademark. Contact the Foundation as set forth in Section 3 below. 1.F. 1.F.1. Project Gutenberg volunteers and employees expend considerable effort to identify, do copyright research on, transcribe and proofread public domain works in creating the Project Gutenberg-tm collection. Despite these efforts, Project Gutenberg-tm electronic works, and the medium on which they may be stored, may contain "Defects," such as, but not limited to, incomplete, inaccurate or corrupt data, transcription errors, a copyright or other intellectual property infringement, a defective or damaged disk or other medium, a computer virus, or computer codes that damage or cannot be read by your equipment. 1.F.2. 
LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for the "Right of Replacement or Refund" described in paragraph 1.F.3, the Project Gutenberg Literary Archive Foundation, the owner of the Project Gutenberg-tm trademark, and any other party distributing a Project Gutenberg-tm electronic work under this agreement, disclaim all liability to you for damages, costs and expenses, including legal fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH F3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE. 1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you discover a defect in this electronic work within 90 days of receiving it, you can receive a refund of the money (if any) you paid for it by sending a written explanation to the person you received the work from. If you received the work on a physical medium, you must return the medium with your written explanation. The person or entity that provided you with the defective work may elect to provide a replacement copy in lieu of a refund. If you received the work electronically, the person or entity providing it to you may choose to give you a second opportunity to receive the work electronically in lieu of a refund. If the second copy is also defective, you may demand a refund in writing without further opportunities to fix the problem. 1.F.4. Except for the limited right of replacement or refund set forth in paragraph 1.F.3, this work is provided to you 'AS-IS', WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTIBILITY OR FITNESS FOR ANY PURPOSE. 1.F.5. 
Some states do not allow disclaimers of certain implied warranties or the exclusion or limitation of certain types of damages. If any disclaimer or limitation set forth in this agreement violates the law of the state applicable to this agreement, the agreement shall be interpreted to make the maximum disclaimer or limitation permitted by the applicable state law. The invalidity or unenforceability of any provision of this agreement shall not void the remaining provisions. 1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation, the trademark owner, any agent or employee of the Foundation, anyone providing copies of Project Gutenberg-tm electronic works in accordance with this agreement, and any volunteers associated with the production, promotion and distribution of Project Gutenberg-tm electronic works, harmless from all liability, costs and expenses, including legal fees, that arise directly or indirectly from any of the following which you do or cause to occur: (a) distribution of this or any Project Gutenberg-tm work, (b) alteration, modification, or additions or deletions to any Project Gutenberg-tm work, and (c) any Defect you cause. Section 2. Information about the Mission of Project Gutenberg-tm Project Gutenberg-tm is synonymous with the free distribution of electronic works in formats readable by the widest variety of computers including obsolete, old, middle-aged and new computers. It exists because of the efforts of hundreds of volunteers and donations from people in all walks of life. Volunteers and financial support to provide volunteers with the assistance they need, is critical to reaching Project Gutenberg-tm's goals and ensuring that the Project Gutenberg-tm collection will remain freely available for generations to come. In 2001, the Project Gutenberg Literary Archive Foundation was created to provide a secure and permanent future for Project Gutenberg-tm and future generations. 
To learn more about the Project Gutenberg Literary Archive Foundation and how your efforts and donations can help, see Sections 3 and 4 and the Foundation web page at https://www.pglaf.org. Section 3. Information about the Project Gutenberg Literary Archive Foundation The Project Gutenberg Literary Archive Foundation is a non profit 501(c)(3) educational corporation organized under the laws of the state of Mississippi and granted tax exempt status by the Internal Revenue Service. The Foundation's EIN or federal tax identification number is 64-6221541. Its 501(c)(3) letter is posted at https://pglaf.org/fundraising. Contributions to the Project Gutenberg Literary Archive Foundation are tax deductible to the full extent permitted by U.S. federal laws and your state's laws. The Foundation's principal office is located at 4557 Melan Dr. S. Fairbanks, AK, 99712., but its volunteers and employees are scattered throughout numerous locations. Its business office is located at 809 North 1500 West, Salt Lake City, UT 84116, (801) 596-1887, email business@pglaf.org. Email contact links and up to date contact information can be found at the Foundation's web site and official page at https://pglaf.org For additional contact information: Dr. Gregory B. Newby Chief Executive and Director gbnewby@pglaf.org Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation Project Gutenberg-tm depends upon and cannot survive without wide spread public support and donations to carry out its mission of increasing the number of public domain and licensed works that can be freely distributed in machine readable form accessible by the widest array of equipment including outdated equipment. Many small donations ($1 to $5,000) are particularly important to maintaining tax exempt status with the IRS. The Foundation is committed to complying with the laws regulating charities and charitable donations in all 50 states of the United States. 
Compliance requirements are not uniform and it takes a considerable effort, much paperwork and many fees to meet and keep up with these requirements. We do not solicit donations in locations where we have not received written confirmation of compliance. To SEND DONATIONS or determine the status of compliance for any particular state visit https://pglaf.org While we cannot and do not solicit contributions from states where we have not met the solicitation requirements, we know of no prohibition against accepting unsolicited donations from donors in such states who approach us with offers to donate. International donations are gratefully accepted, but we cannot make any statements concerning tax treatment of donations received from outside the United States. U.S. laws alone swamp our small staff. Please check the Project Gutenberg Web pages for current donation methods and addresses. Donations are accepted in a number of other ways including checks, online payments and credit card donations. To donate, please visit: https://pglaf.org/donate Section 5. General Information About Project Gutenberg-tm electronic works. Professor Michael S. Hart was the originator of the Project Gutenberg-tm concept of a library of electronic works that could be freely shared with anyone. For thirty years, he produced and distributed Project Gutenberg-tm eBooks with only a loose network of volunteer support. Project Gutenberg-tm eBooks are often created from several printed editions, all of which are confirmed as Public Domain in the U.S. unless a copyright notice is included. Thus, we do not necessarily keep eBooks in compliance with any particular paper edition. 
Most people start at our Web site which has the main PG search facility: https://www.gutenberg.org This Web site includes information about Project Gutenberg-tm, including how to make donations to the Project Gutenberg Literary Archive Foundation, how to help produce our new eBooks, and how to subscribe to our email newsletter to hear about new eBooks. *** END: FULL LICENSE *** \end{verbatim} \end{document}