Limits, Infinitesimals, and Differentials

Wes Hansen
7 min read · Sep 7, 2023


Working through Lee Vaughan’s book Python Tools for Scientists, I learned that Python, as an open-ended project of the creative commons, has Python Enhancement Proposals (PEPs) available for guidance. These proposals attempt to establish, via community buy-in, best practices for collectively evolving the Python language and its associated milieu. If you give it the tiniest bit of thought, especially from the perspective of Model Theory, Mathematics too is an open-ended project of the creative commons, focused on an evolving body of formal languages and their associated milieu, and one wonders whether Mathematics wouldn’t also benefit from the occasional Mathematics Enhancement Proposal (MEP). I hate to keep beating what may in fact be a dead horse, but the situation regarding infinitesimals and differentials in calculus and analysis borders on the nonsensical, and it need not be that way.

Now Michael Taylor’s An Introduction to Geometric Algebra and Geometric Calculus is a joy to work with; it makes learning math fun and enjoyable. I really appreciate his approach to Geometric Algebra/Calculus which, I believe, is largely informed by his focus on doing calculus on manifolds. He assumes one is familiar with linear algebra (matrix algebra and inner product spaces in particular) and with vector calculus, so he begins by introducing spaces of simple k-vectors. Simple k-vectors are introduced as equivalence classes of parallelepipeds in R^n, where two parallelepipeds belong to the same equivalence class if they have the same volume, the same orientation, and lie in the same k-dimensional subspace. So, for example, e_1 ∧ e_2 ~ 2e_1 ∧ (1/2)e_2 but ¬ (e_1 ∧ e_2 ~ e_1 ∧ e_3). He relies entirely on det(A^TA) and det(B^TA), where A^T signifies the transpose of matrix A, to define volume and relative orientation, which is pretty slick; it’s also well-defined and, of course, consistent with Geometric Algebra.
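To make this concrete, here is a small numerical sketch of my reading of that criterion, not Taylor’s own code or notation: two full-rank parallelepipeds, given as the columns of matrices A and B, represent the same simple k-vector when det(B^TA) = det(A^TA) = det(B^TB). The function names and tolerance below are mine.

```python
import numpy as np

def gram_det(A, B):
    """det(B^T A); with B = A this is the squared k-volume det(A^T A)."""
    return np.linalg.det(B.T @ A)

def same_simple_k_vector(A, B, tol=1e-12):
    """Columns of A and B span parallelepipeds in R^n; they represent the same
    simple k-vector iff det(B^T A) = det(A^T A) = det(B^T B), i.e. same volume,
    same k-dimensional subspace, same orientation."""
    return (abs(gram_det(A, B) - gram_det(A, A)) < tol and
            abs(gram_det(A, B) - gram_det(B, B)) < tol)

e1, e2, e3 = np.eye(3)
A = np.column_stack([e1, e2])            # e_1 ^ e_2
B = np.column_stack([2 * e1, 0.5 * e2])  # 2e_1 ^ (1/2)e_2
C = np.column_stack([e1, e3])            # e_1 ^ e_3

print(same_simple_k_vector(A, B))        # True:  same plane, volume, orientation
print(same_simple_k_vector(A, C))        # False: different 2-dimensional subspace
```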

I’ll let interested parties consult his book, because the construction involves some interesting twists and turns (simple k-vectors are represented as functions f: S^k → R, where S^k is the class of all simple k-vectors), but he eventually develops the sets of simple k-vectors into vector spaces with generalized inner and outer products defined on them. This, in turn, leads to the Geometric Algebra as the sum of these k-vector spaces. In Geometric Algebra simple k-vectors are called k-blades, and k-blades with nonzero volume, i.e. k-blades composed of a set of linearly independent vectors, describe oriented subspaces. Not only can these blades be manipulated algebraically, but every linear transformation on R^n extends uniquely to a wedge-product-preserving outermorphism on the Geometric Algebra G^n; hence, since calculus on manifolds involves tangent spaces, there is great utility in describing these tangent spaces with a blade in some G^n.
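As a sketch of the outermorphism property (my own NumPy illustration, not Taylor’s construction): representing a 2-blade a ∧ b in R^3 by its components on the basis e_i ∧ e_j, the wedge-preserving extension of a linear map T acts on 2-vectors through the matrix of 2 × 2 minors of T, and then T(a) ∧ T(b) = T(a ∧ b).

```python
import itertools
import numpy as np

def wedge2(a, b):
    """Components of the 2-blade a ^ b on the basis e_i ^ e_j, i < j."""
    pairs = itertools.combinations(range(len(a)), 2)
    return np.array([a[i] * b[j] - a[j] * b[i] for i, j in pairs])

def outermorphism2(T):
    """Matrix of the wedge-preserving extension of T to 2-vectors:
    its entries are the 2 x 2 minors of T."""
    n = T.shape[0]
    pairs = list(itertools.combinations(range(n), 2))
    M = np.empty((len(pairs), len(pairs)))
    for r, (i, j) in enumerate(pairs):
        for c, (k, l) in enumerate(pairs):
            M[r, c] = T[i, k] * T[j, l] - T[i, l] * T[j, k]
    return M

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))
a, b = rng.standard_normal(3), rng.standard_normal(3)

# The extension applied to a ^ b agrees with T(a) ^ T(b).
print(np.allclose(outermorphism2(T) @ wedge2(a, b), wedge2(T @ a, T @ b)))  # True
```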

All things considered, Taylor’s construction is, to my mind, very elegant, and yet in its early stages, on page 28, we have this minor blemish (emphasis mine):

We see that v = x(t_0 + λb) − x(t_0) is a vector running from the point x_0 = x(t_0) to x(t_0 + λb), another point in x(U). (Figure 2.16) As λ → 0, the point x(t_0 + λb) approaches x_0 and v becomes ever more nearly tangent to x(U) at x_0. Unfortunately, v approaches 0; however, since we divide it by λ which also approaches 0, the limit can be both nonzero and a tangent vector to x(U).

Michael Taylor being Professor Emeritus of Mathematics at the University of Central Florida, this, to me, is a key indicator of how mathematicians think when doing calculus, on manifolds or otherwise, even though, in my opinion, the key, fundamental concept is not articulated very well here. And this, of course, forms the basis of my MEP.
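Here is a quick numerical illustration of the passage just quoted, assuming for concreteness the unit circle x(t) = (cos t, sin t) in R^2 as the “manifold” and t_0 = 0: the chord v shrinks to 0, but v/λ settles on the tangent vector (0, 1).

```python
import numpy as np

def x(t):
    """Unit circle as a curve in R^2."""
    return np.array([np.cos(t), np.sin(t)])

t0 = 0.0
for lam in [1.0, 0.1, 0.01, 0.001]:
    v = x(t0 + lam) - x(t0)    # chord vector: approaches 0 as lam -> 0
    print(lam, v, v / lam)     # v/lam approaches the tangent vector (0, 1)
```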

Consider the following from Alan Macdonald’s introductory textbook, Vector and Geometric Calculus, which I have, admittedly, referenced a few times during my foray with Taylor; it is from his section How to Think about an Integral, page 108 (emphases his):

We will use the language of infinitesimals. There are no infinitesimals among the real numbers. That is, there are no nonzero real numbers x such that |x| < 1/n for all positive integers n. Nevertheless, infinitesimals are a useful fiction when thinking about integrals, favored by most people when actually using integrals.

[d]x. This is the length of an infinitesimal part of [a, b]. For an integral over a curve, surface, or solid, dx is replaced with the arc length ds, area dA, or volume dV of an infinitesimal part of the curve, surface, or solid.
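As a quick illustration of this way of thinking (my own sketch, not Macdonald’s): summing f(x) dx over many small pieces of [a, b] recovers the integral as the pieces shrink.

```python
import numpy as np

# Riemann-sum sketch: "add up f(x) dx over the infinitesimal parts of [a, b]".
# Here f(x) = x**2 on [0, 1], whose exact integral is 1/3.
f = lambda x: x**2
a, b = 0.0, 1.0

for n in [10, 100, 1000, 10_000]:
    dx = (b - a) / n                   # length of one small piece of [a, b]
    x = a + dx * (np.arange(n) + 0.5)  # midpoint of each piece
    print(n, np.sum(f(x) * dx))        # approaches 1/3 as dx -> "infinitesimal"
```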

So, what we see here quite clearly is the informal idea that limits, infinitesimals, and differentials (based on my experience online, it is not well known that dx actually stands for “differential in x” and dy for “differential in y”) are all intimately related. This leads to the obvious FACT that the differential in multivariable (vector) calculus is a vector, a geometric object, and NOT a linear map or transformation. In my earlier note GAlgebra software I state that Michael Taylor treats the differential correctly, but he doesn’t; he defines it as a linear map, although his treatment is nowhere near as convoluted as Macdonald’s. I have already discussed Macdonald’s treatment in In Mathematics Words Matter; this appears to be the conventional treatment, and it is inconsistent with scalar calculus, a special case of vector calculus:

Definition 3.4 (Differential) Let f : U ⊆ R^n → R^m, U open. Fix x ∈ U. Suppose that there is a linear transformation (f_x)’, also R^n → R^m, such that

f(x + h) = f(x) + (f_x)’(h) + r(h) (3.1)

where

lim_(h→0) r(h)/|h| = 0. (3.2)

Then (f_x)’ is called the differential of f at x. We say that f is differentiable at x.
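Here is a small numerical check of this definition, under the assumption that f(x, y) = (xy, x + y²) and that (f_x)’ is the Jacobian matrix at the point (1, 2); the ratio r(h)/|h| visibly goes to 0.

```python
import numpy as np

def f(p):
    x, y = p
    return np.array([x * y, x + y**2])

x0 = np.array([1.0, 2.0])
J = np.array([[2.0, 1.0],    # Jacobian of f at (1, 2): rows are [y, x] and [1, 2y]
              [1.0, 4.0]])

direction = np.array([0.6, 0.8])        # fixed unit direction for h
for t in [1.0, 0.1, 0.01, 0.001]:
    h = t * direction
    r = f(x0 + h) - f(x0) - J @ h       # remainder r(h) from (3.1)
    print(t, np.linalg.norm(r) / np.linalg.norm(h))  # -> 0, as required by (3.2)
```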

Okay, now apply this to scalar calculus, scalar calculus being a special case of vector calculus. Writing dx for the increment h, we can define a linear transformation, R → R, by:

(f_x)’(h) = f′(x)h;

= f′(x)dx;

= dy.
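A quick numerical sanity check of this chain, assuming for concreteness f(x) = x², x = 3, and a small increment dx: the differential dy = f′(x)dx tracks the actual change in y up to the remainder.

```python
# Scalar sketch with f(x) = x**2 at x = 3, so f'(x) = 6.
fp = 6.0
for dx in [0.1, 0.01, 0.001]:
    dy = fp * dx                          # the differential: dy = f'(x) dx
    delta_y = (3 + dx)**2 - 3**2          # the actual change in y
    print(dx, dy, delta_y, delta_y - dy)  # difference is the remainder r(dx) -> 0
```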

And it’s really simple to show that:

lim_(h → 0) r(h)/h = lim_(h → 0) (f(x + h) − f(x) − f′(x)h)/h;

= lim_(h → 0) (f(x + h) − f(x))/h − lim_(h → 0) (f′(x)h)/h;

= f′(x) − f′(x);

= 0;

as it should.
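And the same toy function, f(x) = x² at x = 3, confirms the computation numerically: the ratio r(h)/h shrinks with h.

```python
# r(h)/h for f(x) = x**2 at x = 3: r(h) = f(x + h) - f(x) - f'(x) h = h**2.
f = lambda x: x**2
x, fp = 3.0, 6.0
for h in [0.1, 0.01, 0.001]:
    r = f(x + h) - f(x) - fp * h
    print(h, r / h)    # -> 0, as it should
```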

At least when working with “manifolds” embedded in R^n (Taylor expands the definition to include cells and simplexes), there exist multiple tangent lines (vectors) to said “manifold” at every point x_0 on it, and this is what informs the concept of a tangent space. The linear map which generally gets referred to as the “differential” GENERATES said tangent space and, hence, should be called the differential generator. (f_x)’ is NOT the differential; rather, (f_x)’(h) IS, the differential at the point x in the direction of h. THIS is why (f_x)’(h) = ∂_h f(x), i.e. the differential is equal to the directional derivative. At different places in his book, Macdonald calls both (f_x)’ AND (f_x)’(h) the differential. Clearly it has to be one or the other, and the second, a vector, is correct.
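To see the claim (f_x)’(h) = ∂_h f(x) numerically, here is a sketch using the same hypothetical f(x, y) = (xy, x + y²) from above: the Jacobian applied to h matches the limit of the difference quotients along h, and both are vectors.

```python
import numpy as np

def f(p):
    x, y = p
    return np.array([x * y, x + y**2])

x0 = np.array([1.0, 2.0])
J = np.array([[2.0, 1.0],   # Jacobian of f at (1, 2)
              [1.0, 4.0]])
h = np.array([0.6, 0.8])

print(J @ h)                                  # (f_x)'(h), a vector
for t in [0.1, 0.01, 0.001]:
    print(t, (f(x0 + t * h) - f(x0)) / t)     # -> the directional derivative d_h f(x0)
```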

It is my belief that this confusion comes about because we never see a formal treatment of this relation, an intimate relation indeed, between limits, infinitesimals, and differentials, even though Ross Middlemiss provided just such a treatment circa 1940! So, my MEP is the simple suggestion that the mathematics community adopt the Middlemiss formalism, at least when training up-and-coming mathematicians, scientists, and engineers. All you have to lose is a good bit of ambiguity. Although, when I first read Middlemiss, my thoughts went immediately to statistics and I said to myself, “Well now, Professor Middlemiss . . . “
