

I have witnessed many discussions about numbers raised to the power of zero, but I have never really been sold on any of the claims or explanations. This is a three-part question; the parts are as follows...


  1. Why does $n^{0}=1$ when $n\neq 0$? How does that get defined?

  2. What is $0^{0}$? Is it undefined? If so, why does it not equal $1$?

  3. What is the equation that defines exponents? I can easily write a small program to do it (see below), but what about in equation format?


I just want a little discussion about numbers to the power of zero, for some clarification.


Code for exponents (Ruby):

    def find_exp(x, n)
      total = 1
      n.times { total *= x }
      total
    end

10 Answers


It's basically just a matter of what you define the notation to mean. You can define things to mean whatever you want -- except that if you choose a definition that leads to different results than everyone else's definitions give, then you're responsible for any confusion brought about by your using a familiar notation to mean something nonstandard.

Most commonly we define $x^0$ to mean $1$ for any $x$. What you find in discussions elsewhere are arguments that this is a useful definition, not arguments that it is correct. (Definitions are correct because we choose them, not for any other reason. That's why they are definitions.)

Some people choose (for certain purposes) to explicitly refrain from defining $0^0$ to mean anything. That choice is (supposedly) useful because then the map $x,y\mapsto x^y$ is continuous in the entire subset of $\mathbb R\times\mathbb R$ it is defined on. But it's an equally valid choice to define $0^0$ to mean $1$ and then just remember that $x,y\mapsto x^y$ is not continuous at $(0,0)$.
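For what it's worth, most programming environments make the same pragmatic choice. A quick check in Ruby (the language of the question's pseudo-code):

```ruby
# Ruby's ** operator, like IEEE-style pow, adopts the convention 0^0 = 1,
# for integers and floats alike -- you just have to remember that
# x**y is not continuous at (0, 0).
puts 0 ** 0      # => 1
puts 0.0 ** 0.0  # => 1.0
puts 2 ** 0      # => 1
```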

---

The invention of numbers was one of the biggest breakthroughs in the history of math. It marked the realization that this sack of pebbles $$\{ \blacktriangle\;\blacktriangle\;\blacktriangle\;\blacktriangle\;\blacktriangle \}$$ this string of knots $$-\bullet-\bullet-\bullet-\bullet-\bullet-$$ and this bone full of tally marks $$/\,/\,/\,/\,/$$ are all incarnations of a single thing, the abstract quantity five. That leap of abstraction has become so prosaic for us that it almost feels weird to do arithmetic by actually counting things. In some cases, though, it can be illuminating to go back to the basics—back to the days when we didn't have numbers, and we did all our arithmetic by counting things. Your question is one of those cases.

In what follows, I'll use a capital letter like $X$ to stand for a finite set of things, like a herd of goats or a pile of beads, and I'll use the symbol $|X|$ to stand for the number of things in the set.


Exponentiation is a tricky operation, as you've clearly noticed, so let's warm up with something simpler. If you have two piles of beads, $A$ and $B$, the simplest thing you can do with them is shove them together to make a bigger pile, which is often written $A \sqcup B$. You should easily be able to convince yourself that, on the level of numbers, $|A \sqcup B| = |A| + |B|$. In other words, the concrete operation of shoving two piles together corresponds to the abstract operation of adding two numbers. Addition of whole numbers is often defined in this way.


Here's a slightly tougher warm-up. If you have a bunch of shirts, $H$, and a bunch of skirts, $K$, you might wonder how many different outfits you can make by pairing a shirt with a skirt. The set of outfits is usually written $H \times K$. You should be able to convince yourself that $|H \times K| = |H| \cdot |K|$. In other words, the concrete operation of counting pairs corresponds to the abstract operation of multiplication. Multiplication of whole numbers is often defined in this way.


Now that we're warmed up, suppose you have a set of paints, $C$, and a bag of beads, $X$. You might wonder how many different ways there are to color each bead with one of the paints. The set of ways to color the beads is usually written $C^X$. If you try a few examples, you'll see that $|C^X| = |C|^{|X|}$. Exponentiation of whole numbers is often defined this way.

Finally, we can get to your question. Suppose you have a bunch of paints, but the bag of beads is empty. Is it possible for you to paint all the beads? Sure: you just don't do anything! In fact, not doing anything is the only way to paint all the beads in the bag, since there are no beads. So, when the set $C$ has a bunch of paints, but the bag $X$ is empty, $|C^X| = 1$. If you define exponentiation by counting colorings, that means $|C|^0 = 1$ for any positive number $|C|$.

To make matters worse, suppose you have no paints and no beads. Happily, you can still paint all the beads: once again, you just don't do anything. Like before, not doing anything is the only way to paint all the beads, so $|C^X| = 1$ even when both $C$ and $X$ are empty. If you define exponentiation by counting colorings, that means $0^0 = 1$.

On the other hand, suppose you don't have any paints, but you do have some beads. In this case, you can't paint all the beads, because you have no paints! There are just no ways to paint all the beads. In other words, when $C$ is empty but $X$ is not, $|C^X| = 0$. If you define exponentiation by counting colorings, that means $0^{|X|} = 0$ for any positive number $|X|$, just like you'd expect.
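The counting argument above can be checked mechanically. Here is a small Ruby sketch (the helper name `count_colorings` is mine): a coloring is a function from beads to paints, i.e. a length-$|X|$ sequence of paints, so `Array#repeated_permutation` enumerates exactly the set $C^X$.

```ruby
# |C^X|: the number of ways to color each bead in `beads` with a paint
# from `paints`. Each coloring is a length-|beads| sequence of paints.
def count_colorings(paints, beads)
  paints.repeated_permutation(beads.size).count
end

count_colorings([:red, :blue], [1, 2, 3])  # => 8   (2^3)
count_colorings([:red, :blue], [])         # => 1   (2^0: do nothing)
count_colorings([], [])                    # => 1   (0^0: still, do nothing)
count_colorings([], [1, 2])                # => 0   (0^2: no way to paint)
```

The empty sequence is the lone "do nothing" coloring, which is exactly why the zero-exponent cases come out as $1$.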


Here's a bonus. André Nicolas argued that $0^0$ should be $1$ in order to make the binomial theorem true. Even those weird-looking numbers $\binom{n}{k}$ can be defined using finite sets: if you have a set of toys $N$ and a set of kids $K$, $\binom{N}{K}$ is the set of ways you can pick out enough toys to have one for each kid. (Note that you don't give each toy to a particular kid: you just want the numbers of kids and toys to be the same.) If you get out your set of paints $C$ and another set of paints $D$ and start painting various numbers of kids and handing out toys based on how many colors of kids there are, you should somehow be able to convince yourself that the binomial theorem is true, even when $C$ doesn't have any paints in it. That's why André Nicolas came up with the same rules for zeroth powers as we just did.

---

It is for various reasons convenient to define $0^0$ as being equal to $1$. For one thing, consider the Binomial Theorem, or power series. It is useful to be able to write $$(1+x)^n =\sum_{k=0}^n \binom{n}{k}x^k,$$ or $$e^x=\sum_{k=0}^\infty \frac{x^k}{k!}.$$ In each of these equations, if we want the expression on the right to give the correct answer when $x=0$, we need to set $0^0=1$.
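The same point can be made numerically. A minimal Ruby sketch (the helper name `exp_series` and the truncation at 20 terms are mine):

```ruby
# Truncated power series for e^x; the k = 0 term is x**0 / 0!.
# Evaluating at x = 0 gives the correct value 1.0 only because
# Ruby, like most environments, evaluates 0.0**0 as 1.0.
def exp_series(x, terms = 20)
  (0...terms).sum { |k| x ** k / (1..k).inject(1.0, :*) }
end

exp_series(0.0)  # => 1.0
exp_series(1.0)  # ~ 2.718281828 (e)
```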

---

To 1): We define the exponents of a nonzero integer $a$ such that they satisfy the relation $a^ba^c=a^{b+c}$ for any integers $b,c$, with $a^1=a$. In order for exponents to be well defined, we thus need $a^0=1$.

To 2): It depends on how you define it. If you define it via the limits $\lim_{x\rightarrow 0} x^0$ or $\lim_{x\rightarrow 0^+} x^x$, then $0^0=1$. If you define it as $\lim_{x\rightarrow 0^+} 0^x$, then $0^0=0$.

To 3): Exponents are defined simply by $a^n=\underbrace{a\cdot a\cdot \,...\, \cdot a}_{n}$.
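In equation format, that underbrace is the recursion $a^0 = 1$, $a^{n+1} = a^n \cdot a$, which a short Ruby sketch makes explicit (function name mine):

```ruby
# Natural-number powers by recursion: the base case n = 0 returns 1,
# which is precisely what makes a^(b+c) = a^b * a^c hold.
def power(a, n)
  n.zero? ? 1 : power(a, n - 1) * a
end

power(2, 10)  # => 1024
power(7, 0)   # => 1
```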

---

From definition of division of powers with the same base we have that $$\frac{a^n}{a^m}=a^{n-m}$$ Assuming that $n=m$ from left side we get $$\frac{a^n}{a^n}=1$$ and from right side we get $$\frac{a^n}{a^n}=a^{n-n}=a^0$$ comparing the last two equations we have that $$a^0=1$$

---

To define $x^0$, we cannot simply use the definition of repeated factors in multiplication; you have to understand how the laws of exponentiation work. We can define $x^0$ to be: $$x^0 = x^{n - n} = \frac{x^n}{x^n}.$$ Now, let us write $x^n = a$. The fraction then simplifies as $$\frac{x^n}{x^n} = \frac{a}{a} = 1.$$ So that's why $x^0 = 1$ for any nonzero number $x$.

Now, you were asking what $0^0$ means. Well, let us use the example above: $$0^0 = 0^{n - n} = \frac{0^n}{0^n} = \frac{0}{0}.$$ Here is where it gets confusing. It is tempting to say that $\frac{0}{0}$ equals either $0$ or $1$, but it turns out that $\frac{0}{0}$ can be assigned infinitely many values. Therefore, it is indeterminate. Because we mathematicians want to define it as some exact value, which is not possible when there are many candidate values, we just say that it is undefined.

NOTE: many authors nevertheless adopt the convention $0^0 = 1$, so that the rule $x^0 = 1$ can be applied uniformly for ANY value of $x$.

I hope this clarifies all your doubts.

---

(1) For intuition, if $k\ge0$ is an integer, take $x^k$ to mean "$1$ multiplied $k$ times by $x$", and $x^{-k}$ with $(x\neq 0)$ to mean "$1$ divided $k$ times by $x$." For integers $n\geq 0$, we may define $n!$ as the number of distinct ways to line up $n$ distinct objects--the only way to line up $0$ objects is to not line up any objects.

(2) We often define $0^0$ to be $1$, which accords with the intuitive definition above--if we multiply $1$ by $0$ not at all, then we still just have $1$. Now, sometimes we will not define $0^0$ at all, which I'll discuss further below.

(3) We can extend integer powers to rational powers as follows: We say $y=x^{\frac1m}$ for some integer $m>0$ if $x=y^m$. If $m$ is odd, there will be a unique solution $y$ to the equation $x=y^m$. If $m$ is even and $x<0$, there will be no real solution $y$; if $m$ is even and $x\ge0$, then there is at least one real solution $y$, and we will take $x^{\frac1m}$ to be the nonnegative solution. At that point, given integers $k,m$ with $m>0$ and $\frac k m$ in lowest terms, we define $x^{\frac k m}:=\left(x^{\frac1m}\right)^k$ for such $x$ as this is possible. Finally, for such $x$ that $x^{q}$ is defined for all rational $q,$ we can use continuity arguments to define $x^y$ for all real $y.$


In the manner described above, given real numbers $x$ and $y,$ we have defined a real number $x^y$ for all real $y$ when $x>0,$ for all nonnegative $y$ when $x=0,$ and for all rational $y$ with odd denominators when $x<0.$ Unfortunately, continuity arguments won't work to extend to any more $y$ when $x\le0,$ because the function behaves too erratically to extend continuously in such cases.

In fact, the erratic behavior of the function $f(x,y)=x^y$ means that $f$ isn't even continuous at the origin! For example, we can approach the origin along the line $y=x$ in the first quadrant (that is, when $x$ and $y$ are positive), and find that $x^y$ approaches $1,$ which is what we would expect. However, if we try to approach it along the positive $y$ axis (that is, when $x=0$ and $y>0$), then we find that $x^y$ approaches $0,$ which is not at all what we want! This means that not only is $f(x,y)$ discontinuous at the origin, but that there is no way that we can define $f(0,0)$ to make it continuous there!

Similarly, $f(x,y)$ is badly discontinuous when $x<0.$ For this reason, when trying to define a continuous real-valued exponential function, one cannot define $0^0$ at all, nor define $x^y$ when $x<0.$ This continuous function $g(x,y)=x^y$ is defined for all real $y$ when $x>0,$ defined for all positive $y$ when $x=0,$ and undefined otherwise. However, this doesn't alter the truth of $0^0=1,$ merely the domain of continuous definition.
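The two approach paths described above are easy to tabulate; a quick Ruby sketch:

```ruby
ts = [0.1, 0.01, 0.001, 0.0001]

# Along the line y = x (x, y > 0): x^y = t^t tends to 1
# (e.g. 0.0001 ** 0.0001 is roughly 0.99908).
along_diagonal = ts.map { |t| t ** t }

# Along the positive y-axis (x = 0, y > 0): x^y = 0^t is identically 0.
along_axis = ts.map { |t| 0.0 ** t }  # => [0.0, 0.0, 0.0, 0.0]
```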

---

[This answer was migrated from an answer in the just-deleted thread Simple way of explaining the empty product].

Below I explain in simple terms the (abstract) algebraic motivation behind the uniform extension of the power laws from positive powers to zero and negative powers. Most of the post is elementary, so if you encounter unfamiliar terms you can safely skip past them.

$a^{m+n} = a^m a^n,\ $ i.e. $p(m\!+\!n) = p(m)p(n)\,$ for $\,p(k) = a^k\in \Bbb R\backslash 0\,$ shows that powers $\,a^\Bbb {N_+}$ under multiplication have the same algebraic (semigroup) structure as the positive naturals $\Bbb N_+$ under addition. If we enlarge $\Bbb N_+$ to a monoid $\Bbb N$ or group $\Bbb Z$ by adjoining a neutral element $0$ along with additive inverses (negative integers) then there is a unique way of extending this structure preserving (hom) power map, namely:

$$p(n) = p(0+n) = p(0)p(n) \,\Rightarrow\, p(0) = 1,\ \ {\rm i.e.}\ \ a^{0}= 1$$

$$1 = p(0) = p(-n+n) = p(-n)p(n)\,\Rightarrow\, p(-n) = p(n)^{-1},\ \ {\rm i.e.}\ \ a^{-n} = (a^n)^{-1}$$

The fact that it proves very handy to use $0$ and negative integers when deducing facts about positive integers transfers to the isomorphic structure of powers $a^{\Bbb Z}.\,$ Because the power map on $\,\Bbb Z\,$ is an extension of that on positive powers we are guaranteed that proofs about positive powers remain true even if the proof uses negative or zero powers, just as for proofs about positive integers that use negative integers and zero.

For example using negative integers allows us to concisely state Bezout's lemma for the gcd, i.e. that $\,\gcd(m,n) = j m + k n\,$ for some integers $j,k$ (which may be negative). In particular if $S$ is a group of integers (i.e. closed under subtraction) and it contains two coprime positives then it also contains their gcd $= 1$. When translated to the isomorphic power form this says that if a set of powers is closed under division and it contains powers $a^m, a^n$ for coprime $m,n$ then it contains $a^1 = a,\,$ e.g. see here where $a^m, a^n$ are integer matrices with determinant $= 1$. However, if we are restricted to positive powers and cancellation (vs. division) then analogous proofs may become much more cumbersome, and the key algebraic structure may become highly obfuscated by the manipulations needed to keep all powers positive, e.g. see here on proving $\,a^m = b^m,\ a^n = b^n\,\Rightarrow\, a = b\,$ for integers $a,b$. Such enlargements to richer structures with $0$ and inverses allow us to work with objects in simpler forms that better highlight the fundamental algebraic structure (here cyclic groups or principal ideals).

This structure preservation principle is a key property that is employed when enlarging algebraic structures such as groups and rings. If the extended structure preserves the laws (axioms) of the base structure then everything we deduce about the base structure using the extended structure remains valid in the base structure. For example, to solve for integer or rational roots of quadratics and cubics we can employ well-known formulas. Even though these formulas may employ complex numbers to solve for integer, rational or real roots, those results are valid in these base number systems because the proofs only employed (ring) axioms that remain valid in the base structures, e.g. the associative, commutative, and distributive laws. These single uniform formulas greatly simplify ancient methods where the quadratic and cubic formulas bifurcated into motley special cases to avoid untrusted "imaginary" or "negative" numbers, as well as (ab)surds (nowadays, using e.g. set-theoretic foundations, we know rigorous methods to construct such extended number systems in a way that proves they remain as consistent as the base number system). Further, as above, instead of appealing to ancient heuristics like the Hankel or Peacock Permanence Principle, we can use the axiomatic method to specify precisely what algebraic structure is preserved in extensions (e.g. the (semi)group structure underlying the power laws).

---

Another approach...

It can be shown that there exist infinitely many of what I call "exponent-like functions" on the set of natural numbers $N$. By an exponent-like function $f$ on $N$, I mean $f$ such that:

  1. $f: N\times N\to N$

  2. $f(x,0)=1$ for $x\ne 0$

  3. $f(x,y+1)=f(x,y)\cdot x$

For all $x_0\in N$, there exists a unique exponent-like function $f$ such that $f(0,0)=x_0$.

It can be shown that, except for the value of $f(0,0)$, all exponent-like functions, as defined here, are identical.

From each such exponent-like function, we can derive the usual Laws of Exponents for non-zero bases corresponding to:

  1. $x^{y+z}=x^y\cdot x^z$

  2. $(x^y)^z= x^{y\cdot z}$

  3. $(x\cdot y)^z=x^z\cdot y^z$

If we define $0^0=1$, then these Laws of Exponents are true for all bases, including $0$. It might then be argued that we must have $0^0=1$. But the same is true for $0^0=0$ (but no other values), and we are no further ahead! If we are to look at exponentiation on $N$ as simply repeated multiplication, then $0^0$ is inherently ambiguous. We can formally define exponentiation on $N$ as follows:

  1. $\forall x,y\in N: x^y\in N$
  2. $\forall x\in N: (x\ne 0 \implies x^0 = 1)$
  3. $\forall x,y\in N: x^{y+1}=x^y\cdot x$

Here, $0^0$ is a natural number, but no value has been assigned to it.
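That ambiguity can be exhibited concretely. A Ruby sketch (the names `make_exp` and `law_holds?` are mine) builds an exponent-like function with a chosen value for $f(0,0)$ and tests the law $x^{y+z}=x^y\cdot x^z$ over small naturals:

```ruby
# An exponent-like function on the naturals, defined by the recursion
# f(x, y+1) = f(x, y) * x with f(x, 0) = 1 for x != 0, and with the
# value of f(0, 0) left as a free parameter x0.
def make_exp(x0)
  f = nil
  f = lambda do |x, y|
    if y.zero?
      x.zero? ? x0 : 1        # rule 2, plus the chosen value at (0, 0)
    else
      f.call(x, y - 1) * x    # rule 3
    end
  end
  f
end

# Does x^(y+z) = x^y * x^z hold for all small naturals?
def law_holds?(f)
  (0..3).all? do |x|
    (0..3).all? do |y|
      (0..3).all? { |z| f.call(x, y + z) == f.call(x, y) * f.call(x, z) }
    end
  end
end
```

Both `make_exp(1)` and `make_exp(0)` pass the check, while any other choice (say `make_exp(2)`) already fails at $x=y=z=0$, matching the claim that $0^0=1$ and $0^0=0$ are the only consistent assignments.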

See formal proofs, etc. in "Oh, the ambiguity!" at my math blog.

---

I've always been told that the definition of a power is: $a^{b+c}=a^b\times{a^c}$.

My reasoning works as follows:

  • Set $b=1$ and $c=0$ (so $b+c = 1+0 = 1$).
  • This gives $a^1=a^1\times a^0$.
  • There are only two ways a number can stay unchanged after being multiplied by another number:
  1. The 'other number' is $1$, which would mean that $a^0=1$.
  2. The 'other number' is $0$ and the first number is also $0$, which would mean that $a^1=0$ and $a^0=0$.
  • We now have two valid options left, which means we must expand the definition:
  • In the second case every power would be $0$, because $a^x=a^{1+1+1+\cdots}$ with $x$ ones. This makes the whole idea of a power rather 'useless', so we can discard this option.
  • For the first option, only adding $a^1=a$ makes sense, because it is the only way to rule out the second option without breaking the original definition.

So the full definition of a power now becomes $a^{b+c}=a^b\times{a^c}$ together with $a^1=a$.

This would also lead to $a^0=a^{1-1}=\frac{a^1}{a^1}=1$ (for $a\neq 0$).

