Alex's Notes

Arithmetic Operations

If we want to represent an n-bit integer (e.g. n = 2048) on a 64-bit machine, we divide the integer into 32-bit blocks ("limbs"), giving n/32 blocks. Using 32-bit blocks means the product of two blocks still fits in a single 64-bit machine word.
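A minimal sketch of this block representation in Python (the function names are my own; real big-integer libraries like GMP do this in C with fixed-size arrays):

```python
def to_limbs(x, bits=32):
    """Split a non-negative integer into 32-bit blocks, least significant first."""
    mask = (1 << bits) - 1
    limbs = []
    while x:
        limbs.append(x & mask)   # take the low 32 bits as one block
        x >>= bits               # shift the remaining bits down
    return limbs

def from_limbs(limbs, bits=32):
    """Reassemble the integer from its 32-bit blocks."""
    x = 0
    for limb in reversed(limbs):
        x = (x << bits) | limb
    return x
```

A 2048-bit integer comes out as 2048/32 = 64 limbs, as the note says.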

Operation time

Given: two n-bit integers

  • Addition/subtraction takes \(O(n)\), linear time.

  • Multiplication naively takes quadratic time (this is the high-school algorithm). Karatsuba improved this to \(O(n^{1.585})\). There are asymptotically better algorithms, but they only pay off for very large numbers.

  • Division with remainder is also quadratic.
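A sketch of Karatsuba's trick on Python ints (splitting each number at half its bit length is my choice of split point): three recursive multiplications instead of four is what gives the \(O(n^{1.585}) = O(n^{\log_2 3})\) bound.

```python
def karatsuba(a, b):
    """Multiply non-negative integers a and b via Karatsuba's algorithm."""
    if a < 1024 or b < 1024:
        return a * b                     # base case: built-in multiply is fine for small inputs
    m = max(a.bit_length(), b.bit_length()) // 2
    a_hi, a_lo = a >> m, a & ((1 << m) - 1)   # a = a_hi * 2^m + a_lo
    b_hi, b_lo = b >> m, b & ((1 << m) - 1)   # b = b_hi * 2^m + b_lo
    hi = karatsuba(a_hi, b_hi)
    lo = karatsuba(a_lo, b_lo)
    # One extra multiply recovers both cross terms: (a_hi + a_lo)(b_hi + b_lo) - hi - lo
    mid = karatsuba(a_hi + a_lo, b_hi + b_lo) - hi - lo
    return (hi << (2 * m)) + (mid << m) + lo
```

The key identity: \(a_{hi} b_{lo} + a_{lo} b_{hi} = (a_{hi}+a_{lo})(b_{hi}+b_{lo}) - a_{hi}b_{hi} - a_{lo}b_{lo}\), so the two cross products cost one multiplication, not two.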

Exponentiation in a finite cyclic group can be done more efficiently than repeated multiplication using repeated squaring.

Let’s say we want to compute \(g^x\) where \(x=53\).

Note that in binary, \(53 = 110101_2 = 32 + 16 + 4 + 1\).

Then \(g^{53} = g^{32+16+4+1} = g^{32} \cdot g^{16} \cdot g^4 \cdot g^1\)

So what we can do is repeatedly square g to get the values \(g^1, g^2, g^4, g^8, g^{16}, g^{32}\).

Then we multiply together just the values we need to get the final result. Much faster: \(O(\log x)\) group operations instead of the \(x - 1\) multiplications of the naive approach.
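Here is the \(g^{53}\) example worked concretely. The group \(\mathbb{Z}_p^*\) with p = 1000003 and g = 5 is just an illustrative choice, not from the note:

```python
p, g = 1000003, 5

# Build the repeated squares g^1, g^2, g^4, g^8, g^16, g^32 (mod p)
squares = [g % p]
for _ in range(5):
    squares.append(squares[-1] ** 2 % p)

# 53 = 32 + 16 + 4 + 1, so pick the squares at positions 0, 2, 4, 5
# (i.e. g^1, g^4, g^16, g^32) and multiply them together.
result = squares[0] * squares[2] * squares[4] * squares[5] % p
```

Ten multiplications total (five squarings plus the final products) instead of 52.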

The formal algorithm is:


Input: \(g \in G\) and \(x \gt 0\). Output: \(g^x\)

write \(x = (x_n, x_{n-1}, \dots , x_2, x_1, x_0)_2\)

(ie binary representation of x)

\(y \leftarrow g, z \leftarrow 1\)

for i = 0 to n do:

    if \(x_i = 1\): \(z \leftarrow z \cdot y\)

    \(y \leftarrow y^2\)

output z
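The algorithm above translates directly to Python. Multiplication mod m stands in for the group operation here (my assumption for concreteness; the pseudocode works in any group):

```python
def fast_exp(g, x, m):
    """Compute g**x mod m by scanning the bits of x, least significant first."""
    y, z = g % m, 1
    while x > 0:
        if x & 1:             # current bit x_i is 1: fold g^(2^i) into the result
            z = z * y % m
        y = y * y % m         # invariant: y holds g^(2^i) at step i
        x >>= 1
    return z
```

Each loop iteration consumes one bit of x, so the loop runs \(\lfloor \log_2 x \rfloor + 1\) times with at most two multiplications per iteration.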

