30

Is there an efficient method to determine whether a very large number (300-600 digits) is a perfect square, without finding its square root? I tried computing the square root, but even for perfect squares it wasn't efficient (I used Newton's approximation method, coded in Java). I read somewhere about using modular arithmetic, but I'm not quite sure how to do that. Can anyone give me a hint on how to proceed?

Thanks in advance.

  • 0
    Does [this](http://burningmath.blogspot.ch/2013/09/how-to-check-if-number-is-perfect-square.html) not discuss the criteria? I'm confused. – 2017-04-20

10 Answers

21

Faster than binary search is to use an integer version of Newton's method: for $\sqrt{A}$ and an initial guess $x_0$ of the right order of magnitude, let $x_{n+1} = \left \lfloor \frac{x_n^2 + A}{2 x_n} \right \rfloor$. I tried a 1200-digit number for $A$, with $x_0$ a 600-digit number, and $x_{10}$ was already correct. In Maple 15 on a not-very-fast computer, the CPU time was given as 0.
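A minimal Java sketch of this integer Newton iteration using `BigInteger` (Java since the question mentions it; the method name and the stopping rule are my choices, not part of the answer above). The initial guess $2^{\lceil b/2 \rceil}$, where $b$ is the bit length of $A$, is of the right order of magnitude:

```java
import java.math.BigInteger;

// Integer Newton: x <- floor((x^2 + A) / (2x)), starting from a guess >= sqrt(A).
static BigInteger isqrtNewton(BigInteger A) {
    if (A.signum() == 0) return A;
    // 2^ceil(b/2) >= sqrt(A), where b = bit length of A.
    BigInteger x = BigInteger.ONE.shiftLeft((A.bitLength() + 1) / 2);
    while (true) {
        BigInteger next = x.multiply(x).add(A).divide(x.shiftLeft(1));
        if (next.compareTo(x) >= 0) return x; // stopped decreasing: x = floor(sqrt(A))
        x = next;
    }
}
```

Then $A$ is a perfect square exactly when `isqrtNewton(A).pow(2).equals(A)`.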

  • 0
    On the problem of cycling between $+1$ and $-1$: using an iterated floor-Newton step of degree $4$, that is $x_{k+1}=\left\lfloor \frac{x_k^4 + 6 x_k^2 A + A^2}{4 x_k^3+4 x_k A} \right\rfloor$, seems to avoid that cycling. It also halves the number of iterations, but because of the more complex formula (I didn't optimize it yet, for instance by some Horner scheme or so) it is in the end slower than the original Newton iterations on the *floors*. With Pari/GP on a 2 GHz standard laptop I got on average $40$ msec for $2000$-digit pseudorandom numbers and $147$ msec for $4000$ digits, with 7 iterations. – 2018-01-20
11

I agree with Yuval Filmus that you should use binary search, but in case you're still curious about the approach using modular arithmetic: a number $a$ is a square $\bmod p$, for $p$ a prime not dividing $a$, if and only if the Jacobi symbol $\left( \frac{a}{p} \right)$ is equal to $1$. You can efficiently compute the Jacobi symbol using quadratic reciprocity, and if you get an answer of $1$ for $n$ primes, then $a$ is a square with probability about $1 - \frac{1}{2^n}$.

(Alternately, $a$ is a square $\bmod p$ if and only if $a^{ \frac{p-1}{2} } \equiv 1 \bmod p$. I don't know how the efficiency of computing this compares to the efficiency of computing the Jacobi symbol.)
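In Java, `BigInteger` has `modPow` but no built-in Jacobi symbol, so the Euler-criterion variant above is the easier one to sketch (the helper names and the particular test primes below are mine, chosen arbitrarily):

```java
import java.math.BigInteger;

// Euler's criterion: for an odd prime p not dividing a,
// a is a square mod p iff a^((p-1)/2) == 1 (mod p).
static boolean isSquareModP(BigInteger a, BigInteger p) {
    BigInteger e = p.subtract(BigInteger.ONE).shiftRight(1);
    return a.modPow(e, p).equals(BigInteger.ONE);
}

// Returns false only when a is certainly not a perfect square.
static boolean maybeSquare(BigInteger a) {
    int[] primes = {3, 5, 7, 11, 13, 17, 19, 23};
    for (int q : primes) {
        BigInteger p = BigInteger.valueOf(q);
        if (a.mod(p).signum() != 0 && !isSquareModP(a, p)) return false;
    }
    return true; // probably a square; verify exactly if needed
}
```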

  • 5
    +1. Also, searching for small prime factors is worthwhile -- a number divisible by 2 but not 4, 3 but not 9, etc., can't be a square. Further, some residue classes like 2 mod 3 are impossible. It's often worthwhile to check mod a precomputed number and use a lookup table to see if the number is a possible square mod that number (a sketch of this follows below). Continuing along this path leads to pseudosquares and the research of Sorenson and such. – 2011-05-25
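To illustrate the lookup-table idea from this comment: only $12$ of the $64$ residues mod $64$ are squares, so a single mask and table lookup rejects about $80\%$ of non-squares before any expensive work. A small sketch (the table and method names are mine):

```java
// A square must land on one of the 12 quadratic residues mod 64.
static final boolean[] SQUARE_MOD_64 = new boolean[64];
static {
    for (int k = 0; k < 64; k++) SQUARE_MOD_64[(k * k) & 63] = true;
}

static boolean possibleSquareMod64(java.math.BigInteger n) {
    return SQUARE_MOD_64[n.intValue() & 63]; // low 6 bits = n mod 64
}
```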
8

The square root can be found using binary search. The resulting algorithm should run very fast on a 600-digit number; my guess is under a second.

You can implement the binary search without repeated squaring: at each step you're only shifting and adding. That's why it's so fast. But even if you were squaring at each step, it would still be very quick and certainly feasible.

Any reasonable bignum package will contain a function computing the square root, so you don't even need to code the binary search yourself.
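For reference, here is a plain `BigInteger` binary search in Java (a sketch; this simple version does square at each step, which as noted above is still entirely feasible). Since Java 9, `BigInteger.sqrt()` is exactly the kind of built-in meant here.

```java
import java.math.BigInteger;

static boolean isSquareByBinarySearch(BigInteger n) {
    BigInteger lo = BigInteger.ZERO;
    // hi^2 >= 2^(bitLength+1) > n, so hi > sqrt(n).
    BigInteger hi = BigInteger.ONE.shiftLeft(n.bitLength() / 2 + 1);
    while (lo.compareTo(hi) < 0) {
        // Round up so the search makes progress when lo is updated.
        BigInteger mid = lo.add(hi).add(BigInteger.ONE).shiftRight(1);
        if (mid.multiply(mid).compareTo(n) <= 0) lo = mid;
        else hi = mid.subtract(BigInteger.ONE);
    }
    return lo.multiply(lo).equals(n); // lo == floor(sqrt(n))
}
```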

  • 3
    @obinna: I'm afraid your comment reveals that you haven't really thought about the problem. If your number $n$ is even, just count the number $b$ of trailing binary zeroes. If $b$ is odd, then $n$ is certainly not a square. If $b$ is even, then $n$ is a square if and only if the (odd!) number $n/(2^b)$ is. So you gain nothing from the knowledge that $n$ is odd, because the even case can trivially be reduced to the odd case. – 2011-05-25
5

There is a direct analogue of the algorithm described at http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Digit-by-digit_calculation for arbitrarily big integers. For a binary representation it needs only subtractions, comparisons, and shifts. Although it finds the integer part of the square root exactly, I doubt anything much faster is possible: the complexity of the algorithm is comparable to that of integer division of two numbers. I suspect that Yuval Filmus also meant something similar in his note about "shifting and adding"; here it is shifting and subtracting instead.
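A hedged `BigInteger` transcription of that digit-by-digit binary method (my variable names); note the loop body really does use only shifts, additions/subtractions, and comparisons:

```java
import java.math.BigInteger;

// Digit-by-digit (binary) square root: builds the root one bit at a time.
static BigInteger isqrtDigitByDigit(BigInteger n) {
    BigInteger x = n;                 // running remainder
    BigInteger res = BigInteger.ZERO; // root found so far
    // Largest power of 4 not exceeding n (zero when n == 0).
    BigInteger bit = n.signum() == 0 ? BigInteger.ZERO
            : BigInteger.ONE.shiftLeft((n.bitLength() - 1) & ~1);
    while (bit.signum() != 0) {
        BigInteger t = res.add(bit);
        res = res.shiftRight(1);
        if (x.compareTo(t) >= 0) {
            x = x.subtract(t);
            res = res.add(bit);
        }
        bit = bit.shiftRight(2);
    }
    return res; // afterwards x == n - res*res, so x == 0 iff n is a perfect square
}
```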

  • 0
    "For binary representation" only mean internal representation of big integer on computer. Example of the algorithm in wikipedia need only small modification to work with arbitrary size integers.2011-05-26
4

Since there seems to be some confusion about this, here is the almost trivial pseudo-code to test for integer square roots in a language that has bigints and integer division without remainder, which I'll write div (since / might be confused with exact rational division):

```
boolean is_square(bigint n)
{ bigint a = n // or any value whose square is known to be at least n
; while a*a > n do a = div(a + div(n,a), 2) od
; return a*a == n
}
```

No cycles are possible, no fuss (but negative numbers as input may cause it to loop forever, so that should be avoided). The outermost division is just a bit-shift, so it should be rather inexpensive. The key to the correctness of this simple procedure is that, when the initial guess $a$ has $a^2>n$, then the exact value $b=\frac12(a+\frac na)\in\Bbb Q$ can be shown to also have $b^2>n$; if $\sqrt n$ is an integer, then rounding down $b$ to an integer either gives exactly $\sqrt n$, or another too-high estimate. If rounding down gives a value that is too small, then one can be sure that $\sqrt n$ is not an integer. One also clearly has $b<a$, so the sequence (with rounding down at each step) is strictly decreasing and termination is assured.

For very large numbers the given code wastes a lot of time getting a ballpark figure for the square root. This could be avoided by using a better initial guess for $a$, for instance via a logarithm, or basically anything that tells you the (approximate) number of bits in $n$ (there may well be a built-in function for that). But I think any pure integer arithmetic method to do this will suffer from the same problem.

As a numerical example, for a $999$-digit perfect square as input it took $1667$ iterations to find its square root. But just repeated halving would take about $\log_2(10^{499})\approx 1658$ iterations to get down to the proper order of magnitude, so that is what the algorithm is essentially doing most of the time. If instead of $n\approx10^{998}$ one takes an initial guess of $10^{500}$ (still about $10$ times larger than the square root) it takes only $13$ iterations to find the square root. The last iteration step hits the square root on the head from a distance of more than $10^{142}$ (that is, the error of the approximation goes from that gigantic number directly to $0$, which is a witness of how much Newton's method improves the approximation here). By contrast, a binary search procedure even with a fairly good initial guess like $10^{500}$ would need at least $1658$ iterations, namely the number of binary digits of the square root (since it adds one binary digit of precision per iteration). For the record, the input used in the experiment was the square of $10^{499}+345674632452435$ (you can guess the type of "random generation" procedure I used for it).
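For the record, the pseudo-code above translated to Java with the cheap bit-length initial guess just described (a sketch; the method name is mine):

```java
import java.math.BigInteger;

static boolean isSquare(BigInteger n) {
    if (n.signum() < 0) return false; // avoid the infinite loop noted above
    if (n.signum() == 0) return true;
    // 2^ceil(b/2) >= sqrt(n) for b = bit length of n, so we skip the
    // long "repeated halving" phase described in the answer.
    BigInteger a = BigInteger.ONE.shiftLeft((n.bitLength() + 1) / 2);
    while (a.multiply(a).compareTo(n) > 0) {
        a = a.add(n.divide(a)).shiftRight(1); // a = div(a + div(n, a), 2)
    }
    return a.multiply(a).equals(n);
}
```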

  • 0
    In Pari/GP one can get the initial guess for $a$ by `a = 1 << (#binary(n) \ 2)` (giving $a=2^{\lfloor \log_2(n)\rfloor/2}$), which is near the square root and costs nearly nothing. For $n$ with 4000 digits I got roughly 13 iterations (with cycle detection) and on average 40 msec; for 2000 digits, on average 10 msec with 11 to 12 iterations. (Standard notebook of 2012) – 2018-01-19
3

I'm late to the party, but 2-adic Newton is pretty fast. Here's a 64-bit version:

```c
#include <assert.h>

/* If x is square, return +sqrt(x).
 * Otherwise, return -1. */
long long isqrt(unsigned long long x) {
    /* Precondition: make x odd if possible. */
    int sh = __builtin_ctzll(x);
    x >>= (sh & ~1);

    /* Early escape: an odd square is 1 mod 8. */
    if (x & 6) return -1;

    /* 2-adic Newton: z converges to 1/sqrt(x), y = x*z to sqrt(x), mod 2^64. */
    int i;
    const int ITERATIONS = 5; // log log x - 1
    unsigned long long z = (3 - x) >> 1, y = x * z;
    for (i = 0; i < ITERATIONS; i++) {
        unsigned long long w = (3 - y * z) >> 1;
        y *= w;
        z *= w;
    }
    assert(x == 0 || (y * z == 1 && x * z == y));

    /* Get the positive square root.  Also the top bit
     * might be wrong. */
    if (y & (1ull << 62)) y = -y;
    y &= ~(1ull << 63);

    /* Is it the square root in Z? */
    if (y >= 1ull << 32) return -1;

    /* Yup. */
    return y << (sh / 2);
}
```

The function listed above takes 31 Haswell cycles when inlined. So it's about half as fast as the double-precision sqrtsd() call. It computes the square root of its argument, which is not what OP asked for. But it takes advantage of the fact that you only care about integer square roots. As a result, it will be much faster on extended-precision integers than an approximate square root algorithm.

This works for longer integer types, with a few changes.

  • It's a fixed-precision algorithm. If you use mpn_t for this, it will allocate new limbs instead of wrapping, which is not what you want. You will need to explicitly clamp to the lowest n limbs.
  • You need log log x - 1 ITERATIONS. So if x is 2048 bits, you need 10 instead of 5. That is, it's Newton's method so each iteration doubles the precision.
  • You can compute the first 4 iterations with longs if you want.
  • You need to replace the 32 with half the number of bits of your integer type, likewise 62 and 63 with 2 less and 1 less.
  • 1
    @ToddLehman: It's Newton's algorithm, computing in the $p$-adic field $\mathbb{Q}_2$. Since you're working with $2$-adic integers, working with 64 bits of precision is identical to doing arithmetic modulo $2^{64}$. I.e. addition, subtraction, and multiplication of `uint64_t` does exactly the right thing. – 2018-08-26
3

An algorithm based on subtractions only.

Let $n$ be the number, $a=5n$ and $b=5$. While $a\geq b$, replace $a$ with $a-b$ and add $10$ to $b$. When you get $a<b$, then $n$ is a square if and only if $a=0$. (This works because $5+15+25+\cdots+(10m-5)=5m^2$, so $a$ hits exactly $0$ precisely when $5n=5m^2$ for some $m$.)
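A direct Java transcription (the method name is mine). Beware that it performs about $\sqrt n$ subtractions, so while correct, it is only practical for small $n$, certainly not for 300-600 digit numbers:

```java
import java.math.BigInteger;

// a = 5n, b = 5; repeatedly a -= b, b += 10; n is a square iff a ends at exactly 0.
static boolean isSquareBySubtraction(BigInteger n) {
    BigInteger a = n.multiply(BigInteger.valueOf(5));
    BigInteger b = BigInteger.valueOf(5);
    while (a.compareTo(b) >= 0) {
        a = a.subtract(b);
        b = b.add(BigInteger.TEN);
    }
    return a.signum() == 0;
}
```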

2

One of the ways to quickly eliminate many candidates up front is to test whether the digital root (the iterated digit sum) of the given number is $1$, $4$, $7$, or $9$; equivalently, whether $n \bmod 9 \in \{0,1,4,7\}$. A number failing this test can't be a perfect square. If it passes, you can carry on with one of the efficient methods discussed in the other responses to check whether the number actually is a perfect square.
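In Java the test is a single cheap `mod` (the helper name is mine):

```java
import java.math.BigInteger;

// A perfect square is congruent to 0, 1, 4, or 7 (mod 9); equivalently,
// its digital root (iterated digit sum) is 9, 1, 4, or 7.
static boolean passesDigitSumTest(BigInteger n) {
    int r = n.mod(BigInteger.valueOf(9)).intValue();
    return r == 0 || r == 1 || r == 4 || r == 7;
}
```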

2

In my answer here I've described a probabilistic test for squareness that can achieve an arbitrarily small error with time complexity very close to $\mathcal O(\log n)$ for an integer $n$. This is much better than Newton et al.

It's the same style of probabilistic test used to determine primality for huge numbers, except it's based on properties of quadratic residues. It's sort of a generalization of the approach used in the GMP library.

0

This is my way of finding square roots of numbers larger than $100$. Take $1089$ for example. First find the closest number below it (without going over) whose square root is divisible by $10$: that is $900 = 30 \times 30$, so the tens digit of the answer is $3$. Subtract it, leaving $189$. Now take the square root of $900$, which is $30$; multiply its tens digit by $20$ and add $1$, giving $3 \times 20 + 1 = 61$. Subtract that from $189$, leaving $128$. Keep subtracting $63, 65, 67$, and so on until you reach $0$. Counting $61$ itself, the number of values subtracted is the units digit: here $189-61-63-65=0$, three subtractions, so the units digit is $3$. The final answer is $\sqrt{1089}=33$.

  • 1
    As noted by Ross Millikan in a comment to [this duplicate answer](http://math.stackexchange.com/a/370850), this is just the standard digit-by-digit square root algorithm. The OP wanted a way *without* computing the square root. – 2013-04-23