I was going through my company’s code base and found a statement that compares an array with 0 like this:
array > 0;
If we let array = ["1"], which has a single element, the statement above is true; but if we let array = ["1", "2"] or [], it is false.
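For example:

console.log(["1"] > 0);      // true
console.log(["1", "2"] > 0); // false
console.log([] > 0);         // false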
Can someone explain the meaning of this statement, why it yields such results, and whether it would be useful in any situation?
Answer
When you use >, the engine first converts both sides to a primitive. For an object, that conversion calls valueOf and, if that doesn't return a primitive, then toString (if it exists). For arrays, only the toString method returns a primitive, so that's what's used, and what it does is equivalent to calling .join(',').
console.log(['1', '2'].toString()); // "1,2"
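As a quick sketch of the valueOf-before-toString order (the object below is just an illustration, not something from the original code base):

const obj = {
  valueOf() { return 5; },         // returns a primitive, so this is what gets used
  toString() { return "ignored"; } // never consulted for this comparison
};
console.log(obj > 0); // true, because ToPrimitive(obj) uses valueOf and yields 5

const arr = ["1", "2"];
console.log(arr.valueOf() === arr); // true: an array's valueOf returns the array itself,
                                    // which is not a primitive, so toString is used instead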
Looking at the spec again, after the array has been turned into a primitive, we now have one side that’s a string (which came from the array), and another side that’s a number. So, both sides are then converted to numbers:
d. Let nx be ? ToNumeric(px).
e. Let ny be ? ToNumeric(py).
And then the numbers are compared.
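In other words, the joined strings from the question go through the same conversion that Number() performs (a rough illustration, not the literal spec steps):

console.log(Number(String(["1"])));      // 1   ("1" parses as a number)
console.log(Number(String(["1", "2"]))); // NaN ("1,2" cannot be parsed as a number)
console.log(Number(String([])));         // 0   ("" converts to 0, which is why [] > 0 is false)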
In the case of ['1'], you get 1 > 0, which is true.
In the case of ['1', '2'], the resulting string is '1,2', which cannot be converted into a number, so the following runs:
h. If nx or ny is NaN, return undefined.
When undefined is returned by this algorithm, the whole > expression evaluates to false.
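This is also why the comparison is false in both directions once NaN is involved; for instance:

console.log(["1", "2"] > 0);  // false ('1,2' becomes NaN, so the comparison returns undefined -> false)
console.log(["1", "2"] <= 0); // false as well, for the same reason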
"and whether it would be useful in any situation?"
For clean, understandable code, it generally wouldn't be. It's better to explicitly convert the values to types whose comparison makes intuitive sense first.
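Depending on what the original author actually intended (the intent isn't clear from the snippet, so these are just guesses), more explicit alternatives might look like:

const array = ["1"];

// If the intent was "does the array contain anything?"
console.log(array.length > 0); // true

// If the intent was "is the first element, read as a number, greater than zero?"
console.log(Number(array[0]) > 0); // true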