  • Custom User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Custom User Avatar

    How is this so? Just curious.

  • Custom User Avatar

    I agree. Changing the prototype is not good practice, but using the filter method is good.

  • Default User Avatar

    It's O(n^2) anyway. Big-O gives an upper bound on worst-case growth, not the average performance or the actual running time on any real set of data.

  • Custom User Avatar

    I don't understand how somebody can vote this up both for Clever and for Best Practice!

  • Custom User Avatar

    I find it (interesting?) that I told you what would bring your code to its knees at O(n^2), and you didn't test it:

      // 100,000 values in strictly descending order -- the input described above
      var descending = [];
      for (var i = 100000; i > 0; i--) {
        descending.push(i);
      }
    

    http://jsperf.com/largest-gap/3

    Even at 100k elements, it barely completed for both our code sets. With 1M, yours crashed, and I didn't even bother running mine.
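
    If you want to reproduce this outside JSPerf, a rough harness like the one below works in any browser console or Node. Note that largestGap is just a placeholder name here; paste in whichever of our implementations you want to time.

      // Pathological input again, this time at 1M elements
      var descending = [];
      for (var i = 1000000; i > 0; i--) {
        descending.push(i);
      }

      // Placeholder -- swap in the implementation under test
      function largestGap(data) {
        // ...
      }

      console.time('largestGap on 1M descending values');
      largestGap(descending);
      console.timeEnd('largestGap on 1M descending values');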

    Playing around with JSPerf, the main performance difference between our solutions is in the tails: your code has a minimum number of calculations, which holds it back in situations where my code can dump out early with an answer, but my code has more common edge cases where it will run all the way through and die on the upper tail.

    Please don't take the feedback personally; I was just pointing out an algorithmic issue. My primary point is that there's rarely a perfect algorithm for every situation; edge cases abound. It all depends on what kind of data you're expecting.

  • Custom User Avatar

    And your solution's only good if there aren't continual inflection points (e.g., 9,8,7,6,5,4,3,2,1).

    Also, in the worst-case scenario, my solution is n * n/2; not quite as bad as n^2 (yes, I know that still simplifies to O(n^2); I'm just being precise). Similarly, your worst case is n * n/2 + 2n. Of course, since I'm performing far fewer calculations per loop and storing only a single copy of the data, I'm guessing I still have the edge (particularly if we start taking RAM into account for huge datasets).
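
    For anyone following the arithmetic, the n * n/2 figure is just the triangular count produced by a nested loop whose inner loop starts at i + 1; here's a quick sanity check using a generic pair-counting loop (not either of our actual solutions):

      // Counts the pairs (i, j) with j > i that a triangular double loop visits
      function pairCount(n) {
        var count = 0;
        for (var i = 0; i < n; i++) {
          for (var j = i + 1; j < n; j++) {
            count++;
          }
        }
        return count;
      }

      console.log(pairCount(9));    // 36, i.e. 9 * 8 / 2
      console.log(pairCount(1000)); // 499500, i.e. 1000 * 999 / 2 -- roughly n * n / 2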

    Of course, we're talking purely about each of our worst-case scenarios. On an average case? I don't know; I bet our solutions are pretty similar. If we combined them (edge jitter + inflection test), we might be able to eke out a little more performance, except our worst-case scenario would still be just as terrible, since neither of our solutions is equipped to handle it.

    The difference, and the challenge from my perspective, is that I did it in 9 lines and it's still a better solution than the standard double for-loop that most solutions employ.
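
    For anyone reading this without the kata open, the "standard double for-loop" shape referred to above looks roughly like the sketch below. I'm assuming here that the gap means the largest difference between a later element and an earlier one; adjust to the actual kata spec if that assumption is off.

      // Brute-force baseline (assumed spec: largest data[j] - data[i] with j > i)
      function largestGapBruteForce(data) {
        var best = -Infinity;
        for (var i = 0; i < data.length - 1; i++) {
          for (var j = i + 1; j < data.length; j++) {
            var gap = data[j] - data[i];
            if (gap > best) {
              best = gap;
            }
          }
        }
        return best;
      }

      // On a strictly descending input every pair is a step down, so nothing lets it quit early
      largestGapBruteForce([9, 8, 7, 6, 5, 4, 3, 2, 1]); // -1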