  •

    I printed the words used in the private tests with std::cout, and there is a test with a word consisting of the character "a" repeated 20,000 times
    (https://en.wikipedia.org/wiki/Longest_word_in_English).

    What is probably causing the timeouts for a lot of solutions is the cost of hashing such a large word several times. Besides, this does not reflect a realistic scenario: long words like the ones in that Wikipedia article probably appear only once in a text, with a short form used afterwards. My point is that, if I am not wrong, a solution based on std::unordered_set (or something similar), which hashes the stored strings, is the best approach for the vast majority of cases. If such a solution fails only on a pathological input, like several 20,000-character words made entirely of "a", the programmer is forced to analyze the inputs in depth and write a version optimized for those cases, which I don't think is adequate for a 7 kyu problem.
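    To make the discussion concrete, here is a minimal sketch of the kind of unordered_set-based approach described above, assuming the task is to collect the distinct words of a text; the distinctWords helper and the whitespace splitting are illustrative assumptions, not the kata's actual specification.

    ```cpp
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <unordered_set>

    // Hypothetical helper: split a text on whitespace and keep each distinct
    // word. std::unordered_set hashes a word on every insert/lookup, so each
    // probe of a 20000-character word costs O(20000) just to compute its hash.
    std::unordered_set<std::string> distinctWords(const std::string& text) {
        std::unordered_set<std::string> seen;
        std::istringstream stream(text);
        std::string word;
        while (stream >> word)
            seen.insert(word);  // no-op if the word is already present
        return seen;
    }

    int main() {
        // The pathological private test described above: "a" repeated 20000 times.
        const std::string longWord(20000, 'a');
        const std::string text = longWord + " " + longWord + " short words short";
        std::cout << distinctWords(text).size() << '\n';  // prints 3
    }
    ```

    Even in this sketch, every occurrence of the long word triggers a full rehash of all 20,000 characters, which is presumably where the timeouts come from.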

  •

    I still have this problem. My solution uses unordered_set to check whether a word is already present, so I don't think the failure is due to my code being slow.
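    For what it's worth, the membership check itself also has to hash the word, so even an unordered_set-based solution pays O(length) per occurrence of the 20,000-character word, which may explain why it still times out. A minimal sketch of that check pattern (names are illustrative):

    ```cpp
    #include <string>
    #include <unordered_set>

    // Illustrative pattern: insert() returns {iterator, bool}; the bool says
    // whether the word was newly added. The word is hashed on every call,
    // costing O(word.length()) even when it is already in the set.
    bool isFirstOccurrence(std::unordered_set<std::string>& seen,
                           const std::string& word) {
        return seen.insert(word).second;
    }
    ```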