  • Custom User Avatar

    Same thing happened to me. Reading the error message may help you.
    In my case, the error occurred when there was a "+" sign just before the number at the end of the string. I had to work around it using the Character.isDigit() method.
    I hope it helps.
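
    Something along these lines worked for me (just a sketch; the class and helper names are made up):

    ```java
    // Sketch only: walk back over the trailing digits so that a '+' right before
    // the number is not treated as part of the number.
    class DigitScan {
        static int trailingDigitsStart(String s) {
            int i = s.length();
            while (i > 0 && Character.isDigit(s.charAt(i - 1))) {
                i--;                 // '+' is not a digit, so the loop stops there
            }
            return i;                // index of the first trailing digit, or s.length() if there is none
        }
    }
    ```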

  • Custom User Avatar

    While I understand what you mean, I disagree with more points than I agree with. I don't think that applying analogies from "actual software projects" to the Codewars setup works well in this case (it works in some respects, but not in this one), for a simple reason: the main difference between software projects and Codewars tests is that in a project, you always know the context, and you basically own the context. You can change anything you want, check anything you want, and look anywhere you need. It's also not exactly true that the messages of failed assertions are sufficient. I seriously doubt that it would be easy to locate an issue using nothing more than the "Expected X, but got Y" returned by a failed assertEquals. Besides the assertion message, you have at your disposal exact information about the failed test case (test method, test class, its namespace and assembly, all of them with DAMP names, @DisplayName or [Description]), and its source code. This context information helps narrow down the cause to a much greater degree than a failed assertion does. I would be very interested to see how easy it would be to find the reason for failing tests if the test report had no test titles, no grouping or categorization, and only default assertion messages.
    This is exactly what users have to work with on Codewars. All they know is that something "should work for random inputs" and that gugkjggg100 should equal gugkjggg0100. Users don't own the source code of the tests, so they cannot see which test failed or what aspect of their solution misbehaved. Before they can even start fixing their solution, they have to reverse engineer tests which they don't own and cannot easily access.
    I also disagree with "I have to add println calls anyway", and I generally pity everyone who diagnoses issues through iterations of "add debugging code to the SUT / recompile / run / fail / repeat". Whatever suits their needs, sure. But for some reason I find things like tests of different granularities and a step-through debugger to be much more efficient tools than spraying prints all over the place. I do write tests at work, and I can't remember the last time a failed test made me print anything, either in test code or in production code (preemptive disclaimer: I don't do front-end).

    > I also think it's perfectly OK that users have to add a println() call to a solution attempt. When my code fails, I usually have to add not just a println() that prints the input, but several others for intermediate steps, to help me understand where I went wrong. So even if the test prints the input, I have to add println() calls anyway.

    This really does not match my experience, and I think it's an oversimplification. My workflow is totally different: when my solution fails a test, I recreate the failing test case locally and see how it fails there: I inspect the stack trace, or step through the code line by line. I cannot do this if I do not know the inputs, or any other reason for the failure. And modifying my solution to print inputs is not a step towards fixing my solution; it's reverse engineering of tests which I can't see. It should not be necessary for me to change my solution for any reason other than removing a bug. Hunting for context (for example, by printing inputs) is not an action which removes bugs, and it should (almost) never be my responsibility. Don't get me wrong, I do print sometimes, but usually only when running tests or attaching a debugger is not possible. And more often than not, it is possible to run the tests of a Codewars kata on a solution locally, with a debugger attached.
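
    For example, once I know the failing input, recreating the case locally can be as simple as something like this (the class name, method name, and values are all hypothetical):

    ```java
    // Hypothetical harness: run the solution directly on the recovered input,
    // so a debugger can be attached and the code stepped through line by line.
    public class DebugHarness {
        public static void main(String[] args) {
            String failingInput = "foo009";                          // assumed failing input
            String actual = Solution.incrementString(failingInput);  // hypothetical solution method
            System.out.println(actual);                              // compare with the expected value from the report
        }
    }
    ```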

    On top of all of this, adding context information to tests is usually very easy for authors. Frameworks usually provide tools to make tests informative: descriptive names for describe blocks, descriptive titles for it and test cases, custom assertion messages, etc. All an author has to do is write half a line of code more (except in C, where it's harder). I find it difficult to understand why writing half a line more, once, by a translator (be it an it title, a custom assertion message, or whatever else) would not be preferred over transferring the responsibility of debugging tests onto users, and/or responding to their recurring support requests. Of course, I do not know what exactly the motivation of authors, translators, and reviewers is, but all I can fathom is that they push responsibilities away and cannot be bothered to support users, neither by providing them with a sufficient amount of detail, nor by answering their questions.
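
    To make this concrete, here is a rough sketch of what I mean, assuming a JUnit 5 test and a made-up solution method; the only "extra" parts are the @DisplayName and the lazy assertion message:

    ```java
    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class IncrementStringTest {
        @Test
        @DisplayName("should work for random inputs")
        void randomInputs() {
            String input = "foo009";   // in a real kata this would be a generated input
            // The lazy message is the "half a line more" that gives users the missing context
            assertEquals("foo010", Solution.incrementString(input),
                         () -> "Input was: " + input);
        }
    }
    ```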

    BTW I hate discussing things in kata discourse posts. I would enjoy discussing things further on the Codewars Discord or Gitter, but I am going to unsubscribe from this thread.

    Cheers!

  • Custom User Avatar

    @hobovsky: I approved and improved the Java translation for this kata. I'd say neither I nor the original author(s) of the translation were "extremely lazy". :-) It's just that assertEquals(expected, actual) is a common way to run a test, and in most cases it prints enough information to understand why the test failed. (That's what I generally do in actual software projects. JUnit failure messages are pretty good – clearer than the messages of many test frameworks in other languages. In my experience, an additional message rarely adds much value – when a test fails, I have to investigate anyway.)

    I also think it's perfectly OK that users have to add a println() call to a solution attempt. When my code fails, I usually have to add not just a println() that prints the input, but several others for intermediate steps, to help me understand where I went wrong. So even if the test prints the input, I have to add println() calls anyway.

    But you're right that it's easy in this case to make the tests print the input. I updated the Java translation.
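
    For reference, the change is roughly of this shape (a simplified sketch, not the literal translation code; the solution method name and values are made up):

    ```java
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class SampleTest {
        @Test
        void testWithVisibleInput() {
            String input = "foo009";                         // illustrative value only
            System.out.println("Testing input: " + input);   // the input now shows up in the test output
            assertEquals("foo010", Solution.incrementString(input));
        }
    }
    ```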

  • Custom User Avatar

    The fact that the code of the tests is hidden is not the problem. The problem is that when a test fails, the whole context of the failure, and the reason for the failure, are hidden. Tests often tend to be written in an extremely lazy way, without proper thought about anything other than the happy path.
    It takes one piece of effort from authors to make things easier for hundreds of users, and yet, for reasons beyond me, authors, translators, and reviewers often don't even bother to pay attention to the quality of feedback from failed tests. Users should not need to print anything. Users should be able to focus on fixing their code, not on spending effort to reverse engineer and debug test code.

  • Custom User Avatar

    Fair enough - a challenge, after all, is what we are here for.

    And, Hobovsky - thanks for the link.

    Cheers :)
    Marc

  • Custom User Avatar

    I just checked – when my solution is broken, e.g. if I add a stray x at the start of the string, the failure message looks like this:

    ```
    expected: <K-#z`+D|hTh/14472438> but was: <xK-#z`+D|hTh/14472438>
    ```

    I think that's clear enough. If you want to see the original input string, you can temporarily add System.out.println() calls to your solution.
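
    Something like this, for example (a sketch only; the method name is hypothetical, and the prints should be removed again before submitting):

    ```java
    // Temporary debugging prints inside a solution attempt.
    public class Solution {
        public static String incrementString(String input) {
            System.out.println("input: " + input);     // shows the hidden test's input
            String result = /* ... your actual logic ... */ input;
            System.out.println("result: " + result);   // shows what will be compared against the expected value
            return result;
        }
    }
    ```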

    Regarding hidden tests: Of course we don't do that in most software development contexts, but here on Codewars that's how it's done, and it makes good sense. If I could see the main tests before I found a working solution, I'd be tempted to write my solution such that it just passes these specific tests. But I don't see the tests, so I have to think harder about how to solve the kata. And in many cases, the tests have to contain a reference solution, or code that is close to a solution, and this would also guide users to a specific algorithm, or even specific pieces of code. Even the code that generates test cases may give away a lot about a possible solution. In a nutshell: I think it makes sense that the main tests are hidden, because it makes us think harder and more creatively about our solutions.

  • Custom User Avatar

    Don't use regex, problem solved :)

  • Custom User Avatar

    I totally agree with you that the feedback of many kata on failure is insufficient, bad, or even worse.
    But still, it is possible to see the inputs of "hidden tests": https://docs.codewars.com/training/troubleshooting#print-input

    You can raise an issue about insufficient feedback and confusing results of failing tests. For some reason, many users still don't consider this important, but I am going to push for the guideline of having inputs presented somehow, as much as I can.

  • Custom User Avatar

    Agreed - this is infuriating! It's what initially caused me to bail on this site a couple years ago.

    Sad to see that two years later the devs have still not addressed these problems.

  • Custom User Avatar

    I wrote code that passed all the tests. However, on hitting 'Attempt' it fails. It would be really nice to see what kind of hidden tests are used. Obviously, in the real world our code will run against unanticipated cases. But I've never worked for a company that hid the unit tests from me.
    BTW: Java JDK 11