  • Default User Avatar

    Issue persists.

    From the output I assume that you are not using test.assert_approx_equals(...), which would provide the expected information upon failure.
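
    For reference, a rough sketch in Python of how such an assertion reports the deviation (the values and the margin here are made up, not taken from the kata's tests):

        import codewars_test as test

        @test.describe("example")
        def example_group():
            @test.it("reports actual vs. expected on failure")
            def approx_case():
                expected = 4.2    # made-up expected y-coordinate
                actual = 4.25     # made-up model output
                # On failure this prints both the actual and the expected value,
                # so the size of the deviation is visible.
                test.assert_approx_equals(actual, expected, margin=0.1)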

  • Default User Avatar

    Error-margin Tests are not debuggable!

    They do not reflect by how much my model deviates from the expected result. The output is just "Error too big".

    Either

    1. The upper limit of the RMSE you use in the test cases should be clarified in the kata description,
    2. or a failed test should print by how much the error margin was overstepped,
    3. or a failed test should print what the expected y-coordinate (± error_margin) was (see the sketch below).
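
    To illustrate options 2 and 3, the failure message could state the overstep explicitly. A minimal sketch in Python (check_rmse, rmse and limit are placeholder names, not the kata's actual test code):

        import codewars_test as test

        def check_rmse(rmse, limit):
            # Fails with a message stating the limit and by how much it was exceeded,
            # instead of a bare "Error too big".
            test.expect(
                rmse <= limit,
                f"RMSE {rmse:.4f} exceeds the allowed {limit:.4f} by {rmse - limit:.4f}"
            )
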
  • Default User Avatar

    The Object-Oriented Programming tag is unnecessary for this kata.

  • Default User Avatar

    Of course, way easier than what I was building. Thank you!

    I am satisfied to inform you that your solution now fails successfully ;)

    Closing this.

  • Default User Avatar

    I understand. But I have also heard the opposite.

    My opinion would be that if the act of formulating a solution from the problem is considered X-kyu, then the kata should be rated X-kyu.

    But I guess it is not for me to decide.

  • Default User Avatar

    Thanks for pointing that out!

    Just letting you know that I am currently working on a fix.

    However, it turns out that it is not that trivial to do.
    If you have any suggestions, I am eager to hear what you propose.

  • Default User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Default User Avatar

    That is new to me as well, good to know!

    OK, I agree. Since you are more experienced, what would you suggest out of the following choices: 10ms, 50ms, or 100ms?


    The size of the matrices is n^3 integers. The type of the input is np.ndarray, as the inputs are easier to generate using numpy; I can add this to the kata description.

    I see that there will be a discrepancy since the fixed example test cases are fed nested Python lists.

    However, finalizing the generation process with numpy's .tolist() causes even more time overhead, which would require reducing the number of test cases if the input should always be a Python list.
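
    Roughly what I mean, as a sketch (the shape and value range are placeholders for the actual generator):

        import numpy as np

        n = 100
        # Random test input generated directly as an ndarray (fast).
        mat = np.random.randint(0, 10, size=(n, n, n))

        # Converting to nested Python lists would match the fixed example tests,
        # but this extra copy is the time overhead mentioned above.
        mat_as_list = mat.tolist()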

    Any thoughts?

  • Default User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Default User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Default User Avatar

    Hey, I am fairly new to authoring katas myself and am working on my first one, but I can try to help you.

    However, before that you should read the Codewars docs, which can be found at:

    I guess that these should clarify most of your issues.

    If not, then I suggest you join the Codewars Discord server via the invite found on the left-hand side of the Codewars webpage:

    There you can either try to contact me (same user name) or ask the community for help in the help-author channel.

  • Default User Avatar

    As the issues are resolved: republishing for more feedback.

  • Default User Avatar

    Thank you for your feedback!

    The kata description has been augmented with a section that addresses this logical edge case:

    Any entries in mat where x == y or y == z are logically irrelevant and may hold a random value.

    resolved.

  • Default User Avatar
    • more fixed test cases
    • test group sizes (w.r.t. n) updated to a better fit
    • easy random test cases are now all fully printable
    • number of test cases in all groups adjusted to reasonable numbers
    • big tests now have the performance constraint enforced via a test timeout instead of the number of test cases (see the sketch below)
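
    A rough sketch in Python of the timeout approach from the last point (the solutions shown and the 10-second budget are placeholders):

        import codewars_test as test

        def reference_solution(mat):        # placeholder for the kata's reference
            return sum(sum(row) for row in mat)

        user_solution = reference_solution  # stand-in for the submitted solution

        @test.describe("big tests")
        def big_tests():
            @test.it("finishes within the time budget")
            def big_case():
                mat = [[1] * 500 for _ in range(500)]  # stand-in for a big random case
                # The timeout enforces the performance constraint: the test fails
                # if the decorated block does not finish in time.
                @test.timeout(10)
                def run():
                    test.assert_equals(user_solution(mat), reference_solution(mat))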

    resolved.

  • Default User Avatar
    • the kata description has been augmented with a section clarifying logical edge cases and how to handle/interpret them

    resolved.
