  • Custom User Avatar

    Thanks! Yeah, lately I can't be bothered too much about the includes since the linker solves it, but you're right.

  • Custom User Avatar

    You're right, random tests can generate 0 for every variable in TS. This seems to have been fixed for JS, but not TS :/ You should raise a new issue about this.

    Edit: updated TS random tests to be in line with other languages (no more zeros :P)

  • Custom User Avatar

    I have done this today and am still getting random test cases with both speeds as 0 (in TypeScript)

  • Custom User Avatar

    I'm not sure why you're still going on about this. As of right now, when running a solution on this translation, you get the same feedback that you would with every other translation I've looked at: a pretty-printed version of the before and after states of the flip display plus the passed/failed feedback.

    The feedback you get on a failed test is equivalent to the failed test feedback in other languages. Heck, QuickCheck actually works to the solver's benefit for the random checks, as the failure message shows the inputs (both the lines and the rotors) as well as the expected vs. actual message.

    For a 7/8 kyu kata, I go out of my way to customize the error message to show the full input, as an aid to the solver. For example, I could refactor runTest in the following way:

    import Test.Hspec (Expectation)
    import Test.HUnit (assertEqual)
    
    -- flapDisplay and prettyPrint are in scope from the kata's solution /
    -- preloaded code; the raw inputs go into the assertion message, so a
    -- failing case shows them directly.
    runTest :: [String] -> [[Int]] -> [String] -> Expectation
    runTest board rotors expected = do
      putStrLn "Before:"
      putStr $ prettyPrint board
      let actual = flapDisplay board rotors
      putStrLn "After:"
      putStr $ prettyPrint actual
      assertEqual (unlines [show board, show rotors]) expected actual
    

    For someone at the 7/8 kyu skill level, "digging out" the input can't be expected to be a trivial task, and I would use code like the above. However, this kata is 5 kyu. It's not unreasonable to expect the solver to know how to change this:

    flapDisplay lines' rotors = solution
    

    to this:

    import Debug.Trace
    
    flapDisplay lines' rotors = traceShow (lines', rotors) solution
    

    or this:

    import Debug.Trace
    
    flapDisplay lines' rotors
      | traceShow (lines', rotors) False = undefined
      | otherwise = solution
    

    in order to see your input.

    The Codewars docs specifically discuss printing the input as a debugging step, and a web search for "how to print haskell debugging" will immediately bring up answers like this one on Stack Overflow that tell you exactly how to print the inputs.

    If there's something specific about my translation, please point it out. Otherwise, your issue is with the original kata author or with the platform.

  • Custom User Avatar

    It's not secret information, and it's not an interesting task to dig it out. Just show it, and let the solver put their energy into solving instead of into compensating for the platform/test code.
    It's also made extra inconvenient because the solver can't read or edit the tests. They might not have needed to reach for Debug.Trace if they were working locally, so it often becomes a chore whose only purpose is getting that basic information out.

    There's this idea of codewars being unit testing. But without access to test code, it is not that. So I don't think it should be treated as that.

    QuickCheck prints out that information. It does that because the information isn't in the test code; it doesn't expect the developer to use Debug.Trace to figure it out.
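
    For illustration, a minimal standalone sketch (the property and names are made up, not this kata's) of how a failing QuickCheck property reports its inputs on its own:

    import Test.QuickCheck
    
    -- Deliberately false property, purely to show the failure output:
    -- QuickCheck prints the (shrunk) counterexample input by itself,
    -- with no Debug.Trace anywhere in the code under test.
    prop_reverseIsId :: [Int] -> Bool
    prop_reverseIsId xs = reverse xs == xs
    
    main :: IO ()
    main = quickCheck prop_reverseIsId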

  • Custom User Avatar

    @natan: regarding debugging/showing inputs, considering that most Haskell functions that are tested in CW are pure, it is generally best practice to show inputs on failure for 7/8 kyu problems. For a 5-kyu like this, it's not unreasonable for a solver to use Debug.Trace for debugging.

  • Custom User Avatar

    no obstacle there, literally loop and print. probably need an it for each individual test though, because there's only a single "Test Passed" per it to break up the outputs
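
    A rough sketch of what that could look like with hspec (the case data and the import are placeholders I'm assuming, not the kata's real fixed tests; only the shapes match flapDisplay):

    import Control.Monad (forM_)
    import Test.Hspec
    import FlapDisplay (flapDisplay)  -- module name assumed, use whatever the kata exposes
    
    -- Placeholder cases (board, rotors, expected) -- illustrative data only.
    cases :: [([String], [[Int]], [String])]
    cases = [ (["ABC", "DEF"], [[1,2,3], [4,5,6]], ["???", "???"]) ]
    
    spec :: Spec
    spec = describe "fixed tests" $
      forM_ cases $ \(board, rotors, expected) ->
        -- one it per case, so every case gets its own pass/fail line
        -- and the inputs appear in the test description
        it (show (board, rotors)) $
          flapDisplay board rotors `shouldBe` expected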

    I don't think the user should have to resort to unsafe-IO debug/trace to find out the inputs for the other tests either, because that's terrible UX. That's fine when the test code is available to the solver; here it isn't.

  • Custom User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Custom User Avatar

    somebody fixed it

  • Custom User Avatar

    My understanding was that by returning NULL for n = 0 in their pyramid function, there is no heap allocation, and therefore no need to deallocate with free.

    (I suck at memory stuff, take with a grain of salt)

  • Custom User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Custom User Avatar

    Seems so ^^

  • Custom User Avatar
    • too late to change the return type now :/
    • raised here and fixed