  • Default User Avatar

    Your changes look good, but since it's a fork of my translation, I can't approve it. However, if you made your own, separate translation, I would be able to.

  • Default User Avatar

    Merge conflict due to:

    recent changes from related record must be merged first.

  • Default User Avatar

    Unfortunately the Julia unit test output is not very expressive and only prints a message on failure. However, I've changed how the test names are shown (now they are findmincost(money, days, cost) --> expected). The only other option I can think of would be to assign the results to variables that look like the test names you suggest (e.g. Cheapest_day_rental = findmincost(money, days, cost), so the output reads Cheapest_day_rental --> expected). Also, I've added the length of the cost array to the extra large random tests. Let me know if you have any other thoughts. Cheers!

    Edit: While the changes I made show up here, they don't appear on the kata or in the run output. Not sure why
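
    To sketch the variable option concretely (findmincost below is a made-up stand-in, not the kata's reference solution):

    ```julia
    # Hypothetical stand-in for the kata's function.
    findmincost(money, days, cost) = money - days * minimum(cost)

    money, days, cost = 100, 3, [7, 5, 9]

    # Binding the result to a descriptive name makes the reported test
    # read like a name: Cheapest_day_rental --> expected.
    Cheapest_day_rental = findmincost(money, days, cost)
    println(Cheapest_day_rental)  # 85
    ```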

  • Custom User Avatar

    Excellent!

    • Additionally, I would include the output of the number of elements in cost (length of cost) in the Extra Large Random Tests, ensuring all input data is shown in the error message
    • It's better to name individual random tests differently. Instead of using 'occurred --> expected', it's preferable to use 'Small random test 1', 'Medium random test 4', and so on.
    • In the basic tests, I would suggest adopting a similar approach to other languages. For instance, name the test 'Single day rental' or 'Cheapest day rental', and include the values of the variables money, days, and cost in the error message
    • The value of 'days' is fixed across all 10 tests. It would be better to make it different for each individual test
  • Custom User Avatar

    Thank you for the reply! This really clears things up.

  • Default User Avatar

    This is just another way to define a function, so there should be no performance difference. It's mentioned in the docs as "assignment form". It's often preferable when it can be used while remaining readable, as in this example. I like to do this for fun; however, if you look at some of my more contrived "assignment form" solutions on Codewars, in practice they would often be better off as full function definitions, expanded out for readability and/or rewritten with more efficient algorithms, memory management, etc. that the compactification didn't allow for.
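
    For illustration, a minimal sketch of the two forms (function names here are made up):

    ```julia
    # "Assignment form": a compact one-line definition.
    square(x) = x^2

    # Equivalent full definition; the compiler lowers both the same way.
    function square_full(x)
        return x^2
    end

    println(square(4))       # 16
    println(square_full(4))  # 16
    ```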

  • Custom User Avatar

    Performance-wise, would this be better or worse than defining a function and why?

  • Default User Avatar

    For any other future reviewers:

    I've fixed the issues raised below, with the exception of adding type annotations to the function because there was no valid reason given by the previous reviewer. From what I can find, type declarations are not added arbitrarily to functions in Julia:

    If I've missed something, either about this being a Codewars platform issue or about it indeed being idiomatic to include type declarations, please let me know and I'll change that part of the code. Otherwise I'll leave it as idiomatic as I know how to.
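
    For any future reviewers, a small sketch of when annotations do and don't earn their keep (function names are made up):

    ```julia
    # Idiomatic default: no annotation; works for any numeric type.
    double(x) = 2x

    # Annotations matter when you actually want multiple dispatch:
    describe(x::Integer) = "an integer"
    describe(x::AbstractString) = "a string"

    println(double(3))      # 6
    println(double(2.5))    # 5.0
    println(describe(42))   # an integer
    println(describe("hi")) # a string
    ```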

  • Default User Avatar

    I've gone ahead and made it "Failed for: $(repr(word))", which I presume you meant. This now prints with quotes around the input.

    Could you clarify why you would like the function types annotated? From everything I've seen, it is not idiomatic in Julia to do that for every function. Usually it's only done for multiple dispatch or performance concerns, and I don't see either of those in this kata.
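
    Concretely, the interpolation in question:

    ```julia
    word = "crow"
    # $(repr(word)) interpolates the quoted representation of the string.
    msg = "Failed for: $(repr(word))"
    println(msg)  # Failed for: "crow"
    ```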

  • Custom User Avatar
    • Missing types in initial solution setup and reference solution

    • The 2nd facts block should be a context in the sample tests

    • Assertion messages should be Failed for: repr($word)

  • Default User Avatar
    • Fixed the context issue.
    • Added assertion messages to random tests (which are now as useful as the fixed tests)
    • Made it 100 random tests

    The assertion messages only show on failure, so I've included the input provided for the random tests so users can see the input explicitly on failure. This information was already present in the fixed tests due to how they are reported. Let me know if there is something else more useful.

    Since calls to FactCheck are so slow, I usually try to avoid going over 100 tests (hence the initial 55 random tests). With the full 100, the test suite now runs at 7000-8000ms (up from ~5000ms). However, I could change it so that not every random test makes a call to @fact, and instead quit on the first failure. Not sure which is preferable.
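
    A sketch of that alternative, with hypothetical stand-in solutions, so only one (slow) @fact call is needed no matter how many random cases run:

    ```julia
    # Hypothetical stand-ins for the reference solution and the submission.
    reference(x) = x + 1
    submitted(x) = x + 1

    # Run the random cases in plain Julia, returning the first mismatch.
    function first_random_failure(n)
        for i in 1:n
            x = rand(1:10_000)
            if submitted(x) != reference(x)
                return (test = i, input = x)
            end
        end
        return nothing
    end

    failure = first_random_failure(100)
    # In the real suite: @fact failure --> nothing "Random test failed for: $(failure)"
    println(failure === nothing)  # true
    ```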

  • Custom User Avatar
    • Missing context in sample tests

    • Missing useful assertion messages

    • Should have 100 random tests
