5 kyu
Letters of Natac
Fundamentals
Python: the new test framework is required. Updated in this fork.
Hello everyone, I passed everything except some tests giving "You didn't return a string: False should equal True". Any idea why?
For example, random tests 1, 4, 7, 10, 13, ...
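The error message suggests the tests verify the return type before comparing values. A minimal sketch of that kind of check (the function names and sample hands here are hypothetical, not the kata's actual test code):

```python
def check_returns_string(solution, hand):
    """Hypothetical type check: the kata's error message
    'You didn't return a string: False should equal True'
    matches an assertion on exactly this kind of result."""
    result = solution(hand)
    return isinstance(result, str)

def bad_solution(hand):
    return None  # returning anything but a string trips the check

def good_solution(hand):
    return "road"  # any string satisfies the type check

print(check_returns_string(bad_solution, "bwsbws"))   # False
print(check_returns_string(good_solution, "bwsbws"))  # True
```

So if some random tests fail with that message, the solution is likely returning `None` (or another non-string) on certain inputs, e.g. a code path that falls through without a `return`.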
I was trying to figure out why my solution was timing out in R, and out of curiosity I had the function immediately return a list of FALSE and the input hand (code below). I was surprised to see that the server regularly took about 8500ms to process and return the test results. Is anyone else able to duplicate this? If the code that evaluates this kata is really that inefficient then I don't see how someone could complete it.
```r
play_if_enough <- function(a, b) list(FALSE, a)
```
Your solution still does not pass the tests. I found a solution that, when run with cProfile, shows lower performance than yours, and it does not pass the tests either.
After receiving numerous time-outs, I forfeited the kata, only to find that literally NONE OF THE ACCEPTED SOLUTIONS can pass the tests! Wtf?
It looks like there has been a change in, perhaps, the library used for the test suite (a previously working function is returning an "object not found" error). I haven't been around in some time. If I get a free minute, I will troubleshoot this. Thank you for raising the issue.
This comment has been hidden.
This comment has been hidden.
Excellent suggestion! Now includes a test to confirm a string is returned.
Approved
But the tests expect a specific order.
Tests do not expect a specific order. You can see that by the error message when a test fails, the actual and expected results are transformed to a dictionary, which is unordered.
Where? They don't. There's
assert_equals
with two strings.
Nope, it's assert_equals with two dictionaries. Try writing an incorrect solution and you'll see the output. Or try writing a solution that puts the result in a different order and you'll see it passes. I did both to confirm it worked before publishing.
In truth, it's not two dictionaries directly, but two function calls on your function's output. The function call transforms it to a dictionary.
OK, I'm an idiot. The basic tests were transformed, but I forgot to include it in the random tests. Fixed. Sorry I was so boneheaded I didn't see it.
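The transformation being discussed can be sketched like this (a guess at the mechanism, not the kata's actual test code): converting both the expected and actual results into dictionaries of counts before comparison makes the assertion order-insensitive.

```python
def to_counts(resources):
    """Hypothetical transform: turn a resource string into an unordered
    mapping of resource -> count, so 'bws' and 'swb' compare equal."""
    counts = {}
    for r in resources:
        counts[r] = counts.get(r, 0) + 1
    return counts

# Two results that differ only in order compare equal after transforming:
print(to_counts("bws") == to_counts("swb"))  # True
# Results with genuinely different contents still differ:
print(to_counts("bbw") == to_counts("bww"))  # False
```

Applying `to_counts` to both the reference result and the user's result before `assert_equals` is one way to get the order-independent behavior described above.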
Hi,
I'm facing this message:
... each time I attempt a submission with my "correct" (surely "almost" only) solution, while I have no trouble making attempts when my solution is incorrect. You should change the way you run the tests so that really big structures can't be displayed in the assertion messages. Currently, I have no way to know which input my test is failing on.
Idea: use
test.expect(..., allow_raise=True)
for your big inputs.
That happens because you hit the performance requirement of the kata. So sadly you're going to have to do micro-optimizations here and there and see which ones work :(
This comment has been hidden.
@mentalplex: read this. So I agree with Voile, that's an issue.
Hi @Blind4Basics and @Voile. Please help me understand why you believe a high performance requirement is an issue, because I really don't understand and would honestly like to. B4B, I read your solution, and it sounds like you're saying that using an object with good theoretical performance should pass but doesn't, and that's an issue. From my perspective it's a learning opportunity: for this specific case it doesn't meet the requirements, reinforcing the general principle that performance is not always what you'd expect, and you should always benchmark against your specific project requirements when you run up against an issue, making adjustments as needed.
Also, I added
allow_raise=True
for the big inputs, so warriors can have more information when tests fail. Thanks for that idea! I didn't know about it!
So it turns out this wasn't a performance issue: I forgot to transform the results of the solution and the reference to an unordered structure in the random tests. B4B, your MyCounter now works. I'm an idiot, and will self-flagellate as required. Issue resolved.
This comment has been hidden.
Thanks for the explanation. That makes a lot of sense. With this, and those permutation test kata I did a while back, I'm hoping to see some clever solutions I didn't expect. I want solutions that are at least as performant as my reference solution, possibly more, so I can learn something new. To do this, I crank the tests so that my solution nearly times out.
ok, but if you already have the best approach, the slightest step aside can make a warrior's code fail for no good reason, which can be really frustrating. At least set the limit a bit lower (like 10s instead of 11.5s) so it allows some variation for similar algorithms.
ok. reference set to ~10s then :) Personally, I believe failing because you haven't met the performance requirements is actually a good reason to fail, and finding that extra 1s is part of the challenge of an optimization kata, but I respect you and @Voile enough to make the adjustment anyway.
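The tuning described above, calibrating the time limit from the reference solution's runtime plus a safety margin, can be sketched like this. The function names, the stand-in reference solution, and the margin factor are all illustrative, not the kata's actual test harness:

```python
import time

def measure(solution, inputs):
    """Time a solution over a batch of inputs with a wall-clock timer."""
    start = time.perf_counter()
    for hand in inputs:
        solution(hand)
    return time.perf_counter() - start

def pick_time_limit(reference, inputs, margin=1.15):
    """Derive the kata's limit from the reference runtime plus headroom,
    so similar algorithms with slightly different constants still pass."""
    return measure(reference, inputs) * margin

# Illustrative use with a trivial stand-in for the reference solution:
ref = lambda hand: sorted(hand)
limit = pick_time_limit(ref, ["bwsgo" * 100] * 50)
print(limit > 0)  # True
```

Lowering the reference target from ~11.5s to ~10s under a fixed server timeout plays the same role as increasing `margin` here: it widens the gap between the reference runtime and the hard limit.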
Incidentally, isn't this approvable now since the changes were made so that power users votes are counted multiple times? This has 4 votes from 3,000+ honor users, which by my site-events page is 16 votes (and 32 honor).
not yet. It still needs some ranking/upvotes. ;)
EDIT: I'm not completely sure that this ranking/upvote system is still currently used in this exact same way.
There is an IMO ridiculous degree of performance required for this kata, which pretty much only allows one way to compose a correct solution.
(Also, if the kata demands performance, you should probably tell it.)
Not an issue. The difficulty of a kata should be reflected in its rating, test specifications are apparent from the log, and there are already three different implementations that pass. I will incorporate your suggestion to include input sizes in the description.
Incidentally, even though it isn't the case for this kata, I would say a kata that requires a high enough level of performance such that you have to use exactly the best algorithm in order to solve it would be a good and useful kata.