  • Custom User Avatar

    I am stuck on this; I get "Value is not what was expected" every time. This isn't worth it. What is the issue?

  • Custom User Avatar

    I disagree with the 3 kyu rating. It should be about 4 kyu, if not 5 kyu. Other 3 kyu or 4 kyu problems took me a lot longer and were harder to solve than this one.

  • Custom User Avatar

    I was stuck at exactly the same spot, but I don't know how to describe it without spoiling anything.

  • Custom User Avatar

    Ditto. I really feel like this kata could use a more descriptive explanation. I've passed the first two tests but fail on the third with the same error warning as aleation. The thing is, I can't tell why it's failing because I don't know what the test is.

  • Custom User Avatar

    Thanks for providing this explanation, bkimmel. I would have been lost without it.

  • Custom User Avatar

    I'm exactly at that point. I have prepared a few test cases and they all pass, but the Attempt still fails, just telling me "Value is not what was expected". I tried returning false or null when there are no arguments or when typeof argument !== 'number'. Could you give me a hint?

  • Custom User Avatar

    I would say, look back at my own solutions, but it is already on the list; gave it an up-vote for that!

  • Custom User Avatar

    Can't upvote this comment enough.

  • Custom User Avatar

    This is a really great and well researched/thought out post. Thanks for this!

    Once some of the current performance issues are worked out, the solutions page will be our next big focus. The first thing up is making it easy for users to see their own solutions.

    How to handle sorting is another important update we want to make soon. We want to provide a few different options.

    Performance metrics are something that we have been wanting to add for a long time. You are right, though, that there is a scaling/load/accuracy challenge in getting the benchmarking implemented properly. I think this can be done; we will just have to defer the benchmarking until load is not as heavy, so the user experience will have to account for the fact that benchmarking results might not get reported for some time (maybe an hour or more).
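
    Very roughly, something like this (every name below is a hypothetical placeholder, not anything that exists in the codebase today):

    ```` javascript
    // Hypothetical sketch: benchmark jobs are queued and only run when server load is low.
    var benchmarkQueue = [];

    function enqueueBenchmark(solutionId, run) {
      benchmarkQueue.push({ solutionId: solutionId, run: run });
    }

    function drainBenchmarks(getServerLoad, report) {
      // Only benchmark while the server is relatively idle; otherwise leave jobs queued.
      while (benchmarkQueue.length > 0 && getServerLoad() < 0.5) {
        var job = benchmarkQueue.shift();
        var start = Date.now();
        job.run();                                  // run the solution against its test fixture
        report(job.solutionId, Date.now() - start); // results may show up an hour+ later
      }
    }

    // Stand-in hooks so the sketch runs; real load checks and reporting would live server-side.
    enqueueBenchmark(42, function () { for (var i = 0; i < 1e6; i++) {} });
    drainBenchmarks(function () { return 0.2; }, function (id, ms) {
      console.log('solution', id, 'took', ms, 'ms (reported once load allows)');
    });
    ````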

    Un-minified size is, as you said, not going to be very useful. Perhaps once we have the minified size we can look at the difference between minified and un-minified and do something with that. Is a similar solution that is just longer possibly easier to read? Possibly.

    Minification would be very useful, more than anything, I think, for the ability to group more solutions and as a result have fewer groups to sort. The trouble is that most languages don't need, or don't have, minification options. I'm really only aware of client-side language minifiers (JS, CSS, HTML). JS is by far the most popular language on Codewars, but we intend to support many more languages once we have the resources.

    The other measurement systems that you mentioned (Halstead Complexity Measures, LLOC, Cyclomatic Complexity, Maintainability Index) would be an amazing addition to have, and I agree with the remarks you made about them. Standardizing on any of them would be difficult since they make adding new languages to the site more complex. I imagine if we were to start introducing any of these systems, they might be pieced together for different languages at different times. I could see an open-source effort being made to allow developers to create language-specific adapters that could be integrated into the site. They could be separate sorting options once they are added, available only for the languages that support them.

    The ability to create kata that have scores would help with the sorting too. The idea is that extra credit can be given for code that goes the extra step of handling edge cases, or for kata challenges like fuzzy matching algorithms where some algorithms match better than others. It wouldn't help across the board with sorting, but it would help to an extent.

    The short-term option may be to just randomly sort solutions with zero up-votes. The problem here is caching, which is manageable, but some rework will need to be done in order to keep the page performance decent.
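
    Conceptually it's just this (a sketch of the sorting rule described above, not actual Codewars code):

    ```` javascript
    // Sketch: voted solutions stay ordered by votes; zero-vote solutions get a random order.
    function sortSolutions(solutions) {
      var voted = solutions.filter(function (s) { return s.upVotes > 0; })
                           .sort(function (a, b) { return b.upVotes - a.upVotes; });
      var unvoted = solutions.filter(function (s) { return s.upVotes === 0; })
                             .sort(function () { return Math.random() - 0.5; }); // crude shuffle, fine for a sketch
      return voted.concat(unvoted);
    }

    console.log(sortSolutions([
      { id: 1, upVotes: 12 },
      { id: 2, upVotes: 0 },
      { id: 3, upVotes: 3 },
      { id: 4, upVotes: 0 }
    ]));
    // => ids 1 and 3 first (by votes), then ids 2 and 4 in a random order on each uncached render
    ````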

  • Custom User Avatar

    The more I think about this, the more I recognize the real difficulty of sorting the solutions list.

    **performance**
    Code execution times can vary significantly depending on the server's load. This means the code would have to be run a number of times to find an average, which in turn imposes a greater load on the server. Performance is a nice metric to have, but I think other metrics can describe how elegant a solution is without placing as much of a burden on the servers.

    **unminified size**
    This metric would be practically meaningless, especially as white space, comments, and coding styles can greatly impact the size of the code. For example:

    ```` javascript
    if(test)return true
    ````

    = 19 characters, versus

    ```` javascript
    if( test ) {
      return true;
    }
    ````

    = 30 characters. That is a 36% difference in size; does that tell you anything about the code quality? I don't think it does.

    **minified size**
    Here we have a metric that can potentially be informative. All the solutions for a given kata perform the same amount of work, so describing them in terms of minified size gives us a sense of the amount of work required to write and understand the code. We are all also aware of speed bottlenecks caused by large downloads, so small code can be appreciated in a production system.

    **[logical lines of code (LLOC)](http://en.wikipedia.org/wiki/Source_lines_of_code)**
    Once again, all solutions for a kata have the same amount of functionality, so solutions that use fewer executable statements are likely to be easier to understand and maintain. This metric is also subject to different coding styles that are effectively equally easy to read and maintain but impact the LLOC count:

    ```` javascript
    i=i+1; // 2 LLOC
    i+=1;  // 1 LLOC
    i++;   // 1 LLOC
    ````

    **[Cyclomatic Complexity](http://en.wikipedia.org/wiki/Cyclomatic_complexity)**
    This classic metric describes the number of branches the code may take. More branches mean more things to keep track of and greater levels of complexity. Small differences in cyclomatic complexity wouldn't be very informative, but large differences certainly would be.

    **[Halstead Complexity Measures](http://en.wikipedia.org/wiki/Halstead_complexity_measures)**
    This suite of metrics can be useful in describing the amount of work required to write the code and the relative risk of undiscovered bugs.

    **[Maintainability Index](http://www.projectcodemeter.com/cost_estimation/help/GL_maintainability.htm)**
    Taking into account LOC, Cyclomatic Complexity, and Halstead Volume, the maintainability index provides a nice single number that balances the above metrics. I personally think this is the best single metric for sorting un-ranked solutions (solutions without votes).
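
    Just to make a couple of these concrete, here is a crude, regex-based sketch of how a rough size and cyclomatic estimate could be computed for a JS solution (a real implementation, like the tools below, would parse the AST rather than pattern-match):

    ```` javascript
    // Crude illustrative proxies -- not how any real complexity tool works.
    function roughMetrics(source) {
      // Strip comments and collapse whitespace as a stand-in for "minified size".
      var stripped = source.replace(/\/\/.*$/gm, '')
                           .replace(/\/\*[\s\S]*?\*\//g, '')
                           .replace(/\s+/g, ' ')
                           .trim();

      // Cyclomatic complexity ~= 1 + number of branch points.
      var branches = (stripped.match(/\b(if|for|while|case|catch)\b|\?|&&|\|\|/g) || []).length;

      // LLOC ~= number of statements (very rough: count semicolons).
      var lloc = (stripped.match(/;/g) || []).length;

      return { size: stripped.length, cyclomatic: 1 + branches, lloc: lloc };
    }

    console.log(roughMetrics('if(test)return true;'));
    // => { size: 20, cyclomatic: 2, lloc: 1 }
    ````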

    Several tools exist for javascript, making it possible to implement some of these metrics. I don't know about Ruby or any of the planned languages. (But really, does anyone care about any language other than javascript? That's right, I didn't think anyone did.)

    Convert Coffee to JS

    http://js2coffee.org/

    Code Minifier

    http://closure-compiler.appspot.com/home

    Complexity Calculator

    http://jscomplexity.org/
    https://github.com/philbooth/jscomplexity.org

    My proposed simple method for generating metrics:

    1. If source is Coffee, convert to JS
    2. Minify source. (The minified source can also be used to group similar solutions.)
    3. Generate complexity report from source

    Calculating execution time would be a lower priority in my book.
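
    A loose sketch of how those three steps could be wired together (the helper functions below are hypothetical placeholders for the linked tools, not their real APIs):

    ```` javascript
    // Hypothetical orchestration of the three steps above. coffeeToJs / minify /
    // complexityReport stand in for the linked tools; none of these names are real APIs.
    function generateMetrics(source, language, tools) {
      var js = language === 'coffeescript' ? tools.coffeeToJs(source) : source; // step 1
      var minified = tools.minify(js);                                          // step 2
      return {
        minifiedSize: minified.length,      // also usable for grouping near-duplicate solutions
        report: tools.complexityReport(js)  // step 3: LLOC, cyclomatic, Halstead, MI, ...
      };
    }

    // Usage with toy stub tools, just to show the shape of the result:
    var metrics = generateMetrics('i = i + 1', 'coffeescript', {
      coffeeToJs: function (src) { return 'var i;\ni = i + 1;'; },           // stub, ignores src
      minify: function (js) { return js.replace(/\s+/g, ''); },              // toy stub, not a real minifier
      complexityReport: function (js) { return { lloc: 2, cyclomatic: 1 }; } // stub numbers
    });
    console.log(metrics); // => { minifiedSize: 11, report: { lloc: 2, cyclomatic: 1 } }
    ````
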
  • Custom User Avatar

    I really like the idea of going off the execution time rather than the time to solve. I think it gives a better idea of how good the solution is.

  • Custom User Avatar

    That sounds about right.

    It's a shame how many programmers don't know about bits, two's complement, the structure of floats, etc. Hopefully this gets some on that path...

  • Custom User Avatar

    I had started to implement an 'extra credit' feature along these lines. The idea was that you could define non-required test cases that wouldn't prevent you from completing the kata but would increase your kata score. The initial version probably wouldn't have any effect on your honor score directly, but it would affect the order in which kata solutions get shown, increasing your chance of getting up-voted. Each solution, of course, would also show a score next to it.

    This would allow for cool creative possibilities as well as alleviating many of the "how to handle edge cases" debates. Edge cases could just be added as extra-credit requirements. It would also allow additional tests to be added after beta without having to invalidate existing solutions (it would just lower their score instead).
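
    To make that a bit more concrete, here is one possible shape for it (an imagined fixture; the names are hypothetical, not the actual implementation):

    ```` javascript
    // Imagined fixture: required assertions still fail the kata, extra-credit ones only add score.
    function makeFixture() {
      var score = 0, failures = [];

      function check(actual, expected, onFail) {
        if (actual === expected) return true;
        onFail();
        return false;
      }

      return {
        assertEquals: function (actual, expected, msg) {
          check(actual, expected, function () { failures.push(msg); });
        },
        extraCredit: function (points, actual, expected, msg) {
          // Never blocks completion; just awards bonus points used for solution ordering.
          if (check(actual, expected, function () {})) score += points;
        },
        result: function () { return { passed: failures.length === 0, score: score }; }
      };
    }

    // Usage against some user-submitted multiply(a, b):
    function multiply(a, b) { return a * b; }
    var Test = makeFixture();
    Test.assertEquals(multiply(2, 3), 6, 'basic case');        // required
    Test.extraCredit(5, multiply(0.1, 3), 0.30000000000000004, // edge case: float behavior
      'handles floating point');
    console.log(Test.result()); // => { passed: true, score: 5 }
    ````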
