  • Default User Avatar

    as said by phaffywaffle above, the line jaden_case[strlen(jaden_case)] = '\0'; is wrong because it's tautological: by definition, if a string is nul-terminated, then string[strlen(string)] == '\0'. So that assignment was a no-op, and you weren't actually nul-terminating jaden_case.
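    A minimal sketch of the point above (the function name and the copy scenario are illustrative, not from the kata): the terminator must be written at a length you computed yourself, not at strlen() of the buffer you are still filling.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical example: building a fresh nul-terminated copy of `src`. */
char *copy_and_terminate(const char *src)
{
    size_t len = strlen(src);        /* length of the KNOWN-good string */
    char *out = malloc(len + 1);     /* +1 byte for the terminator      */
    if (out == NULL)
        return NULL;
    memcpy(out, src, len);           /* copy the payload bytes          */
    out[len] = '\0';                 /* terminate at the computed length;
                                      * out[strlen(out)] = '\0' here would
                                      * call strlen() on a buffer that is
                                      * not yet terminated               */
    return out;
}
```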

  • Custom User Avatar

    I cannot see such a test case. The only similar one I can find is this:

        {
            const unsigned expected[COUNTS_SIZE] = { ['a'] = 2, ['b'] = 2 };
            tester("aabb", expected);
        }
    

    Are you sure you did not modify the sample tests?

  • Custom User Avatar

    Unfortunately, the SIGSEGV comes from the sample tests, because they have a copy/paste error. The full tests do not seem to have the error, and they work. You can temporarily fix the error by replacing the first assertion (lines 50-55) with:

        if(sub_len != exp_len) {
            cr_assert_fail(
                    "< Incorrect Size >\n \nsubmitted = %zu\n \nexpected = %zu",
                                              sub_len,        exp_len
            );
        }
    

    But still, the SIGSEGV gets triggered only when your solution is incorrect, and it does indeed have a mistake: it does not handle sz_out correctly.

  • Default User Avatar

    Getting a SIGSEGV is tough to debug because, AFAIK, you can't tell what caused it until you stop doing it. Examine all the places in your code where you attempt to access memory, then consider the ways each access could go wrong.
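    As a sketch of that audit, one guarded accessor (the helper name and signature are made up for illustration) showing the check that each memory access implicitly relies on:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative helper: reading s[i] is only defined while s is non-NULL
 * and i <= strlen(s) (the terminator itself is readable). Returning '\0'
 * on violation stands in for "this access would have been a SIGSEGV
 * candidate". */
char safe_index(const char *s, size_t i)
{
    if (s == NULL || i > strlen(s))
        return '\0';
    return s[i];
}
```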

  • Default User Avatar

    You "report" the number of items in the array via the nb_uppercase pointer, but then you attempt to allocate memory for the return value through that same pointer. Instead, set *nb_uppercase = (length of array), and initialize and allocate the return array with a separate, new variable.
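    A hedged sketch of that pattern (the function name, the uppercase-collecting task, and the parameter names are assumptions for illustration, not the kata's actual signature): the count goes out through the pointer parameter, while the allocation lives in its own local variable.

```c
#include <ctype.h>
#include <stdlib.h>

/* Illustrative example: collect the uppercase letters of `s` into a new
 * string and report how many there were via the out-parameter. */
char *collect_uppercase(const char *s, size_t *nb_uppercase)
{
    size_t count = 0;
    for (const char *p = s; *p; p++)        /* first pass: count        */
        if (isupper((unsigned char)*p))
            count++;

    char *result = malloc(count + 1);       /* separate variable holds  */
    if (result == NULL)                     /* the allocation, NOT the  */
        return NULL;                        /* out-parameter            */

    size_t i = 0;
    for (const char *p = s; *p; p++)        /* second pass: copy        */
        if (isupper((unsigned char)*p))
            result[i++] = *p;
    result[i] = '\0';

    *nb_uppercase = count;                  /* report the size here     */
    return result;
}
```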

  • Default User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Default User Avatar

    fair enough, it is now specified in the initial code

  • Custom User Avatar

    The unspecified encoding is an issue, though. Without this information, it's not known how to interpret the characters in the input.

  • Default User Avatar

    chars in C are not Unicode codepoints; they are 8-bit bytes. ♠ is the Unicode codepoint U+2660 (9824 in decimal) and cannot possibly fit in an 8-bit char with any of the most widely used Unicode encodings (you can see how it is represented by various encodings here). Your code accesses the last byte of the string and compares it to the UTF-8 decimal encoding of the codepoint: this cannot possibly work for characters beyond ASCII. In this kata, the encoding is indeed UTF-8, because it is natively supported on Linux, on which Codewars runs. The string "♠" will be represented by the following bytes in memory: {0xE2, 0x99, 0xA0, 0x00}. Indexing the string like string[idx] gets only one of these bytes, not the whole codepoint. However, one critical property of UTF-8 is that it is backward-compatible with ASCII, and therefore with <string.h> functions like strcpy() and strcmp(). Code like strcmp("♠", "♥"); will work fine. Knowing this, you can craft a better solution.

    Please use the question tag to ask for help, by the way; an issue is a provable bug in the kata.

  • Custom User Avatar

    No, it's not the expected behavior, but it's also not what really happens. I would suspect that you are using printf without a newline, but it's hard to tell without seeing the code.

  • Default User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Default User Avatar

    You can use a print statement at the top of your code.

  • Default User Avatar

    Can you paste here the test that you are failing? It looks like you are trying to access an element outside the string length.