  • Custom User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Custom User Avatar

    It's about expectations.
    To cast a u32 to a char, you have to use char::from_u32(). This returns an Option, making it explicit that the result may not be a valid Unicode scalar value; when it is Some(char), you know the char is exactly what you expected based on the value held by the u32. The types match. That simply isn't the case with an i32, where half of the possible values (all the negative ones) would be catastrophically wrong. So, as a programmer, you aren't allowed to cast a signed value to char that way. But it's your choice to take a detour via u32 or smaller, because then it should be clear to you that casting a signed value to an unsigned one will yield "weird" results.
    Note that when casting to u8 first, there is no possibility of getting an invalid Unicode value, because the entire range 0-255 is valid; hence you're allowed to cast u8 as char directly.
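
    As a sketch of the difference (the 97/'a' values are just an arbitrary example):

        fn main() {
            // u32 -> char has to go through char::from_u32, which returns Option<char>
            assert_eq!(char::from_u32(97), Some('a'));
            assert_eq!(char::from_u32(0xD800), None); // surrogate range: not a valid scalar value

            // u8 -> char is a plain cast, because every value in 0-255 is a valid scalar value
            let b: u8 = 97;
            assert_eq!(b as char, 'a');
        }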

  • Custom User Avatar

    Yeah, I see your point. But couldn't one argue that if u32 as char could panic depending on the u32 value, it is no different from knowingly risking an i32 panicking in the case of a negative value, for example?

  • Custom User Avatar

    char may not be what you think it is; it does not make sense to allow casting from a signed integer to a Unicode scalar value. However, a signed integer can be cast to an unsigned integer of greater or equal size by simply reinterpreting the bits. Any unsigned value up to 32 bits can then technically be turned into a char, though it might not map into the valid Unicode range and panic, but at least as far as the type system is concerned it's possible to do.
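
    A minimal sketch of that bit reinterpretation, with arbitrary example values:

        fn main() {
            // Signed -> unsigned of the same size just reinterprets the bits
            let n: i32 = -1;
            assert_eq!(n as u32, u32::MAX);

            // Continuing on to char: u8 may be cast directly, a u32 has to be checked
            let m: i32 = 65;
            assert_eq!(m as u8 as char, 'A');
            assert_eq!(char::from_u32(m as u32), Some('A'));
        }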

  • Custom User Avatar

    Had no idea you could chain casts like that – very nice! (Though it seems redundant to not allow i32 as char?)
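
    A minimal sketch of such a chain, with an arbitrary value (the direct cast is left commented out because the compiler rejects it):

        fn main() {
            let n: i32 = 65;
            // let c = n as char;    // rejected: only u8 can be cast as char
            let c = n as u8 as char; // each individual step is a permitted cast
            assert_eq!(c, 'A');
        }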