  • Custom User Avatar

    That completely slipped my mind; I was troubleshooting something and forgot to update that! Thanks for telling me!

  • Custom User Avatar

    Thanks for the note, I completely forgot to add a test for calculate_loss.

    For @test.it("Test Neural Network Initialization"):
    These checks ensure that your network's weight matrices have the correct shapes for the architecture defined in the initialization (SimpleNeuralNetwork(3, 4, 2)).

    And lastly @test.it("Test Forward Pass"):
    This test ensures that when your neural network receives an input, it processes the input correctly through its layers and produces an output with the expected dimensions.

    I thought future people attempting this would know what a forward pass is, since it's a neural network fundamental... but I'll be happy to update the description and clarify! Thanks a lot for your take on this; I'll update the description as soon as I implement the calculate_loss test case. :)
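
    For reference, here's a minimal sketch of the interface those two tests exercise. The constructor signature, the hidden_weights/output_weights attributes and forward_pass come from the tests described above; the sigmoid activation and random initialization are only assumptions for illustration, not the reference solution.

        import numpy as np

        # Minimal sketch of the interface the shape/forward-pass tests expect.
        # The real kata may also require biases, calculate_loss and training.
        class SimpleNeuralNetwork:
            def __init__(self, input_size, hidden_size, output_size):
                # Weight matrices shaped (input_size, hidden_size) and
                # (hidden_size, output_size), matching the initialization test.
                self.hidden_weights = np.random.rand(input_size, hidden_size)
                self.output_weights = np.random.rand(hidden_size, output_size)

            def forward_pass(self, inputs):
                # Push the input through both layers; sigmoid is assumed here
                # purely for illustration. Output shape: (n_samples, output_size).
                sigmoid = lambda x: 1 / (1 + np.exp(-x))
                hidden = sigmoid(inputs @ self.hidden_weights)
                return sigmoid(hidden @ self.output_weights)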

  • Custom User Avatar

    The error margin is set to 0.55 and 5, but assert_approx_equals treats the margin as whichever of the absolute and relative tolerance is more permissive, which means the fixed tests accept ±0.55 and the random tests will accept everything.

    It also means no training is necessary to pass the kata: see this
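
    For context, this is roughly how assert_approx_equals treats the margin (a sketch of the usual codewars_test behaviour, not the exact source):

        # Sketch: the difference is divided by the larger of |actual|, |expected|
        # and 1, so the margin acts as an absolute tolerance for small values and
        # a relative tolerance for large ones, whichever is more permissive.
        def approx_equal(actual, expected, margin=1e-9):
            div = max(abs(actual), abs(expected), 1)
            return abs((actual - expected) / div) < margin

        print(approx_equal(0.5, 1.0, margin=0.55))   # True: |0.5 - 1.0| / 1 = 0.5 < 0.55
        print(approx_equal(123.0, 400.0, margin=5))  # True: relative error ~0.69 < 5

    With a margin of 5 the scaled error can never reach 5, so every pair of values passes and the random tests cannot fail.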

  • Custom User Avatar
        @test.it("Test Neural Network Initialization")
        def _():
            nn = SimpleNeuralNetwork(3, 4, 2)
            test.assert_equals(len(nn.hidden_weights), 3)
            test.assert_equals(len(nn.hidden_weights[0]), 4)
            test.assert_equals(len(nn.output_weights), 4)
            test.assert_equals(len(nn.output_weights[0]), 2)
    
        @test.it("Test Forward Pass")
        def _():
            nn = SimpleNeuralNetwork(3, 4, 2)
            inputs = np.random.rand(1, 3)
            output = nn.forward_pass(inputs)
            test.assert_equals(output.shape, (1, 2))
    

    These tests check specific implementation details that were not mentioned in the description at all.

    Meanwhile, calculate_loss is not tested at all. So it's very unclear what is actually meant to be tested.
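
    For illustration, a calculate_loss test could look something like this; the mean-squared-error loss and the signature calculate_loss(predictions, targets) are assumptions, since the description doesn't pin either down:

        import numpy as np
        import codewars_test as test

        @test.it("Test Calculate Loss")
        def _():
            nn = SimpleNeuralNetwork(3, 4, 2)
            predictions = np.array([[0.2, 0.8]])
            targets = np.array([[0.0, 1.0]])
            # Expected value under an MSE assumption: mean of squared differences = 0.04
            expected = np.mean((predictions - targets) ** 2)
            test.assert_approx_equals(nn.calculate_loss(predictions, targets), expected, margin=1e-6)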

  • Custom User Avatar

    Lots of test cases added and a small prompt change; hopefully this should put people through hell :)

  • Custom User Avatar

    I see, I'll have that fixed soon!

  • Custom User Avatar

    This comment is hidden because it contains spoiler information about the solution

  • Custom User Avatar

    No random tests. Also percentage is always 100 in the current tests, so the grade_percentage method is not properly tested.
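
    A random test that actually varies the percentage could be sketched like this; the call grade_percentage(scored, total) and the reference formula are hypothetical placeholders, since only the method name is known here (in the kata it is a method, called as a free function below for brevity):

        import random
        import codewars_test as test

        # grade_percentage is assumed to be provided by the submitted solution.
        def reference_percentage(scored, total):
            return scored / total * 100

        @test.it("Random tests: grade_percentage")
        def _():
            for _ in range(50):
                total = random.randint(1, 200)
                scored = random.randint(0, total)  # yields percentages other than 100
                expected = reference_percentage(scored, total)
                test.assert_approx_equals(grade_percentage(scored, total), expected)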

  • Custom User Avatar

    Alright, thanks, I'll do that once I get home on my PC.

  • Custom User Avatar

    You can unpublish the kata; then it is private.

  • Custom User Avatar

    Sorry, I just made this for the first-years at my trade school. Do you have any idea how I could make these private?

  • Custom User Avatar

    Trivial kata: simple class inheritance with some properties and string formatting has been done before.

  • Custom User Avatar

    Trivial kata: finding an item in a list (in a list) has been done before.

  • Custom User Avatar

    In addition, the unit of noise level should be specified: the standard unit for noise level is the decibel, which is a logarithmic scale.
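
    To make the ambiguity concrete, here's a small illustration (the numbers and the interpretation of "reduction" are made up, not taken from the kata):

        import math

        # Decibels are logarithmic: L = 10 * log10(I / I0), so "reduce by 20"
        # gives very different answers depending on whether 20 is dB or percent.
        def intensity_to_db(intensity, reference=1.0):
            return 10 * math.log10(intensity / reference)

        source_db = 90.0
        print(source_db - 20)                                  # 70.0 dB if the reduction is 20 dB
        print(intensity_to_db(10 ** (source_db / 10) * 0.80))  # ~89.0 dB if it means "remove 20% of the intensity"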

  • Custom User Avatar

    The obstacles in the random tests are in a completely different format: in the sample tests it is

    [
        { x1: 1, y1: 1, x2: 3, y2: 3, reduction: 20 }
    ]
    

    But in the random tests it is something like

    [
      {
        obstacles: [
          { x1: 1, y1: 3, x2: 6, y2: 5, reduction: 40 },
          { x1: 2, y1: 2, x2: 6, y2: 9, reduction: 32 }
        ],
        source: { x: 1, y: 1 },
        reports: [
          { x: 0, y: 0, noiseLevel: 0 },
          { x: 1, y: 0, noiseLevel: 0 },
          { x: 2, y: 0, noiseLevel: 0 },
          { x: 3, y: 0, noiseLevel: 0 },
          ...
        ]
      },
      ...
    ]
    