  • Custom User Avatar

    The error margin is set to 0.55 and 5, but assert_approx_equals accepts the maximum of the relative and absolute error, which means the fixed tests accept ±0.55 and the random tests will accept practically everything.

    It also means no training is necessary to pass the kata: see this
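
    For context, a rough sketch of how the Codewars Python helper appears to treat the margin (an approximation, not the exact implementation): the difference is scaled by max(|actual|, |expected|, 1), so a single margin acts as whichever of the absolute or relative tolerance is more forgiving.

        def approx_equals_sketch(actual, expected, margin=1e-9):
            # Approximate model of codewars_test.assert_approx_equals:
            # the difference is divided by max(|actual|, |expected|, 1),
            # so `margin` works as both an absolute and a relative tolerance.
            div = max(abs(actual), abs(expected), 1)
            return abs(actual - expected) / div < margin

        # margin=5 means a 500% relative error still passes:
        print(approx_equals_sketch(0, 33.3, margin=5))       # True
        # margin=0.55 accepts differences up to 0.55 when both values are <= 1:
        print(approx_equals_sketch(0.3, 0.8, margin=0.55))   # True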

  • Custom User Avatar
        @test.it("Test Neural Network Initialization")
        def _():
            nn = SimpleNeuralNetwork(3, 4, 2)
            test.assert_equals(len(nn.hidden_weights), 3)
            test.assert_equals(len(nn.hidden_weights[0]), 4)
            test.assert_equals(len(nn.output_weights), 4)
            test.assert_equals(len(nn.output_weights[0]), 2)
    
        @test.it("Test Forward Pass")
        def _():
            nn = SimpleNeuralNetwork(3, 4, 2)
            inputs = np.random.rand(1, 3)
            output = nn.forward_pass(inputs)
            test.assert_equals(output.shape, (1, 2))
    

    These tests are checking specific implementation details that were not mentioned in the description at all.

    Meanwhile, calculate_loss is not tested at all, so it's very unclear what is actually meant to be tested.
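
    For illustration, calculate_loss could be covered with something like the sketch below (the signature is only a guess, since the description never gives one):

        import numpy as np
        import codewars_test as test

        @test.it("Test Loss Calculation")
        def _():
            nn = SimpleNeuralNetwork(3, 4, 2)
            target = np.array([[0.0, 1.0]])
            # Hypothetical signature: calculate_loss(predicted, target).
            # A perfect prediction should give (near) zero loss.
            test.assert_approx_equals(nn.calculate_loss(target, target), 0)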

  • Custom User Avatar

    No random tests. Also, the percentage is always 100 in the current tests, so the grade_percentage method is not properly tested.

  • Custom User Avatar

    Trivial kata: simple class inheritance with some properties and string formatting has been done before.

  • Custom User Avatar

    Trivial kata: finding an item in a list (nested in another list) has been done before.

  • Custom User Avatar

    The obstacles in the random tests are in a completely different format: in the sample tests it is

    [
        { x1: 1, y1: 1, x2: 3, y2: 3, reduction: 20 }
    ]
    

    But in the random tests it is something like

    [
      {
        obstacles: [
          { x1: 1, y1: 3, x2: 6, y2: 5, reduction: 40 },
          { x1: 2, y1: 2, x2: 6, y2: 9, reduction: 32 }
        ],
        source: { x: 1, y: 1 },
        reports: [
          { x: 0, y: 0, noiseLevel: 0 },
          { x: 1, y: 0, noiseLevel: 0 },
          { x: 2, y: 0, noiseLevel: 0 },
          { x: 3, y: 0, noiseLevel: 0 },
          ...
        ]
      },
      ...
    ]
    
  • Custom User Avatar

    The description says that all noise levels are non-negative whole numbers.

    In the actual tests:

    should calculate the correct noise level at a point
    
    The noise level calculation is incorrect: expected 0 to equal 33.333333333333336
    

    33.333333333333336 is not an integer.

    The description also never explains how noiseLevelAtPoint should calculate its value. Assuming the inverse square law and an initial noise level of 100, the result would be 100 / 2, not 100 / 3.
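
    A minimal sketch of the assumed inverse-square model (Python, purely to show the arithmetic; the kata never states its formula):

        def noise_level(source, point, initial=100):
            # Assumed model: initial level divided by the squared distance,
            # with the source itself kept at the initial level.
            d2 = (point[0] - source[0]) ** 2 + (point[1] - source[1]) ** 2
            return initial if d2 == 0 else initial / d2

        # One diagonal step from the source gives squared distance 2,
        # so 100 / 2 = 50, while the reference tests appear to expect 100 / 3.
        print(noise_level((1, 1), (2, 2)))  # 50.0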

  • Custom User Avatar

    The tests couple noiseLevelAtPoint and locateNoisyStudent: noiseLevelAtPoint is only tested in the actual tests, but generateReports depends on it, so if noiseLevelAtPoint is implemented incorrectly, locateNoisyStudent will receive invalid input, making the sample tests useless. This is a very inappropriate design for testing.