Coping with NP-Hardness #1: 2-SAT
Alright, so I've been working on this kata for a huge amount of time, and all this time I had an error because I misunderstood one of the important aspects lol. Because of that, I spent about... 3 months on this kata, even though I was really close to the right answer? But then, I also had my university studies, and that error was completely on my side :s
Apart from that, this kata is really good; I hope its issues get cleared up soon and it gets out of Beta :D
thanks <3
also nice job my guy :-D
(your description is a real nightmare.... :s )
For example, the notation NOT <=> -v is given at the end while it's used in the very first lines. You should split the description with some clear titles, to help you organize the "story" and your mind. For example: task / context / inputs / outputs / constraints (maybe context, then task).
looks like this should actually be :
no?
No, the output is a set containing the chosen true or false values for each variable. Since the output is a set, an ordered list of T/F is insufficient; you need to know that "a" is associated with False, and I do not guarantee that the variables will come in a flawless ordered list starting at A. See the example tests for working examples; the outputs are compared to valid static answers.
That's the thing: I didn't even open the trainer because, from the description alone, I don't get what I'm supposed to do.
And your answer is worrying me a bit more: it now sounds like the output should be a dict... Is it?
(hint: I won't open the trainer to get the answer, because this kind of info should be provided in the description ;) )
changes hath been made
art thou satisfied
Inconsistent performance requirements across languages: max 4x2 in Python vs 25+ lengths in Haskell.
I'm not touching the Haskell tests. PistachioCake wya fam
( Haskell )
Is this example test correct? I think `Just []` would, or should, also be a correct answer. ( I haven't checked the submit tests. It's possible it's in there also. )
It looks like the Python original uses a legacy Python version, a legacy testing framework, inadequate test headers, and very limited testing.
Also, as suggested elsewhere, please mention in the description that variable names will always be single letters.
There's a world to be gained there.
Note that I don't speak Python, so some of the above may be just plain wrong. But I sincerely doubt all of it is.
Depending on performance requirements, my estimated rating might be anything from 6 to 3. It'd be worth it to specify any such requirements, or the lack thereof. That would make it easier to settle on a design before starting the implementation.
python 3.6 is close enough to 3.8 that I'm not going to change it, the functionality is all there.
the variables don't necessarily have to be single letters, that shouldn't matter to the algo
the optimal algo is O(n), and the tests are intended to stress test, so that's the only viable option
but do they?
If variables don't have to be single letters, there should be tests with longer names. Maybe it shouldn't matter, but it does, esp. in Haskell, and unless you consider parsing variable names essential to the kata, I'd suggest specifying single letter names, because that's what's tested now anyway.
u rite u rite, apparently past me didn't care that the full suite passed in 0.6s, will fix brb
Tests now run in 8s on my linear algo, the tests are described on the front page, and I'm clear on the test input parameters. No promises for the Haskell translation, I'm not touching that. Am I missing anything?
Holy mackerel
Ehm, yes, you're missing something. Tests should never take 8 seconds minimum, unless you want to force micro-optimisations ( which is generally a Bad, Bad Idea™ ).
If you want tests to enforce a certain Big-O complexity, let's say `O(n)` ( not defining `n` for now ), you want tests that succeed for any `O(Cn)` solution with any reasonable `C`. You want to fail any `O(Cn^2)` solution. That means `n` must be orders of magnitude bigger than `C` can reasonably be. But to allow any `O(Cn)` solution with a reasonable `C` to pass, tests should not take 8 seconds for them. Enforcing relatively low values for `C` means asking for micro-optimisations, and asking for micro-optimisations ultimately leads to kata that have only a single solution, and to frustrated solvers. Neither of which I would consider A Good Thing.

If you want to test for `O(n)` as opposed to `O(n log n)`, in practice it can't be done; `C` can have more variation than `log n`.

I think I saw `O(n^2)` solutions in Python. You should be able to fail those with tests that run in milliseconds for an `O(n)` solution if `n` is big enough. That means larger tests, but fewer of them ( and, if they're random, probably not too much variation in length ). Too many smaller tests are less discriminating and take longer to run for compliant solutions!

Finally, if you want a certain time complexity from your succeeding solutions, specify it in the description and add a `performance` tag. Different languages may need different numbers and / or sizes of actual tests to have consistent difficulty overall, so specifying absolute numbers makes translating to, and possibly solving in, slower languages unnecessarily and inconsistently hard.

we have language-specific conditional blocks for that
Still makes it "unnecessarily and inconsistently hard".
( Also, I hate hate hate merge conflicts. )
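To make the test-sizing argument above concrete, here is a rough sketch of the kind of single large random test it points toward. The variable-name scheme, the sizes, and the `['a', '-b']` clause format with `-` for negation are assumptions for illustration, not the kata's actual parameters:

```python
import random

def random_instance(n_vars=50_000, n_clauses=100_000):
    """Build one large random 2-SAT instance as a list of two-literal clauses."""
    variables = [f"v{i}" for i in range(n_vars)]
    clauses = []
    for _ in range(n_clauses):
        a, b = random.sample(variables, 2)
        lit_a = a if random.random() < 0.5 else "-" + a   # '-' marks negation
        lit_b = b if random.random() < 0.5 else "-" + b
        clauses.append([lit_a, lit_b])
    return clauses
```

A linear-time solver finishes an instance like this in well under a second while a quadratic one does not, so a single generous time limit already discriminates. Note that purely random instances at a high clause-to-variable ratio will often be unsatisfiable, which is the generator-balance problem mentioned below; planting a known satisfying assignment for some tests is one common way around that.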
@B4B: Added python-specific description of inputs.
@Johan: Modified python tests to run in 2s. I need to run a truckload of tests, since I don't fully trust my test case generators. Some runs, all the test batches of a certain size fail, especially at higher volumes of clues. The problem here is that there are two outputs: either a list of valid values, or a flat FALSE for when no solution exists. And there is a chance that in some cases, if I only have 10 tests, everything will be FALSE, leading to inefficient solutions winning, or my clue graph will be too interconnected, leading to inefficient solutions winning. My current compromise is decent enough.
Also, yes, I do expect some performance enhancements, but not that much - hence my allowance of 5x as slow as the reference solution. My code isn't even that optimized.
Not really worried about linearithmic solutions, more concerned about quadratics. Not even sure how you would tackle this linearithmically.
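For reference, the linear-time approach usually meant in this discussion is the classic implication-graph construction with strongly connected components. The sketch below is not the kata's reference solution; it assumes literals are strings with a `-` prefix for negation and returns a dict from variable to bool, or `False` when unsatisfiable:

```python
from collections import defaultdict

def solve_2sat(clauses):
    """Clauses are lists of one or two literals, e.g. [['a', '-b'], ['b']]."""
    def neg(lit):
        return lit[1:] if lit.startswith('-') else '-' + lit

    # Implication graph: a clause (x OR y) yields (not x -> y) and (not y -> x).
    graph, rgraph, nodes = defaultdict(list), defaultdict(list), set()
    for clause in clauses:
        x, y = clause if len(clause) == 2 else clause * 2   # unit clause (x) == (x OR x)
        for a, b in ((neg(x), y), (neg(y), x)):
            graph[a].append(b)
            rgraph[b].append(a)
        nodes.update((x, y, neg(x), neg(y)))

    # Kosaraju pass 1: record finish order with an iterative DFS on the graph.
    order, visited = [], set()
    for start in nodes:
        if start in visited:
            continue
        visited.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            for child in it:
                if child not in visited:
                    visited.add(child)
                    stack.append((child, iter(graph[child])))
                    break
            else:
                order.append(node)
                stack.pop()

    # Kosaraju pass 2: label components on the reversed graph in decreasing
    # finish order; component ids then increase along topological order.
    comp, cid = {}, 0
    for start in reversed(order):
        if start in comp:
            continue
        comp[start] = cid
        stack = [start]
        while stack:
            for child in rgraph[stack.pop()]:
                if child not in comp:
                    comp[child] = cid
                    stack.append(child)
        cid += 1

    # Unsatisfiable iff a variable shares a component with its negation.
    # Otherwise a variable is True iff its component comes later, topologically,
    # than its negation's component.
    assignment = {}
    for lit in nodes:
        if lit.startswith('-'):
            continue
        if comp[lit] == comp[neg(lit)]:
            return False
        assignment[lit] = comp[lit] > comp[neg(lit)]
    return assignment
```

Brute-force or repeated re-scanning of the clause list per assignment is where the quadratic behaviour mentioned above tends to come from.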
( Haskell )
Initial code is incorrect: the `module` line should have `where` instead of `with`.

Fixed.
You have to start somewhere, but we're really waiting on the other `Issue`, the one that makes the kata unsolvable ..

Haskell random tests:
I'm running into the same problem. Looks like the random tests expect a single solution, instead of accepting any valid one.
You can't test against a reference solution; you have to do property-based testing here.
To see that in action, look at the testing of my Performance LCS kata in Haskell.
Not only a single solution, but an incorrect one as well (not containing every variable).
If missing variables don't matter, one could argue that is not incorrect. It would deserve a description mention that that's acceptable though.
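A property-based check along those lines, sketched in Python terms since the original version is Python. The `-`-prefixed literal format, the answer as a mapping (or `False` for "unsatisfiable"), and the `reference_is_satisfiable` helper are all assumptions for illustration: the idea is to accept any assignment that satisfies every clause, treat missing variables as `False`, and consult a reference solver only when the submission claims there is no solution.

```python
def literal_value(lit, assignment):
    # Variables missing from the assignment default to False.
    if lit.startswith('-'):
        return not assignment.get(lit[1:], False)
    return assignment.get(lit, False)

def check_answer(clauses, answer, reference_is_satisfiable):
    if answer is False:
        # The submission claims "no solution": confirm with the reference solver.
        return not reference_is_satisfiable(clauses)
    # Any assignment is accepted, as long as every clause has a true literal.
    return all(any(literal_value(lit, answer) for lit in clause)
               for clause in clauses)
```

This removes the dependence on the reference solution for satisfiable cases entirely.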
I believe this has been fixed now. If you provide a `Just answer` value, the tests will validate this solution; providing a `Nothing` value will still run the reference solution to check that there truly is no solution.

The first iteration of the Haskell translation had 30 random tests, but for whatever reason (perhaps due to upgrading to `GHC 8.8.4`?), I've had to lower that. This might also be fixed by optimizing the reference solution, and I may try that later.

If a variable is not assigned a value in your answer, the tests will assume any usage (whether it be `-x` or `x`) is `False`. I can't speak to how the Python tests approach that, so I'm hesitant to edit the description.

Please verify it works, and resolve the issue if so.
Seems to work. I don't have a working solution yet, so I'm not entirely sure, so I don't feel I can resolve just yet.
I am somewhat surprised by the fact you "don't know how the Python tests approach [this]". Did you go entirely off the description? That should be workable, but can lead to problems like these when the description has problems in the original version. This might very well be an issue with the description. ( I would also like to see a mention that variable names will be single characters only, ever. This may matter more in Haskell than in Python. )
The reference solution seems to have problems. Test runtimes vary wildly, and my valid solution sometimes even times out.
When using my own as the reference solution, running `100` random tests consistently runs in negligible time ( see the fork ). Compilation takes slightly under three seconds; that's normal for `GHC 8.8` ( `GHC 8.2` compiled in slightly under two seconds ).

Would you prefer I leave this issue open, or would you like a new one, for the reference solution? Because that still is an issue.
Johan, I appreciate your thoroughness :p
My Haskell and Python solutions are similar algorithms, and I can see Python solutions that are effectively brute-forcing the solution space. On my machine, testing a similar Haskell algorithm with no reference solution and the current test setup (i.e. testing the runtime of these algorithms) ran for three minutes before I stopped it. For 100 tests, your solution runs in ~0.04 seconds, while mine runs in ~61.
The current Haskell test fixture is much more stringent than Python's. I'm using `Test.QuickCheck` to test it, while the Python tests are custom rolled and much smaller. I'm tempted to change the reference solution to yours for the time being, and then try to update the Haskell tests to be more like the Python ones. I'll resolve this issue once I rework the tests. How's that sound?

I just beefed up the python tests. Can you try again, see if you fail?
Also @Johan, when you say the reference solution, are you referring to Python or Haskell? Not sure whether I need to look at my code again.
This thread is about Haskell.
Haskell: my solution passes the fixed tests, but fails at the random tests. There probably are some edge case tests that should be added, but I'm not sure whether it's the reference solution or my solution that is incorrect, and it's hard to tell when the random tests show the input in a different format: `[Right (('f',False),('h',True))]`.

This should be fixed now. I'm not sure what kinds of edge cases need to be added, and I haven't messed with `Test.QuickCheck` before, so I'm not entirely sure how to fix that issue.

Haskell: the tests should be made compatible with GHC 8.8.4.
Done. I apologize for my inexperience, and I massively appreciate your help and feedback with everything!
Haskell translation submitted, please review and approve.
Done. Gracias, senor.
If you have any unapproved translations, please make sure they're in `GHC 8.8`. This one should have been as well.

There were two kata left in `8.2` but not `8.8`, and I am working on those two. Now there are three. :[

If you can't enable `8.8` without changes, you probably have a `Monoid` definition which is missing a `Semigroup` definition. ( I haven't yet solved this one, so I can't see the testing, and I can't see if you have any pending translations. )

Question to both authors: how do we know the performance requirements are consistent between versions? What are the intended performance requirements of the original?
My feeling is that a correct, efficient implementation should have no performance issues at all. An inefficient one ( hey, I wrote one initially! they're definitely possible ) will have insurmountable problems with even a limited amount of tests.
Hi, I found a couple of issues:
Should be:
A better link for the 2-CNF might be here. But even then that article uses the notation from predicate logic, which does not marry up with your implementation. So examples will still be needed.
test.expect(is_satisfiable([['a']]) is ['a'])
The sample tests are incorrect, there's no way to return the same object as the one that's going to be expected.
Works in editor. What changes should be made? Should these be == instead?
IDK how you test it, but it can't work anywhere with any mutable object (and care should be taken with immutable objects). Yes, it should be `==`.

ah, I see. fixed.
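To spell out the point about `is`: it checks object identity, while `==` compares values, so a freshly constructed list can never be `is`-equal to the list literal written in the test:

```python
result = ['a']
print(result is ['a'])   # False: two distinct list objects
print(result == ['a'])   # True: equal contents

# so the sample test needs value equality:
# test.expect(is_satisfiable([['a']]) == ['a'])
```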