Do you need to handle errors? For example, if the array has no outliers, has more than one, or is smaller than 3 elements?
Going over the "very large array" twice takes twice as long and allocates a second array's worth of memory. If the array were billions of elements long, the 10 lines of code you saved would cost a lot of money.
this code may be impressive to newbies exclusively.
Definitely not best practice: it uses var instead of let/const, skips the strict equality operator, and filters the array twice.
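For comparison, the double-filter approach can be collapsed into a single pass. This is a sketch in Java (assuming this thread is about the classic parity-outlier kata; the class and method names here are my own, not the kata's official signature):

```java
public class OutlierFinder {
    // Single-pass parity-outlier finder: scans the array once instead of
    // filtering it twice, returning as soon as the outlier is provable.
    static int findOutlier(int[] values) {
        int evenCount = 0, oddCount = 0;
        int lastEven = 0, lastOdd = 0;
        for (int v : values) {
            if (v % 2 == 0) { evenCount++; lastEven = v; }
            else            { oddCount++;  lastOdd = v; }
            // Once one parity has appeared at least twice while the other
            // has appeared exactly once, the minority value is the outlier.
            if (evenCount >= 2 && oddCount == 1) return lastOdd;
            if (oddCount >= 2 && evenCount == 1) return lastEven;
        }
        throw new IllegalArgumentException("no single outlier found");
    }

    public static void main(String[] args) {
        System.out.println(findOutlier(new int[]{2, 4, 0, 100, 4, 11, 2602, 36})); // prints 11
    }
}
```

Because it can return early, it often touches far fewer than n elements, and it never allocates the intermediate arrays that two filter calls would.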
This solution is at least 8 years old.
Stop using var, please; it's misleading.
Thanks, I just started reading Grokking Algorithms to become more competent.
There is no such thing as 'O(2n)' in big-O notation. There are other notations that keep the constant multiplier in front of the function, such as 'tilde notation'; see https://www.reddit.com/r/compsci/comments/1bl0q8/tilde_notation/. But you are right, this solution is quite inefficient.
getOrDefault(key, defaultValue) is a utility method on Map that tries to retrieve the value stored under the key given as the first argument. When the key exists, it returns that value, and you just increment the count by one; when the key does not exist, it returns the default value given as the second argument, which in our case is 0, and you increment that and put it back into the map as a new key-value pair. This lets you completely skip the if statement with the containsKey(key) call. Check my solution and look for the imperative-style method if you want the exact code.
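The pattern described above can be sketched like this (a minimal counting example of my own, not the commenter's exact solution):

```java
import java.util.HashMap;
import java.util.Map;

public class CountDemo {
    // Count occurrences using getOrDefault instead of a containsKey check.
    static Map<Integer, Integer> countOccurrences(int[] values) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int v : values) {
            // Absent key -> default 0; present key -> current count.
            // Either way, a single put covers both branches.
            counts.put(v, counts.getOrDefault(v, 0) + 1);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countOccurrences(new int[]{1, 2, 2, 3, 2})); // {1=1, 2=3, 3=1}
    }
}
```

Since Java 8 there is also Map.merge, e.g. counts.merge(v, 1, Integer::sum), which expresses the same update in one call.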
There's a considerable performance improvement over the if statements as well, but only on larger datasets, which doesn't really apply to this kata; still, it's considered good practice. Understanding why requires deeper knowledge of the JVM and how the JIT works in Java; you can read about branch prediction in Java if you want to know more. In my solution you can see both the imperative approach and a declarative one using the Java Streams API. Which of those will be more performant on large datasets I can't really say, because I didn't run any benchmarks on them, but if I had to guess I would say streams with parallelization via parallelStream would be faster.
When it comes to performance, remember to judge it only if it's actually a problem for you. For a small application most approaches are absolutely valid, but when you encounter bottlenecks or bad performance in a larger application, then consider optimizing.
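The declarative Streams variant mentioned above would look roughly like this (my own sketch of the standard groupingBy/counting idiom, not the commenter's actual code):

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class StreamCountDemo {
    public static void main(String[] args) {
        int[] values = {1, 2, 2, 3, 2};
        // Declarative counting: group the boxed values by identity and
        // count each group, the Streams analogue of the getOrDefault loop.
        Map<Integer, Long> counts = Arrays.stream(values)
                .boxed()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
        System.out.println(counts.get(2)); // prints 3
    }
}
```

Swapping Arrays.stream(values).boxed() for Arrays.stream(values).parallel().boxed() gives the parallelized version; whether that actually wins depends on the dataset size, as the comment above notes.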
How does it work?
Lol, the author says the array can be VERY big, yet an O(2n) solution got top place when a single-pass solution takes hardly any more time to come up with :)
I'm not saying mine is better, just asking a question, mate; I'm here to learn.
If we want to argue best practices, where's your validation?
Bruh have you seen yours lol
This is way too inefficient to be in the top solutions, isn't it?
Inefficient! But succinct. But overrated.