How to find duplicate lines in a file and count how many times each line was duplicated

Sometimes we need to find duplicate lines in a file and count how many times each line was duplicated. We can do that as follows:

find . -type f | xargs fgrep -h "Notice" | sort | uniq -c | sort -nr

. for find means the search starts in the current directory.
-type f for find means only regular files are taken into account.
-h for fgrep means suppress the file name prefix on each match, so identical lines from different files can be counted as duplicates.
The first sort groups identical lines together, because uniq only collapses adjacent duplicates.
-c for uniq means prefix lines by the number of occurrences.
-n for sort means compare according to string numerical value.
-r for sort means reverse the result of comparisons.
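As a quick sanity check, the pipeline can be tried on a couple of throwaway files (the file names and log lines below are invented for this example):

```shell
# Create two sample files containing some repeated "Notice" lines
tmp=$(mktemp -d) && cd "$tmp"
printf 'Notice: low disk\nNotice: low disk\nInfo: ok\n' > a.log
printf 'Notice: low disk\nNotice: reboot needed\n' > b.log

# Count how many times each matching line occurs, most frequent first
find . -type f | xargs fgrep -h "Notice" | sort | uniq -c | sort -nr
```

This prints "Notice: low disk" with a count of 3 and "Notice: reboot needed" with a count of 1.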
Another way to get the same result:

fgrep -Rh "Notice" . | sort | uniq -c | sort -nr

-R for fgrep means read all files under each directory, recursively.
-h for fgrep means omit the file name prefix from each match.
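If the goal is simply to count every duplicated line in a single file, no grep filter is needed at all; the same sort/uniq combination does the job (file.txt is a placeholder name for this example):

```shell
# Sample file with some repeated lines
tmp=$(mktemp -d)
printf 'a\nb\na\na\nb\nc\n' > "$tmp/file.txt"

# sort groups identical lines, uniq -c counts each group,
# and the final sort -nr puts the most frequent lines first
sort "$tmp/file.txt" | uniq -c | sort -nr
```

Here "a" is reported 3 times, "b" 2 times, and "c" once.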

Sources of information:
find-duplicate-lines-in-a-file-and-count-how-many-time-each-line-was-duplicated (visited 2015.02.23)