Time for a confession. I collect a lot of software. I have one disk filled with public-domain software. Some directories are "collections" like the Sun User Group tapes. Because of this, it is likely that I have the same program in two different directories. To prevent this waste of space, I create an index of the directories and the paths needed to reach them. If I have two directories with the same name, I'd like to know about it; I might be able to delete one of them. A simple way to search for redundant directories is with the following command:
find . -type d -print | \
    awk -F/ '{printf("%s\t%s\n", $NF, $0);}' | \
    sort
[You might want to make this into an alias or function (10.1). --JP]

The find (17.1) command prints out all directories. The awk (33.11) command uses the slash (/) as the field separator; NF is the number of fields, and $NF is the last field. $0 is the awk variable for the entire line. The output would tell you where all of the directories named misc are located:
misc    ./X11/lib/X11/fonts/misc
misc    ./misc
misc    ./src/XView2/contrib/examples/misc
misc    ./src/XView2/fonts/bdf/misc
misc    ./src/XView2/lib/libxvin/misc
misc    ./src/XView2/lib/libxvol/misc
misc    ./src/XView2/misc
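Scanning that output by eye works for a small tree, but the pipeline can do the spotting for you. A minimal sketch, using only the standard sort and uniq utilities, that prints just the directory names occurring more than once under the current directory:

```shell
# Print each directory's basename, one per line,
# then let "uniq -d" keep only the names that repeat.
find . -type d -print |
awk -F/ '{ print $NF }' |
sort |
uniq -d
```

Once you know a name is duplicated, rerun the original command and grep for that name to see the full paths.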
This could be converted into a shell script that takes arguments. If no arguments are specified, I want it to default to the argument . (dot):

#!/bin/sh
# dir_path - list each directory name and the path to it
# usage: dir_path [directory ...]
find ${*-.} -type d -print |
awk -F/ '{ printf("%s\t%s\n", $NF, $0); }' |
sort
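The only new trick here is ${*-.}, the Bourne shell's use-a-default expansion: if any arguments were given, it expands to all of them; otherwise it expands to the literal dot. A quick sketch of the behavior (show_args is a hypothetical name, not part of the script above):

```shell
#!/bin/sh
# show_args - demonstrate ${*-.} (hypothetical demo script)
# With no arguments this echoes "."; otherwise it echoes them all.
echo ${*-.}
```

Running show_args with no arguments prints a lone dot; running show_args /usr /tmp prints /usr /tmp.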
[You could also use this great idea for finding duplicate files. Change the -type d to -type f. If you (or all the users on your system) want to use this a lot, run dir_path nightly with cron (40.12) or at (40.3). Save the output to a "database" file. Use the speedy look command (27.18) to search the database file. Article 17.19 shows another find database. --JIK, JP]
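As a sketch of that nightly setup (the database pathname and the install location of dir_path below are assumptions, not anything this article fixes):

```
# crontab fragment: rebuild the directory database at 4:15 a.m.
# (both path names here are hypothetical)
15 4 * * * /usr/local/bin/dir_path / > /usr/local/lib/dir_database 2>/dev/null
```

Because dir_path already sorts its output, a command like "look misc /usr/local/lib/dir_database" finds every directory named misc almost instantly: look does a binary search on the sorted file instead of reading it end to end the way grep would.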