
Linux and Solaris Recipes for Oracle DBAs

CHAPTER 4
Creating and Editing Files
If you want to survive in a Linux or Solaris environment, you have to be adept with at least one command-line text editor. DBAs use text editors on a daily basis to manipulate database initialization files, create scripts to automate tasks, modify OS scheduler jobs, and so on. In these environments, you won’t be an efficient database administrator unless you’re proficient with a text editor.
Dozens of text editors are available; entire books have been written about them. The three most common command-line text editors in use are vi, vim, and emacs. This chapter focuses on the vi text editor (pronounced “vee-eye” or sometimes “vie”). We chose to concentrate on this editor for the following reasons:
  • The vi editor is universally available on all Linux/Solaris systems.
  • Linux/Solaris technologists tend to use vi more than any other editor.
  • You can’t consider yourself a true geek unless you know vi.
The goal of this chapter is to give you enough information to efficiently use the vi editor. We don’t cover every facet of vi; instead, we focus on the features that you’ll use most often to perform daily editing tasks. When you first use vi, you might wonder why anybody would use such a confusing text-editing tool. To neophytes, many aspects of vi initially seem counterintuitive.
Not to worry; with some explanation, examples, and hands-on practice, you’ll learn how to use this editing tool to efficiently create and manipulate text files. This chapter contains more than enough material to get you started with vi. The problems described are the most commonly encountered editing tasks.
If you are new to vi, we strongly encourage you to not just read the solutions in this chapter but to actually start up a vi session and practice entering the commands shown in the examples. It’s like riding a bicycle; you can’t learn how to ride until you physically get on the bike and attempt to go forward. It’s the same with vi. You can’t just read about how to use this tool; you have to type commands before the learning takes place.
Note  All the solutions and examples in this chapter also work nearly identically with the vim editor. The vi-improved (vim) editor provides many enhancements to vi. On many systems you may be using vim without realizing it, because the vi command is often defined as an alias or a soft link that points to the vim executable.
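One quick way to tell is to inspect how the vi command resolves on your server. For example, the following command (assuming a Bash shell; the exact path and output vary by platform, and vi may not be a link at all) shows whether vi is a symbolic link that points to the vim executable:
$ ls -l $(which vi)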
4-1. Creating a File
Problem
You need to create a text file.
Solution
To create a file named foo.txt, run the vi utility from the command line, as shown here:
$ vi foo.txt
You should now see a blank screen with several tilde (~) characters displayed in the far-left column of the screen. Within a file being edited by vi, the ~ character indicates a line that has no text in it. Depending on your version of vi, you might see the name of your file in the bottom-left corner:
"foo.txt" [New file]
To enter text, first type i. The lowercase i puts the editor in insert mode. You can now enter text into the file. To save your changes and exit from vi, first press Escape to get out of insert mode and into command mode, and then type :wq for write and quit:
:wq
You should now be back at the OS command prompt. You can verify that the new file has been created with the ls command:
$ ls foo.txt
foo.txt
Note  See recipe 4-2 to learn how to move your cursor around within a file.
How It Works
The most common way to start vi is to provide it with a file name to operate on:
$ vi <filename>
Several options are available when first invoking vi. Table 4-1 lists some of the more commonly used command-line choices.
Table 4-1. Some Helpful vi Command-Line Startup Options
Option
Action
vi
Starts editing session in memory.
vi <file>
Starts session and opens the specified file.
vi <file>*
Opens first file that matches the wildcard pattern. Use :n to navigate to the next matched file.
view <file>
Opens file in read-only mode.
vi -R <file>
Opens file in read-only mode.
vi -r <file>
Recovers file and recent edits after abnormal abort from editing session (such as a system crash).
vi +n <file>
Opens file at specified line number n.
vi + <file>
Opens file at the last line.
vi +/<pattern> <file>
Opens file at first occurrence of specified string pattern.
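For example, to open the foo.txt file created in the “Solution” section with the cursor positioned on line 10 (assuming the file contains at least 10 lines), start vi as follows:
$ vi +10 foo.txt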
Once you start a vi session, it’s critical to understand that there are two distinct operating modes:
  • Command mode
  • Insert mode
The vi editor behaves very differently depending on its mode. When you first enter vi, you are in command mode by default. In this mode, you can enter commands that control the behavior of vi. For example, you can issue commands to do the following:
  • Save a file
  • Enter insert mode
  • Exit vi
When in command mode, everything you enter is interpreted as a command by vi. You can’t enter text into your file while in command mode; you must place vi in insert mode before you can start entering text. Table 4-2 lists several methods of placing vi in insert mode. Keep in mind that these commands are case sensitive. For example, the A command is entered by pressing the Shift and A keys simultaneously.
Table 4-2. Common Techniques to Enter vi Insert Mode
Enter Insert Command
Action
i
Inserts text before (to the left of) the cursor.
a
Inserts text after (to the right of) the cursor (appends).
I
Inserts text at the beginning of the line.
A
Inserts text at the end of the line.
o
Inserts text below the current line.
O
Inserts text above the current line.
The easiest way to change from command mode to insert mode is to press i on the keyboard. You can then begin entering text at the place onscreen where your cursor is currently located. When in vi insert mode, you can perform two activities:
  • Enter text
  • Exit from insert mode
While in insert mode, you should see text at the bottom of your screen indicating that you are in the correct mode (this may vary depending on whether you’re using vi or vim):
-- INSERT --
You can now begin typing text.
To exit from insert mode (and back to command mode), press Escape. There’s nothing wrong with pressing Escape multiple times (other than wasting energy). If you are already in command mode and press Escape, you may hear a bell or a beep.
You can exit from vi (back to the OS prompt) after you are in command mode. To save the file and exit, type :wq (write quit):
:wq
If you made changes to a file and want to exit without saving, type :q!, as shown here:
:q!
Table 4-3 details some of the more common exit methods. Keep in mind that you have to be in command mode before you can type a vi exit command. If you don’t know what mode you’re in, press Escape to ensure that you’re in command mode. Notice that these commands are case sensitive. For example, the ZZ command is executed by simultaneously pressing Shift and the Z key twice.
Table 4-3. Useful vi Exit Commands
Exit Command
Action
:wq
Saves and exits.
ZZ
Saves and exits.
:x
Saves and exits.
:w
Saves the current edits without exiting.
:w!
Overrides file protections and saves.
:q
Exits the file.
:q!
Exits without saving.
:n
Edits next file.
:e!
Returns to previously saved version.
4-2. Maneuvering Within a File
Problem
You want to navigate efficiently within a text file while editing with vi.
Solution
The most intuitive way to move around is by using the up/down/right/left arrows. These keys will work whether you are in command mode or insert mode. However, you might encounter some keyboards on which the up/down/left/right arrows don’t work. In those cases, you have to use the j, k, h, and l keys to move down, up, left, and right, respectively. You must be in command mode to navigate with these keys. If you’re in insert mode and try to use these keys, you’ll see a bunch of jjj kkkkk hhh llll letters onscreen.
Using these keys may seem cumbersome at first. However, you’ll notice after some time that you can navigate quickly using these keys because you don’t have to move your fingers from their natural typing positions.
How It Works
You can use a myriad of commands for moving around in a text file. Although some of these commands may seem confusing, the navigational commands will soon become second nature to you with a little practice. Keep in mind that the vi editor was designed to allow you to perform most tasks without having to move your hands from the standard keyboard position.
Table 4-4 contains commonly used commands to navigate within vi. Remember that you must be in command mode for these keystrokes to work. Notice that these commands are case sensitive. For example, to navigate to the top of the page, use the 1G command, which is composed of first pressing the 1 key and then simultaneously pressing the Shift and G keys.
Table 4-4. Common Navigation Commands
Command
Action
j (or down arrow)
Moves down a line.
k (or up arrow)
Moves up a line.
h (or left arrow)
Moves one character left.
l (or right arrow)
Moves one character right.
Ctrl+f (or Page Down)
Scrolls down one screen.
Ctrl+b (or Page Up)
Scrolls up one screen.
1G
Goes to first line in file.
:1
Goes to first line in file.
G
Goes to last line in file.
nG
Goes to line number n.
H
Goes to top of screen.
L
Goes to bottom of screen.
w
Moves one word forward.
b
Moves one word backward.
0
Goes to start of line.
$
Goes to end of line.
4-3. Copying and Pasting
Problem
You want to copy and paste text from one section of a file to another.
Solution
Use the yy command to yank (copy) lines of text. Use the p command to put (paste) lines of text elsewhere in the file. As with all vi commands, ensure that you are in command mode (press Escape) before using a command. The following example copies five lines of text: the line the cursor is on plus the four lines below it (for a total of five lines):
5yy
You should see an informational line at the bottom of the screen indicating success (or not) of placing the copied lines in the copy buffer:
5 lines yanked
To paste the lines that have been copied, use the p command. To put the lines beneath the current line your cursor is on, ensure that you are in command mode and issue the p command:
p
You should see an information line at the bottom that indicates the lines were pasted below the line your cursor is on:
5 more lines
How It Works
Copying and pasting are two of the most common tasks when editing a file. Sometimes you may want to cut and paste instead. This task is similar to the copying and pasting example in the “Solution” section of this recipe. Instead of using the yy command, use the dd (delete) command. For example, to cut five lines of text (including the line your cursor is currently on), issue the dd command:
5dd
You should see a message at the bottom of the screen indicating that the lines have been cut (deleted):
5 fewer lines
Those lines are now in the buffer and can be pasted anywhere in the file. Navigate to the line you want to place the previously cut lines after, and press the p command:
p
You should see an informational line at the bottom that indicates the lines were pasted after the line your cursor is on:
5 more lines
There are many commands for cutting and pasting text. Table 4-5 describes the copying, cutting, and pasting commands. Notice that these commands are case sensitive. For example, use the X command to delete one character to the left of the cursor, which means pressing the Shift and X keys simultaneously.
Table 4-5. Common Options for Copying, Deleting, and Pasting Text
Option
Action
yy
Yanks (copies) the current line
nyy
Yanks (copies) n number of lines
p
Puts yanked line(s) below the cursor
P
Puts yanked line(s) above the cursor
x
Deletes the character that the cursor is on
X
Deletes the character to the left of the cursor
dw
Deletes the word the cursor is on
dd
Deletes current line of text
ndd
Deletes n lines of text
D
Deletes to the end of the current line
4-4. Manipulating Text
Problem
You wonder whether there are some commands to modify the text you’re working on, such as changing a character from lowercase to uppercase.
Solution
Use the ~ (tilde) command to change the case of a character. For example, say you have a string in a file with the text of oracle and you want to change it to Oracle. Place your cursor over the o character. Press Escape to ensure that you’re in command mode. Type the ~ character (which requires you to press the Shift key and the ~ key at the same time). You should see the case of the character change from o to O.
How It Works
Several commands are available for manipulating text. Table 4-6 lists common options used to change text. Notice that these commands are case sensitive. For example, the C command is executed by pressing the Shift and C keys simultaneously.
Table 4-6. Common Options for Changing Text
Option
Action
r
Replaces the character that the cursor is on with the next character you type
~
Changes the case of a character
cc
Deletes the current line and inserts text
C
Deletes to the end of the line and inserts text
c$
Deletes to the end of the line and inserts text
cw
Deletes to the end of the word and inserts text
R
Types over the characters in the current line
s
Deletes the current character and inserts text
S
Deletes the current line and inserts text
4-5. Searching for and Replacing Text
Problem
You want to search for all occurrences of a string and replace it with another string.
Solution
If you want to search only for a string, use the / command. The following example searches for the string ora01 in the file:
/ora01
To search for the next occurrence of the string, use the n (next) command:
n
To search backward for a previous occurrence of the string, use the N command:
N
If you want to search for text and replace it, use the s option to search for text and replace it with an alternate string. The following example searches for the string ora01 and replaces it with ora02 everywhere in the file:
:%s/ora01/ora02/g
All occurrences of ora01 should now be displayed as ora02.
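You can also restrict the substitution to a range of lines rather than applying it to the whole file. For example, this variation (using the same strings as the previous example) replaces ora01 with ora02 only on lines 1 through 5:
:1,5s/ora01/ora02/g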
How It Works
Searching for strings is one of the most common tasks you’ll perform while editing database initialization files. Table 4-7 lists some of the more common options for searching for text.
Table 4-7. Common Options for Text Searching
Option
Action
/<pattern>
Searches forward for a string
?<pattern>
Searches backward for a string
n
Repeats the search forward
N
Repeats the search backward
f<character>
Searches forward for a character in the current line
F<character>
Searches backward for a character in the current line
4-6. Inserting One File into Another
Problem
While within vi, you want to copy in another file that exists in the current working directory.
Solution
Use the :r command to read in a file. This has the effect of copying in a file and pasting its contents below the line the cursor is on. This example reads a file named tnsnames.ora into the current file:
:r tnsnames.ora
The previous example assumes that the tnsnames.ora file is in your current working directory. If the file you want to bring in is not in your current working directory, you need to specify a path name. This example reads in a file from a directory that is not the current working directory:
:r /oracle/product/11.1/network/admin/tnsnames.ora
If you have an OS variable that contains a path, you can use it directly. This example copies in a file contained in a path specified by a variable:
:r $TNS_ADMIN/tnsnames.ora
How It Works
You’ll often need to insert the content of a file into the current file you’re editing. Doing so is a quick and easy way to add text to your current file that you know is stored correctly in a separate file.
You have a few other interesting ways to read in files. This example copies in a file and places it at the beginning of the current file:
:0r tnsnames.ora
The following bit of code reads in the file at the end of the current file:
:$r tnsnames.ora
4-7. Joining Lines
Problem
You have one line of text just after the current line you are editing. You want to join the text after the current line to the end of the current line.
Solution
First, ensure that you are in command mode by pressing Escape. Place your cursor on the first line that you want to join with the line after it. Type the J (capital J) command to join the end of the current line to the start of the line after it.
An example helps to illustrate this concept. Say you have these two lines in a file:
select table_name
from dba_tables;
If you want to join the first line to the second line, place your cursor anywhere on the first line and type the following:
J
You should now see both lines joined together:
select table_name from dba_tables;
How It Works
You’ll often use the J command to join two lines of code together in a text file. You can also join any number of consecutive lines. For example, suppose that you have the following three lines in a file:
select
username
from dba_users;
You want the three lines to be joined together on one line. First, place your cursor anywhere on the first line and then type the following while in command mode:
3J
You should now see the three lines joined together, as shown here:
select username from dba_users;
4-8. Running Operating System Commands
Problem
While editing text within vi, you want to run an OS command.
Solution
First make sure that you are in command mode by pressing Escape. Use the :! command to run OS commands. For example, the following bit of code runs the OS date command without exiting vi:
:!date
Here is the output for this example:
Sat Feb 10 14:22:45 MST 2008
~
Hit ENTER or type command to continue
Press Enter or Return to get back into vi command mode. To read the output of date directly into the file you’re editing, use this syntax:
:r !date
How It Works
Running OS commands from within vi saves you the hassle of having to exit the utility, run the OS command, and then re-enter the utility. DBAs commonly use this technique to perform tasks such as listing files in a directory, printing the date, or copying files.
The following example runs the ls (list) command from within vi to view files in the current working directory:
:!ls
Once the file of interest is identified, you can read it in with the :r syntax. This example reads the script1.sql file into the file currently being edited:
:r script1.sql
If you want to temporarily place yourself at the shell prompt and run several OS commands, type your favorite shell with the :! syntax. The following example enters the Bash shell:
:!bash
To return to vi, use the exit command to log out of the shell. At this point, you need to press Enter or Return to return to vi command mode:
Hit ENTER or type command to continue
4-9. Repeating a Command
Problem
You are typing commands over and over again. You wonder whether there is a way to repeat the previously entered command.
Solution
Use the . (period) command to repeat the previously entered command. For example, suppose you delete a large section of code, but want to delete only 10 lines at a time. To achieve this, first ensure you’re in command mode (by pressing Escape) and then enter the following:
10dd
To repeat the previous command, type a period:
.
You should see another 10 lines deleted. This technique allows you to quickly repeat the previously run command.
How It Works
You can use the . command to repeat any previously typed command, which saves a great deal of time and typing. If you often retype lengthy commands, consider creating a shortcut to the keystrokes (see recipe 4-13 for details).
4-10. Undoing a Command
Problem
You want to undo the last command you typed.
Solution
To undo the last command or text typed, use the u command. Make sure that you are in command mode and then type u, as shown here:
u
You should see the effects of the previously typed command being undone.
How It Works
The u command is handy for undoing the previous command. If you want to undo all commands entered on one line, use the U command. Your cursor must be on the last line you changed for this command to work.
If you want to undo changes since the last time you saved the file, type the following:
:e!
Sometimes the previous command is handy if you want to undo all edits to a file and start over. This is quicker than exiting the file with the :q! (quit without saving) command and reopening the file.
Note  The behavior of u and U can vary slightly depending on your version of vi. For example, with vim, you can use the u command to undo several previous edits.
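As a quick illustration (assuming you’re using vim rather than classic vi), you can press u repeatedly to step backward through recent changes and then press Ctrl+R to redo the change you most recently undid:
u
u
Ctrl+R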
4-11. Displaying Line Numbers
Problem
You want to display line numbers in your text file.
Solution
Use the set number command. The following command changes the screen to display the line number on the left side of each row:
:set number
You should now see line numbers on the left side of the screen. The following is a snippet from the init.ora file with the set number option enabled:
1 db_name=RMDB1
2 db_block_size=8192
3 compatible=10.2.0.1.0
4 pga_aggregate_target=200M
5 workarea_size_policy=AUTO
6 sga_max_size=400M
7 sga_target=400M
8 processes=200
How It Works
When you deal with files, it is nice to see a line number to assist with debugging. Use the set number and set nonumber commands to toggle the number display.
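For example, after enabling the display, you can turn the line numbers back off with the following command:
:set nonumber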
Tip  Press Ctrl+G (press the Ctrl and G keys simultaneously) to display the current line number.
4-12. Automatically Configuring Settings
Problem
You want to configure vi to start up with certain settings. For example, you want vi to always start up in the mode of displaying line numbers.
Solution
If you want to customize vi to automatically show line numbers, create a file named .exrc in your home directory and place the desired settings within it. The following example creates a file named .exrc in the home directory:
$ vi $HOME/.exrc
Enter the following text in the .exrc file:
set number
From now on, every time you start vi, the .exrc file is read, and the settings within it are reflected in the files being edited.
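You aren’t limited to a single setting. As a minimal sketch (ignorecase and showmode are standard vi options, but the options available vary between vi and vim), a .exrc file with several settings might look like this:
set number
set ignorecase
set showmode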
How It Works
Setting the line numbers to automatically appear is just one aspect that you can configure in the .exrc file. To view all settable attributes in your environment, issue the following command within vi:
:set all
Here is a very small snippet of the output:
--- Options ---
ambiwidth=single   joinspaces            softtabstop=0
noautoindent       keywordprg=man -s     startofline
noautoread         nolazyredraw          swapfile
noautowrite        lines=30              swapsync=fsync
noautowriteall     nolist                switchbuf=
background=light   listchars=eol:$       tabstop=8
Options that expect a value contain an equals (=) sign in the output. To view the current setting of a feature, use set and the option name. This example displays the term setting:
:set term
term=cygwin
To view which options are different from the defaults, use the set command with no options:
:set
If you use vim, you can place commands in the vim .vimrc startup file. The vim editor also executes the contents of the .exrc file if one is present.
Tip  You can also put shortcuts for commands in your .exrc file. Recipe 4-13 describes how to create command shortcuts (also referred to as command maps).
4-13. Creating Shortcuts for Commands
Problem
You are using a certain command over and over again and you wonder whether there is a way to create a shortcut for the command.
Solution
Use the map command to create a shortcut for a sequence of keystrokes. One set of keystrokes that is typed often is xp, which transposes two characters (it performs a delete and then a put). This example creates a macro for the xp command:
:map t xp
You can now use the t command to perform the same function as the xp command.
How It Works
Mapping commands is a useful way of creating shortcuts for frequently used sequences of keystrokes. If you want mappings defined for each vi session, place the mappings in your .exrc file (see recipe 4-12 for details).
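For example, to make the t shortcut from the “Solution” section available in every vi session, you could add the following line to your .exrc file (inside .exrc, the command is written without the leading colon):
map t xp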
To view all defined mappings, type :map without any arguments:
:map
Here is some sample output:
up      ^[OA    k
down    ^[OB    j
left    ^[OD    h
right   ^[OC    l
t       t       xp
To disable a mapping, use the :unmap command. The following example disables the t mapping:
:unmap t
4-14. Setting the Shell Default Text Editor
Problem
You’re editing the cron table on a new database server. The default editor used for cron is emacs, but you want to set your default editor to be vi.
Solution
Use the export command to set the default editor. The following example assumes that you’re using the Bash shell and sets the default editor to the vi utility:
$ export EDITOR=vi
You can verify that the variable has been set as follows:
$ echo $EDITOR
vi
If you’re using the C shell, you can set the EDITOR variable as follows:
$ setenv EDITOR vi
How It Works
Some utilities such as cron inspect the OS EDITOR variable to determine which editor to use (some older systems use the VISUAL variable as well). The following lines can be placed in the $HOME/.bashrc startup file to ensure that the editor is automatically set when logging in:
export EDITOR=vi
export VISUAL=$EDITOR
We recommend that you set both the EDITOR and VISUAL variables because some utilities (such as SQL*Plus) reference one or the other.
4-15. Setting the SQL*Plus Text Editor
Problem
You start a SQL*Plus session and want to edit a file using the vi editor. However, you notice that you’re placed within an unfamiliar editor when you issue the EDIT command:
$ sqlplus / as sysdba
SQL> edit test
test.sql: No such file or directory
?
q
In this situation, you should set the default SQL*Plus text editor to be vi.
Solution
You can specify the default text editor used by SQL*Plus in one of two ways:
  • Set the OS EDITOR variable (see recipe 4-14 for details). By default, SQL*Plus will use the editor specified within the EDITOR variable.
  • Define the SQL*Plus _EDITOR variable. Setting this variable overrides any setting in the OS EDITOR variable.
This example sets the default editor to be used by a SQL*Plus session:
SQL> define _EDITOR=vi
Now when you subsequently edit a file, the vi editor is invoked by default:
SQL> edit test
~
"test.sql" [New File]
You can verify that the editor has been set by using the DEFINE command and specifying only the name of the variable you’re interested in viewing:
SQL> DEFINE _EDITOR
DEFINE _EDITOR         = "vi" (CHAR)
If you just type in DEFINE by itself, all defined variables will display:
SQL> DEFINE
DEFINE _DATE           = "18-APR-15" (CHAR)
DEFINE _USER           = "SYS" (CHAR)
...
DEFINE _EDITOR         = "vi" (CHAR)
How It Works
The solution section demonstrated how to set the default editor for a SQL*Plus session. You’ll most likely want to have the default editor defined automatically for you so that you don’t have to manually set the editor within each session. To automatically have the _EDITOR variable set, define the _EDITOR variable within the glogin.sql server profile file or the login.sql user profile file. For example, the following line can be placed in either the glogin.sql or login.sql file:
define _EDITOR=vi
The glogin.sql file is located in the $ORACLE_HOME/sqlplus/admin directory. In Linux or Solaris, the login.sql file is executed if it exists in a directory contained within the SQLPATH OS variable. If the SQLPATH variable hasn’t been defined, SQL*Plus will look for login.sql in the current working directory from which SQL*Plus was invoked.
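For example, assuming you keep your login.sql file in a scripts directory beneath your home directory (a hypothetical location), you could set SQLPATH in your Bash startup file so that SQL*Plus finds the file no matter which directory you start it from:
$ export SQLPATH=$HOME/scripts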
If the _EDITOR SQL*Plus variable is not set, the OS EDITOR variable will be used to set the SQL*Plus editor. If the EDITOR variable is not set, the VISUAL variable setting will be used. If neither the EDITOR nor VISUAL variable is set, the ed editor will be used as the default editor within SQL*Plus.
4-16. Toggling Syntax Text Color
Problem
You’re using vim or vi to edit a file, and the text appears as multicolored. You’re having a hard time reading the text and want to turn the coloring feature off.
Solution
Within the editor, while in command mode, type the : (colon) character. This places the cursor at a command prompt in the bottom-left corner of the screen, to the right of the : sign. To turn off the syntax coloring, type syntax off and press Return:
:syntax off
You should see the text font colors displayed in white now. If you want to turn the syntax coloring back on, do so as follows:
:syntax on
If you want the syntax coloring to be automatically disabled, you can place the syntax off command in the vim .vimrc startup file or the vi .exrc startup file (see recipe 4-12 for details).
How It Works
Depending on your environment, when you edit a file, some of its text may be shown in various colors. For example, a line that starts with a # sign will be interpreted as a comment line and will therefore appear as dark blue text. This is known as syntax coloring. Sometimes this syntax-colored text can be hard to read, depending on the terminal (e.g., it’s hard to see dark blue against a black terminal background). In these situations, the easiest way to resolve this issue is to turn off the syntax coloring.
If you prefer the syntax coloring to be enabled but are having a difficult time viewing text of a particular color, you can adjust specific syntax coloring schemes. This line of code sets the background color of comment text to dark gray:
highlight Comment ctermbg=DarkGray
There are various other colors you can control; for example:
highlight Constant ctermbg=Blue
highlight Cursor ctermbg=Green
highlight Normal ctermbg=Black
highlight Special ctermbg=DarkMagenta
highlight NonText ctermbg=Black
You may want to adjust these colors depending on your terminal type or your ability to see certain colors. For most files that a DBA would edit, the syntax coloring can be more distracting than helpful. However, if you’re editing a programming language source code file (e.g., Java) you’ll most likely find the syntax coloring quite helpful for identifying segments of code such as comments, constants, variables, and so on.



CHAPTER 5
Managing Files and Directories
A large part of every Oracle DBA’s job involves dealing with files and directories. Therefore, DBAs must be experts in file manipulation. Your job requires skills such as implementing database security, performing backups and recovery, monitoring, and troubleshooting performance issues. These critical tasks all depend on command-line knowledge of managing files. Expert DBAs know how to administer files and navigate within the filesystem.
A file, which is the basic building block of a Linux/Solaris system, is a container for information stored on disk. You access a file by its file name. We use the terms file and file name synonymously in this book. File names can be up to 255 characters long and can contain letters, numbers, and the . (period), _ (underscore), and - (hyphen) characters.
A directory is like a folder; its purpose is to provide a logical container that facilitates working with groups of files. Every server has a root directory indicated by a forward slash (/); think of the forward slash as a tree falling forward from left to right. The / directory, which is the topmost directory on every server, is like an upside-down tree in which the trunk is the root directory, and the branches of the tree are subdirectories.
Figure 5-1 shows a partial directory hierarchy on an Oracle database server. Be aware that Figure 5-1 shows only a fraction of the directories typically created. The main point of the diagram is to give you an idea of the treelike directory structure that is used for a typical Oracle system. Because of the complexity of the directory structures, DBAs must be fluent with command-line directory navigation and file manipulation.
Figure 5-1. Common directories used on an Oracle database server
This chapter discusses common problems and solutions that you’ll encounter when working with files and directories. It starts with the basics, such as viewing directory structures, and then progresses to more complicated topics such as finding certain types of files.
5-1. Showing the Current Working Directory
Problem
You’re logged on to a database server and you want to view the current directory path.
Solution
Use the pwd (print working directory) command to display the full pathname of your current working directory:
$ pwd
/home/oracle
The previous output shows that /home/oracle is the current working directory.
Note  If you’re a Windows user, the Linux/Solaris pwd command is similar to the DOS cd command when issued with no options. The DOS cd command without any options simply prints the current working directory.
How It Works
In Linux/Solaris, the directory you are working in is defined to be your current working directory. The pwd command isn’t very complicated; it simply prints the current working directory. As simple as it is, you’ll be using it all the time. DBAs constantly use this command to verify that they are in the correct directory. Before you manipulate directories or files, it’s wise to verify that you are where you think you should be.
The pwd command has two interesting options: -L and -P. The -L option prints the logical path and is the default. It always prints the value of the OS PWD variable. For example, the following two commands always display the same directory:
$ echo $PWD
/home/oracle
$ pwd
/home/oracle
The -P option prints the actual physical path. These options are useful if you’re working on systems where directories are navigated to via symbolic links (see recipe 5-33 for a discussion of soft links). The -L option prints the directory name as defined by the symbolic link. The -P option displays the directory as defined by the actual physical path.
An example can help illustrate the value of knowing when to use the -P option. Suppose a database server has a symbolic link named oradev under the root directory. Here is a long listing of the symbolic link:
$ ls -altr /oradev
lrwxrwxrwx 1 root root 9 Apr 15 19:49 oradev -> /oradisk2
First, navigate to the directory via the symbolic link and issue a pwd command with the -L option:
$ cd /oradev
$ pwd -L
/oradev
Now without changing directories, use the pwd command with the -P option:
$ pwd -P
/oradisk2
If you work in environments that use symbolic links, it is important to understand the difference between the -L and -P options of the pwd command.
5-2. Changing Directories
Problem
You want to change your current working directory to a different location.
Solution
Use the cd (change directory) command to navigate within the filesystem. The basic syntax for this command is as follows:
cd <directory>
This example changes the current working directory to /orahome/app:
$ cd /orahome/app
It is usually a good idea to use the pwd command to verify that the cd command worked as expected:
$ pwd
/orahome/app
You can also navigate to a directory path that is stored in an OS variable. The next set of commands displays the contents of the TNS_ADMIN variable and then navigates to that directory:
$ echo $TNS_ADMIN
/orahome/app/oracle/product/12.1.0.2/db_1/network/admin
$ cd $TNS_ADMIN
$ pwd
/orahome/app/oracle/product/12.1.0.2/db_1/network/admin
If you attempt to navigate to a directory that doesn’t exist, you’ll receive an error similar to this one:
No such file or directory
The owner of a directory must have the execute permission set at the owner level before being able to navigate to the directory. An example illustrates this concept; listed here are the permissions for a scripts directory owned by oracle:
$ ls -ld scripts
d---rwxrwx 2 oracle oinstall 4096 Jul 30 19:26 scripts
As the oracle user, you receive an error when attempting to navigate to the scripts directory:
$ cd scripts
-bash: cd: scripts: Permission denied
If you modify the directory to include the owner execute permission, you can now navigate to it successfully:
$ chmod 100 scripts
$ cd scripts
How It Works
The cd command is a powerful utility that you’ll use often in your DBA life. The following sections contain techniques to make you more effective when using this command.
Navigating HOME
If you don’t supply a directory to the cd command, the directory is changed to the value of the HOME variable by default. This example demonstrates the concept by viewing the current directory, displaying the value of HOME, and using cd to navigate to that directory:
$ pwd
/orahome/app/oracle/product/12.1.0.2/db_1
Next, display the contents of the HOME variable:
$ echo $HOME
/orahome/oracle
Change directories to the value contained in HOME by not supplying a directory name to the cd command:
$ cd
$ pwd
/orahome/oracle
In the Bash and Korn shells, the ~ (tilde) character is a synonym for the value contained in the HOME OS variable. The following two lines of code also change your directory to the HOME directory:
$ cd ~
$ cd $HOME
Navigating to the Parent Directory
The .. (two dots) directory entry contains the value of the parent directory of the current working directory. If you want to change your directory to the parent directory, use the following syntax:
$ cd ..
You can navigate up as many parent directories as there are in a given path by separating the .. strings with a forward slash character. For example, to navigate up three directories, use this command syntax:
$ cd ../../..
You can also use the .. directory entry to navigate up a directory tree and then down to a different subdirectory. In the following example, the current working directory is /orahome/oracle/scripts, and the cd command is used to navigate to /orahome/oracle/bin:
$ pwd
/orahome/oracle/scripts
$ cd ../bin
$ pwd
/orahome/oracle/bin
Navigating to a Subdirectory
To navigate to a subdirectory, specify the directory name without a forward slash in front of it. This example first prints the current working directory, navigates to the admin subdirectory, and finally verifies success with the pwd command:
$ pwd
/home/oracle
$ cd admin
$ pwd
/home/oracle/admin
Using Wildcards
You can also use the wildcard asterisk (*) character with the cd command to navigate to other directories. In this next example, the current working directory is /oracle, and the product subdirectory is the target directory:
$ cd p*
$ pwd
/oracle/product
When navigating to a subdirectory, you must specify enough of the directory name to make it unique among the other subdirectories beneath the current working directory. If multiple directories match a wildcard string, you might not get the desired directory navigation, depending on your OS version. Always verify your current working directory with the pwd command.
When using the Bash shell, you can also use the Tab key to complete keystroke sequences. For example, if you have only one subdirectory that starts with the letter p, you can cd to it as follows:
$ cd p<Tab>
In this example, there is only one subdirectory beneath the current working directory that starts with a p, so you now see the following on the terminal:
$ cd product/
Now you can press Enter or Return to complete the command. This feature of the Bash shell is known as tab completion (see recipe 2-2 for more details).
Navigating to the Previous Directory
The hyphen (-) character is commonly used to navigate to the previous working directory. In the next example, the current working directory is /oracle01, and the previous working directory is /oracle02. To navigate to /oracle02, provide - to the cd command, as shown here:
$ cd -
Another way to navigate to the previous working directory is via the OLDPWD variable, which contains the location of the previous directory. To navigate to the most recently visited directory, you can change directories, as shown here:
$ cd $OLDPWD
5-3. Creating a Directory
Problem
You want to store your SQL scripts in a special directory. To do this, you first need to create a directory.
Solution
Use the mkdir (make directory) command to create a new directory. This example creates the directory named scripts underneath the /home/oracle directory:
$ cd /home/oracle
$ mkdir scripts
Now use the cd and pwd commands to verify that the directory exists:
$ cd scripts
$ pwd
/home/oracle/scripts
When navigating to another directory, if the directory doesn’t exist, you’ll receive an error message similar to this:
No such file or directory
How It Works
Before you create a directory, you must have write permission on the parent directory to create a subdirectory. If you attempt to create a directory and don’t have write permission on either the user or group level, you’ll receive an error. This example attempts to create a directory named oradump under the / directory:
$ mkdir /oradump
mkdir: cannot create directory `/oradump': Permission denied
The permissions on the / directory show that only the root user has write permissions (and is therefore the only user who can create a directory under /):
$ ls -altrd /
drwxr-xr-x 29 root root 4096 Apr 15 19:49 /
If you don’t have root access, you’ll need to work with your SA to create any desired directories under the / directory. See recipe 3-11 for examples of obtaining access to root privileges.
Sometimes you’ll find it convenient to create several directories in a path with one command. This example uses the -p (parent) option to create the directory backups and any parent directories that don’t already exist in the path:
$ mkdir -p /oradump/db/dev/backups
The previous directory creation technique is extremely handy when you need to create long complex directory structures and you don’t want to create them one directory at a time.
5-4. Viewing a List of Directories
Problem
You want to list just the directories, not the regular files, in your current working location.
Solution
Use the ls -l command in combination with grep to list only directories. Here is some sample ls -l output without using grep to filter anything:
$ ls -l
drwxr-x---  3 oracle dba   4096 Apr 25 19:44 orahome
drwxr-xr-x  9 oracle dba   4096 Dec 29 07:43 orainst
-rw-r-----  1 oracle dba 124506 Apr 25 20:10 ora.zip
-rw-r-----  1 oracle dba 112640 Apr 25 18:14 o.tar
-rw-r--r--  1 oracle dba     82 Apr  4 14:30 output.txt
Next add in the grep filter:
$ ls -l | grep '^d'
drwxr-x---  3 oracle dba   4096 Apr 25 19:44 orahome
drwxr-xr-x  9 oracle dba   4096 Dec 29 07:43 orainst
In the preceding line of code, the ls -l output is piped to grep, which keeps only the lines that begin with the d character. The ^ (caret) character is a regular expression anchor that tells the grep command to match the d character at the beginning of the line. In this manner, you can list just the directories.
How It Works
DBAs typically create an alias or function to facilitate typing the command shown in the “Solution” section of this recipe. This command creates an alias named lsd that can be used to list directories:
$ alias lsd="ls -l | grep '^d'"
After the alias is created, type lsd. It will run the ls and grep commands. See recipe 2-7 for details on creating aliases and functions.
Another way to view directories is to use ls -p and grep for the forward slash character. The next example uses ls -p, which instructs the ls command to append a / on the end of every directory. The output of ls -p is piped to grep, which searches for the / character:
$ ls -p | grep /
orahome/
orainst/
When trying to list out a directory, it can sometimes be convenient to use a wildcard character. For example, say you want to determine all directories and files that are in the ORACLE_HOME directory that begin with the b character. To determine this, you attempt to issue this command:
$ ls $ORACLE_HOME/b*
The output of this command may not be what you expect. If a wildcard matches a directory name, the files contained in the directory (not the directory name) will be listed. In this example, the output contains all the files listed in the ORACLE_HOME/bin directory; here’s a short snippet of the output:
acfsroot             exp            lsnrctl0         plshprofO
adapters             expdp          lxchknlb         proc
adrci                expdpO         lxegen           procob
To avoid this behavior, use ls -d to list directories, not their contents. The following command lists all directories that begin with the letter b that are beneath ORACLE_HOME:
$ ls -d $ORACLE_HOME/b*
/orahome/app/oracle/product/12.1.0.2/db_1/bin
5-5. Removing a Directory
Problem
You want to remove a directory and all files that exist beneath that directory.
Solution
Use the rmdir command to remove a directory. This command can be used only to remove directories that don’t contain other files. In this example, the rmdir command is used to remove a directory named scripts that exists beneath the current working directory:
$ rmdir scripts
If the directory isn’t empty, you’ll see an error similar to this:
rmdir: scripts: Directory not empty
If you want to remove directories that contain files, use the rm -r (remove recursively) command. This example removes the directory scripts plus any files and subdirectories that exist beneath the scripts directory:
$ rm -r scripts
How It Works
If the rm -r command encounters any files that don’t have write permission enabled, a message like this will be displayed:
rm: remove write-protected regular file '<file name>'?
If you want to remove the file, type y (for yes). If many files are write-protected (such as in oracle-owned directories), typing y over and over again can get tedious.
You can instruct rm to remove write-protected files without being prompted with the -f (force) option. This example removes all files beneath the subdirectory scripts without prompting for protected files:
$ rm -rf scripts
Sometimes when you’re removing old database installations, it is convenient to use the rm -rf command. This command will wipe out entire directory trees without asking for confirmation when deleting write-protected files. Make sure you know exactly what you’re removing before running this command.
Caution  Use the rm -rf command judiciously. This command will recursively remove every file and directory beneath the specified directory without prompting you for confirmation.
5-6. Listing Files
Problem
You want to see what files exist in a directory.
Solution
Use the ls (list) command to list the files (and directories) in a specified directory. This line of code uses the ls command without any options to list the files in the current working directory:
$ ls
Here is a partial listing of the output:
dlock.sql  dropem.sql  login.sql  proc.sql  rmfile.bsh
How It Works
The ls command without any options is not very useful; it displays only a limited amount of file information. One of the more useful ways to use ls is to list all the files along with their permissions, ownership, sizes, and modification times, sorted so that the most recently modified files appear last. This is achieved with the -altr options:
$ ls -altr
Here is a partial listing of the output:
-rwxr-x--- 1 oracle dba 1543 Mar 29 16:09 proc.sql
-rw-r----- 1 oracle dba 1082 May  7 19:53 dlock.sql
-rw-r----- 1 oracle dba  442 May  7 19:53 login.sql
drwxr-x--- 4 oracle dba 4096 May  7 19:53 ..
-rw-r----- 1 oracle dba    0 May  7 19:54 dropem.sql
-rw-r----- 1 oracle dba    0 May  7 19:54 rmfile.bsh
The -a (all) option specifies that all files should be listed, including hidden files. The -l (long listing) option displays permissions, ownership, size, and modification time. The -t (time) option causes the output to be sorted by time (newest first). To have the latest file modified listed at the bottom, use the -r (reverse) option. Table 5-1 shows how to interpret the long listing of the first line of the previous output.
Table 5-1. Interpreting Long Listing Output
Column Output
Meaning
-rwxr-x---
File type and permissions
1
Number of links
oracle
File owner
dba
Group the file belongs to
1543
File size in bytes
Mar 29 16:09
Date and time of last modification
proc.sql
File name
The first column of Table 5-1 has 10 characters. The first character displays the file type. Characters 2 through 10 display the file permissions. The characters r, w, and x indicate read, write, and execute privileges, respectively. A hyphen (-) indicates the absence of a privilege. The following output summarizes the first-column character positions and meanings of the long listing of a file:
File Type          User Perms   Group Perms   Other Perms
Column 1           2   3   4    5   6   7     8   9   10
-, d, l, s, c, b   r   w   x    r   w   x     r   w   x
In the first character of the first column of output, the hyphen indicates that it is a regular file. Similarly, if the first character is a d, it’s a directory. If the first character is an l, it’s a symbolic link. Table 5-2 lists the different file types.
Table 5-2. Long Listing First Character File Type Meanings
File Type Character
Meaning
-
Regular file
d
Directory
l
Symbolic link
s
Socket
c
Character device file
b
Block device file
The ls command may vary slightly between versions of Linux/Solaris. This command typically has more than 50 different options. Use the man ls command to view all features available on your system.
One last note: when listing a file with ls -l, you may notice an extra + at the end of the permissions; for example:
$ ls -l
drwxr-xr-x+  2 oracle     dba            3 May  2 15:02 scripts
-rw-r--r--+  1 oracle     dba         1627 May  2 15:12 act.sql
This means that your file has extended security permissions. Run the getfacl (get file access control lists) command to see the full permissions for the file; for example:
$ getfacl scripts
# file: scripts
# owner: oracle
# group: dba
user::rwx
group::r-x
mask::rwx
other::r-x
USING ECHO TO DISPLAY FILES
Interestingly, you can also use the echo command to list files. For example, you can use this command to list files in the current working directory:
$ echo *
The echo command is a built-in command (see recipe 2-15 for details on built-in commands). This means that if the filesystem that contains the ls executable is unavailable for some reason (perhaps because of corruption), you can still use the echo command to list files.
5-7. Creating a File Quickly
Problem
You’re setting up Oracle RMAN backups. You want to quickly create a file so that you can test whether the oracle user has the correct permissions to write to a newly created directory.
Solution
In the directory in which you want to determine whether you can create a file, use the touch command to quickly determine whether a file can be created. This example uses touch to create a file named test.txt in the current working directory:
$ touch test.txt
Now use the ls command to verify that the file exists:
$ ls -al
-rw-r----- 1 oracle dba 0 May  7 20:00 test.txt
The output shows that the file has been created and has nothing in it (indicated by a 0-byte size).
Note  See Chapter 4 for details on how to edit a text file.
How It Works
Sometimes you’ll need to create a file just to test whether you can write to a backup location or to check the functionality of some aspect of a shell program. You can use the touch command for these purposes. If the file you are touching already exists, the touch command will update the file’s last-modified date.
If you touch a file that already exists, its access time and modification time will be set to the current system time (this includes a date component). If you want to modify only the access time, use the -a option of touch. Similarly, the -m option will update only the modification time. Use the --help option to display all options available with touch on your system.
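For example, the following commands (run against the test.txt file created in the “Solution” section) update only the modification time and only the access time, respectively:
$ touch -m test.txt
$ touch -a test.txt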
Note that you can also quickly create a file using the following cat command:
$ cat /dev/null > test.txt
Be careful when running the previous command; if the file already exists, redirecting /dev/null into it will erase anything contained within the file.
5-8. Changing File Permissions
Problem
You want to change the permission on a file so that there is no public-level access.
Solution
Use the chmod command to alter a file’s permissions. This example changes the permission of the scrub.bsh file to 750:
$ chmod 750 scrub.bsh
A quick check with the ls command shows that the permissions are set correctly:
$ ls -altr scrub.bsh
-rwxr-x--- 1 oracle dba 0 May  7 20:07 scrub.bsh
The previous output indicates that the owner has read, write, and execute permissions; the group has read and execute; and the rest of the world has no permissions (see recipe 5-6 for a discussion on file permissions listed by the ls command).
Note  You must have root access or be the owner of the file or directory before you can change its permissions.
How It Works
DBAs often use the chmod command to change the permissions of files and directories. It is important that you know how to use this command. Correct file access is critical for database security. In many circumstances, you will not want to grant any public access to files that contain sensitive information.
You can change a file’s permissions by either using the numerical format (such as 750) or by using letters. When using the numerical format, the first number maps to the owner, the second number to the group, and the third number to all other users on the system. The permissions of 750 are translated to indicate read, write, and execute for the owner; read and execute for the group; and no permissions for other users. Inspect Table 5-3 for the translations of the numeric permissions.
Table 5-3. Meanings of Numeric Permissions
Number
Permission
0
No permissions
1
Execute only
2
Write only
3
Write and execute
4
Read only
5
Read and execute
6
Read and write
7
Read, write, and execute
You can also change a file’s permissions by using letters, which is sometimes more intuitive to new Linux/Solaris users. When using letters, keep in mind that the o permission doesn’t designate “owner”; it specifies “other.” Table 5-4 lists the meanings of to whom the permissions are applied.
Table 5-4. To Whom the Permissions Are Applied
Who Letter
Meaning
u
User (owner)
g
Group
o
Other (all others on the system)
a
All (user, group, and other)
This next example makes the file executable by the user (owner), group, and other:
$ chmod ugo+x mvcheck.bsh
This line of code takes away write and execute permissions from the group (g) and all others (o) for all files that end with the extension of .bsh:
$ chmod go-wx *.bsh
You can use three operands to apply permissions: +, -, and =. The plus (+) character adds permissions, and the minus (-) character takes away privileges. The equals (=) sign operand assigns the specified permissions and removes any not listed. For example, the following two lines are equivalent:
$ chmod 760 mvcheck.bsh
$ chmod u=rwx,g=rw,o= mvcheck.bsh
A quick listing of the file verifies that the permissions are set as expected:
$ ls -altr mvcheck.bsh
-rwxrw---- 1 oracle dba 0 May  7 20:10 mvcheck.bsh
You can also recursively change file permissions in a directory and its subdirectories. Sometimes this is useful when installing software. The following bit of code recursively changes the permissions for all files in the current directory and any files in subdirectories to 711 (owner read, write, execute; group execute; other execute):
$ chmod -R 711 *
You can also use the chmod utility to change the permissions of files to match the settings on an existing file. This example changes all files ending with the extension of .bsh in the current directory to have the same permissions as the master.bsh file:
$ chmod --reference=master.bsh *.bsh
Default File Permissions
Default permissions are assigned to a file upon creation based on the umask setting. The file creation mask determines which permissions are excluded from a file. To view the current setting of your file creation mask, issue umask with no options:
$ umask
0022
You can also view the character version of the umask settings by using the -S option:
$ umask -S
u=rwx,g=rx,o=rx
When you create a regular text file, the permissions are set to the value of 0666 minus the umask setting. If the umask setting is 0022, the permissions of the file are set to 0644, or -rw-r--r--.
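For example (assuming a Bash session, with a file name chosen purely for illustration), setting a more restrictive mask of 027 causes a newly created file to have permissions of 0640, or -rw-r-----, because 0666 minus 0027 leaves read/write for the owner, read for the group, and nothing for others:
$ umask 027
$ touch mask_test.txt
$ ls -l mask_test.txt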
Set User ID on Execution
Another concept related to the chmod command is the setuid permission (sometimes referred to as suid). Inspect the permissions of the oracle binary file:
$ cd $ORACLE_HOME/bin
$ ls -l oracle
-rwsr-s--x   1 oracle   dba     126812248 Jun 12 15:24 oracle
Notice that the executable setting at both the owner and group levels is an s (not an x), which indicates that the set user ID (setuid) and set group ID (setgid) permission bits have been set. This means that when somebody runs the program, it runs with the permissions of the file’s owner, not the permissions of the user running the file. This allows a user to run the oracle binary file as if it had the permissions of the oracle user. Therefore, server processes can execute the oracle binary file as if they were the owner (usually the oracle OS user) to read and write to database files.
To set the setuid permission, you must specify a preceding fourth digit to the numeric permissions when changing file permissions with chmod. If you want to enable the setuid permission on both the user and group level, use a preceding 6, as shown here:
$ chmod 6751 $ORACLE_HOME/bin/oracle
$ ls -l oracle
-rwsr-sr-x   1 oracle   dba      118965728 Jun 16  2014 oracle
If you want to enable the setuid permission only at the owner level, use a preceding 4, as shown here:
$ chmod 4751 $ORACLE_HOME/bin/oracle
$ ls -l oracle
-rwsr-x--x   1 oracle   dba      118965728 Jun 16  2014 oracle
As a DBA, it is important to be aware of the setuid permission because you may have to troubleshoot file permission issues, depending on the release of Oracle. For example, see MOS note 271598.1 for issues related to Enterprise Manager Grid Control and setuid dependencies. Additionally, you can run into Oracle accessibility issues when there are non-oracle users on the same server as the database software. In these situations, it’s important to understand how the setuid permission affects file access.
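If you ever need to audit which files beneath a directory have the setuid bit enabled (the starting directory here is just an example), the -perm option of the find command can locate them; the 2>/dev/null at the end discards errors from directories you don’t have permission to read:
$ find /u01 -perm -4000 -type f 2>/dev/null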
THE STICKY BIT
Do a long listing of the /tmp directory and inspect the permissions:
$ ls -altrd /tmp
drwxrwxrwt 4 root root 4096 May 10 17:24 /tmp
At first glance, it looks like all users have all permissions on files in the /tmp directory. However, notice that the “other” permissions are set to rwt. The last permission character is a t, which indicates that the sticky bit has been enabled on that directory. When the sticky bit is enabled, only the file owner can delete a file within that directory. The sticky bit is set with the following syntax:
chmod +t <shared directory>
or
chmod 3775 <shared directory>
The leading 3 in the numeric form sets both the setgid bit (value 2) and the sticky bit (value 1); use a leading 1 (for example, chmod 1775) if you want only the sticky bit.
Setting the sticky bit enables file sharing in a directory among many different users, but prevents users from deleting a file that they don’t own (within the directory that has the sticky bit enabled).
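For example, to set up a directory that several OS users can share without being able to delete each other’s files (the directory name is illustrative), you could create it and enable the sticky bit like this:
$ mkdir /u01/shared
$ chmod 1777 /u01/shared
$ ls -ld /u01/shared
A listing similar to the following confirms the t in the last permission position:
drwxrwxrwt 2 oracle dba 4096 May 10 17:30 /u01/shared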
5-9. Changing File Ownership and Group Membership
Problem
You need to change a file’s file ownership and group membership so that it is owned by the oracle OS user and its group is dba.
Solution
You need root privileges to change the owner of a file. Use the chown (change owner) command to change a file’s owner and its group. This example changes the owner on the /var/opt/oracle directory to oracle and its group to dba:
# chown oracle:dba /var/opt/oracle
The file listing now shows that the directory owner is the oracle user, and the group it belongs to is dba:
$ ls -altrd /var/opt/oracle
drwxr-xr-x  2 oracle dba 4096 Dec 28 10:31 /var/opt/oracle
If you want to change only the group permissions of a file, use the chgrp command. You must be the file owner or have root privileges to change the group of a file. This example recursively changes the group to dba for all files with the extension of .sql in the current directory and all subdirectories:
$ chgrp -R dba *.sql
How It Works
When setting up or maintaining database servers, it is sometimes required to change the ownership on a file or directory. The following lines show the chown syntax for changing various combinations of the owner and/or group:
chown user file
chown user:group file
chown :group file
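For example, to recursively change the owner and group for an entire directory tree (the path shown is only an illustration), add the -R option:
# chown -R oracle:dba /u01/app/oracle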
If you have root access, you can directly change file ownership. If you don’t have root privileges, sometimes SAs will grant you access to commands that require the root privilege through the sudo utility (see recipe 3-11 for details).
5-10. Viewing the Contents of a Text File
Problem
You want to view the contents of a text file, but you don’t want to open the file with an editor (such as vi) because you are afraid you might accidentally modify the file.
Solution
Use the view, less, or more command to view (but not modify) a file’s contents. The view command opens a file using either the vi or vim editor in read-only mode. When you open a file in read-only mode, you are prevented from saving the file with the vi editor :wq (write and then quit) command. The following example views the initBRDSTN.ora file:
$ view initBRDSTN.ora
Using the view command is the same as running the vi -R command or the vim -R command (see Chapter 4 for more details about vi). To exit the view utility, enter the command :q.
Image Note  When viewing a file, you can force a write when exiting with the :wq! command.
If you want to display the contents of a file one page at a time, use a paging utility such as more or less. This example uses less to view the initBRDSTN.ora file:
$ less initBRDSTN.ora
The less utility will display a : (colon) prompt at the bottom-left corner of the screen. You can use the spacebar to go to the next page and use the up and down arrows to scroll through the file line by line. Enter q to exit less.
This next example uses the more command to page through the file:
$ more initBRDSTN.ora
Like the less utility, use the spacebar to display the next page and q to exit more.
How It Works
The more and less utilities are referred to as pagers because they display information onscreen one page at a time. These utilities have similar features, and one could argue that they are more or less the same. For the way that DBAs use these utilities, that’s mostly true. For hardcore geeks, the less utility is a bit more robust than more. Use the man less and man more commands to view all options available with these utilities.
When using either more or less, you can use vi commands to navigate within the displayed output. For example, if you want to search for a string, you can enter a forward slash and a string to search for text within the more or less output. This example searches for the string "sga_max_size" within the output of less:
$ less initBRDSTN.ora
/sga_max_size
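Another handy trick when watching a file that is actively being written to (such as an alert log; the file name here is just an example) is the follow mode of less, which behaves much like tail -f. Press Ctrl+C to stop following and q to quit:
$ less +F alert_DWREP.log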
You can also use the cat command to quickly display the contents of a file to your standard output (usually your screen). This example dumps the output of the initBRDSTN.ora file to the screen:
$ cat initBRDSTN.ora
Using cat to display the contents of files works fine when you have small files. However, if the file is large, you’ll see a large amount of text streaming by too fast to make any sense. It’s almost always better to use view, less, or more (rather than cat) to view a file’s contents. These commands allow you to quickly inspect a file’s contents without risking accidental modifications.
5-11. Viewing Nonprinting Characters in a File
Problem
You’re trying to load text strings from a file into the database with a utility such as SQL*Loader, but the data appears to be corrupted after it is inserted into the target table. You want to view any control characters that may be embedded into the file.
Solution
Use the cat -v command to view nonprinting and control characters. This example displays nonprinting and control characters in the data.ctl file:
$ cat -v data.ctl
Image Note  The cat -v command does not display linefeed or Tab characters.
How It Works
When dealing with data being loaded into the database from text files, you sometimes might find that your SQL queries don’t behave as expected. For example, you might search for a string, yet the SQL query doesn’t return the expected data. This might occur because nonprinting characters are being inserted into the database. Use the cat -v command as described in this recipe to troubleshoot these kinds of data issues.
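Because cat -v by itself doesn’t show tabs or linefeeds, it can also help to make those visible. On Linux, cat -A displays tabs as ^I and marks the end of each line with a $; on Solaris, the roughly equivalent invocation is cat -vet (check man cat on your release):
$ cat -A data.ctl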
To illustrate viewing nonprinting characters, you can spool the following output from a SQL*Plus session:
SQL> spool out.txt
SQL> select chr(7) || 'ring the bell' from dual;
SQL> exit;
Here you use cat to display the contents of the file out.txt:
$ cat out.txt

SQL> select chr(7) || 'ring the bell' from dual;
CHR(7)||'RINGT
--------------
ring the bell

SQL> exit;
Notice the ^G ASCII ring bell or beep control character in the last line of the output when you use the -v option:
$ cat -v out.txt

SQL> select chr(7) || 'ring the bell' from dual;
CHR(7)||'RINGT
--------------
^Gring the bell

SQL> exit;
5-12. Viewing Hidden Files
Problem
You’re trying to clean up your home directory and want to view the names of hidden configuration files and/or hidden directories.
Solution
Use the ls command with the -a (all) option. This bit of code lists all files using a long listing format and sorted in the reverse order in which they were modified:
$ ls -altr $HOME
Here is a sample of part of the output:
drwxr-xr-x 3 root   root     4096 Sep 29 13:30 ..
-rw-r--r-- 1 oracle oinstall  124 Sep 29 13:30 .bashrc
-rw-r--r-- 1 oracle oinstall   24 Sep 29 13:30 .bash_logout
-rw-r--r-- 1 oracle oinstall  223 Sep 29 13:53 .bash_profile
drwxr-xr-x 2 oracle oinstall 4096 Oct  2 17:55 db
drwxr-xr-x 2 oracle oinstall 4096 Oct 15 08:33 scripts
drwx------ 2 oracle oinstall 4096 Oct 15 08:34 .ssh
-rw------- 1 oracle oinstall 6076 Oct 15 13:19 .bash_history
-rw------- 1 oracle oinstall 5662 Oct 15 13:41 .viminfo
drwx------ 5 oracle oinstall 4096 Oct 15 13:55 .
Any of the files in the previous listing that begin with a . (dot or period) are classified as hidden files. When using the Bash shell, common hidden files in your home directory are .bash_profile, .bashrc, .bash_logout, and .bash_history (see recipe 2-5 for uses of these files).
If you want to list only hidden files, you can do so as follows:
$ ls -d .*
Here is the corresponding output:
.   .a             .bash_profile  .lesshst  .sh_history  .viminfo  .Xauthority
..  .bash_history  .history       .ocm      .ssh         .vnc
You may want to create an alias for the preceding command, such as this:
$ alias ls.='ls -d .*'
How It Works
The only difference between a hidden file and a nonhidden file is that the hidden file begins with a . (dot or period) character. There isn’t anything secretive or secure about hidden files. Hidden files are usually well-known files with distinct purposes (such as storing environment configuration commands).
You may not want to muddle the output of an ls command with every file in a directory. The default behavior of the ls command is not to list hidden files. The -a option specifically tells the ls command to list all files, including hidden files. If you want to list all files except the . and .. files, use the -A option:
$ ls -A
Image Note  The . file is a special file that refers to the current working directory. The .. file refers to the parent directory of the current working directory.
5-13. Determining File Type
Problem
You want to display whether a file is a directory or a regular file.
Solution
Use the ls command with the -F option to display the file name and file type. This example lists file names and file types within the current working directory:
$ ls -F
Here is a partial listing of some sample output:
alert.log        gcc-3.4.6-3.1.x86_64.rpm  ora01/    ss.bsh*
anaconda-ks.cfg  install.log               ora02/    test/
The ls -F command appends a special character to the file name to indicate the file type. In the previous output, the file names appended with / are directories, and the file name appended with * is an executable file.
Image Tip  Another method of determining file type is to use the ls --color command, which colorizes the file depending on its type.
You can also use the file command to display characteristics of a file. This command will display whether the file is an ASCII text file, a tar file, an executable, and so on. For example, one way that DBAs use the file command is to tell whether the oracle binary file is 32-bit or 64-bit. The following shows the oracle binary file on a 64-bit server:
$ file $ORACLE_HOME/bin/oracle
Here is the corresponding output:
/orahome/app/oracle/product/12.1.0.2/db_1/bin/oracle: setuid setgid ELF 64-bit LSB executable,
AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs),
not stripped
When using the Bash shell, if the file of interest is located in a directory in your PATH variable, you can use command substitution to provide input to the file command. Command substitution takes the output of the command enclosed in $() and provides that as input to a given command; for example:
$ file $(which oracle)
In the preceding line of code, the output of the which command is used as input to the file command.
How It Works
You can display an indicator for a file by using the -F option of the ls command. Table 5-5 describes the different file name type indicators. File type indicators allow you to filter the output and look for a certain file type. For example, to list all directories, search the output of the ls -F command for the / character:
$ ls -F | grep /
DBAs often encapsulate strings of commands like this within aliases or functions, which allows them to create shortcuts to long commands (see recipe 2-7 for details).
Table 5-5. File Type Indicator Characters and Meanings
Indicator Character    Description
/                      The file is a directory.
*                      The file is an executable.
=                      The file is a socket (a special file used in process-to-process communication).
@                      The file is a symbolic link (see recipe 5-33 for more details).
|                      The file is a named pipe (a special file used in process-to-process communication).
Image Tip  Use the type command to determine the characteristics of a command. It shows whether the command is an executable utility, a shell built-in, an alias, or a function.
The stat command is another useful command for displaying file characteristics. This command prints in human-readable format the contents of an inode. An inode (pronounced “eye-node”) is a Linux/Solaris data structure that stores information about the file. The next example displays the inode information for the oracle binary file:
$ stat $ORACLE_HOME/bin/oracle
Here is the corresponding output:
  File: `/orahome/app/oracle/product/12.1.0.2/db_1/bin/oracle'
  Size: 323762476       Blocks: 632992     IO Block: 4096   regular file
Device: fd00h/64768d    Inode: 34838017    Links: 1
Access: (6751/-rwsr-s--x)  Uid: ( 2000/  oracle)   Gid: (  500/     dba)
Access: 2015-04-21 11:32:14.000000000 -0600
Modify: 2014-12-29 09:17:28.000000000 -0700
Change: 2014-12-29 09:17:28.000000000 -0700
You can obtain some of the preceding output from the ls command. However, notice that the stat output also contains information such as the number of blocks allocated; the inode device type; and the last time a file was accessed, the last time a file was modified, or when its status was changed.
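If you want only specific fields from the inode, GNU stat accepts a -c format string (these format sequences are GNU-specific and may not be available on Solaris). This example prints the file name, size in bytes, owner, group, and octal permissions:
$ stat -c "%n %s %U %G %a" $ORACLE_HOME/bin/oracle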
5-14. Finding Differences Between Files
Problem
You have two databases that were supposed to be set up identically. You want to see any differences in the initialization files.
Solution
Use the diff (difference) command to identify differences in files. The general syntax for this command is as follows:
$ diff <file1> <file2>
This example uses diff to view the differences between two files named initDEV1.ora and initDEV2.ora:
$ diff initDEV1.ora initDEV2.ora
Here is some sample output showing the differences in the files:
6,7c6,7
< sga_max_size=400M
< sga_target=400M
---
> sga_max_size=600M
> sga_target=600M
20a21
> # star_transformation_enabled=true
How It Works
The key to understanding the output from diff is that it provides you with instructions on how to make file1 look like file2. The output tells you how to append, change, and delete lines. These instructions are signified by a, c, or d in the output.
Lines prepended by < are from file1. Lines prepended by > are from file2. The line numbers to the left of a, c, or d apply to file1. The line numbers to the right of a, c, or d apply to file2.
From the previous output in this recipe’s solution, the first line, 6,7c6,7, is translated to mean “change lines 6 and 7 in file1 to lines 6 and 7 in file2.” The second-to-last line of the output is 20a21, which means “after line 20 in file1, append line 21 from file2.”
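If you find the positional a/c/d notation hard to read, most modern diff implementations (GNU diff in particular; verify support on your Solaris release) also offer a unified format via the -u option, in which removed lines are prefixed with - and added lines with +, surrounded by a few lines of context:
$ diff -u initDEV1.ora initDEV2.ora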
The output of diff is known as a difference report, which can be used in conjunction with the patch command to make file1 look like file2. Before you can use the patch command, you first have to save the difference report output in a file. The following example stores the difference output in a file named init.diff:
$ diff initDEV1.ora initDEV2.ora > init.diff
To convert initDEV1.ora to initDEV2.ora, use the patch command with the difference report output:
$ patch initDEV1.ora init.diff
You can also use the sdiff (side-by-side) utility to display file differences. The sdiff output is usually easier to interpret than the diff command because differences are juxtaposed visually in the output. The following example uses sdiff to display the differences between two files:
$ sdiff initDEV1.ora initDEV2.ora
Here is a snippet of the side-by-side differences:
sga_max_size=400M             | sga_max_size=600M
sga_target=400M               | sga_target=600M
...
                              >#star_transformation_enabled=true
Image Tip  Use the diff3 utility to compare the differences between three files.
5-15. Comparing Contents of Directories
Problem
You want to ensure that two different directories have identical contents in terms of the number of files, file names, and file contents.
Solution
You can use the diff command to display any differences in two directories in terms of file names and file contents. This example compares the files in the /ora01/upgrade directory with the files in the /cvsroot/prod_scripts directory:
$ diff /cvsroot/prod_scripts /ora01/upgrade
If there are no differences, you won’t see any output. If there’s a file that exists in one directory but not in the other directory, you’ll see a message similar to this:
Only in /ora01/upgrade: tab.sql
If there are differences in any files in each directory, you’ll see a message similar to the following:
22c22
< # cd to udump
---
> # cd to udump directory.
See recipe 5-14 for details on interpreting the output of the diff utility. If you want to see only the names of the files that are different (and not how the files differ), use the --brief option:
$ diff --brief /cvsroot/prod_scripts /ora01/upgrade
How It Works
Occasionally you may need to compare the contents of directories when maintaining database environments. In these situations, use the diff command to compare the contents of one directory with another.
If you want to recursively look in subdirectories and compare files with the same name, you can use the -r option. This example recursively searches through subdirectories and reports any differences in files that have the same name:
diff -r /cvsroot/prod_scripts /ora01/upgrade
You can also use the long listing of the recursive option to achieve the same result:
diff --recursive /cvsroot/prod_scripts /ora01/upgrade
5-16. Copying Files
Problem
You want to make a copy of a file before you modify it.
Solution
This first example shows how to use the cp (copy) command to create a copy of a file. For example, cp is used here to make a backup of the listener.ora file:
$ cp listener.ora listener.old.ora
You can verify that the copy worked with the ls command:
$ ls listener*.ora
listener.old.ora listener.ora
If you need to copy a file over the network, you can use a command-line utility such as scp, ftp, rsync, wget, or curl. We’ll show an example of using scp in this recipe. There are examples of using scp and rsync to copy directories and files in recipe 5-17. We generally don’t use ftp because it’s not considered to be a secure way to copy files over the network. There are examples of using wget and curl to download files in recipe 5-36.
Having said that, let’s look at scp; the basic syntax for scp is as follows:
scp [options] sourcefile destinationfile
The source/destination can be directories and/or files. The source/destination directory/file in the preceding syntax line can take one of the following general forms:
  • directory/file
  • host:directory/file
  • user@host:directory/file
In the next line of code, a file is being copied from the remote host to the local host. The remote user is oracle, the remote host is srv2, the remote file is initTRG.ora in the /u01 directory, and the file is being copied to the local current working directory (signified by a dot):
$ scp oracle@srv2:/u01/initTRG.ora  .
The scp command prompts you for the password of the remote user. If you’ve never copied from the remote server, you’ll also be prompted to ensure that you want to copy from the specified remote server.
This example copies a file from the local server to the remote host:
$ scp startup.sql oracle@srv2:.
Inspect the preceding syntax carefully. The local file being copied is startup.sql and exists in the current working directory. The file is copied using the remote user oracle, to the remote server srv2, and to the oracle user’s HOME directory (specified by a dot immediately following the colon). Assuming that the remote HOME directory is /home/oracle, the following example copies the file to the same directory as the preceding example:
$ scp startup.sql oracle@srv2:/home/oracle
How It Works
DBAs often need to create copies of files. For example, the cp utility provides a method to create backups of files or quickly replicate directories. The cp command has this basic syntax:
cp [options] source_file target_file
Be careful when copying files. If the target file exists prior to issuing the copy command, it will be overwritten with the contents of the source file. If you want to be warned before overwriting an existing file, use the -i (interactive) option. In this example, there already exists a file named init.old.ora:
$ cp -i init.ora init.old.ora
cp: overwrite `init.old.ora'?
Now you can answer y or n (for yes or no, respectively) depending on whether you want the target file overwritten with the source. Many DBAs create a shortcut command for cp that maps to cp -i (see recipe 2-7 for details on how to create shortcuts). For example, this code helps prevent you from accidentally overwriting previously existing files:
$ alias cp='cp -i'
You can also copy files directly into an existing directory structure using this syntax:
cp [options] source_file(s) directory
If the destination is a directory, the cp command will copy the file (or files) into the directory. The directory will not be overwritten. This example copies all files in the current working directory with the extension .sql to the scripts directory:
$ cp *.sql scripts
When you copy a file, the original timestamp and file permissions may differ between the original file and the file newly created by the copy command. Sometimes it’s desirable to preserve the original attributes of the source file. For example, you may want to make a copy of a file, but for troubleshooting purposes want to still be able to view the original timestamp and ownership. If you want to preserve the original timestamp, ownership, and file permissions, use the -p (preserve) option:
$ cp -p listener.ora listener.old.ora
You can also use the cp utility to create the directory structure associated with the source file by using the --parents option. For this command to work, the destination must be a directory. This example creates a network/admin/log directory and copies any files ending with the extension of .ora to a directory beneath the destination ~/backup directory:
$ cp --parents network/admin/*.ora ~/backup
Any files with the extension of .ora in the source directory should now exist in the ~/backup/network/admin destination directory.
5-17. Copying Directories
Problem
You want to copy all files and subdirectories beneath a directory to a new location.
Solution
Use the cp command with the -r option to recursively copy all files in a directory and subdirectories. This example copies all files in the /orahome/scripts directory tree to the /orahome/backups directory:
$ cp -r /orahome/scripts /orahome/backups
The /orahome/backups directory now should have an identical copy of the files and subdirectories in the /orahome/scripts source directory. Be aware that existing files in the destination directory will be overwritten if they have the same name as files being copied from the source directory. If you want to be prompted before a file is overwritten, also use the -i (interactive) option:
$ cp -ri /orahome/scripts /orahome/backups
If you need to copy directories (and files) securely over the network, use the scp (secure copy) command. The basic syntax for scp is as follows:
scp [options] sourcefile destinationfile
The source/destination can be directories and/or files. The source/destination directory/file in the preceding syntax line can take one of the following general forms:
  • directory/file
  • host:directory/file
  • user@host:directory/file
To recursively copy and preserve directories and files use the -r and -p options of the scp command. This example recursively copies the scripts directory (and any subdirectories and files) from the local box to a remote box named rmougdev2 as the oracle user:
$ scp -rp scripts oracle@rmougdev2:/home/oracle/scripts
In the preceding line of code, if the destination directory does not exist, it will be created. If the directory already exists, a subdirectory named scripts will be created underneath the existing scripts directory.
The scp command will prompt you for the password of the remote user. If you’ve never copied from the remote server, you’ll also be prompted to ensure that you want to copy from the specified remote server.
How It Works
As part of the daily routine, DBAs and developers often copy directories and files from one location to another. The location could be local or remote. You might require this if you are installing software or if you just want to ensure that you have a backup of files copied to a different location.
Another powerful utility used to synchronize directories is the rsync command. The basic syntax for rsync is as follows:
rsync [options] sourcefiles destinationfile
By default, the rsync tool will transfer only the differences that it finds between the source and destination. This makes it an extremely flexible and efficient method to synchronize one directory tree with another.
If the source and destination are on the same server, ordinary file and directory names can be used. Use the -r and -a options to recursively copy a directory tree and preserve permissions and ownership; the --delete option also specifies removing any files that exist in the destination that do not exist in the source. This example ensures that two local directories have the exact same directory structure and files; in other words, it ensures that test2 is identical to test1:
$ rsync -ra --delete /home/oracle/test1/ /home/oracle/test2/
You can copy locally to a remote server or from a remote server to your local server. If the directory or file is remote, it takes the following general form:
user@host:port/filename
For example, you can use rsync to synchronize a remote directory structure with a local directory structure. This line of code recursively copies the contents of the local scripts directory to the remote rmougdev2 server as the oracle user:
$ rsync -ra --delete --progress scripts/ oracle@rmougdev2:/home/oracle/scripts
You’ll be prompted for the oracle user’s password. The / at the end of the source folder ensures that if the destination folder exists, rsync will synchronize the two directories. If the destination folder doesn’t exist, it will be created. Without the / at the end of the source directory, if the destination directory already exists, a subdirectory will be created underneath it.
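Because the --delete option removes files from the destination, it’s prudent to preview what rsync would do before running it for real. The -n (dry run) option, combined with -v for verbose output, reports the changes without making any of them:
$ rsync -ranv --delete scripts/ oracle@rmougdev2:/home/oracle/scripts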
The rsync command is very flexible and powerful. If you’ve never used it, you should become familiar with it and incorporate it into your bag of file transfer tricks.
5-18. Moving Files and Directories
Problem
You want to rename or relocate a file.
Solution
Use the mv (move) command to relocate a file or rename it. This example renames a file from initdw.ora to a new name of initDWDB.ora:
$ mv initdw.ora initDWDB.ora
You can also use the mv command to relocate a file to a different directory. This bit of code moves a file from the current working directory to its parent directory:
$ mv scrub.sql ..
Quite often, you’ll need to move a file from the current working directory to a subdirectory. This example moves a file from the current working directory to a subdirectory named scripts:
$ mv scrub.sql scripts
In the previous line of code, if the scripts subdirectory didn’t exist, you would end up renaming the scrub.sql file to a file named scripts. In other words, the destination subdirectory must exist before you issue the mv command (otherwise you’ll end up renaming the file).
It is also possible to relocate directories. The following example moves the scripts directory to the sqlscripts directory:
$ mv scripts sqlscripts
In the preceding line of code, if the sqlscripts directory already exists, the scripts directory is created as a subdirectory beneath the sqlscripts directory. This might seem a little confusing if you’re not expecting this behavior. One way to think of this is that the mv command does not overwrite directories if they already exist.
How It Works
The mv command is used to relocate or rename a file or a directory. The mv utility uses the following syntax:
mv [options] source(s) target
Be aware that the mv command will unceremoniously overwrite a file if it already exists. For example, say you have the following two files in a directory:
$ ls
initdw.ora init.ora
If you move initdw.ora to the name of init.ora, it will overwrite the contents of the init.ora file without prompting you. To protect yourself against accidentally overwriting files, use the -i (interactive) option:
$ mv -i initdw.ora init.ora
mv: overwrite `init.ora'?
You can now enter a y or an n to indicate a yes or no answer, respectively. You can easily implement the mv command as mv -i via a function or an alias to protect yourself against erroneously overwriting files (see recipe 2-7 for details on command shortcuts).
Table 5-6 describes the various results of the mv operation, depending on the status of the source and target.
Table 5-6. Results of Moving File(s) and Directories
Source       Target                     Outcome
File         File doesn’t exist.        Source file is renamed to target.
File         File exists.               Source file overwrites target.
File(s)      Directory exists.          Source file(s) are moved to target directory.
Directory    Directory doesn’t exist.   Source directory is renamed to target.
Directory    Directory exists.          Source directory is created as a subdirectory beneath target directory.
5-19. Renaming a File or Directory
Problem
You want to change the name of a file or directory.
Solution
Use the mv (move) command to rename a file. For example, the following line of code renames a file from credb1.sql to credatabase.sql:
$ mv credb1.sql credatabase.sql
You can also rename a directory. The following renames a directory from dev to test:
$ mv dev test
Be aware that when renaming directories, if you attempt to rename a directory to the name of an existing directory, a new directory will be created as a subdirectory beneath the already existing directory. See Table 5-6 for details on the behavior of the mv command.
How It Works
You can also use the rename command to change file names. The rename utility has the following syntax:
rename oldname newname files
This command has a big advantage over the mv command because it allows you to rename several files at once. For example, here is a way to rename all files in a directory that ends with the extension of .trc to the new extension of .trace:
$ rename .trc .trace *.trc
You can also use rename to change the name of just one file. Here the file initDEV.ora is renamed to initTEST.ora:
$ rename initDEV.ora initTEST.ora initDEV.ora
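Be aware that on some Linux distributions the rename command is a Perl-based utility that takes a substitution expression rather than the oldname newname syntax shown here (check man rename on your system). With that version, the equivalent of the earlier .trc example would look something like this:
$ rename 's/\.trc$/.trace/' *.trc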
5-20. Removing a File
Problem
You want to remove a file from disk.
Solution
First, use the ls command to identify the files you want to remove. In this example, any files with the extension of .trc are displayed:
$ ls -altr *.trc
After visually verifying the files you want to remove, use the rm command to permanently delete files:
$ rm *.trc
How It Works
Be very careful when using the rm command. Once the files have been removed, the only way to get them back is from a backup (if there is one). DBAs can get in a lot of trouble by accidentally removing files.
DBAs typically are logged on to a server as the oracle OS user. This special user is usually the owner of all the critical database files, so this user can remove database files, even if they are currently in use.
Because the rm command doesn’t prompt you for confirmation, we recommend that you always use the ls command to verify which files will be removed.
If you want confirmation before removing a file, use the -i option:
$ rm -i *.trc
You will now be prompted for confirmation before each file is deleted:
rm: remove regular file `rmdb1_j001_11186.trc'?
Type y to have the file removed or n if you want to keep the file. This method takes longer, but gives you some reassurance that you’re deleting the correct files.
Another technique for preventing the accidental deletion of the wrong files is to use the !$ variable. The !$ character contains the last string entered on the command line. For example, to use !$ to remove files, first use the ls command to list the files targeted for deletion:
$ ls *.trc
Here is some sample output:
ora.trc
Now the value *.trc is stored in the !$ parameter. You can use rm to remove the files listed by the previous ls command:
$ rm !$
If you’re ever unsure of the contents of the !$ variable, use the echo command to display its contents:
$ echo !$
echo *.trc
ora.trc
5-21. Removing Protected Files Without Being Prompted
Problem
You want to remove all the files associated with an old installation of the database. However, when you issue the rm (remove) command, you are presented with this prompt:
rm: remove write-protected regular empty file
You wonder whether you can run the rm command without being prompted.
Solution
There are two techniques for removing write-protected files: rm -f and yes. This example uses rm -rf (remove, recursive, force) to recursively remove all files beneath a directory without being prompted:
$ rm -rf /oracle/product/11.0
This example uses the yes command to recursively remove all files beneath a directory without being prompted:
$ yes | rm -r /oracle/product/11.0
If you type the yes command without any options, the subsequent output will be a repeating y on your screen until you press Ctrl+C. You can pipe the output of the yes command to another command that is expecting a y or n as input for it to proceed.
How It Works
Be very careful when using the removal methods described in the “Solution” section of this recipe. These techniques allow you to easily remove entire directories and subdirectories with one command. Use these techniques only when you’re absolutely sure you don’t need a directory’s contents. Consider using tar or cpio to recursively back up a directory tree before you delete it (see Chapter 6 for details).
5-22. Removing Oddly Named Files
Problem
Somehow a file was created with the odd name of -f, and apparently it cannot be removed with the rm (remove) command. You wonder how you can remove it using the rm command.
Solution
First use the ls command to view the oddly named file:
$ ls
-f
You can attempt to remove the file with the rm command:
$ rm -f
However, the rm command thinks -f is the force argument to the command and does nothing with the -f file. To remove the file, specify the current path with the file name, as shown here:
$ rm ./-f
How It Works
Files with odd names are occasionally created by accident. Sometimes you can type a command with the wrong syntax and end up with a file with an undesirable name. For example, the following will create a file name of -f:
$ ls > "-f"
Now when you list the contents of the directory, you’ll see a file named -f:
$ ls
-f
Worse yet, you might have a malicious user on your system who creates a file like this:
$ ls > "-r home"
Be extremely careful in this situation. If you attempt to remove the file without specifying a path, the command will look like this:
$ rm -r home
If you happen to have a directory named home in the current directory, this command will remove the home directory. To remove the file, use the current path ./, as shown here:
$ rm "./-r home"
In the previous command, you need to enclose the pathname and file name in quotes because there is a space in the file name. Without quotes, the rm command will attempt to remove a file named ./-r and another file named home.
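Another approach that works with most modern versions of rm (and many other utilities) is the -- marker, which tells the command that everything following it is a file name rather than an option:
$ rm -- "-r home"
$ rm -- -f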
5-23. Finding Files
Problem
You want to locate a certain file on the database server.
Solution
Use the find command to search for a file. The most basic way to search for a file is to instruct find to look for a file recursively in the current working directory and any of its subdirectories. The following command looks in the current directory and any subdirectories for any file that begins with the string "alert" and ends with the extension of .log:
$ find . -name "alert*.log"
Here’s some sample output that indicates the location of the found file relative to the current working directory:
./RMDB1/admin/bdump/alert_RMDB1.log
How It Works
It’s well worth the effort to spend some time getting to know the find command. This command will allow you to easily search for files from the command line. Because this utility is used in so many different ways, we decided to include individual recipes to document these tasks. The next several recipes of this chapter show examples of how DBAs use the find command.
If your OS account doesn’t have correct access permissions on a directory or file, find will display an error message. This example changes directories to the / directory and issues a find command:
$ cd /
$ find . -name "alert*.log"
Here is a partial listing of output, indicating that there is no access to certain directories:
find: ./proc/11686/task/11686/fd: Permission denied
find: ./proc/11688/task/11688/fd: Permission denied
find: ./proc/15638/task/15638/fd: Permission denied
To eliminate those error messages, send the error output to the null device:
$ find . -name "alert*.log" 2>/dev/null
5-24. Finding Strings in Files
Problem
You want to search for a string in a text file that could be located somewhere beneath a given directory path.
Solution
Use a combination of the find and grep commands to search for a string that exists in a file in a directory tree. The first example uses find to locate all SQL files beneath a directory and pipes the output to xargs, which executes the grep command to search for a create database string:
$ find . -name "*.sql" | xargs grep -i "create database"
If your system supports it, consider displaying the string being searched for in color:
$ find . -name "*.sql" | xargs grep -i --color "create database"
You can also use the find command with exec, grep, and print to search for strings within files. The following command is equivalent to the prior command that uses xargs:
$ find . -name "*.sql" -exec grep -i "create database" ’{}’ \; -print
In the previous line of code, the find command finds all files in a directory tree with the extension of *.sql. The output is passed to the -exec ’{}’ command, which feeds each file found to the grep -i command. The \; marks the end of the -exec command, and -print displays any files found.
You can also use command substitution to achieve the same functionality; for example:
$ grep -i "create database" $(find . -name "*.sql")
Depending on your version of the OS, the grep command may support the -r (recursive search) option. The following command recursively searches all subdirectories and files beneath the current working directory for the create database string:
$ grep -ir "create database" .
The preceding command can take a long time, depending on the number of files it searches through. Prior examples in this solution section are more efficient because they search for a particular type of file and then search within the file for a string.
How It Works
Searching through files for a particular string is a very common task. Although the “Solution” section demonstrated several techniques for accomplishing this task, there are a few other examples that you may find relevant. For example, suppose that you want to display only the file name, not instances of the search string. To achieve this, use the -q option of grep. This example searches trace files for the word error and displays only the file name containing the search string:
$ find . -name "*.trc" -exec grep -qi "error" ’{}’ \; -print
When you troubleshoot issues, it is also helpful to see the file names and at what time the file was last modified:
$ find . -name "*.trc" -exec grep -qi "error" ’{}’ \; \
-printf "%p %TY-%Tm-%Td %TH:%TM:%TS %Tz\n"
Image Note  On some systems, the -q option may not be available. For example, similar functionality on Solaris would be implemented with the -l option. Use man grep to display all options available on your server.
Sometimes you want to search for the incidence of two or more strings in a file. Use grep with the -e option to accomplish this. This command searches for the "error" or "ora-" strings:
$ find . -name "*.trc" -exec grep -ie "error" -e "ora-" ’{}’ \; -print
You can also use egrep to search for multiple strings in a file:
$ find . -name "*.trc" -exec egrep "error|ora-" ’{}’ \; -print
Occasionally, you might have the need to inspect a binary file. For example, suppose that when using a spfile (server parameter file), you set a parameter erroneously as follows:
SQL> alter system set processes=10000000 scope=spfile;
System altered.
You then subsequently discover the bad setting when you attempt to stop and start the database:
ORA-00821: Specified value of sga_target 512M is too small
In this situation, you can’t even start your database in nomount mode, so you can’t use the ALTER SYSTEM command to modify the spfile. However, you can use the strings command to extract text strings from the binary spfile to quickly create a text-based init.ora file:
$ cd $ORACLE_HOME/dbs
$ strings spfileORA12CR1.ora >initORA12CR1.ora
Now modify the newly created init.ora file so that the value causing the problem is eliminated and then rename the spfile so that Oracle automatically uses the init.ora file when starting the instance.
This situation is just one example of how a DBA might have to use the strings command; the important thing to keep in mind is that this utility provides you with a way to look for text strings in binary files.
DOES DATABASE WRITER WRITE TO DATAFILES IN BACKUP MODE?
Back in the days before RMAN, a misconception existed with some DBAs that the database writer stops writing to datafiles while a datafile’s tablespace is in hot backup mode. The following example uses the strings command to verify that the database writer does indeed continue to write to datafiles, even while in backup mode.
First verify that a string does not exist in a datafile:
$ strings users01.dbf | grep -i denver
Verify that nothing is returned by the previous command. Next create a table and place it in the USERS tablespace:
SQL> create table city(name varchar2(50)) tablespace users;
Next alter a tablespace into backup mode:
SQL> alter tablespace users begin backup;
Now insert a string into the CITY table:
SQL> insert into city values('Denver');
Connect as SYS and run the following command to flush modified blocks from memory to disk:
SQL> alter system checkpoint;
From the OS command line, search for the "denver" string in the USERS database file:
$ strings users01.dbf | grep -i denver
You should see the following output:
Denver
This verifies that the database writer continues to write to datafiles, even while the corresponding tablespace is in backup mode. Don’t forget to take the USERS tablespace out of backup mode.
5-25. Finding a Recently Modified File
Problem
You recently created a file, but can’t remember where it is located on the server. You want to find any files with a recent creation date.
Solution
Use the find command with the -mmin (modified minutes) option to find very recently modified files. This example finds any files that have changed in the last 30 minutes beneath the current working directory:
$ find . -mmin -30
To find all files that were modified more than 30 minutes ago, use the + sign instead of the - sign:
$ find . -mmin +30
Sometimes when you troubleshoot issues, it is helpful to additionally pinpoint the exact time the file was modified; you can use the stat command to accomplish this:
$ find . -mmin -30 -exec stat -c "%n %y" {} \;
Additionally, the -printf option will show the time of file modification:
$ find . -mmin -30 -printf "%p %TY-%Tm-%Td %TH:%TM:%TS %Tz\n"
Here’s some sample output:
./dbcreate.sql 2015-05-09 10:21:46 -0700
./.mozilla/firefox/q5xf2w9k.default 2015-05-09 12:07:05 -0700
How It Works
The find command with a time-related option is useful for locating files that have recently been updated or changed. This command can be useful when you can’t remember where you placed recently modified or downloaded files.
If you’re using a version of find that does not support the -mmin option, try the -ctime option instead. The following command locates any files that have changed on the server in the last day beneath the ORACLE_HOME directory:
$ find $ORACLE_HOME -ctime -1
Many options are available when you are trying to find a file. For example, use the -amin (access minutes) option to find a file based on when it was last accessed. This line of code finds all files beneath the current working directory that were accessed within the last 60 minutes:
$ find . -amin -60
Table 5-7 describes a subset of time-related options commonly used with the find command.
Table 5-7. Commonly Used Time-Related Options to Find Files
Option           Description
-amin            Finds files accessed more than +n, less than -n, or exactly n minutes ago
-atime           Finds files accessed more than +n, less than -n, or exactly n days ago
-cmin            Finds files changed more than +n, less than -n, or exactly n minutes ago
-ctime           Finds files changed more than +n, less than -n, or exactly n days ago
-mmin            Finds files modified more than +n, less than -n, or exactly n minutes ago
-mtime           Finds files modified more than +n, less than -n, or exactly n days ago
-newer <file>    Finds files modified more recently than <file>
A wide variety of options are available with the find command. Use the man find command to display the options available on your system.
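The -newer option from Table 5-7 is handy when you want everything modified since a known point in time. One common technique (the marker file name here is hypothetical) is to create a reference file with a specific timestamp using touch -t and then find files newer than it:
$ touch -t 201505090800 /tmp/marker
$ find . -type f -newer /tmp/marker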
5-26. Finding and Removing Old Files
Problem
You noticed that there are thousands of trace files being created in a diagnostic directory that consume disk space. You want to find old trace files and remove them.
Solution
Use the find command to locate files older than a certain age. Once the old files are identified, use the rm command to remove them. The following example identifies files greater than 14 days old and removes them all with one line of code:
$ find $ORACLE_BASE/diag/rdbms/dwrep/DWREP/trace/*.trc -type f -mtime +14 -exec rm -f {} \;
The preceding command finds all files (the option -type f indicates a regular file) in the specified directory and its subdirectories that are older than 14 days. The rm command is executed (-exec) once for each file name located by the find command. The function of {} is to insert each file returned (by find) into the rm -f command line. When using the -f (force) option, you will not be prompted if you really want to remove write-protected files (files without write permission enabled); \; denotes the end of the exec command line.
You can also use the find command in conjunction with xargs to find and remove old files:
$ find $ORACLE_BASE/diag/rdbms/dwrep/DWREP/trace/*.trc -mtime +14 | xargs rm
In the preceding line of code, the xargs command provides as input to the rm command any file names returned by the find command.
Another variation of this is to use command substitution $(<command>); for example:
$ rm $(find $ORACLE_BASE/diag/rdbms/dwrep/DWREP/trace/*.trm -mtime +14)
In the preceding line of code, any file names returned by the find command enclosed by $() will be removed by the rm command.
You might be wondering why you cannot directly pipe the standard output of the find command to be used as standard input to the rm command. For example, this does not work:
$ find $ORACLE_BASE/diag/rdbms/trg1/TRG/trace/*.trm -mtime +14 | rm
rm: missing operand
Some commands (e.g., rm and kill) don’t directly accept another command’s standard output as standard input. To use the standard output of the find command as input to the rm command, you have to use one of the techniques described in this “Solution” section (e.g., exec, xargs, or command substitution).
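One caveat when piping find output to xargs: file names that contain spaces or newlines can be split incorrectly. If your versions of find and xargs support them (the GNU versions do), the -print0 and -0 options pass file names separated by null characters, which avoids the problem:
$ find $ORACLE_BASE/diag -name "*.trc" -mtime +14 -print0 | xargs -0 rm -f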
How It Works
An active database will regularly produce trace files as part of its normal operations. These files often contain detailed information about potential problems or issues with your database. You usually don’t need to keep trace and audit files lying around on disk forever. As these files grow older, the information in them becomes less valuable.
DBAs will typically write a small shell script to clean up old files. This shell script can be run automatically on a periodic basis from a utility such as cron. See Chapter 7 for details on shell scripting and Chapter 11 for techniques for details on automating tasks through cron.
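Here is a minimal sketch of such a cleanup script, assuming a Bash shell and an illustrative trace directory; adjust the path and retention period for your environment:
#!/bin/bash
# purge_trace.bsh: remove database trace files older than 14 days
# (the directory below is an example; point it at your diagnostic destination)
TRACE_DIR=/u01/app/oracle/diag/rdbms/dwrep/DWREP/trace
find $TRACE_DIR -type f \( -name "*.trc" -o -name "*.trm" \) -mtime +14 -exec rm -f {} \;
A cron entry similar to the following would run the script nightly at 2:00 a.m. (again, the script location is illustrative):
0 2 * * * /home/oracle/scripts/purge_trace.bsh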
5-27. Finding the Largest Files
Problem
Your database is experiencing availability issues because a disk is 100 percent full. You want to locate the largest files in a directory tree.
Solution
Use the find command to locate files recursively in a directory tree. The following command sends the output of the find operation to the sort and head commands to restrict the output to just the five largest files located in any directory beneath the current working directory:
$ find . -ls | sort -nrk7 | head -5
Here is a sample of the output:
6602760 820012 -rw-r-----     1 oracle    oinstall 838868992 Jan 21 14:55
./RMDB1/undotbs01.dbf
6602759 512512 -rw-r-----     1 oracle    oinstall 524296192 Jan 21 14:55
./RMDB1/system01.dbf
6602758 51260 -rw-r-----     1 oracle    oinstall 52429312 Jan 20 22:00
./RMDB1/redo03a.log
6602757 51260 -rw-r-----     1 oracle    oinstall 52429312 Jan 19 06:00
./RMDB1/redo02a.log
6602756 51260 -rw-r-----     1 oracle    oinstall 52429312 Jan 21 14:55
./RMDB1/redo01a.log
The -nrk7 option of the preceding sort command orders the output numerically, in reverse order, based on the seventh column. As shown, the files are listed from largest to smallest; the top entry is about 800MB in size.
How It Works
You can also use the find command to look for certain types of files. To look for a file of a particular extension, use the -name option. For example, the following command looks for the largest files beneath the current working directory and subdirectories that have an extension of .log:
$ find . -name "*.log" -ls | sort -nrk7 | head
DBAs often create shortcuts (via shell functions or aliases) that encapsulate long strings of commands. This line of code shows how to create an alias command shortcut:
$ alias flog='find . -name "*.log" -ls | sort -nrk7 | head'
Command shortcuts can save time and prevent typing errors. See recipe 2-7 for details on creating functions and aliases.
5-28. Finding a File of a Certain Size
Problem
You’re running out of disk space, and you want to recursively locate all files beneath a directory that exceed a certain size.
Solution
Use a combination of the find command with the -size option to accomplish this task. This example uses the -size option to find any files more than 1GB in the current working directory and any subdirectories:
$ find . -size +1000000k
Here’s a small snippet of the output:
./ORA1212/sysaux01.dbf
./ORA12CR1/users01.dbf
./ORA12CR1/undotbs01.dbf
If you want to see the size of the file, use the stat command to do so:
$ find . -size +1000000k -exec stat -c "%n %s" {} \;
Here’s the corresponding output:
./ORA1212/sysaux01.dbf 1073750016
./ORA12CR1/users01.dbf 5368717312
./ORA12CR1/undotbs01.dbf 4294975488
How It Works
You can use the -size option of the find command in a number of useful ways. For example, if you want to find files smaller than a certain size, use the - (minus) sign. This line of code finds any files smaller than 20MB beneath the current working directory:
$ find . -size -20000k
If you want to find a file of an exact size, leave off the plus or minus sign before the size of the file designator. This example finds all files with the size of 16,384 bytes:
$ find . -size 16384c
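Recent versions of GNU find also accept M (megabytes) and G (gigabytes) suffixes, which are easier to read than counting kilobytes or bytes; this may not work on older Solaris implementations of find:
$ find . -size +1G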
5-29. Sorting Files by Size
Problem
You want to list files from largest to smallest.
Solution
The ls -alS command displays the long listing of all files sorted from largest to smallest; for example:
$ ls -alS
Here is a sample of the output:
total 4001584
-rwxr----- 1 oracle oinstall 2039488512 Jan 21 16:39 o1_mf_undotbs1_3gpysv9n_.dbf
-rwxr----- 1 oracle oinstall 983834624 Jan 21 16:37 o1_mf_sysaux_3gpystwj_.dbf
-rwxr----- 1 oracle oinstall 775954432 Jan 21 16:39 o1_mf_system_3gpysttv_.dbf
-rwxrwxr-x 1 oracle oinstall 176168960 Jan 21 02:31 o1_mf_temp_3gpz8s70_.tmp
To eliminate directories from the output, use the following technique:
$ ls -lS | grep '^-'
If you want to reverse the order of the sort (smallest to largest), include the -r (reverse) switch:
$ ls -arlS
How It Works
If there are many files in a directory, you can combine ls and head to just list the “top n” files in a directory. The following example restricts the output of ls to the first five lines:
$ ls -alS | head -5
If you’re using Solaris, it might not have the -S option for the ls command. On Solaris systems, use a command such as the following to display files sorted by size:
$ ls -l | sort -nrk5 | head
Also be aware that the sort column (5 in the preceding line of code) may differ, depending on the format of the long-listing output on your system.
5-30. Finding the Largest Space-Consuming Directories
Problem
You have a mount point that is out of space and you need to identify which directories are consuming the most space.
Solution
Use the du command to report on disk usage. The following example reports the top five directories consuming the most disk space beneath the current working directory:
$ du -S . | sort -nr | head -5
The -S (do not include size of subdirectories) option instructs du to report the amount of space used in each individual directory. By default, the output of space used is reported in kilobytes. Here’s a sample of the output:
1068448 ./lib
680104  ./assistants/dbca/templates
550140  ./bin
260136  ./rdbms/audit
227868  ./inventory/Scripts/ext/lib
If you want to report the cumulative space consumed by a directory, including its subdirectories, leave off the -S option:
$ du . | sort -nr | head -5
Here is the corresponding output:
6197828 .
1074732 ./lib
695236  ./assistants
684212  ./assistants/dbca
680104  ./assistants/dbca/templates
When not using the -S option, the top directory will always report the most consumed space because it is an aggregate of its disk space plus any spaced used by its subdirectories.
On some systems, there may not be an -S option. For example, on Solaris the -o option provides the same behavior as the Linux -S option:
$ du -o . | sort -nr | head -10
Use man du to list all options available on your database server.
How It Works
The du command recursively lists the amount of disk space used by a directory and every subdirectory beneath it. If you don’t supply a directory name as an argument, du starts with the current working directory by default. The du command reports the amount of space consumed and the name of the directory on one line.
The du command has a variety of useful options. For example, the -s (summary) option is used to report a grand total of all space used beneath a directory and its subdirectories. This command reports on the total disk space used beneath the /orahome directory:
$ du -s /orahome
3324160 /orahome
You can also use the -h option to make the output more readable:
$ du -sh /orahome
3.2G /orahome
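If your version of sort supports the -h (human-numeric) option, as GNU sort does, you can combine it with du -sh to rank the immediate subdirectories of the current directory by size in a readable format:
$ du -sh * | sort -hr | head -5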
5-31. Truncating an Operating System File
Problem
You have a large trace file that is being written to by a database process. You know that the trace file doesn’t contain anything that needs to be retained. The trace file has filled up a disk, and you want to make the size of the file 0 bytes without removing the file because you know that a database process is actively writing to the file.
Solution
Copy the contents of /dev/null to the file. You can use either the cat command or the echo command to accomplish this. This example uses the cat command to make an existing log file 0 bytes in size:
$ cat /dev/null > listener.log
The other way to zero out the file is with the cp command. This example copies the contents of /dev/null to the trace file:
$ cp /dev/null listener.log
How It Works
One of us recently had a database hang because one of the mount points was full, which prevented Oracle from writing to disk. Upon further inspection, it was discovered that an Oracle Net trace file had grown to 4GB in size. The file had grown large because a fellow DBA had enabled verbose tracing in this environment and had forgotten to monitor the file or inform the other DBAs about this new level of tracing.
In this case, an Oracle Net process was actively writing to the file, so we didn't want to simply move or remove the file because we weren't sure how the background process would react; it was safer to make the file 0 bytes. The /dev/null device is colloquially called the bit bucket. It is often used as a location to send output that you don't need to save. It can also be used to make a file 0 bytes without removing the file.
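As an aside, on Linux systems that include the truncate utility (it is not guaranteed to be present on Solaris), you can achieve the same result without referencing /dev/null; this is an assumed alternative rather than part of the technique described previously:
$ truncate -s 0 listener.log
In the Bash shell, simply redirecting nothing into the file also zeroes it out:
$ > listener.log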
Image Caution  Zeroing out a file permanently deletes its contents. Use the techniques in this recipe only if you’re certain you don’t need the information contained within the file.
5-32. Counting Lines and Words in a File
Problem
You want to count the number of lines and words in a shell script.
Solution
Use the wc (word count) command to count the number of lines and words in a file. This example counts the number of words in the rmanback.bsh shell script:
$ wc rmanback.bsh
35  204 1361 rmanback.bsh
The preceding output indicates that there are 35 lines, 204 words, and 1,361 characters in the file.
How It Works
If you want to see only the number of lines in a file, use wc with the -l option:
$ wc -l rmanback.bsh
35 rmanback.bsh
Similarly, if you want to display only the number of words, use the -w option:
$ wc -w rmanback.bsh
204 rmanback.bsh
If you want to see the line count of all files in a directory from smallest to largest, use the following:
$ wc -l *.* | sort -nk1
The preceding command pipes the output of wc to the sort command (sorting on the first column of the output).
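The wc command is also handy for counting lines produced by other commands. For example, the following illustrative one-liner counts the Oracle background processes running on a server (the ora_ pattern is an assumption about your process naming):
$ ps -ef | grep -i ora_ | grep -v grep | wc -l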
5-33. Creating a Second Name for a File
Problem
When performing a new install of the Oracle binaries, your initialization parameter file is located in an Optimal Flexible Architecture (OFA) directory such as /ora01/admin/DBS/pfile. When starting a database, Oracle looks for the initialization file in the ORACLE_HOME/dbs directory by default.
You don’t want to maintain the initialization file in two separate directories. Instead, you want to create a link from the OFA directory to the default directory.
Solution
Use the ln -s command to create a soft link to another file name. The following creates a soft link named /ora01/product/12.1.0/dbs/initDEV.ora that points to the physical file /ora01/admin/DEV/pfile/initDEV.ora:
$ ln -s /ora01/admin/DEV/pfile/initDEV.ora /ora01/product/12.1.0/dbs/initDEV.ora
A long listing of the soft link shows it pointing to the physical file:
$ ls -altr /ora01/product/12.1.0/dbs/initDEV.ora
lrwxrwxrwx    1 oracle dba    39    Apr 15 15:58 initDEV.ora ->
/ora01/admin/DEV/pfile/initDEV.ora
If you need to remove a soft link, you can use the rm or unlink commands. As a precaution, you may want to create a copy of the file before you remove the soft link. Be careful that you remove the soft link, not the physical file. For this example, the soft link is removed (and not the physical file):
$ unlink /ora01/product/12.1.0/dbs/initDEV.ora
The physical file located in the /ora01/admin/DEV directory should still exist.
How It Works
A soft link (also referred to as a symbolic link) creates a file that acts as a pointer to another physical file. Soft links are used by DBAs when they need a file to appear as if it were in two separate directories, but physically resides in only one location.
The technique described in the solution of this recipe is commonly used by Oracle DBAs to manage the initialization file. This technique allows DBAs to view and edit the file from either the soft link name or the actual physical file name.
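If you want to confirm where a soft link points without parsing the output of ls -l, and your system provides the readlink utility (available on Linux and recent Solaris releases), you can display the link's target directly; the path below is the one used in the Solution section:
$ readlink /ora01/product/12.1.0/dbs/initDEV.ora
/ora01/admin/DEV/pfile/initDEV.ora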
5-34. Creating a Second Name for a Directory
Problem
You want to physically move a datafile to a different disk location without having to change any of the Oracle metadata.
Solution
Use soft links to make a directory look like it exists, when it is really just a pointer to a physical location. This example shows how to move a tablespace datafile from one mount point to another, without having to change the datafile’s name as it appears in the data dictionary. In this example, the datafile will be moved from /oradisk1/DBS to /oradisk2/DBS.
On this server, the following physical mount points exist:
/oradisk1/DBS
/oradisk2/DBS
A long listing shows the ownership of the mount points as follows:
$ ls -altrd /oradisk*
drwxr-xr-x  3 oracle oinstall 4096 Apr 15 19:17 /oradisk2
drwxr-xr-x  3 oracle oinstall 4096 Apr 15 19:19 /oradisk1
Create the following soft link as the root user:
# ln -s /oradisk1 /oradev
Here’s a simple test to help you understand what is happening under the hood. Change directories to the soft link directory name:
$ cd /oradev/DBS
Notice that if you use the built-in Bash pwd command, the soft link directory is reported:
$ pwd
/oradev/DBS
Compare that with the use of the pwd utility located in the /bin directory, which reports the actual physical location:
$ /bin/pwd
/oradisk1/DBS
Image Note  You can also make the Bash built-in pwd command display the physical location by using the -P (physical) option (see recipe 5-1 for more details).
Next, create a tablespace that references the soft link directory. Here’s an example:
SQL> CREATE TABLESPACE td01
     DATAFILE '/oradev/DBS/td01.dbf' SIZE 50M;
A query from V$DATAFILE shows the soft link location of the datafile:
SQL> select name from v$datafile;
Here’s the output pertinent to this example:
/oradev/DBS/td01.dbf
Next, shut down your database:
SQL> shutdown immediate;
Now move the datafile to the new location:
$ mv /oradisk1/DBS/td01.dbf /oradisk2/DBS/td01.dbf
Next (as root) remove the previously defined soft link:
# rm /oradev
Now (as root) redefine the soft link to point to the new location:
# ln -s /oradisk2 /oradev
Now (as oracle) restart the database:
SQL> startup
If everything goes correctly, your database should start. You have physically moved a datafile without having to change any data dictionary metadata.
How It Works
Using soft links on directories gives you some powerful options when you relocate datafiles. This technique allows you to make Oracle think that a required directory exists when it is really a soft link to a different physical location.
The techniques in the “Solution” section of this recipe are useful when duplicating databases to a remote server using RMAN. In this situation, you can use symbolic links to make the auxiliary database server's filesystem look similar to the source database server's. It provides a method for relocating databases to servers whose mount points differ from the original server's: you make a mount point or directory appear to exist to Oracle when it is really a soft link to a different physical location.
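Before restarting the database, it can be reassuring to confirm that the datafile is visible through the redefined link. Assuming the paths used in the Solution section:
$ ls -l /oradev/DBS/td01.dbf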
5-35. Viewing a Large File
Problem
The database has dumped a large trace file, and you’re troubleshooting the issue and are attempting to view the file with a text editor (e.g., vi). You receive the following error:
Tmp file too large
:
You need to somehow read this file to diagnose the problem.
Solution
In this scenario, if you want to scroll through the file, you can use a tool such as more or less, which allows you to view portions of the file at a time; for example:
$ more TRG_m000_1489.trc
If you know the information you’re interested in is near the end of the file, you can use tail to create a separate file containing the content you’re interested in:
$ tail -100000 TRG_m000_1489.trc > out.txt
You can also use a utility such as split with the -l (lines) option to break the file into pieces:
$ split -l 100000 TRG_m000_1489.trc
The original file is still intact, but you should now see several files that begin with an x character. Each x file contains a portion of the original trace file based on the number of lines that you specified; for example:
$ ls x*
xaa  xab  xac  xad  xae  xaf
You should now be able to view these smaller files individually with the text editor. You have some control over the names of the split files; for example, the following names the split files with the string "new":
$ split -l 100000 TRG_m000_1489.trc new
A quick listing verifies this:
$ ls new*
newaa  newab  newac ...
How It Works
Sometimes databases dump large trace files when encountering problems. If a file is too big to fit in the memory area being used by the text editor, you won’t be able to directly view it; you’ll have to use one of the techniques discussed in the “Solution” section to view the file.
The more, less, and tail commands operate on smaller portions of the large file, so they can present the large file in a piecemeal fashion. The split command is very useful for taking a large file and breaking it into smaller pieces.
Depending on your version of the OS, the split command may be equipped with the -n parameter, which allows you to specify the number of chunks a file is divided into. For example, to create four split files that have roughly the same size, use the following:
$ split -n4 TRG_m000_1489.trc
If the -n option isn’t available, you can use the expr command and command substitution to calculate the sizes. The following example splits the trace file into four equal pieces based on line count:
$ split -l $(expr $(wc -l TRG_m000_1489.trc | awk '{print $1}') / 4) TRG_m000_1489.trc
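Another option, if you know roughly where in the file the interesting information lives, is to print just a range of lines with sed rather than splitting the whole file; the line numbers below are purely illustrative:
$ sed -n '100000,110000p' TRG_m000_1489.trc > mid.txt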
5-36. Downloading Files
Problem
You want to use a command-line tool to download a file from a remote web site.
Solution
There are multiple tools for downloading files from the Internet; this recipe focuses on two feature-rich utilities: curl and wget. First up is curl.
curl
The curl (transfer a URL) command is an extremely robust tool for downloading files from remote web sites using common network protocols (e.g., HTTP, HTTPS, FTP, FTPS, and so on). For example, suppose you want to download a useful DBA shell script from the github.com web site. You can do so as follows:
$ curl -kL github.com/ardentperf/racattack/raw/master/makeDVD/auto.sh -o auto.sh
The -k option allows an insecure connection (it skips SSL certificate verification), and the -L option instructs curl to follow any redirects. The -o option allows you to specify the name of the file created locally. If successful, you should now have a copy of the auto.sh file.
Here's an example of downloading a tar file from the GNU download site:
$ curl http://ftp.gnu.org/gnu/wget/wget-1.5.3.tar.gz -o wget-1.5.3.tar.gz
In this manner, you can use the command line to download files from the Internet. This section contains only a few examples of how to use curl; there are many options and features available with this robust downloading tool. Use the curl --help command to display all options.
wget
The wget (network downloader) utility can also be used to download files from remote web sites. For example, here we download a file from the github.com web site:
$ wget https://github.com/ardentperf/racattack/raw/master/makeDVD/auto.sh
Here's another example of using wget, in which a tar file is downloaded from the GNU download site:
$ wget http://ftp.gnu.org/gnu/wget/wget-1.5.3.tar.gz
If you want to rename a file, use the -O option; for example:
$ wget http://ftp.gnu.org/gnu/wget/wget-1.5.3.tar.gz -O my.tar.gz
You can also create a text file with the names of files you want to download and then use the -i option to instruct wget to download the file names within the text file. For example, suppose the file names are placed in a file named download.txt:
$ wget -i download.txt
In this way, you can efficiently automate file downloads from the Internet via the command line. We've only scratched the surface of the features available with wget. Use wget --help for a quick reference of all available parameters.
How It Works
It’s occasionally useful to be able to download files from remote servers. These files could be scripts that a DBA has posted or Oracle installation software. The curl and wget commands allow you to download remote files, provided that you have the download URL address. The basic syntax for these commands is as follows:
$ curl "download_url" -o file_name
$ wget "download_url" -O file_name
Files can be downloaded in this manner from any web site that allows downloads. For example, it is possible to download files from the My Oracle Support (MOS) web site. If you have an authenticated account, you can download files directly by using the following syntax (you must have a valid username and password to do this):
$ wget --http-user=user@domain.com --ask-password "file_url" -O file_name
Using this method to download files allows you to script and automate tasks that otherwise would require using a web browser to initiate a download. There’s nothing wrong with using a web browser (indeed, this is how most files are downloaded), but if you have the need to automate a download task via the command line, curl and wget are extremely flexible and powerful utilities for this task.
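Both tools can also resume a partially completed download, which is useful when pulling large installation media over an unreliable connection. The -c option of wget continues a prior download, and curl does the same with -C - (the -O option saves the file under its remote name); the URL placeholder below follows the syntax shown previously:
$ wget -c "download_url"
$ curl -C - -O "download_url"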




CHAPTER 6
Archiving and Compressing Files
Most people who work with computers realize that the task of copying many files from one location to another is more efficient if the files can be bundled together and copied as a single unit. This is especially true when copying hundreds or thousands of files from one location to another. For example, in a Windows environment, if you have hundreds of files in a folder, it is fairly easy to click and drag the folder (containing the files) and copy it to a different location. This copy task would be time-consuming and error-prone if you individually copied each file within the folder.
On Linux/Solaris systems, tar, cpio, and zip are utilities that DBAs often use to group files together into one file (such as a Windows folder). Bundling a group of files together into one file is known as creating an archive. Archiving tools allow you to back up all files in a directory structure and preserve any file characteristics such as permissions, ownership, and contents. The archive file is used to move or copy the files as a single unit to a different location.
The tar utility was originally used to bundle (or archive) files together and write them to tape, which is why it’s called tape archive, or tar for short. Although tar was originally used to write files to tape, its bundling capability is mainly what DBAs and developers use even today.
The cpio utility gets its name from its capability to copy files in and out of archived files. This command-line utility is also widely used by DBAs to bundle and move files.
The zip utility is another popular tool for bundling files. This utility is especially useful for moving files from one OS platform to another. For example, you can use zip to bundle and move a group of files from a Windows server to a Linux server.
Network performance can sometimes be slow when large archive files are moved from one server to another. In these situations, it is appropriate to compress large files before they are remotely transferred. Many compression programs exist, but the most commonly used are gzip, bzip2, and xz. The gzip and bzip2 utilities are widely available on most Linux/Solaris platforms. The xz utility is a newer tool and has a more efficient compression algorithm than the gzip and bzip2 compression tools.
Most of the utilities described in this chapter are frequently used by DBAs, SAs, and developers. Which utility you use for the task at hand depends on variables such as personal preference, standards defined for your environment, and features of the utility. For example, downloading installation files that are bundled with cpio means you have to be familiar with this utility. In other situations, you might use tar because the person receiving the file has requested that the file be in that format.
DBAs spend a fair amount of time moving large numbers of files to and from database servers. To do your job efficiently, it is critical to be proficient with archiving and compression techniques. In this chapter, we cover common methods that DBAs use to bundle and compress files. We also cover the basics of generating checksums, which are used to verify that bundled files are copied successfully from one server to another. First up is the tar utility.
6-1. Bundling Files Using tar
Problem
You want to package several database scripts into one file using the tar utility.
Solution
This first example uses the tar utility with the -cvf options to bundle all files ending with the string .sql that exist in the current working directory:
$ tar -cvf prodrel.tar *.sql
The -c (create) option specifies that you are creating a tar file. The -v (verbose) option instructs tar to display the names of the files included in the tar file. The -f (file) option directly precedes the name of the tar archive file. The file that is created in this example is named prodrel.tar.
Image Note  It is standard to name the tar file with the extension .tar. A file created with tar is colloquially referred to as a tarball.
If you want to include all files in a directory tree, specify the directory name from which you want the tar utility to begin bundling. The following command bundles all files in the /home/oracle/scripts directory (and any files in its subdirectories):
$ tar -cvf prodrel.tar /home/oracle/scripts
Here is some sample output:
tar: Removing leading `/’ from member names
/home/oracle/scripts/
/home/oracle/scripts/s2.sql
tar: /home/oracle/scripts/prodrel.tar: file is the archive; not dumped
/home/oracle/scripts/s1.sql
If you want to view the files that you've just bundled, use the -t (table of contents) option:
$ tar -tvf prodrel.tar
Here’s the corresponding output:
drwxr-xr-x oracle/dba        0 2015-05-10 11:19:55 home/oracle/scripts/
-rw-r--r-- oracle/dba      601 2015-05-10 11:14:30 home/oracle/scripts/s2.sql
-rw-r--r-- oracle/dba       22 2015-05-10 11:14:12 home/oracle/scripts/s1.sql
Note that if you retrieve files from this tarfile, the prior output shows the directories that will be created and where the scripts will be placed.
If you need to add one file to a tar archive, use the -r (append) option:
$ tar -rvf prodrel.tar newscript.sql
This example adds a directory named scripts2 to the tar file:
$ tar -rvf prodrel.tar scripts2
How It Works
DBAs, SAs, and developers often use the tar utility to bundle a large number of files together as one file. Once files have been packaged together, they can be easily moved as a unit to another location such as a remote server.
The tar command has the following basic syntax:
$ tar one_mandatory_option [other non-mandatory options] [tar file] [other files]
When running tar, you can specify only one mandatory option, and it must appear first on the command line (before any other options). Table 6-1 describes the most commonly used mandatory options.
Table 6-1. Mandatory tar Options
-c, --create: Creates a new archive file.
-d, --diff, --compare: Compares files stored in one tar file with other files.
-r, --append: Appends other files to the tar file.
-t, --list: Displays the names of files in the tar file. If other files are not listed, displays all files in the tar file.
-u, --update: Adds new or updated files to the tar file.
-x, --extract, --get: Extracts files from the tar file. If other files are not specified, extracts all files from the tar file.
-A, --catenate, --concatenate: Appends a second tar file to a tar file.
Formatting Options
There are three methods for formatting options when running the tar command:
  • Short
  • Old (historic)
  • Mnemonic
The short format uses a single hyphen (-) followed by single letters signifying the options. Most of the examples in this chapter use the short format. This format is preferred because there is minimal typing involved.
The old format is similar to the short format except that it doesn’t use the hyphen. Most versions of tar still support the old syntax for backward compatibility with older Linux/Solaris distributions. We mention the old format here only so that you’re aware of it; we don’t use the old format in this chapter.
The mnemonic format uses the double-hyphen format followed by a descriptive option word. This format has the advantage that it is easier to understand which options are being used. For example, this line of code clearly shows that you’re creating a tar file, using the verbose output, for all files in the /home/oracle/scripts directory (and its subdirectories):
$ tar --create --verbose --file prodrel.tar /home/oracle/scripts
The -f or --file option must come directly before the name of the tar file you want to create. You receive unexpected results if you specify the f option anywhere other than directly before the name of the tar file. Look carefully at this line of code and the subsequent error message:
$ tar -cfv prodrel.tar *.sql
tar: prodrel.tar: Cannot stat: No such file or directory
This line of code attempts to create a file named v and put in it a file named prodrel.tar, along with files in the current working directory ending with the .sql extension.
Compressing
If you want to compress the files as you archive them, use the -z option (for gzip) or the -j option (for bzip2). The next example creates a compressed archive file of everything beneath the /home/oracle/scripts directory:
$ tar -cvzf prodrel.tar /home/oracle/scripts
Depending on the tar version, the previous command might not add an extension such as .gz to the name of the archive file. In that case, you can specify the file name with a .gz extension when creating the file or you can rename the file after it has been created.
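For example, you might include the .gz extension explicitly when creating the compressed archive:
$ tar -czvf prodrel.tar.gz /home/oracle/scripts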
If you’re using a non-GNU version of tar, you might not have the z or j compression options available. In this case, you have to explicitly pipe the output of tar to a compression utility such as gzip:
$ tar -cvf - /home/oracle/scripts | gzip > prodrel.tar.gz
Copying Directories
You can also use tar to copy a directory from one location to another on a box. This example uses tar to copy the scripts directory tree to the /home/oracle/backup directory. The /home/oracle/backup directory must be created before issuing the following command:
$ tar -cvf - scripts | (cd /home/oracle/backup; tar -xvf -)
The previous line of code needs a bit of explanation. The first tar command writes its archive to standard output (signified with a hyphen [-]), which is piped to the next set of commands. The cd command changes directories to /home/oracle/backup, and the second tar command then extracts from standard input (again signified with a -). This gives you a method for copying directories from one location to another without having to create an intermediary tarball file.
Image Note  You can use the tree command to display a directory structure (and files contained within); for instance:
$ tree /home/oracle/scripts
Here is some sample output:
/home/oracle/scripts
|-- s1.sql
`-- s2.sql
You can also verify the structure of the backup directory:
$ tree /home/oracle/backup
Here’s the corresponding output:
/home/oracle/backup
`-- scripts
    |-- s1.sql
    `-- s2.sql
You can also copy a directory tree from your local server to a remote box. This is a powerful one-line combination of commands that allows you to bundle a directory, copy it to a remote server, and extract it remotely:
$ tar -cvf - <locDir> | ssh <user@remoteNode> "cd <remoteDir>; tar -xvf -"
For instance, the following command copies everything in the dev_1 directory to the remote ora03 server as the oracle user, placing it in the /home/oracle directory:
$ tar -cvf - dev_1 | ssh oracle@ora03 "cd /home/oracle; tar -xvf -"
You'll be prompted for the remote user's password when you run the prior command. If you omit the user, ssh assumes that you're trying to access the remote server with your current username.
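If you're using the GNU version of tar, you can also exclude files from a bundle with the --exclude option. The following sketch creates the same archive as before but skips any log files (the *.log pattern is just an example):
$ tar -cvf prodrel.tar --exclude='*.log' scripts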
6-2. Unbundling Files Using tar
Problem
You want to retrieve files from a bundled tar file.
Solution
Use the -x option to extract files from a tar file. It is usually a good idea to first create a new directory and extract the files in the newly created directory. This way, you don’t mix up files that might already exist in a directory with files from the archive. This example creates a directory and then copies the tar file into the directory before extracting it:
$ mkdir tarball
$ cd tarball
At this point, it is worth viewing the files in the tar file (using the -t option). This code shows you the directories that will be created and where scripts will be restored:
$ tar -tvf prodrel.tar
drwxr-xr-x oracle/dba        0 2015-05-10 11:29:53 home/oracle/scripts/
-rw-r--r-- oracle/dba      601 2015-05-10 11:14:30 home/oracle/scripts/s2.sql
-rw-r--r-- oracle/dba       22 2015-05-10 11:14:12 home/oracle/scripts/s1.sql
The preceding output shows that the home directory will be created beneath the current working directory. It also shows that the scripts directory will be created with two SQL files.
Now copy the tar file to the current directory and extract the files from it:
$ cp ../prodrel.tar .
$ tar -xvf prodrel.tar
Here’s the corresponding output that shows the directories and files that were extracted:
home/oracle/scripts/
home/oracle/scripts/s2.sql
home/oracle/scripts/s1.sql
You can also use the tree command to confirm the directory structure and files therein:
$ tree
.
|-- home
|   `-- oracle
|       `-- scripts
|           |-- s1.sql
|           `-- s2.sql
`-- prodrel.tar
How It Works
The -x option allows you to extract files from a tar file. When extracting files, you can retrieve all files in the tar file or you can provide a list of specific files to be retrieved. The following example extracts one file from the tar file:
$ tar -xvf prodrel.tar scripts/s1.sql
You can also use pattern matching to retrieve files from a tar file. This example extracts all files that end in .sql from the tar file (quoting the pattern prevents the shell from expanding it; with GNU tar you may also need the --wildcards option):
$ tar -xvf prodrel.tar '*.sql'
If you don’t specify any files to be extracted, all files are retrieved:
$ tar -xvf prodrel.tar
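Most versions of tar (including GNU tar) also accept the -C option to change to a directory before extracting; for example, assuming you have created a /tmp/restore directory:
$ mkdir /tmp/restore
$ tar -xvf prodrel.tar -C /tmp/restore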
ABSOLUTE PATHS VS. RELATIVE PATHS
Some older, non-GNU versions of tar use absolute paths when extracting files. This line of code shows an example of specifying the absolute path when creating an archive file:
$ tar -cvf orahome.tar /home/oracle
Specifying an absolute path with non-GNU versions of tar can be dangerous. These older versions of tar restore the contents with the same directories and file names from which they were copied, so any directories and file names that previously existed on disk are overwritten.
When using older versions of tar, it is much safer to use a relative pathname. This example first changes directories to the /home directory and then creates an archive of the oracle directory (relative to the current working directory):
$ cd /home
$ tar -cvf orahome.tar oracle
This code uses the relative pathname (which is safer than using the absolute path). Having said that, you don’t have to worry about absolute vs. relative paths on most Linux/Solaris systems because these systems use the GNU version of tar. This version strips off the leading / and restores files relative to where your current working directory is located.
Use the man tar command if you’re not sure whether you have a GNU version of the tar utility. Near the top, you should see text such as “tar - The GNU version of the tar archiving utility”. You can also use the tar -tvf <tarfile name> command to preview which directories and files will be restored to which locations.
6-3. Finding Differences in Bundled Files Using tar
Problem
You wonder whether there have been any changes to files in a directory since you last created a tar file.
Solution
Use the -d (difference) option of the tar command to compare files in a tar file with files in a directory tree. The following example finds any differences between the tar file prodrel.tar and the scripts directory:
$ tar -df prodrel.tar scripts
The preceding command displays any differences with the physical characteristics of any of the files. Here is some sample output:
scripts/s1.sql: Mod time differs
scripts/s1.sql: Size differs
How It Works
Showing differences between what’s in a tar file and the current files on disk can help you determine whether you need to create or update the tar file. If you find differences and want to update the tar file to make it current, use the -u option. This feature updates and appends any files that are different or have been modified since the tarball was created. This line of code updates or appends to the tar file any changed or new files in the scripts directory:
$ tar -uvf prodrel.tar scripts
This output indicates that s1.sql has been updated:
scripts/
scripts/s1.sql
6-4. Bundling Files Using cpio
Problem
You want to use cpio (copy files to and from an archive) to bundle a set of files into one file.
Solution
When using cpio to bundle files, specify -o (for out or create) and -v (verbose). It is customary to name a bundled cpio file with the extension of .cpio. The following command takes the output of the ls command and pipes it to cpio, which creates a file named backup.cpio:
$ ls | cpio -ov > backup.cpio
To list the files contained in a cpio file, use the -i (copy-in mode), -t (table of contents), and -v (verbose) options:
$ cpio -itv < backup.cpio
Here’s an alternate way to view the contents of a cpio file using the cat command:
$ cat backup.cpio | cpio -itv
If you want to bundle up a directory tree with all files and subdirectories, use the find command on the target directory. The following line of code pipes the output of the find command to cpio, which bundles all files and subdirectories in the current working directory and below:
$ find . -depth | cpio -ov > backup.cpio
If possible, don’t back up a pathname starting with a / (forward slash). Our recommendation is that you navigate to the directory above the one you want to back up and initiate the cpio command from there. For example, suppose that you want to back up the /home/oracle directory (and subdirectories and files). Use the following:
$ cd $HOME
$ cd ..
$ find oracle -depth -print | cpio -ov > orahome.cpio
In this manner, the files are placed in a directory structure that starts with the directory specified in the find command.
You can also copy a directory using cpio. The following example copies the scripts directory (and any subdirectories and files) to the /home/oracle/backup directory.
$ find scripts -print | cpio -pdm /home/oracle/backup
In the preceding line of code, the -p switch invokes cpio in pass-through mode, which copies files directly to the destination without creating an intermediate archive. The -d option instructs cpio to create leading directories, and the -m option preserves the original timestamps on files.
The cpio utility can also be used to copy a directory tree from one server to another. This example copies the local orascripts directory to the remote server via ssh, in which it extracts the files into the orascripts directory on the remote server:
$ find orascripts -depth -print | cpio -oaV | ssh oracle@cs-xvm 'cpio -imVd'
It is also possible to do the reverse of the preceding code: copy a directory tree from a remote server to a local server:
$ ssh oracle@cs-xvm "find orascripts -depth -print | cpio -oaV" | cpio -imVd
How It Works
The cpio utility is a flexible and effective tool for copying large amounts of files. The key to understanding how to package files with cpio is to know that it accepts as input a piped list of files from the output of commands such as ls or find. Here is the general syntax for using cpio to bundle files:
$ [ls or find command] | cpio -o[other options] > filename
In addition to the examples shown in the solution section of this recipe, there are a few other use cases worth exploring. For example, you can specify that you want only those file names that match a certain pattern. This line of code bundles all SQL scripts in the scripts directory:
$ find scripts -name "*.sql" | cpio -ov > mysql.cpio
If you want to create a compressed file, pipe the output of cpio to a compression utility such as gzip:
$ find . -depth | cpio -ov | gzip > backup.cpio.gz
The -depth option tells the find command to print the directory contents before the directory. This behavior is especially useful when bundling files that are in directories with restricted permissions.
To add a file to a cpio bundle, use the -A (append) option. Also specify the -F option to specify the name of the existing cpio file. This example adds any files with the extension of .sql to an existing cpio archive named backup.cpio:
$ ls *.sql | cpio -ovAF backup.cpio
To add a directory to an existing cpio file, use the find command to specify the name of the directory. This line of code adds the backup directory to the backup.cpio file:
$ find backup | cpio -ovAF backup.cpio
6-5. Unbundling Files Using cpio
Problem
You just downloaded some software installation files, and you notice that they are bundled as cpio files. You wonder how to retrieve files from the cpio archive.
Solution
Use cpio with the -idvm options when unbundling a file. The -i option instructs cpio to read its input from an archive file. The -d and -m options are important because they instruct cpio to create directories and preserve file modification times, respectively. The -v option specifies that the file names should be printed as they are extracted.
The following example first creates a directory to store the scripts before unbundling the cpio file:
$ mkdir disk1
$ cd disk1
After copying the archive file to the disk1 directory, use cpio to unpack the file:
$ cpio -idvm < backup.cpio
You can also pipe the output of the cat command to cpio as an alternative way of extracting the file:
$ cat backup.cpio | cpio -idvm
You can also uncompress and unbundle files in one concatenated string of commands. This command allows you to easily uncompress and extract media distributed as compressed cpio files:
$ cat backup.cpio.gz | gunzip | cpio -idvm
How It Works
You’ll occasionally work with files that have been bundled with the cpio utility. These files might be installation software or a backup file received from another DBA. The cpio utility is used with the -i option to extract archive files. Here is the general syntax to unbundle files using cpio:
$ cpio -i[other options] < filename
You can extract all files or a single file from a cpio archive. This example uses the cpio utility to extract a single file named rman.bsh from a cpio file named dbascripts.cpio:
$ cpio -idvm rman.bsh < dbascripts.cpio
An alternative way to unpack a file is to pipe the output of cat to cpio. Here is the syntax for this technique:
$ cat filename | cpio -i[other options]
Note that you can use cpio to unbundle tar files. This example uses cpio to extract files from a tar file named script.tar:
$ cpio -idvm < script.tar
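The copy-in mode of cpio also accepts shell-style patterns, so you can extract a group of matching files in one pass; quoting the pattern keeps the shell from expanding it:
$ cpio -idvm "*.sql" < dbascripts.cpio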
6-6. Bundling Files Using zip
Problem
Your database design tool runs on a Windows box. After generating some schema creation scripts, you want to bundle the files on the Windows server and copy them to the Linux or Solaris box. You wonder whether there is a common archiving tool that works with both Windows and Linux/Solaris servers.
Solution
Use the zip utility if you need to bundle and compress files and transfer them across hardware platforms. This example uses zip with the -r (recursive) option to bundle and compress all files in the /home/oracle directory tree (it includes all files and subdirectories):
$ zip -r ora.zip /home/oracle
If you want to view the files listed in the zip file, use unzip -l:
$ unzip -l ora.zip
You can also specify files that you want included in a zip file. The following command bundles and compresses all SQL files in the current working directory:
$ zip sql.zip *.sql
Use the -g (grow) option to add to an existing zip file. This example adds the file script.sql to the sql.zip file:
$ zip -g sql.zip script.sql
You can also add a directory to an existing zip archive. This line adds the directory backup to the sql.zip file:
$ zip -gr sql.zip backup
How It Works
The zip utility is widely available on Windows and Linux/Solaris servers. Files created by zip on Windows can be copied to and extracted on a Linux or Solaris box. The zip utility both bundles and compresses files. Although the compression ratio achieved by zip is not nearly as good as that of gzip, bzip2, or xz, the zip and unzip utilities are popular because they are portable across many OS platforms. If you need cross-platform portability, use zip to bundle and unzip to unbundle.
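If you want zip to work a little harder at compressing, you can request the maximum compression level with the -9 option; this is a minor tuning choice rather than a requirement:
$ zip -9 -r ora.zip /home/oracle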
Image Tip  Run zip -h at the command line to get the help output.
6-7. Unbundling Files Using zip
Problem
Your database-modeling tool runs on a Windows box. After generating some schema creation scripts, you want to bundle the files on the Windows server, copy them to the Linux box, and unbundle them.
Solution
To uncompress a zipped file, first create a target directory, copy the zip file to the new directory, and finally use unzip to unbundle and uncompress all files and directories included in the zip file. The example in this solution performs the following steps:
  1. Creates a directory named march
  2. Changes the directory to the new directory
  3. Copies the zip file to the new directory
  4. Unzips the zip file
    $ mkdir march
    $ cd march
    $ cp /mybackups/mvzip.zip .
    $ unzip mvzip.zip
You should see output indicating which directories are being created and which files are being extracted. Here’s a small snippet of the output for this example:
inflating: mscd642/perf.sql
creating: mscd642/ppt/
inflating: mscd642/ppt/chap01.ppt
inflating: mscd642/ppt/chap02.ppt
How It Works
The unzip utility lists, tests, and extracts files from a zipped archive file. You can use this utility to unzip files, regardless of the OS platform on which the zip file was originally created. It is handy because it allows you to easily transfer files between servers of differing OSs (e.g., Linux, Solaris, Windows, and so on).
You can also use the unzip command to extract a subset of files from an existing zip archive. The following example extracts upgrade.sql from the upgrade.zip file:
$ unzip upgrade.zip upgrade.sql
Similarly, this example retrieves all files that end with the .sql extension (quoting the pattern so that the shell passes it through to unzip):
$ unzip upgrade.zip '*.sql'
Sometimes you want to add only those files that exist in the source directory but don’t exist in the target directory. First, recursively zip the source directory. In this example, the relative source directory is scripts:
$ zip -r /home/oracle/ora.zip scripts
Then cd to the target location and unzip the file with the -n option. In this example, there is a scripts directory beneath the /backup directory:
$ cd /backup
$ unzip -n /home/oracle/ora.zip
The -n option instructs the unzip utility to not overwrite existing files. The net effect is that you unbundle only those files that exist in the source directory but don’t exist in the target directory.
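Before relying on a zip file that has been copied between servers, you can also ask unzip to test the archive's integrity without extracting anything:
$ unzip -t ora.zip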
6-8. Bundling Files Using find
Problem
You want to find all trace files over a certain age and bundle them into an archive file. The idea is that once you bundle the files, you can remove the old trace files.
Solution
You have to use a combination of commands to locate and compress files. This example finds all trace files that were modified more than two days ago and then bundles and compresses them:
$ find /ora01/admin/bdump -name "*.trc" -mtime +2 | xargs tar -czvf trc.tar.gz
This example uses cpio to achieve the same result:
$ find /ora01/admin/bdump -name "*.trc" -mtime +2 | cpio -ov | gzip > trc.cpio.gz
In this manner you can find, bundle, and compress files.
How It Works
You often have to clean up old files on database servers. When dealing with log or trace files, it can be desirable to first find, bundle, and compress the files. At some later time, you can physically delete the files after they’re not needed anymore (see Chapter 5 for examples of finding and removing files). We recommend that you encapsulate the code in this recipe in a shell script and run it regularly from a scheduling utility such as cron (see Chapter 10 for details on automating jobs).
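Here is a minimal sketch of what such a script might look like; the directory, archive location, and two-day retention period are assumptions that you would adjust for your environment:
#!/bin/bash
# archive_trc.bsh: bundle and compress trace files older than two days
# (trace directory, archive name, and retention period are assumptions)
TRC_DIR=/ora01/admin/bdump
ARC_FILE=/tmp/trc_$(date +%Y%m%d).tar.gz
# note: if no trace files match, tar will complain about an empty archive
find $TRC_DIR -name "*.trc" -mtime +2 | xargs tar -czvf $ARC_FILE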
6-9. Compressing and Uncompressing Files
Problem
Before copying a large file over the network to a remote server, you want to compress it.
Solution
Several utilities are available for compressing and uncompressing files. The gzip, bzip2, and xz utilities are widely used in Linux and Solaris environments. Each of them is briefly detailed in the following sections.
gzip
This example uses gzip to compress the dbadoc.txt file:
$ gzip dbadoc.txt
The gzip utility adds an extension of .gz to the file after it has been compressed. To uncompress a file compressed by gzip, use the gunzip utility:
$ gunzip dbadoc.txt.gz
The gunzip utility uncompresses the file and removes the .gz extension. The uncompressed file has the original name it had before the file was compressed.
Sometimes there is a need to peer inside a compressed file without uncompressing it. The following example uses the -c option to send the contents of the gunzip command to standard output, which is then piped to grep to search for the string dba_tables:
$ gunzip -c dbadoc.txt.gz | grep -i dba_tables
You can also use the zcat utility to achieve the same effect. This command is identical to the previous command:
$ zcat dbadoc.txt.gz | grep -i dba_tables
bzip2
The bzip2 utility is newer and more efficient than gzip. By default, files compressed with bzip2 are given a .bz2 extension. This example compresses a trace file:
$ bzip2 scrdv12_ora_19029.trc
To uncompress a bzip2 compressed file, use bunzip2. This utility expects the file being uncompressed to have one of the following extensions: .bz2, .bz, .tbz2, .tbz, or .bzip2. This code uncompresses a file:
$ bunzip2 scrdv12_ora_19029.trc.bz2
The bunzip2 utility uncompresses the file and removes the .bz2 extension. The uncompressed file has the original name it had before the file was compressed.
Sometimes you need to view the contents of a compressed file without uncompressing it. The following example uses the -c option to send the contents of the bunzip2 command to standard output, which is then piped to grep to search for the string error:
$ bunzip2 -c scrdv12_ora_19029.trc.bz2 | grep -i error
xz
The xz compression utility, which is relatively new to the compression scene, creates smaller files than gzip and bzip2. Here’s an example of compressing a file using xz:
$ xz DWREP_mmon_7629.trc
This code creates a file with an .xz extension. If you need extreme compression, you can use the -e and -9 options:
$ xz -e -9 DWREP_mmon_7629.trc
To list details about the compressed file, use the -l option:
$ xz -l DWREP_mmon_7629.trc.xz
Here’s some sample output:
Strms  Blocks   Compressed Uncompressed  Ratio  Check   Filename
    1       1     72.5 KiB  1,055.2 KiB  0.069  CRC64   DWREP_mmon_7629.trc.xz
To uncompress a file, use the -d option:
$ xz -d DWREP_mmon_7629.trc.xz
Sometimes you need to view the contents of a compressed file without uncompressing it. The following example uses the -c option to send the contents of the xz command to standard output, which is then piped to grep to search for the string error:
$ xz -d -c DWREP_mmon_7629.trc.xz | grep -i error
How It Works
DBAs often move files from one location to another, which frequently includes copying files to remote servers. Compressing files before transferring them is often critical when copying large files over the network. Although several compression utilities are available, the most widely used are gzip, bzip2, and xz.
The gzip utility is widely available in the Linux and Solaris environments. The bzip2 utility is a newer and more efficient compression algorithm than gzip. The bzip2 tool is CPU-intensive, but achieves high compression ratios. The xz compression tool is newer than gzip and bzip2. If you require the compressed file to be as small as possible, use xz. This tool uses more system resources, but achieves higher compression ratios.
Image Note  There is an older compression utility aptly named compress. Files compressed with this utility are given a .Z or .z extension (and can be uncompressed with the uncompress utility). This utility is less efficient than the other compression utilities mentioned in this recipe. We mention it in this chapter only because you may run into files compressed with this utility on older servers.
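All three utilities replace the original file by default. If you want to keep the original file intact, you can write the compressed output to standard output and redirect it to a new file; the gzip example below illustrates the pattern (the same -c technique works with bzip2 and xz):
$ gzip -c dbadoc.txt > dbadoc.txt.gz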
6-10. Validating File Contents
Problem
You just copied a file from one server to another. You need to verify that the destination file has the same contents as the source file.
Solution
Use a utility such as sum to compute a checksum on a file before and after the copy operation. This example uses the sum command to display the checksum and number of blocks within a file:
$ sum backup.tar
24092 78640
In the preceding output, the checksum is 24092, and the number of blocks in the file is 78640. After copying this file to a remote server, run the sum command on the destination file to ensure that it has the same checksum and number of blocks. Table 6-2 lists the common utilities used for generating checksums.
Table 6-2. Common Linux Utilities Available for Generating Checksum Values
sum: Calculates a checksum and the number of blocks
cksum: Computes a checksum and a count of bytes
md5sum: Generates a 128-bit Message-Digest algorithm 5 (MD5) checksum and can detect file changes via the --check option
sha1sum: Calculates a 160-bit SHA-1 (Secure Hash Algorithm 1) checksum and can detect file changes via the --check option
Image Note  When transferring files between different versions of the OS, the sum utility may compute a different checksum for a file, depending on the version of the OS.
How It Works
When moving files between servers or compressing and uncompressing, it is prudent to verify that a file contains the same contents as it did before the copy or compress/uncompress operation. The most reliable way to do this is to compute a checksum, which allows you to verify that a file wasn’t inadvertently corrupted during a transmission or compression.
A checksum is a calculated value that allows you to verify a file's contents. The simplest form of a checksum is a count of the number of bytes in a file. For example, when transferring a file to a remote destination, you can compare the number of bytes between the source file and the destination file. This checksum algorithm is simplistic and not entirely reliable. However, in many situations, counting bytes is the first step in determining whether a source and destination file contain the same contents. Fortunately, many standard utilities are available to calculate reliable checksum values.
DBAs also compute checksums to ensure that important files haven’t been compromised or modified. For example, you can use the md5sum utility to compute and later check the checksum on a file to ensure that it hasn’t been modified in any way. This example uses md5sum to calculate and store the checksums of the listener.ora, sqlnet.ora, and tnsnames.ora files:
$ cd $TNS_ADMIN
$ md5sum listener.ora sqlnet.ora tnsnames.ora >net.chk
You can then use md5sum later to verify that these files haven’t been modified since the last time a checksum was computed:
$ md5sum --check net.chk
listener.ora: OK
sqlnet.ora: FAILED
tnsnames.ora: OK
md5sum: WARNING: 1 of 3 computed checksums did NOT match
The preceding output shows that the sqlnet.ora file has been modified sometime after the checksum was computed. You can detect changes and ensure that important files have not been compromised.
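A common way to apply this when copying files between servers is to run the same checksum utility on both sides and compare the values. The remote server name and path below are hypothetical:
$ md5sum backup.tar
$ ssh oracle@remote-server "md5sum /home/oracle/backup.tar"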

