The default file name is the name of the data file, and the default file extension or file type is .dsc.
A discard file name specified on the command line overrides one specified in the control file. If a discard file with that name already exists, then it is either overwritten or a new version is created, depending on your operating system. You can specify a different number of discards for each data file. Or, if you specify the number of discards only once, then the maximum number of discards specified applies to all files.
When the discard limit is reached, processing of the data file terminates and continues with the next data file, if one exists. The list shows different ways that you can specify a name for the discard file from within the control file. To specify a discard file with file name circular and default file extension or file type of .dsc. To specify a discard file named notappl with the file extension or file type of .may. To specify a full path to the discard file forget.me. An attempt is made to insert every record into such a table.
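As a sketch of those three forms in a control file (the directory path is hypothetical, and .dsc is the conventional default discard extension):

```
DISCARDFILE  circular                   -- default file type .dsc is assumed
DISCARDFILE  notappl.may                -- explicit file type
DISCARDFILE  '/discard_dir/forget.me'   -- full path to the discard file
```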
Therefore, records may be rejected, but none are discarded. Case study 7, Extracting Data from a Formatted Report, provides an example of using a discard file. A file name specified on the command line overrides any discard file that you may have specified in the control file. If there is a match using the equal or not equal specification, then the field is set to NULL for that row.
Any field that has a length of 0 after blank trimming is also set to NULL. This specification is used for every date or timestamp field unless a different mask is specified at the field level. A mask specified at the field level overrides a mask specified at the table level. Datetime and Interval Data Types for information about specifying datetime data types at the field level.
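As a hedged sketch (table, column names, and mask values are illustrative, not from the original example), a table-level DATE FORMAT mask with a field-level override might look like:

```
LOAD DATA
INFILE 'orders.dat'
INTO TABLE orders
DATE FORMAT "DD-Mon-YYYY"        -- table-level mask for date fields
FIELDS TERMINATED BY ','
( order_id   INTEGER EXTERNAL,
  order_date DATE,               -- uses the table-level mask
  ship_date  DATE "YYYY/MM/DD"   -- field-level mask overrides the table-level one
)
```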
See Oracle Database Globalization Support Guide. The following sections provide a brief introduction to some of the supported character encoding schemes.
Data can be loaded in multibyte format, and database object names (fields, tables, and so on) can be specified with multibyte characters. In the control file, comments and object names can also use multibyte characters. Unicode is a universal encoded character set that supports storage of information from most languages in a single character set. Unicode provides a unique code value for every character, regardless of the platform, program, or language.
A character in UTF-8 can be 1 byte, 2 bytes, or 3 bytes long. The AL32UTF8 and UTF8 character sets are not compatible with each other, because they have different maximum character widths (four versus three bytes per character). Multibyte fixed-width character sets (for example, AL16UTF16) are not supported as the database character set. This alternative character set is called the database national character set. Only Unicode character sets are supported as the database national character set.
However, the Oracle database supports only UTF-16 encoding with big-endian byte ordering (AL16UTF16) and only as a database national character set, not as a database character set. When data character set conversion is required, the target character set should be a superset of the source data file character set. Otherwise, characters that have no equivalent in the target character set are converted to replacement characters, often a default character such as a question mark (?).
This causes loss of data. If they are specified in bytes, and data character set conversion is required, then the converted values may take more bytes than the source values if the target character set uses more bytes than the source character set for any character that is converted. If the larger target value exceeds the size of the database column, an error is reported.
You can avoid this problem by specifying the database column size in characters and also by using character sizes in the control file to describe the data. Another way to avoid this problem is to ensure that the maximum column size is large enough, in bytes, to hold the converted value. Character-Length Semantics.
Rows might be rejected because a field is too large for the database column, but in reality the field is not too large. A load might be abnormally terminated without any rows being loaded, when only the field that really was too large should have been rejected.
Normally, the specified name must be the name of an Oracle-supported character set. However, because you are allowed to set up data using the byte order of the system where you create the data file, the data in the data file can be either big-endian or little-endian. Therefore, a different character set name (UTF16) is used. All primary data files are assumed to be in the same character set. See Oracle Database Globalization Support Guide for more information about the names of the supported character sets.
Control File Character Set

If the control file character set is different from the data file character set, then keep the following issue in mind. To ensure that the specifications are correct, you may prefer to specify hexadecimal strings, rather than character string values. If hexadecimal strings are used with a data file in the UTF-16 Unicode encoding, then the byte order is different on a big-endian versus a little-endian system.
For example, a comma (",") in UTF-16 on a big-endian system is X'002c'. On a little-endian system it is X'2c00'. SQL*Loader requires that hexadecimal strings be specified in big-endian format, and it swaps the bytes if necessary; this allows the same syntax to be used in the control file on both a big-endian and a little-endian system. For example, the specification CHAR(10) in the control file can mean 10 bytes or 10 characters.
These are equivalent if the data file uses a single-byte character set. However, they are often different if the data file uses a multibyte character set. To avoid insertion errors caused by expansion of character strings during character set conversion, use character-length semantics in both the data file and the target database columns.
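A minimal sketch of that advice, with hypothetical names; the column is declared in characters on the database side, and LENGTH SEMANTICS CHAR makes the control file lengths count characters as well:

```
-- Database side: 10 characters, regardless of bytes per character
CREATE TABLE emp_names (last_name VARCHAR2(10 CHAR));

-- Control file side (fragment): lengths are counted in characters
-- LOAD DATA
-- CHARACTERSET UTF8 LENGTH SEMANTICS CHAR
-- INFILE 'names.dat'
-- INTO TABLE emp_names
-- (last_name POSITION(1:10) CHAR(10))
```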
Byte-length semantics are the default for all data files except those that use the UTF16 character set (which uses character-length semantics by default). The following data types use byte-length semantics even if character-length semantics are being used for the data file, because the data is binary, or is in a special binary-encoded form (in the case of ZONED and DECIMAL):
This is necessary to handle data files that have a mix of data of different data types, some of which use character-length semantics, and some of which use byte-length semantics. The SMALLINT length field takes up a certain number of bytes depending on the system (usually 2 bytes), but its value indicates the length of the character string in characters. Character-length semantics in the data file can be used independent of whether character-length semantics are used for the database columns.
Therefore, the data file and the database columns can use either the same or different length semantics. The fastest way to load shift-sensitive character data is to use fixed-position fields without delimiters. To improve performance, remember the following points: If blanks are not preserved and multibyte-blank-checking is required, then a slower path is used.
This can happen when the shift-in byte is the last byte of a field after single-byte blank stripping is performed. Additionally, when an interrupted load is continued, the use and value of the SKIP parameter can vary depending on the particular case. The following sections explain the possible scenarios. In a conventional path load, data is committed after all data in the bind array is loaded into all tables.
If the load is discontinued, then only the rows that were processed up to the time of the last commit operation are loaded. There is no partial commit of data. In a direct path load, the behavior of a discontinued load varies depending on the reason the load was discontinued. Space errors when loading data into multiple subpartitions (that is, loading into a partitioned table, a composite partitioned table, or one partition of a composite partitioned table):
If space errors occur when loading into multiple subpartitions, then the load is discontinued and no data is saved unless ROWS has been specified (in which case, all data that was previously committed is saved). The reason for this behavior is that it is possible rows might be loaded out of order. This is because each row is assigned (not necessarily in order) to a partition and each partition is loaded separately. If the load discontinues before all rows assigned to partitions are loaded, then the row for record "n" may have been loaded, but not the row for record "n-1".
Space errors when loading data into an unpartitioned table, one partition of a partitioned table, or one subpartition of a composite partitioned table:. In either case, this behavior is independent of whether the ROWS parameter was specified. When you continue the load, you can use the SKIP parameter to skip rows that have already been loaded.
This means that when you continue the load, the value you specify for the SKIP parameter may be different for different tables. If a fatal error is encountered, then the load is stopped and no data is saved unless ROWS was specified at the beginning of the load. In that case, all data that was previously committed is saved. This means that the value of the SKIP parameter will be the same for all tables. When a load is discontinued, any data already loaded remains in the tables, and the tables are left in a valid state.
If the direct path load method is used, then any indexes on the table are left in an unusable state. You can either rebuild or re-create the indexes before continuing, or after the load is restarted and completes. Other indexes are valid if no other errors occurred.
See Indexes Left in an Unusable State for other reasons why an index might be left in an unusable state. To continue the discontinued load, use the SKIP parameter to specify the number of logical records that have already been processed by the previous load.
At the time the load is discontinued, the value for SKIP is written to the log file. This message specifying the value of the SKIP parameter is preceded by a message indicating why the load was discontinued. Note that for multiple-table loads, the value of the SKIP parameter is displayed only if it is the same for all tables.
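For instance, if the log reported that 234 logical records had been processed before the interruption, the load could be resumed with a command along these lines (the user, file names, and count are hypothetical):

```
sqlldr USERID=scott CONTROL=emp.ctl LOG=emp.log SKIP=234
```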
To combine multiple physical records into one logical record, you can use one of the following clauses, depending on your data: CONCATENATE or CONTINUEIF. In the following example, integer specifies the number of physical records to combine. For example, two records might be combined if a pound sign were in byte position 80 of the first record.
If any other character were there, then the second record would not be added to the first. If the condition is true in the current record, then the next physical record is read and concatenated to the current physical record, continuing until the condition is false. If the condition is false, then the current physical record becomes the last physical record of the current logical record. THIS is the default. If the condition is true in the next record, then the current physical record is concatenated to the current logical record, continuing until the condition is false.
For the equal operator, the field and comparison string must match exactly for the condition to be true.
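A hedged sketch of the pound-sign case described above, using CONTINUEIF THIS to test byte position 80 of the current record:

```
-- Combine the next physical record with this one while byte 80 is '#'
CONTINUEIF THIS (80:80) = '#'

-- Alternatively, always combine a fixed number of physical records:
CONCATENATE 2
```

A control file would use one of these clauses, not both.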
Then the current position is advanced until no more adjacent whitespace characters are found. This allows field values to be delimited by varying amounts of whitespace.
Enclosed fields are read by skipping whitespace until a nonwhitespace character is encountered. If that character is the delimiter, then data is read up to the second delimiter. Any other character causes an error. If two delimiter characters are encountered next to each other, a single occurrence of the delimiter character is used in the data value.
However, if the field consists of just two delimiter characters, its value is null. In the syntax for delimiter specifications:

BY is an optional keyword, included only for readability.

If the data is not enclosed, the data is read as a terminated field.

X'hexstr' specifies a delimiter string that has the value given by X'hexstr' in the character encoding scheme, such as X'1F' (equivalent to 31 decimal).

AND specifies a trailing enclosure delimiter that may be different from the initial enclosure delimiter. If the AND clause is not present, then the initial and trailing delimiters are assumed to be the same.
Only valid when loading data from a LOB file. Sometimes the same punctuation mark that is a delimiter also needs to be included in the data. To make that possible, two adjacent delimiter characters are interpreted as a single occurrence of the character, and this character is included in the data. For this reason, problems can arise when adjacent fields use the same delimiters. The default maximum length of delimited data is 255 bytes.
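As a sketch of the delimiter forms described above (the field names are hypothetical):

```
ename  CHAR  TERMINATED BY ','  OPTIONALLY ENCLOSED BY '"'  -- termination plus optional enclosure
note   CHAR  ENCLOSED BY '('  AND  ')'                      -- different initial and trailing enclosures
code   CHAR  TERMINATED BY X'1F'                            -- delimiter given as a hexadecimal string
```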
Therefore, delimited fields can require significant amounts of storage for the bind array. A good policy is to specify the smallest possible maximum value. See Determining the Size of the Bind Array. Trailing blanks can only be loaded with delimited datatypes. If conflicting lengths are specified, one of the lengths takes precedence.
A warning is also issued when a conflict exists. This section explains which length is used. If you specify a starting position and ending position for one of these fields, then the length of the field is determined by these specifications. If you specify a length as part of the datatype and do not give an ending position, the field has the given length. If starting position, ending position, and length are all specified, and the lengths differ, then the length given as part of the datatype specification is used for the length of the field.
For example, if. If a delimited field is specified with a length, or if a length can be calculated from the starting and ending position, then that length is the maximum length of the field. The actual length can vary up to that maximum, based on the presence of the delimiter. If a starting and ending position are both specified for the field, and if a field length is specified in addition, then the specified length value overrides the length calculated from the starting and ending position.
If the expected delimiter is absent and no maximum length has been specified, then the end of record terminates the field. The length of a date field depends on the mask, if a mask is specified. For example, with a mask such as "Month dd, yyyy", "May 3, 1991" would occupy 11 character positions in the record, while "January 31, 1992" would occupy 16. If starting and ending positions are specified, however, then the length calculated from the position specification overrides a length derived from the mask.
A specified length such as "DATE(12)" overrides either of those. If the date field is also specified with terminating or enclosing delimiters, then the length specified in the control file is interpreted as a maximum length for the field.
When a datafile created on one platform is to be loaded on a different platform, the data must be written in a form that the target system can read. For example, if the source system has a native, floating-point representation that uses 16 bytes, and the target system's floating-point numbers are 12 bytes, the target system cannot directly read data generated on the source system. The best solution is to load data across a Net8 database link, taking advantage of the automatic conversion of datatypes.
This is the recommended approach, whenever feasible. Problems with interplatform loads typically occur with native datatypes. In some situations, it is possible to avoid problems by lengthening a field by padding it with zeros, or to read only part of the field to shorten it for example, when an 8-byte integer is to be read on a system that uses 4-byte integers, or vice versa. Note, however, that incompatible byte-ordering or incompatible datatype implementation may prevent this.
Datafiles written using these datatypes are longer than those written with native datatypes. They may take more time to load, but they transport more readily across platforms. However, where incompatible byte-ordering is an issue, special filters may still be required to reorder the data. It does not apply to the direct path load method. Because a direct path load formats database blocks directly, rather than using Oracle's SQL interface, it does not use a bind array.
Multiple rows are read at one time and stored in the bind array. The bind array must be large enough to contain a single row; if it is not, an error is generated. Otherwise, the bind array contains as many rows as can fit within it, up to the limit set by the value of the ROWS parameter.
Although the entire bind array need not be in contiguous memory, the buffer for each field in the bind array must occupy contiguous memory. To minimize the number of calls to Oracle and maximize performance, large bind arrays are preferable. In general, you gain large improvements in performance with each increase in the bind array size up to 100 rows.
Increasing the bind array size to be greater than 100 rows generally delivers more modest improvements in performance. The size (in bytes) of 100 rows is typically a good value to use. The remainder of this section details the method for determining that size.
It is not usually necessary to perform the detailed calculations described in this section. This section should be read when maximum performance is desired, or when an explanation of memory usage is needed. The bind array never exceeds that maximum. If that size is too large to fit within the specified maximum, the load terminates with an error. The bind array's size is equivalent to the number of rows it contains times the maximum length of each row.
The maximum length of a row is equal to the sum of the maximum field lengths, plus overhead. Many fields do not vary in size. These fixed-length fields are the same for each loaded row. There is no overhead for these fields. The maximum lengths describe the number of bytes, or character positions, that the fields can occupy in the input data record.
That length also describes the amount of storage that each field occupies in the bind array, but the bind array includes additional overhead for fields that can vary in size. When specified without delimiters, the size in the record is fixed, but the size of the inserted field may still vary, due to whitespace trimming.
So internally, these datatypes are always treated as varying-length fields--even when they are fixed-length fields. A length indicator is included for each of these fields in the bind array. The space reserved for the field in the bind array is large enough to hold the longest possible value of the field. The length indicator gives the actual length of the field for each row.
On most systems, the size of the length indicator is 2 bytes. On a few systems, it is 3 bytes. To determine its size, use the following control file. This control file loads a 1-character field using a 1-row bind array. In this example, no data is actually loaded because a conversion error occurs when the character "a" is loaded into a numeric column (deptno).
The bind array size shown in the log file, minus one (the length of the character field), is the value of the length indicator. Note: A similar technique can determine bind array size without doing any calculations.
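The control file referred to above is not reproduced in this excerpt; a reconstruction under the stated assumptions (a DEPT table whose deptno column is numeric) might look like:

```
OPTIONS (ROWS=1)                 -- a 1-row bind array
LOAD DATA
INFILE *
APPEND
INTO TABLE dept
(deptno POSITION(1:1) CHAR(1))   -- a single 1-character field
BEGINDATA
a
```

The insert of "a" into the numeric deptno column fails, so no data is loaded, but the log still reports the bind array size; that size minus 1 is the length-indicator size.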
Multiply by the number of rows you want in the bind array to determine the bind array size. The tables that follow summarize the memory requirements for each datatype. It is composed of 2 numbers. Variable-length fields can consume enormous amounts of memory--especially when multiplied by the number of rows in the bind array.
It is best to specify the smallest possible maximum length for these fields. This can make a considerable difference in the number of rows that fit into the bind array. Imagine all of the fields listed in the control file as one long data structure--that is, the format of a single row in the bind array. It is especially important to minimize the buffer allocations for such fields.
Such generated data does not require any space in the bind array. If you want all inserted values for a given column to be null, omit the column's specifications entirely.
See also Specifying Field Conditions for details on the conditional tests. The condition has the same format as that specified for a WHEN clause. The column's value is set to null if the condition is true. Otherwise, the value remains unchanged. This specification may be useful if you want certain data values to be replaced by nulls. The value for a column is first determined from the datafile.
It is then set to null just before the insert takes place. A totally blank field for a numeric or DATE column causes the record to be rejected. If an all-blank CHAR field is surrounded by enclosure delimiters, then the blanks within the enclosures are loaded.
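A minimal sketch of the NULLIF clause (the positions and names are hypothetical):

```
-- Set comm to null when the field is all blanks
comm  POSITION(30:37)  INTEGER EXTERNAL  NULLIF (comm = BLANKS)
```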
Otherwise, the field is loaded as null. Blanks and tabs constitute whitespace. Depending on how the field is specified, whitespace at the start of a field (leading whitespace) and at the end of a field (trailing whitespace) may or may not be included when the field is inserted into the database.
This section describes the way character data fields are recognized, and how they are loaded. In particular, it describes the conditions under which whitespace is trimmed from fields. See Preserving Whitespace for more information.
The information in this section applies only to fields specified with one of the character-data datatypes: CHAR, DATE, and numeric EXTERNAL. There are two ways to specify field length. If a field has a constant length that is defined in the control file, then it has a predetermined size.
If a field's length is not known in advance, but depends on indicators in the record, then the field is delimited. Fields that have a predetermined size are specified with a starting position and ending position, or with a length, as in the following examples:. In the second case, even though the field's exact position is not specified, the field's length is predetermined.
Delimiters are characters that demarcate field boundaries. Enclosure delimiters surround a field, like the quotation marks in:. Termination delimiters signal the end of a field, like the comma in:. If a predetermined size is specified for a delimited field, and the delimiter is not found within the boundaries indicated by the size specification, then an error is generated.
For example, if a field with a predetermined size is also specified with a terminating comma, a comma found within the field's boundaries delimits the field. When a starting position is not specified for a field, it begins immediately after the end of the previous field.
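A sketch of a predetermined-size field that also carries a terminating delimiter (the positions are illustrative):

```
-- loc occupies positions 19-25; a comma within that span delimits the field,
-- and if no comma is found there, an error is generated
loc  POSITION(19:25)  CHAR  TERMINATED BY ','
```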
This situation occurs when the previous field has a predetermined size. If the previous field is terminated by a delimiter, then the next field begins immediately after the delimiter. When a field is specified both with enclosure delimiters and a termination delimiter, the next field starts after the termination delimiter. In both cases, the fields are stored with leading whitespace.
Fields do not include leading whitespace in the following cases: when the previous field is terminated by whitespace, or when optional enclosure delimiters are specified for the field and the enclosure delimiters are not present. These cases are illustrated in the following sections. When the previous field is terminated by whitespace, the next field starts at the next nonwhitespace character.
Leading whitespace is also removed from a field when optional enclosure delimiters are specified but not present. In that case, leading whitespace is scanned past in search of the enclosure delimiter; if none is found, then the first nonwhitespace character signals the start of the field.
Note: If enclosure delimiters are present, leading whitespace after the initial enclosure delimiter is kept, but whitespace before this delimiter is discarded. Trailing whitespace is only trimmed from character-data fields that have a predetermined size.
It is always trimmed from those fields. If a field is enclosed, or terminated and enclosed, then any whitespace outside the enclosure delimiters is not part of the field.
Any whitespace between the enclosure delimiters belongs to the field, whether it is leading or trailing whitespace. See Preserving Whitespace for details on how to prevent trimming. Whitespace trimming is described in Trimming Blanks and Tabs. The PRESERVE BLANKS keyword preserves tabs and blanks, and also leaves trailing whitespace intact when fields are specified with a predetermined size. Without it, the leading whitespace is trimmed.
Both words must be specified. In general, any SQL function that returns a single value can be used. The column name and the name of the column in the SQL string must match exactly, including the quotation marks. The SQL string must be enclosed in double quotation marks. To quote the column name in the SQL string, you must use escape characters. The SQL string appears after any other specifications for a given column.
If the string is recognized, but causes a database error, the row that caused the error is rejected. To refer to fields in the record, precede the field name with a colon :. Field values from the current record are substituted. The following example illustrates how a reference is made to the current field:. In this example, only the :field1 that is not in single quotation marks is interpreted as a column name.
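A hedged sketch of a SQL string applied to a field (the names are illustrative): the :field1 inside the SQL string is substituted with the field's value from the current record.

```
-- Upper-case the field value before it is inserted
field1  POSITION(1:6)  CHAR  "UPPER(:field1)"
```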
For more information on the use of quotation marks inside quoted strings, see Specifying Filenames and Object Names. SQL strings also cannot reference filler fields. A field specified with a SQL string of this kind could store numeric input data in formatted form, where field1 is a character column in the database. This field would be stored with the formatting characters (dollar sign, period, and so on) already in place. You have even more flexibility, however, if you store such values as numeric quantities or dates.
You can then apply arithmetic functions to the values in the database, and still select formatted values for your reports. Column objects in the control file are described in terms of their attributes. In the datafile, the data corresponding to each of the attributes of a column object is in a datafield similar to that corresponding to a simple relational column. Example shows a case in which the data is in predetermined size fields. The first six characters specify the length of the forthcoming record.
Loading Nested Column Objects

Example shows a control file describing nested column objects (one column object nested in another column object).
An object can have a subset of its attributes be null, it can have all of its attributes be null an attributively null object , or it can be null itself an atomically null object. In fields corresponding to column objects, you can use the NULLIF clause to specify the field conditions under which a particular attribute should be initialized to null.
Example demonstrates this. Although the preceding is workable, it is not ideal when the condition under which an object should take the value of null is independent of any of the mapped fields. In such situations, you can use filler fields. You can map a filler field to the field in the datafile indicating if a particular object is atomically null or not, and use the filler field in the field condition of the NULLIF clause of the particular object.

Loading Object Tables

The control file syntax required to load an object table is nearly identical to that used to load a typical relational table.
Example demonstrates loading an object table with primary key object identifiers (OIDs). By looking only at the preceding control file you might not be able to determine if the table being loaded was an object table with system-generated OIDs (real OIDs), an object table with primary key OIDs, or a relational table. Note that you may want to load data that already contains real OIDs and may want to specify that, instead of generating new OIDs, the existing OIDs in the datafile should be used.
Example demonstrates loading real OIDs with the row objects. The OID in the datafile is a character string and is interpreted as a 32-digit hexadecimal number. The 32-digit hexadecimal number is later converted into a 16-byte RAW and stored in the object table.
Note that the arguments can be specified either as constants or dynamically using filler fields. Example demonstrates real REF loading. The first argument is the table name followed by arguments that specify the primary key OID on which the REF column to be loaded is based. Example demonstrates loading primary key REFs. The LOB data instances can be in predetermined size fields, delimited fields, or length-value pair fields.
The following examples illustrate these situations. Loading LOB data in predetermined size fields is a very fast and conceptually simple format in which to load LOBs. LOB data can also be loaded in delimited fields.
In general, the control file has three main sections, in the following order:
Sessionwide information
Table and field-list information
Input data (optional section)
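A minimal sketch of that three-part layout (the table, fields, and data are hypothetical):

```
-- Sessionwide information
LOAD DATA
INFILE *
APPEND
-- Table and field-list information
INTO TABLE dept
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(deptno, dname, loc)
-- Input data (optional; present only because INFILE * is used)
BEGINDATA
12,RESEARCH,"SARATOGA"
10,"ACCOUNTING",CLEVELAND
```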
Comments in the Control File

Comments can appear anywhere in the command section of the file, but they should not appear within the data. Precede any comment with two hyphens, for example:

--This is a comment

All text to the right of the double hyphen is ignored, until the end of the line.
Operating System Considerations

The following sections discuss situations in which your course of action may depend on the operating system you are using.

Specifying a Complete Path

If you encounter problems when trying to specify a complete path name, it may be due to an operating system-specific incompatibility caused by special characters in the specification.
Therefore, you should avoid creating strings with an initial quotation mark.

Using the Backslash as an Escape Character

If your operating system uses the backslash character to separate directories in a path name, and if the version of the Oracle database running on your operating system implements the backslash escape character for filenames and other nonportable strings, then you must specify double backslashes in your path names and use single quotation marks.
Escape Character Is Sometimes Disallowed The version of the Oracle database running on your operating system may not implement the escape character for nonportable strings. Specifying Datafiles To specify a datafile that contains the data to be loaded, use the INFILE keyword, followed by the filename and optional file processing options string. Note: The information in this section applies only to primary datafiles. If you have data in the control file as well as datafiles, you must specify the asterisk first in order for the data to be read.
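A hedged sketch of combining in-control-file data with an external datafile; the asterisk entry must come first, and all filenames and field names are illustrative:

```
LOAD DATA
INFILE *                 -- data at the end of this control file; listed first
INFILE mydata2.dat       -- an additional, external datafile
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, deptno)
BEGINDATA
7934,MILLER,10
```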
It specifies the datafile format. It also optimizes datafile reads. The syntax used for this string is specific to your operating system. See Specifying Datafile Format and Buffering. For example, the following excerpt from a control file specifies four datafiles with separate bad and discard files: INFILE mydat1. If you have specified that a bad file is to be created, the following applies: If one or more records are rejected, the bad file is created and the rejected records are logged.
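The truncated excerpt above might, as a sketch, continue along these lines (all filenames are illustrative):

```
INFILE mydat1.dat BADFILE mydat1.bad DISCARDFILE mydat1.dis
INFILE mydat2.dat BADFILE mydat2.bad DISCARDFILE mydat2.dis
INFILE mydat3.dat BADFILE mydat3.bad DISCARDFILE mydat3.dis
INFILE mydat4.dat BADFILE mydat4.bad DISCARDFILE mydat4.dis
```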
Note: On some systems, a new version of the file may be created if a file with the same name already exists. Examples of Specifying a Bad File Name For example, you can specify a bad file with the filename sample and the default file extension or file type of .bad. Criteria for Rejected Records A record can be rejected for the following reasons: Upon insertion, the record causes an Oracle error, such as invalid data for a given datatype.
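Hedged examples of the naming variants just described (filenames and paths are illustrative):

```
BADFILE sample                         -- default extension (.bad) assumed
BADFILE sample.bad                     -- explicit file type
BADFILE '/mydisk/bad_dir/sample.bad'   -- full path
```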
The record violates a constraint or tries to make a unique index non-unique. A discard file is created according to the following rules: You have specified a discard filename and one or more records fail to satisfy all of the WHEN clauses specified in the control file.
If no records are discarded, then a discard file is not created. Examples of Specifying a Discard File Name The following list shows different ways you can specify a name for the discard file from within the control file; for example, you can specify a discard file with the filename circular and the default file extension or file type of .dsc.
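Hedged examples of discard file naming (filenames, file types, and paths are illustrative):

```
DISCARDFILE circular                   -- default file type (.dsc) assumed
DISCARDFILE notappl.may                -- explicit file type
DISCARDFILE '/discard_dir/forget.db'   -- full path
```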
If the larger target value exceeds the size of the database column, the following ORA error is reported: inserted value too large for column. You can avoid this problem by specifying the database column size in characters and by using character sizes in the control file to describe the data. Character-Length Semantics Byte-length semantics are the default for all datafiles except those that use the UTF16 character set, which uses character-length semantics by default.
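One way to request character-length semantics is in the control file itself; a sketch (clause placement per SQL*Loader, availability depends on release, and all names are illustrative):

```
LOAD DATA
CHARACTERSET UTF8
LENGTH SEMANTICS CHAR
INFILE mydata.dat
INTO TABLE t
(name CHAR(10))   -- interpreted as 10 characters, not 10 bytes
```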
Interrupted Loads Loads are interrupted and discontinued for a number of reasons. Discontinued Conventional Path Loads In a conventional path load, data is committed after all data in the bind array is loaded into all tables.
Space errors when loading data into multiple subpartitions (that is, loading into a partitioned table, a composite partitioned table, or one partition of a composite partitioned table): If space errors occur when loading into multiple subpartitions, then the load is discontinued and no data is saved, unless ROWS has been specified (in which case all data that was previously committed is saved).
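Specifying ROWS bounds how much work is lost when a load is discontinued, because each filled bind array is committed. A sketch (the value and names are illustrative):

```
OPTIONS (ROWS=1000)   -- commit after every 1000 rows
LOAD DATA
INFILE mydata.dat
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, deptno)
```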
Load Discontinued Because of Fatal Errors If a fatal error is encountered, the load is stopped and no data is saved unless ROWS was specified at the beginning of the load. Status of Tables and Indexes After an Interrupted Load When a load is discontinued, any data already loaded remains in the tables, and the tables are left in a valid state.

THIS: If the condition is false, then the current physical record becomes the last physical record of the current logical record. THIS is the default.

NEXT: If the condition is true in the next record, then the current physical record is concatenated to the current logical record, continuing until the condition is false.

LAST: If the last nonblank character in the current physical record meets the test, then the next physical record is read and concatenated to the current physical record, continuing until the condition is false.
If the condition is false in the current record, then the current physical record is the last physical record of the current logical record. The string must be enclosed in double or single quotation marks. The comparison is made character by character, blank-padding on the right if necessary. X'hex-str' is a string of bytes in hexadecimal format, used in the same way as str. For example, X'1FB033' would represent the three bytes with values 1F, B0, and 33 (hexadecimal).
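Hedged examples of the continuation tests described above (column positions and comparison strings are illustrative):

```
-- Concatenate while column 1 of the current record contains '*':
CONTINUEIF THIS (1:1) = '*'
-- The same comparison written as a hexadecimal string ('*' is 0x2A),
-- this time testing the next record:
CONTINUEIF NEXT (1:1) = X'2A'
```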
The default is to exclude them. This means that the default schema will not necessarily be the one you specified in the connect string if logon triggers are present that execute during connection to the database. You cannot recover the data that was in the table before the load unless it was saved with Export or a comparable utility.
To update existing rows, use the following procedure: Load your data into a work table. Use the SQL UPDATE statement with correlated subqueries to update the target table from the work table. Drop the work table. This option is suggested for use when either of the following situations exists: Available storage is limited. The remainder of this section details important ways to make use of that behavior. Extracting Multiple Logical Records Some data storage and transfer media have fixed-length physical records.
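The work-table procedure can be sketched in SQL as follows; the table and column names (emp, work_emp, empno, sal) are illustrative, not from the original:

```
-- Step 1: load the incoming data into work_emp with SQL*Loader.
-- Step 2: update the target table with a correlated subquery.
UPDATE emp e
   SET e.sal = (SELECT w.sal FROM work_emp w WHERE w.empno = e.empno)
 WHERE EXISTS (SELECT 1 FROM work_emp w WHERE w.empno = e.empno);
-- Step 3: drop the work table.
DROP TABLE work_emp;
```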
Distinguishing Different Input Record Formats A single datafile might contain records in a variety of formats. Distinguishing Different Input Row Object Subtypes A single datafile may contain records made up of row objects inherited from the same base row object type.
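A sketch of routing differently formatted records with WHEN clauses; a record-type code in column 1 selects the target table (all table names, positions, and codes are illustrative):

```
INTO TABLE dept WHEN (1:1) = '1'
  (deptno POSITION(3:4)  CHAR,
   dname  POSITION(8:21) CHAR)
INTO TABLE emp WHEN (1:1) = '2'
  (empno  POSITION(3:6)  CHAR,
   ename  POSITION(8:17) CHAR)
```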
See Also: Loading Column Objects for more information about loading object types. Size Requirements for Bind Arrays The bind array must be large enough to contain a single row. Performance Implications of Bind Arrays Large bind arrays minimize the number of calls to the Oracle database and maximize performance. Calculations to Determine Bind Array Size The bind array's size is equivalent to the number of rows it contains times the maximum length of each row.
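For example, if the longest possible row occupies 64 bytes and you want 1000 rows per bind array, the array must be at least 64 000 bytes. The corresponding settings, as an illustrative sketch:

```
-- 1000 rows x 64 bytes per row = 64 000 bytes
OPTIONS (BINDSIZE=64000, ROWS=1000)
```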
Determining the Size of the Length Indicator On most systems, the size of the length indicator is 2 bytes. Note: A similar technique can determine bind array size without doing any calculations: run your control file with a one-row bind array and note the bind array size shown in the log file. Multiply by the number of rows you want in the bind array to determine the bind array size. Calculating the Size of Field Buffers The tables in this section summarize the memory requirements for each datatype. A VARCHAR field, for example, is composed of two parts: a length subfield followed by the character data.
Such generated data does not require any space in the bind array. Note: A double quotation mark in the initial position cannot be preceded by an escape character.

INFILE: Specifies that a datafile specification follows.

input_filename: Name of the file containing the data. If your data is in the control file itself, use an asterisk instead of the filename.

os_file_proc_clause: This is the file-processing options string.
Note: Filenames that include spaces or punctuation marks must be enclosed in single quotation marks. Note: This example uses the recommended convention of single quotation marks for filenames and double quotation marks for everything else.

THIS: If the condition is true in the current record, then the next physical record is read and concatenated to the current physical record, continuing until the condition is false.

NEXT: If the condition is true in the next record, then the current physical record is concatenated to the current logical record, continuing until the condition is false.

LAST: This test is similar to THIS, but the test is always against the last nonblank character.

start and end: Specify the starting and ending column numbers in the physical record.

str: A string of characters to be compared to the continuation field defined by start and end, according to the operator.

X'hex-str': A string of bytes in hexadecimal format, used in the same way as str.
Note: Terminator strings can contain one or more characters. Note: Enclosure strings can contain one or more characters. The size of the INT datatype, in C.
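Multicharacter terminator and enclosure strings, as a sketch (the delimiters themselves are illustrative):

```
FIELDS TERMINATED BY '||' OPTIONALLY ENCLOSED BY '<<' AND '>>'
```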