Constructors
JobConfigurationLoad({bool? allowJaggedRows, bool? allowQuotedNewlines, bool? autodetect, Clustering? clustering, List<ConnectionProperty>? connectionProperties, String? createDisposition, bool? createSession, List<String>? decimalTargetTypes, EncryptionConfiguration? destinationEncryptionConfiguration, TableReference? destinationTable, DestinationTableProperties? destinationTableProperties, String? encoding, String? fieldDelimiter, HivePartitioningOptions? hivePartitioningOptions, bool? ignoreUnknownValues, String? jsonExtension, int? maxBadRecords, String? nullMarker, ParquetOptions? parquetOptions, bool? preserveAsciiControlCharacters, List<String>? projectionFields, String? quote, RangePartitioning? rangePartitioning, String? referenceFileSchemaUri, TableSchema? schema, String? schemaInline, String? schemaInlineFormat, List<String>? schemaUpdateOptions, int? skipLeadingRows, String? sourceFormat, List<String>? sourceUris, TimePartitioning? timePartitioning, bool? useAvroLogicalTypes, String? writeDisposition})
JobConfigurationLoad.fromJson(Map json_)
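A minimal usage sketch, assuming the bigquery/v2 library from the googleapis package; the project, dataset, table, and bucket names below are hypothetical placeholders:

import 'package:googleapis/bigquery/v2.dart';

// Configure a CSV load from Cloud Storage into a destination table.
final load = JobConfigurationLoad(
  sourceFormat: 'CSV',
  sourceUris: ['gs://my-bucket/data/*.csv'],
  destinationTable: TableReference(
    projectId: 'my-project',
    datasetId: 'my_dataset',
    tableId: 'my_table',
  ),
  skipLeadingRows: 1, // skip the header row
  autodetect: true,   // infer the schema from the CSV data
);

The resulting object is attached to a job via JobConfiguration(load: load) before the job is inserted. Later sketches on this page continue to use this load object.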
Properties
allowJaggedRows
↔ bool?
Accept rows that are missing trailing optional columns.
read / write
allowQuotedNewlines
↔ bool?
Indicates if BigQuery should allow quoted data sections that contain
newline characters in a CSV file.
read / write
autodetect
↔ bool?
Indicates if we should automatically infer the options and schema for CSV
and JSON sources.
read / write
clustering
↔ Clustering?
[Beta] Clustering specification for the destination table.
read / write
connectionProperties
↔ List<ConnectionProperty>?
Connection properties.
read / write
createDisposition
↔ String?
Specifies whether the job is allowed to create new tables.
read / write
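The two values BigQuery documents for this field, continuing the load object from the sketch under Constructors:

load.createDisposition = 'CREATE_IF_NEEDED'; // create the table if missing (documented default)
// 'CREATE_NEVER' fails the job if the destination table does not exist.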
createSession
↔ bool?
If true, creates a new session, whose session ID will be a
server-generated random ID.
read / write
decimalTargetTypes
↔ List<String>?
Defines the list of possible SQL data types to which the source decimal
values are converted.
read / write
destinationEncryptionConfiguration
↔ EncryptionConfiguration?
Custom encryption configuration (e.g., Cloud KMS keys).
read / write
destinationTable
↔ TableReference?
The destination table to load the data into.
read / write
destinationTableProperties
↔ DestinationTableProperties?
[Beta] [Optional] Properties with which to create the destination
table if it is new.
read / write
encoding
↔ String?
The character encoding of the data.
read / write
fieldDelimiter
↔ String?
The separator for fields in a CSV file.
read / write
hashCode
→ int
The hash code for this object.
read-only inherited
hivePartitioningOptions
↔ HivePartitioningOptions?
Options to configure hive partitioning support.
read / write
ignoreUnknownValues
↔ bool?
Indicates if BigQuery should allow extra values that are not represented
in the table schema.
read / write
jsonExtension
↔ String?
If sourceFormat is set to newline-delimited JSON, indicates whether it
should be processed as a JSON variant such as GeoJSON.
read / write
maxBadRecords
↔ int?
The maximum number of bad records that BigQuery can ignore when running
the job.
read / write
nullMarker
↔ String?
Specifies a string that represents a null value in a CSV file.
read / write
parquetOptions
↔ ParquetOptions?
Options to configure parquet support.
read / write
preserveAsciiControlCharacters
↔ bool?
Preserves the embedded ASCII control characters (the first 32 characters
in the ASCII table, from '\x00' to '\x1F') when loading from CSV.
read / write
projectionFields
↔ List<String>?
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity
properties to load into BigQuery from a Cloud Datastore backup.
read / write
quote
↔ String?
The value that is used to quote data sections in a CSV file.
read / write
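A sketch of the CSV dialect fields described above (fieldDelimiter, quote, nullMarker, allowQuotedNewlines, encoding), continuing the earlier load object; the values are illustrative, not defaults:

load
  ..fieldDelimiter = '\t'      // tab-separated input
  ..quote = '"'                // character that quotes data sections
  ..nullMarker = r'\N'         // string interpreted as NULL
  ..allowQuotedNewlines = true // permit newlines inside quoted sections
  ..encoding = 'UTF-8';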
rangePartitioning
↔ RangePartitioning?
[TrustedTester] Range partitioning specification for this table.
read / write
referenceFileSchemaUri
↔ String?
A user-provided reference file with the expected reader schema. Available
for the formats: AVRO, PARQUET, ORC.
read / write
runtimeType
→ Type
A representation of the runtime type of the object.
read-only inherited
schema
↔ TableSchema?
The schema for the destination table.
read / write
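A sketch of an explicit destination schema, using the TableSchema and TableFieldSchema classes from the same library; the field names are hypothetical:

load.schema = TableSchema(fields: [
  TableFieldSchema(name: 'id', type: 'INTEGER', mode: 'REQUIRED'),
  TableFieldSchema(name: 'name', type: 'STRING', mode: 'NULLABLE'),
]);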
schemaInline
↔ String?
The inline schema.
read / write
schemaInlineFormat
↔ String?
The format of the schemaInline property.
read / write
schemaUpdateOptions
↔ List<String>?
Allows the schema of the destination table to be updated as a side effect
of the load job if a schema is autodetected or supplied in the job
configuration.
read / write
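The two values BigQuery documents for this field:

load.schemaUpdateOptions = [
  'ALLOW_FIELD_ADDITION',   // allow adding a nullable field to the schema
  'ALLOW_FIELD_RELAXATION', // allow relaxing a REQUIRED field to NULLABLE
];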
skipLeadingRows
↔ int?
The number of rows at the top of a CSV file that BigQuery will skip when
loading the data.
read / write
sourceFormat
↔ String?
The format of the data files.
read / write
sourceUris
↔ List<String>?
The fully-qualified URIs that point to your data in Google Cloud.
read / write
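For Cloud Storage sources these are gs:// URIs; BigQuery documents support for a single '*' wildcard after the bucket name. A sketch with a hypothetical bucket:

load.sourceUris = ['gs://my-bucket/exports/2024-01-01/part-*.csv'];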
timePartitioning
↔ TimePartitioning?
Time-based partitioning specification for the destination table.
read / write
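A sketch using the TimePartitioning class from the same library; the column name is hypothetical:

load.timePartitioning = TimePartitioning(type: 'DAY', field: 'created_at');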
useAvroLogicalTypes
↔ bool?
If sourceFormat is set to "AVRO", indicates whether to interpret logical
types as the corresponding BigQuery data type (for example, TIMESTAMP),
instead of using the raw type (for example, INTEGER).
read / write
writeDisposition
↔ String?
Specifies the action that occurs if the destination table already exists.
read / write
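The three values BigQuery documents for this field; per the REST reference, WRITE_APPEND is the default for load jobs:

load.writeDisposition = 'WRITE_TRUNCATE'; // replace existing table data
// 'WRITE_APPEND' appends to existing data; 'WRITE_EMPTY' fails unless the
// destination table is empty.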