
Data Insert

Data Insert is a tool that helps you take a local data file and upload its data to your database platform.

🔎 NOTE: Coginiti currently supports Data Insert to BigQuery, DB2, Hive, Netezza, Postgres, Amazon Redshift, Snowflake, and Greenplum. Coginiti will be adding support for other database platforms in future releases.

To start the data import, go to the [Data Upload] menu as shown below.  Note that Data Insert now supports an unlimited number of rows.

Screenshot_2022-07-22_at_14.57.51.png

When you start the import, you are guided through a six-step process: identify a file to upload, describe the load behavior you want, and then perform the load.

 

Choosing Connection and File

  1. Pick the database connection you want to upload data to.
  2. Add a local file, either CSV or Excel, using the [Add File] button.
  3. Confirm the file type.
  4. For CSV files, choose the delimiter used to separate fields in the file.
    For Excel files, choose the sheet with the data to be uploaded.
  5. If you select the [With header] check box, the parser treats the first row of the file as a header; otherwise, it treats the first row as data.
  6. Click [Update preview] to show the first few rows of the file.

🔎 NOTE: You can also use a Data Upload Template by clicking the [Upload Template] drop-down. See the Data Upload Template documentation.

Screenshot_2022-07-26_at_16.18.56.png
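Conceptually, the delimiter and [With header] choices above map to standard CSV parsing. The following is a minimal Python sketch of how those two options affect what counts as a header versus a data row (an illustration only, not Coginiti's implementation; the `preview_csv` helper name is hypothetical):

```python
import csv
import io

def preview_csv(text, delimiter=",", with_header=True, rows=5):
    """Parse CSV text, mirroring the delimiter and [With header] choices:
    with a header, the first row names the columns; without one, the
    first row is treated as data."""
    records = list(csv.reader(io.StringIO(text), delimiter=delimiter))
    if with_header:
        return records[0], records[1:rows + 1]
    # Without a header, every row (including the first) is data.
    return None, records[:rows]

sample = "id|name\n1|Ada\n2|Grace\n"
header, data = preview_csv(sample, delimiter="|", with_header=True)
# header -> ['id', 'name']; data -> [['1', 'Ada'], ['2', 'Grace']]
```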

 

Setting File Properties

You can choose a Date Format and/or a Date & Time Format. The hints show the date and time formats supported by each database platform, along with Pattern and Valid input examples for setting the date and time correctly.

Screenshot_2022-08-29_at_09.35.39.png

🔎 NOTE: See the list below for the date and time formats supported by each database:

BigQuery
DB2
Hive
Netezza
Postgres
Amazon Redshift
Snowflake
Greenplum
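Each platform documents its own pattern tokens in the in-app hints, so a sample value in your file should conform to the pattern you choose. As a rough sketch of that check (the `TOKEN_MAP` and `matches_pattern` helper are hypothetical, mapping a few common tokens such as `YYYY` and `HH24` to Python's `strptime` codes):

```python
from datetime import datetime

# Illustrative token mapping only; the authoritative token list for each
# platform appears in the in-app hints.
TOKEN_MAP = [("YYYY", "%Y"), ("MM", "%m"), ("DD", "%d"),
             ("HH24", "%H"), ("MI", "%M"), ("SS", "%S")]

def matches_pattern(value, pattern):
    """Check whether a sample value conforms to a date/time pattern
    before uploading the file."""
    fmt = pattern
    for token, code in TOKEN_MAP:
        fmt = fmt.replace(token, code)
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

matches_pattern("2022-07-26", "YYYY-MM-DD")  # True
matches_pattern("26/07/2022", "YYYY-MM-DD")  # False
```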

 

Setting Data Range

In Step ‘3. Data Range’ you can set numeric limits on both the number of columns to parse and the number of rows to load. This step is valuable when you have very wide and/or very long files and want to work with fewer columns or rows.

🔎 NOTE: With very large files, it is helpful to load just the first 100 rows to verify that everything works, rather than letting a full load run for an hour only to find out it failed.

Screenshot_2022-07-26_at_16.31.03.png
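In effect, the Data Range limits truncate the file to a leading block of columns and rows before anything is loaded. A small Python sketch of that idea (illustrative only; the `load_range` helper is hypothetical):

```python
import csv
import io
from itertools import islice

def load_range(text, max_cols=None, max_rows=None):
    """Read only the first `max_cols` columns and `max_rows` data rows,
    in the spirit of the Data Range limits in step 3."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)[:max_cols]
    rows = [row[:max_cols] for row in islice(reader, max_rows)]
    return header, rows

text = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11,12\n"
header, rows = load_range(text, max_cols=2, max_rows=2)
# header -> ['a', 'b']; rows -> [['1', '2'], ['5', '6']]
```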

 

Choosing Columns

In Step ‘4. Columns’ you can select individual columns. For example, if your file has 200 columns and you know you only want the first 10, you can restrict the range to columns 1 through 10 in Step 3, then preview those columns and set the order in which they load in Step 4.

Screenshot_2022-07-26_at_16.32.06.png

🔎 TIP: You can move multiple columns at once. Hold Ctrl (Windows) or Command (Mac), select the columns you want to move with your cursor, then click the left or right arrow to move them.

 

Configuring Output Columns

In Step ‘5. Data Types’ you can control column data type and nullability.

Coginiti uses the first 1000 rows to best-guess each column's data type and precision, but we recommend that you verify these estimates and change them if necessary.

Additionally, all columns are nullable by default.

🔎 NOTE: You can also rename a selected column.

Screenshot_2022-07-27_at_11.23.44.png
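Sampling-based type inference like this can be sketched simply: look at the first N values of a column, note whether any are empty (nullable), and fall back from narrower to wider types. This is a simplified illustration, not Coginiti's actual algorithm, and the `guess_type` helper is hypothetical:

```python
def guess_type(values, sample_size=1000):
    """Guess a column's data type and nullability from a sample of its
    values: try INTEGER, then FLOAT, then fall back to VARCHAR."""
    sample = values[:sample_size]
    nullable = any(v == "" for v in sample)
    non_null = [v for v in sample if v != ""]
    for candidate, cast in (("INTEGER", int), ("FLOAT", float)):
        try:
            for v in non_null:
                cast(v)
            return candidate, nullable
        except ValueError:
            continue
    return "VARCHAR", nullable

guess_type(["1", "2", ""])  # -> ('INTEGER', True)
guess_type(["1.5", "2"])    # -> ('FLOAT', False)
guess_type(["x", "2"])      # -> ('VARCHAR', False)
```

A sample can of course be wrong about the rest of the file, which is why verifying the estimates in step 5 matters.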

 

Setting Output Details and Loads

In Step ‘6. Target Database’ you specify the database you will load to, along with the schema, table name, and other behaviors to follow during the load.

Screenshot_2022-07-27_at_11.26.08.png
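Under the hood, a load like this ultimately comes down to creating the target table from the configured columns and inserting the parsed rows. A rough sketch of the shape of those statements (illustrative only; the actual SQL Coginiti issues is platform-specific, and the `build_load_sql` helper is hypothetical):

```python
def build_load_sql(schema, table, columns):
    """Build a CREATE TABLE statement and a parameterized INSERT
    template from (name, type) column definitions."""
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns)
    create = f"CREATE TABLE {schema}.{table} ({cols});"
    placeholders = ", ".join(["%s"] * len(columns))
    insert = f"INSERT INTO {schema}.{table} VALUES ({placeholders});"
    return create, insert

create, insert = build_load_sql(
    "public", "customers",
    [("id", "INTEGER"), ("name", "VARCHAR(50)")],
)
# create -> 'CREATE TABLE public.customers (id INTEGER, name VARCHAR(50));'
# insert -> 'INSERT INTO public.customers VALUES (%s, %s);'
```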

 

Click Start Upload to begin inserting data into the table.

mceclip1.png

In the left corner, a status message indicates whether the insert process started successfully; a second message reports whether the data insert itself succeeded or failed. Insert job status is also available in the Notifications area.

 
mceclip0.png
 