Release Notes - October 2022

07 Dec 2022


Note

This is a machine-translated version of the original Japanese article.
Please understand that some of the information contained on this page may be inaccurate.

Hello! Here is the release information for October 2022!


Data Catalog

Supports metadata CSV import 🎉

Basic metadata values can now be imported using a CSV file 🎉
Here's a quick overview of the import procedure. For details on the CSV file format and usage restrictions, please refer to Metadata Import.

  1. Prepare a CSV file according to the format.

  2. Click Data Catalog Settings, then Import Metadata.

  3. Select the import target, then upload the CSV file and run the import.

  4. If the import succeeds, the values in the basic metadata are overwritten with the imported values.
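As an illustration, here is a minimal Python sketch that builds a metadata CSV for step 1. The header and column layout shown here are assumptions for illustration only; the authoritative format is described in Metadata Import.

```python
import csv

# Illustrative rows: (physical column name, logical name, description).
# These field names are assumed for this sketch; see Metadata Import
# for the actual required format.
rows = [
    ("user_id", "User ID", "Primary key of the users table"),
    ("created_at", "Created At", "Record creation timestamp (UTC)"),
]

with open("metadata.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["column_name", "logical_name", "description"])  # assumed header
    writer.writerows(rows)
```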

Enhanced summary statistics display

The minimum and maximum values are now displayed for date- and time-related types.
You can check the summary statistics under "Column Information" and "Preview" in the table information.
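Conceptually, the new statistics are simply the minimum and maximum over a date/time column; a tiny Python sketch of the idea (the sample values are made up):

```python
from datetime import datetime

# Sample timestamp column values (made up for illustration).
timestamps = [
    datetime(2022, 10, 1, 9, 30),
    datetime(2022, 10, 15, 23, 5),
    datetime(2022, 10, 3, 12, 0),
]

print("min:", min(timestamps))  # min: 2022-10-01 09:30:00
print("max:", max(timestamps))  # max: 2022-10-15 23:05:00
```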

Connection Information

Supports connection to Oracle Autonomous Database

You can now upload a wallet file when you select Use tnsnames.ora file under "Connection Method" in the Oracle Database connection information.
By uploading a wallet file, you can connect to Oracle Autonomous Database.
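For reference, a wallet-based connection to Autonomous Database can also be made directly from Python with the python-oracledb driver. This is only a sketch of the same idea outside the product; the paths, TNS alias, and credentials are placeholders:

```python
import oracledb

# Connect using a downloaded wallet; tnsnames.ora lives in config_dir and
# the wallet files in wallet_location (often the same directory).
conn = oracledb.connect(
    user="ADMIN",
    password="your_password",           # placeholder
    dsn="mydb_high",                    # TNS alias from tnsnames.ora
    config_dir="/path/to/wallet",
    wallet_location="/path/to/wallet",
    wallet_password="wallet_password",  # placeholder
)

cur = conn.cursor()
cur.execute("SELECT sysdate FROM dual")
print(cur.fetchone())
```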

Transfer Settings

Destination BigQuery table partitioning enhancements 🎉

In STEP2 "Output Options" of transfer settings whose destination is Google BigQuery, you can now configure the time unit on which table partitioning is based in more detail.

  • With this change, you can now select the partitioning time unit from 4 options (hour, day, month, year).
  • Both partitioning methods, partitioning by ingestion time and partitioning by a time-unit column, support all four time units above.

Partitioning a table into smaller segments can improve query performance and reduce query costs.
For more information, refer to Introduction to partitioned tables in the BigQuery documentation.

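For reference, here is a sketch of how the same hourly partitioning would be expressed with the google-cloud-bigquery client library; the project, dataset, and column names are placeholders. TimePartitioningType also offers DAY, MONTH, and YEAR, matching the four time units above.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses default credentials

table = bigquery.Table(
    "my-project.my_dataset.events",  # placeholder table ID
    schema=[
        bigquery.SchemaField("event_id", "STRING"),
        bigquery.SchemaField("created_at", "TIMESTAMP"),
    ],
)
# Partition by a time-unit column; omitting `field` partitions by
# ingestion time instead.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.HOUR,
    field="created_at",
)

client.create_table(table)
```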

Workflow

Supports loop execution using Amazon Redshift queries 🎉

  • You can now loop through tasks in a workflow based on Amazon Redshift query results.
    • The value expanded into a custom variable during loop execution can be set from the results of an Amazon Redshift query.
    • This lets you define workflows in which the expanded value of a custom variable changes with each iteration.

The procedure for setting up loop execution is briefly introduced below; a conceptual sketch of what the loop does follows the steps.

  1. On the flow edit screen of the workflow definition, click the button on the task you want to loop.

  2. Enable loop execution for the custom variable and select Loop by Amazon Redshift query results as the loop type.

  3. In Target Custom Variable, specify the custom variable of your choice.

  4. Fill in the remaining fields and click Save.
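Conceptually, the loop runs the query once, then executes the looped task once per result row, with that row's value expanded into the custom variable. Here is a rough Python sketch of that behavior using the redshift_connector driver; the connection details, query, and variable name are placeholders, and the product handles all of this for you via the settings above.

```python
import redshift_connector

# Run the query that drives the loop.
conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="dev",
    user="awsuser",
    password="your_password",  # placeholder
)
cursor = conn.cursor()
cursor.execute("SELECT DISTINCT target_date FROM etl_schedule")
rows = cursor.fetchall()

# One task execution per result row, with the custom variable
# (e.g. $target_date$) expanded to that row's value.
for (target_date,) in rows:
    print(f"execute task with custom variable = {target_date}")
```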


That's all for this release.
If any of these releases interest you, please feel free to contact our Customer Success representative.
Happy Data Engineering!

