Newly Added Features in InfoSphere DataStage 11.7.x
New in 11.7 Base Version
1. By using DataStage Product Insights, you can connect your InfoSphere DataStage installation to IBM Cloud Product Insights. This connection lets you review your installation details and metrics such as CPU usage, memory usage, active jobs, failed jobs, and completed jobs.
2. The Data Masking stage supports Optim Data Privacy Providers version 11.3.0.5.
Connectivity Enhancements:
New in 11.7 Fix Pack 1
A. Cassandra connector is supported. It has the following features:
1. The DataStax Enterprise (DSE) data platform built on Apache Cassandra is supported. The DataStax Enterprise Java Driver is used to connect to the Cassandra database.
2. The connector reads data from and writes data to the Cassandra database in parallel and sequential modes.
3. You can provide an SQL statement (SELECT, INSERT, UPDATE, or DELETE) to read or write data.
4. Reading and writing single-column data in JSON format is supported.
5. Custom codec types are supported.
6. You can specify a consistency level for each read query and write operation (see the sketch after this list).
7. You can modify data by inserting, updating, or deleting an entire row or a set of specified columns.
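To give a concrete picture of items 3 and 6, the following standalone sketch uses the DataStax Java driver, which the connector builds on, to run a statement at an explicit consistency level. It is illustrative only; the contact point, keyspace, table, and column names are placeholders, not values from the product.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class CassandraReadSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("cassandra-host").build();
             Session session = cluster.connect("sales_ks")) {

            // A SELECT like the one you would supply to the connector,
            // executed with an explicit consistency level.
            SimpleStatement stmt = new SimpleStatement("SELECT order_id, total FROM orders");
            stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);

            ResultSet rs = session.execute(stmt);
            for (Row row : rs) {
                System.out.println(row.getInt("order_id") + " -> " + row.getDecimal("total"));
            }
        }
    }
}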
B. Azure Storage connector is supported. You can use it to connect to Azure Blob Storage and Azure File Storage and perform the following operations:
1. Read data from or write data to Azure Blob Storage and Azure File Storage (a minimal blob read/write sketch follows this list).
2. Import metadata about files and folders in Azure Blob Storage and Azure File Storage.
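As a rough illustration of what a blob write and read against Azure Blob Storage involve outside DataStage, here is a minimal sketch with the classic Azure Storage SDK for Java; the connection string, container, and blob names are placeholders.

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

public class AzureBlobSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; the real one comes from the Azure portal.
        String connectionString =
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=placeholder";

        CloudStorageAccount account = CloudStorageAccount.parse(connectionString);
        CloudBlobClient blobClient = account.createCloudBlobClient();
        CloudBlobContainer container = blobClient.getContainerReference("staging");

        // Write a small text blob, then read it back.
        CloudBlockBlob blob = container.getBlockBlobReference("exports/orders.csv");
        blob.uploadText("order_id,total\n1001,42.50");
        System.out.println(blob.downloadText());
    }
}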
C. HBase connector supports the following features:
1. Metadata can be imported at the table level and higher.
2. You can run data lineage at the table level.
D. ILOG JRules connector supports Decision Engine rules in all engine modes (Core, J2EE, J2SE).
E. Kafka connector supports the following features:
1. The connector supports new secure Kafka connections, including SASL/PLAIN, SASL/SSL, and SSL with user authentication (see the configuration sketch after this list).
2. In addition to String/Varchar, new message types are supported: Integer, Small Integer, Double, and Byte array.
3. Kafka partitioning, that is, fetching the key and partition number from Kafka, is supported. The partitioning type is preserved in write mode.
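For item 1, the client-side properties that a SASL/PLAIN-over-SSL connection to Kafka typically involves look roughly like the sketch below. The connector exposes these as stage properties rather than a Java properties file; the broker address, truststore path, topic, and credentials are placeholders.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SecureKafkaSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093");   // placeholder broker
        props.put("group.id", "datastage-consumer");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        // SASL/PLAIN over SSL: encrypt the channel and authenticate with user/password.
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                  "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"dsuser\" password=\"changeit\";");
        props.put("ssl.truststore.location", "/opt/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                System.out.println(rec.key() + " -> " + rec.value());
            }
        }
    }
}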
F. Data lineage for the Hive connector is enhanced in the following ways:
1. Data flow to the column level is supported.
2. The following URL formats of JDBC drivers are supported: jdbc:ibm:hive (DataDirect) and jdbc:hive2. For the jdbc:ibm:hive driver version, the database name is 'ibm'. For the jdbc:hive2 driver version, the database name is set by using the entire URL, which is the JDBC default behavior (example URLs follow this list).
3. You can use the URL attribute Database to set the database schema.
4. You can use the URL path as the database schema.
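To illustrate items 2 through 4, here are the two URL shapes side by side; the host, port, schema name, and credentials are placeholders, and the attribute spelling follows the description above rather than any particular driver manual.

import java.sql.Connection;
import java.sql.DriverManager;

public class HiveJdbcUrlSketch {
    public static void main(String[] args) throws Exception {
        // DataDirect driver form; the Database URL attribute sets the schema.
        String dataDirectUrl = "jdbc:ibm:hive://hivehost:10000;Database=sales";

        // Open-source Hive JDBC form; the URL path names the schema.
        String hive2Url = "jdbc:hive2://hivehost:10000/sales";

        try (Connection conn = DriverManager.getConnection(hive2Url, "dsuser", "changeit")) {
            System.out.println("Connected to " + conn.getMetaData().getURL());
        }
    }
}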
G. Db2 connector supports Db2 12 for z/OS.
H. SFDC API 42 is supported.
I. Sybase IQ 16.1 is supported.
J. Greenplum connector supports Greenplum database 5.4.
New in 11.7 Base Version:
1. HBase connector is supported. You can use the HBase connector to connect to tables stored in the HBase database and perform the following operations:
A. Read data from or write data to the HBase database.
B. Read data in parallel mode.
C. Use an HBase table as a lookup table in sparse or normal mode (a minimal lookup sketch follows this list).
D. Kerberos keytab locality is supported.
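A sparse lookup conceptually resolves each incoming key with a point read against the HBase table. The standalone HBase client sketch below shows such a read; the table, column family, qualifier, and row key are placeholders, not anything generated by the connector.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseLookupSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml from the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("customers"))) {

            // One point read per lookup key, as a sparse lookup would issue.
            Get get = new Get(Bytes.toBytes("cust-1001"));
            Result result = table.get(get);
            byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
            System.out.println(name == null ? "<no match>" : Bytes.toString(name));
        }
    }
}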
2. Hive connector supports the following features:
A. Modulus partition mode and minimum-maximum partition mode are supported during the read operation (a sketch of how a modulus-partitioned read splits the work follows this list).
B. Kerberos keytab locality is supported.
C. The connector supports connections to Hive on Amazon EMR.
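Conceptually, a modulus-partitioned read gives each processing node its own slice of the table by filtering on the partition column modulo the node count. The loop below only sketches that idea; it is not connector-generated SQL, and the table, column, and node count are made up.

public class ModulusPartitionSketch {
    public static void main(String[] args) {
        int nodeCount = 4;   // hypothetical number of processing nodes
        for (int node = 0; node < nodeCount; node++) {
            // Each node would read only the rows whose partition column maps to its index.
            String query = String.format(
                "SELECT * FROM sales.orders WHERE MOD(order_id, %d) = %d",
                nodeCount, node);
            System.out.println("node " + node + ": " + query);
        }
    }
}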
3. Kafka connector supports the following features:
A. Continuous mode, where incoming topic messages are consumed without stopping the connector.
B. Transactions, where a number of Kafka messages is fetched within a single transaction. After the record count is reached, an end-of-wave marker is sent to the output link (see the sketch after this list).
C. TLS connection to Kafka.
D. Kerberos keytab locality is supported.
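The continuous-mode and transaction behavior can be pictured as a poll loop that never exits and that marks a wave boundary whenever the configured record count is reached. This is only a conceptual sketch: the broker, topic, and the process/emitEndOfWave helpers are hypothetical, and security settings are omitted.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ContinuousKafkaSketch {
    static void process(ConsumerRecord<String, String> rec) { /* hypothetical downstream handling */ }
    static void emitEndOfWave() { System.out.println("-- end of wave --"); }   // hypothetical marker

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9092");   // placeholder broker
        props.put("group.id", "datastage-continuous");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        int waveSize = 100;        // plays the role of the connector's record-count setting
        int recordsInWave = 0;

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {                                        // continuous mode: never stops
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(500))) {
                    process(rec);
                    if (++recordsInWave == waveSize) {            // transaction boundary reached
                        emitEndOfWave();                          // end-of-wave marker to the output link
                        recordsInWave = 0;
                    }
                }
            }
        }
    }
}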
4. IBM MQ version 9 is supported.
5. IBM InfoSphere Data Replication CDC Replication version 11.3 is supported.
6. SQL Server Operator supports SQL Server Native Client 11.
7. Sybase Operator supports the unichar and univarchar data types in Sybase Adaptive Server Enterprise.
8. Amazon S3 connector supports connecting through an HTTP proxy server (a client-configuration sketch follows).
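Outside DataStage, routing S3 traffic through an HTTP proxy looks roughly like the following AWS SDK for Java (v1) client configuration; the proxy host, port, region, and bucket are placeholders, and the connector itself exposes the proxy settings as stage properties.

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3ProxySketch {
    public static void main(String[] args) {
        ClientConfiguration clientConfig = new ClientConfiguration();
        clientConfig.setProxyHost("proxy.example.com");   // placeholder proxy host
        clientConfig.setProxyPort(8080);

        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1")
                .withClientConfiguration(clientConfig)
                .build();

        s3.listObjects("my-staging-bucket").getObjectSummaries()
          .forEach(summary -> System.out.println(summary.getKey()));
    }
}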
9. File connector supports the following features:
A. Native HDFS FileSystem mode is supported.
B. You can import metadata from ORC files.
C. New data types are supported for reading and writing Parquet-formatted files: Date, Time, and Timestamp.
10. JDBC connector is certified to connect to MongoDB and Amazon Redshift.