
Untranslatable Character Data Handling in DataStage



"Failed. 6706: The string contains an untranslatable characters"DK®

While loading data into Teradata tables using DataStage jobs, you may get errors like "SELECT/Load Failed. 6706: The string contains an untranslatable character."

This error usually appears when a junk/bad/Latin character is encountered while selecting from or loading into a table column, often through functions such as CAST(), COALESCE(), or TRIM(). Sometimes LATIN/bad/junk characters arrive in the data, and DataStage is not able to load those characters into the database.
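Before changing any job settings, it often helps to locate the offending rows on the Teradata side. Below is a minimal sketch in Teradata SQL, assuming a hypothetical table mydb.mytable with key column pk_col and LATIN column src_col; TRANSLATE_CHK returns 0 when a value translates cleanly, otherwise the position of the first untranslatable character.

    /* Find rows whose src_col cannot be translated to Unicode */
    SELECT pk_col, src_col
    FROM   mydb.mytable
    WHERE  TRANSLATE_CHK(src_col USING LATIN_TO_UNICODE) <> 0;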

Logic 1 :
If you are using Teradata Connector stages to select/load data from/to a Teradata database, change the following properties in your DataStage job to load the untranslatable character data.

Job Property => "NLS" => "Default map for stages" = UTF8/UTF16

Properties in the Teradata Connector stage:

"Transaction Mode"                             :TERADATA
"Client Character Set"                          :UTF16 or UTF8
"Automap Characterset coding"     :Yes
"ArraySize"                                              :1
"RecordCount"                                       :1
"Variant"                                                   :12 or 13 or 14  or 15- This property by default come as 8.1 or so, Change it according to your  current teradata version.


Logic 2 :
---------------
Set your parallel "NLS" map to ISO-8859-1, which maps cleanly to LATIN1_01A, and make sure that none of your character columns have the extended property set to Unicode.

Note: If your source data is coming in as Unicode and you are reading it with the Extended property set to "Unicode", you need to perform a "UstringToString" conversion before loading the data into Teradata. That should clear up your bad characters unless the bad character already exists in the source data.
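For illustration, such a conversion can go in a Transformer stage derivation. The link and column names below are hypothetical, and the optional map-name argument is an assumption that should match your job's NLS map.

    UstringToString(lnk_in.src_col, "ISO-8859-1")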



Points to keep in mind for handling untranslatable characters in DataStage jobs:
-------------------------------------------------------------------------------

1. Make sure that you read and write the data using connector stages with the same "Client Character Set", i.e.:
While reading the data with "Client Character Set" = ASCII, always use "Client Character Set" = ASCII when loading the data.
While reading the data with "Client Character Set" = UTF8, always use "Client Character Set" = UTF8 when loading the data.
While reading the data with "Client Character Set" = UTF16, always use "Client Character Set" = UTF16 when loading the data.
                 
2. Always set Job Property => "NLS" => "Default map for stages" = UTF8/UTF16 when loading untranslatable character data into a table.

3. Use the proper "source TO target" value for the translation, e.g. LATIN_TO_UNICODE. Check "SHOW TABLE" to verify whether a character set is specified for the column in the table definition, and choose the character-set translation string accordingly, e.g. LATIN_TO_UNICODE, UNICODE_TO_LATIN, etc.
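A minimal illustration in Teradata SQL, again assuming a hypothetical table mydb.mytable with a LATIN column src_col:

    /* Check the column-level character set in the table DDL:
       look for CHARACTER SET LATIN or CHARACTER SET UNICODE */
    SHOW TABLE mydb.mytable;

    /* WITH ERROR substitutes the error character for anything
       untranslatable instead of failing with error 6706 */
    SELECT TRANSLATE(src_col USING LATIN_TO_UNICODE WITH ERROR)
    FROM   mydb.mytable;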


