Blog

Mockito: how to mock and get proper values for an enum


First of all, Mockito can create mock data for simple types such as integers and longs, but it cannot create a proper enum value, because an enum has a fixed set of constants, each with its own ordinal and name. For example, if I have this enum:

public enum HttpMethod {

    GET, POST, PUT, DELETE, HEAD, PATCH;

}

then HttpMethod has six constants (ordinals 0 to 5), but Mockito does not know that. Mockito's mock data for an enum is null every time, and you will end up passing a null value. So here is the proposed solution: randomize the ordinal and get a real enum constant which can be passed to other tests.

import java.util.Random;

import org.junit.Test;
import org.mockito.Matchers;

import com.amazonaws.HttpMethod;




public class testjava {

    private HttpMethod mockEnumerable;

    @Test
    public void setUpallpossible_value_of_enum() {
        for (int i = 0; i < 10; i++) {
            mockEnumerable = Matchers.any(HttpMethod.class);
            if (mockEnumerable != null) {
                System.out.println(mockEnumerable.ordinal());
                System.out.println(mockEnumerable.name());
                System.out.println(mockEnumerable.name() + " mocking success");
            } else {
                // Matchers.any() returns null for object types, so fall back to
                // picking a random constant from the enum's values.
                Random rand = new Random();
                int ordinal = rand.nextInt(HttpMethod.values().length);
                mockEnumerable = HttpMethod.values()[ordinal];
                System.out.println(mockEnumerable.ordinal());
                System.out.println(mockEnumerable.name());
            }
        }
    }







    @Test
    public void setUpallpossible_value_of_enumwithintany() {
        for (int i = 0; i < 10; i++) {
            mockEnumerable = Matchers.any(HttpMethod.class);
            if (mockEnumerable != null) {
                System.out.println(mockEnumerable.ordinal());
                System.out.println(mockEnumerable.name());
                System.out.println(mockEnumerable.name() + " mocking success");
            } else {
                // Matchers.anyInt() also returns a default (0) outside of
                // verification, so guard the index before using it.
                Random rand = new Random();
                int imatch = Matchers.anyInt();
                int ordinal;
                if (imatch > HttpMethod.values().length) {
                    ordinal = 0;
                } else {
                    ordinal = rand.nextInt(HttpMethod.values().length);
                }
                mockEnumerable = HttpMethod.values()[ordinal];
                System.out.println(mockEnumerable.ordinal());
                System.out.println(mockEnumerable.name());
            }
        }
    }
}

Output: 0 GET 0 GET 5 PATCH 5 PATCH 4 HEAD 5 PATCH 3 DELETE 0 GET 4 HEAD 2 PUT
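Incidentally, the randomize-the-ordinal trick does not need Mockito at all: a small generic helper can pick a random constant from any enum type. Here is a minimal self-contained sketch; the HttpMethod enum below is a local stand-in for com.amazonaws.HttpMethod from the post.

```java
import java.util.concurrent.ThreadLocalRandom;

public class EnumSampler {

    // Local stand-in for com.amazonaws.HttpMethod used in the post.
    enum HttpMethod { GET, POST, PUT, DELETE, HEAD, PATCH }

    // Picks a random constant of any enum type: the same idea as the
    // rand.nextInt(HttpMethod.values().length) fallback in the tests above.
    static <T extends Enum<T>> T randomOf(Class<T> enumClass) {
        T[] constants = enumClass.getEnumConstants();
        return constants[ThreadLocalRandom.current().nextInt(constants.length)];
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            HttpMethod m = randomOf(HttpMethod.class);
            System.out.println(m.ordinal() + " " + m.name());
        }
    }
}
```

Calling randomOf(HttpMethod.class) replaces both the Matchers.any(...) call and the null check in the tests above.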

Free ebook: Introducing Microsoft SQL Server 2016

http://blogs.msdn.com/b/microsoft_press/archive/2015/12/22/free-ebook-introducing-microsoft-sql-server-2016-mission-critical-applications-deeper-insights-hyperscale-cloud-preview-edition.aspx

 

 

https://blogs.msdn.microsoft.com/microsoft_press/2016/02/02/free-ebook-introducing-microsoft-sql-server-2016-mission-critical-applications-deeper-insights-hyperscale-cloud-preview-2/

 

Pre-Signed URL (AWS)

 


A pre-signed URL is a URL that grants a person access to a resource for a limited time, with an authentication token/signature and an expiry built into it. The user does not need the AWS console sign-in credentials. The URL has the following format: the HTTP resource URL plus a query string with

HTTP URL of the resource + AWSAccessKeyId + Expires + Signature

AWSAccessKeyId=ACCESSKEYXXXX&Expires=1459944479&Signature=vba%2BH0F0p9b02n2qyhTFY4Bxjkg%3D

Example

https://my-first-s3-bucket-e3ee683e-b260-4aad-923b-31fa838c6a2e.s3.amazonaws.com/PresignedUrlAndUploadObject.txt?AWSAccessKeyId=ACCESSKEYXXXX&Expires=1459897385&Signature=zJhX0CfSnD6QFgD6fzOlfqk%2FsxM%3D

https://my-first-s3-bucket-e3ee683e-b260-4aad-923b-31fa838c6a2e.s3.amazonaws.com/MyObjectKey?AWSAccessKeyId=ACCESSKEYXXXX&Expires=1459879476&Signature=5Bh1AUuF3U5Vjw0Ah7EdojE9XDY%3D

https://my-first-s3-bucket-63529645-5e01-4406-bf85-75ffc0fd00b1.s3.amazonaws.com/PresignedUrlAndUploadObject.txt?AWSAccessKeyId=ACCESSKEYXXXX&Expires=1459879632&Signature=ypaabCtSnztLp%2FpzxjT2ZvMxhkg%3D

https://my-first-s3-bucket-63529645-5e01-4406-bf85-75ffc0fd00b1.s3.amazonaws.com/PresignedUrlAndUploadObject.txt?AWSAccessKeyId=ACCESSKEYXXXX&Expires=1459883171&Signature=VvcDoikAKnnMWAPuVIg18bG3FcE%3D
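Those query-string components can be pulled apart with plain Java. The sketch below parses a pre-signed URL and converts the Expires value, which is in epoch seconds, into a readable instant; the bucket and key are made up, and the access key is the dummy one from the examples above.

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

public class PresignedUrlParser {

    // Splits the query string of a pre-signed URL into its parameters.
    static Map<String, String> queryParams(String url) {
        Map<String, String> params = new HashMap<>();
        int q = url.indexOf('?');
        if (q < 0) return params;
        for (String pair : url.substring(q + 1).split("&")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                params.put(pair.substring(0, eq), pair.substring(eq + 1));
            }
        }
        return params;
    }

    public static void main(String[] args) {
        String url = "https://my-first-s3-bucket.s3.amazonaws.com/MyObjectKey"
                + "?AWSAccessKeyId=ACCESSKEYXXXX&Expires=1459944479"
                + "&Signature=vba%2BH0F0p9b02n2qyhTFY4Bxjkg%3D";
        Map<String, String> p = queryParams(url);
        System.out.println("Key id:  " + p.get("AWSAccessKeyId"));
        // Expires is epoch seconds; this example URL expired back in April 2016.
        System.out.println("Expires: " + Instant.ofEpochSecond(Long.parseLong(p.get("Expires"))));
    }
}
```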

 

Sometimes a pre-signed URL gives an error when downloading the file, with the message below.

<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>

 

This error occurs when you send a pre-signed URL that was created for HTTP PUT (to upload a file) and you try to view the file in a browser.

The URLs are different for each verb: PUT, GET, DELETE, etc.

generatePresignedUrlRequest.setMethod(HttpMethod.PUT);  // this line selects the HttpMethod verb

So make sure the URL you are using was generated for the right verb.

To generate a URL for download via browser, you have to comment out this line:

//generatePresignedUrlRequest.setMethod(HttpMethod.PUT);

As we know, a pre-signed URL gives you access to the object identified in the URL, provided that the creator of the pre-signed URL has permission to access that object.

A pre-signed URL can be made for HTTP PUT (upload) or for GET (download via browser), and each may have a different URL.

generatePresignedUrlRequest.setMethod(HttpMethod.PUT);  // this line selects the HttpMethod verb

where PUT is used to upload a file via the HTTP PUT method.

 

So it is not mandatory to upload an object to get a pre-signed URL.

We can also get a pre-signed URL for an existing object in S3 and send it to a user, so that they can download the file from a browser.

It is similar to Google Drive's "get shareable link": a user shares a link, and the link holder can see the file even if they do not have a Google account.

 

Example code for generating a pre-signed URL for an existing object in S3:

 

Create an instance of the AmazonS3 class.
Generate a pre-signed URL by executing the AmazonS3.generatePresignedUrl method.

You provide a bucket name, an object key, and an expiration date by creating an instance of the GeneratePresignedUrlRequest class. You don't have to specify the HTTP verb PUT when creating this URL, because you are not uploading an object.

If you do create a PUT URL, anyone with that pre-signed URL can upload an object; the upload creates an object or replaces any existing object with the key specified in the pre-signed URL.

 

 

public static String generatepreassignedkeyforexistingfile(String bucketName, String objectKey) {
    AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
    Region usWest2 = Region.getRegion(Regions.US_WEST_2);
    s3client.setRegion(usWest2);
    URL url = null;
    try {
        System.out.println("Generating pre-signed URL.");
        java.util.Date expiration = new java.util.Date();
        long milliSeconds = expiration.getTime();
        milliSeconds += 1000 * 60 * 60; // Add 1 hour.
        expiration.setTime(milliSeconds);

        GeneratePresignedUrlRequest generatePresignedUrlRequest =
                new GeneratePresignedUrlRequest(bucketName, objectKey);
        // generatePresignedUrlRequest.setMethod(HttpMethod.PUT); // leave commented for download URLs
        generatePresignedUrlRequest.setExpiration(expiration);

        url = s3client.generatePresignedUrl(generatePresignedUrlRequest);

        System.out.println("Pre-Signed URL = " + url.toString());
    } catch (AmazonServiceException exception) {
        System.out.println("Caught an AmazonServiceException, " +
                "which means your request made it " +
                "to Amazon S3, but was rejected with an error response " +
                "for some reason.");
        System.out.println("Error Message: " + exception.getMessage());
        System.out.println("HTTP  Code: " + exception.getStatusCode());
        System.out.println("AWS Error Code:" + exception.getErrorCode());
        System.out.println("Error Type:    " + exception.getErrorType());
        System.out.println("Request ID:    " + exception.getRequestId());
    } catch (AmazonClientException ace) {
        System.out.println("Caught an AmazonClientException, " +
                "which means the client encountered " +
                "an internal error while trying to communicate" +
                " with S3, " +
                "such as not being able to access the network.");
        System.out.println("Error Message: " + ace.getMessage());
    } catch (Exception e) {
        e.printStackTrace();
    }
    // Guard against a null url if an exception was thrown above.
    return url == null ? null : "Pre-Signed URL = " + url.toString();
}

Let's see what we can do from the Eclipse UI.

In order to use Eclipse we have to install new software from:

http://aws.amazon.com/eclipse

 

[screenshot: installamazonsdktoeclipse]

 

Once you finish this step, you can install the explorer view to see S3 buckets and files.

Go to the explorer view via the "Show AWS explorer view" icon in the AWS toolbar.

[screenshot: installamazonsdktoeclipse1]

It will ask you for your AWS console credentials; once you enter them, you can choose your region and see all your AWS resources.

Here we are just talking about S3, but the same view can be used to create tables and run queries for DynamoDB too.

Right-clicking on the S3 node gives you a menu to create a new bucket, delete one, etc.

 

 

In this screenshot you can see I went to the bucket screen.

On clicking the bucket:

[screenshot: installamazonsdktoeclipse2]

A developer can open the contents of a bucket via the "Open in bucket editor" screen.

We have another option to upload a file: just drag the file from the desktop onto the screen, and you will see the file being uploaded to the S3 bucket shown in the bucket editor.

It asks for an object key, which is the filename in this context, and then uploads the file.

 

[screenshot: installamazonsdktoeclipse3]

In Java you can upload a file with these lines:

System.out.println("Uploading a new object to S3 from a file\n");
s3.putObject(new PutObjectRequest(bucketName, key, SampleFile));

where SampleFile is a File object

and

s3 is an AmazonS3Client:

AmazonS3 s3 = new AmazonS3Client(credentials);

Some code fragments to loop over the S3 files and object summaries in Java are shown below.

AWSCredentials credentials = null;
try {
    credentials = new ProfileCredentialsProvider("default").getCredentials();
} catch (Exception e) {
    throw new AmazonClientException(
        "Cannot load the credentials from the credential profiles file. " +
        "Please make sure that your credentials file is at the correct " +
        "location (C:\\Users\\Jitender.Thakur\\.aws\\credentials), and is in valid format.",
        e);
}

AmazonS3 s3 = new AmazonS3Client(credentials);
Region usWest2 = Region.getRegion(Regions.US_WEST_2);
s3.setRegion(usWest2);

  1. To delete a bucket you have to delete all the files in the bucket first. When the bucket is empty, delete the bucket:

 

System.out.println("Listing objects");
ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
        .withBucketName(bucketName));
for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
    System.out.println(" - " + objectSummary.getKey() + "  " +
            "(size = " + objectSummary.getSize() + ")");
    // new method
    generatepreassignedkeyforexistingfile(bucketName, objectSummary.getKey());
    // delete by the listed key (the original used an unrelated "key" variable)
    s3.deleteObject(bucketName, objectSummary.getKey());
}
System.out.println();

then

s3.deleteBucket(bucketName);

 

  • Check all buckets you have:

for (Bucket bucket : s3.listBuckets()) {
    System.out.println(" - " + bucket.getName());
}

Creating a bucket, with a check for an existing bucket, since bucket names must be globally unique:

try {
    credentials = new ProfileCredentialsProvider("default").getCredentials();
} catch (Exception e) {
    throw new AmazonClientException(
        "Cannot load the credentials from the credential profiles file. " +
        "Please make sure that your credentials file is at the correct " +
        "location (C:\\Users\\Jitender.Thakur\\.aws\\credentials), and is in valid format.",
        e);
}

AmazonS3 s3 = new AmazonS3Client(credentials);
Region usWest2 = Region.getRegion(Regions.US_WEST_2);
s3.setRegion(usWest2);
tx = new TransferManager(s3);

private void createAmazonS3Bucket() {
    try {
        if (!tx.getAmazonS3Client().doesBucketExist(bucketName)) {
            tx.getAmazonS3Client().createBucket(bucketName);
        }
    } catch (AmazonClientException ace) {
        // JOptionPane.showMessageDialog(frame, "Unable to create a new Amazon S3 bucket: " + ace.getMessage(),
        //     "Error Creating Bucket", JOptionPane.ERROR_MESSAGE);
    }
}

 

How to see where the log is? Logger in slf4j

Logger in slf4j: could you explain how to use it? (e.g. how to see where the log goes)

 

 

Like any logger library, slf4j has a configuration file where we set the location of the logs; that can be a database, a text file, console output, or debugger statements.

 

 

 


import org.slf4j.Logger;

import org.slf4j.LoggerFactory;

private final Logger log = LoggerFactory.getLogger(getClass());

log.info(ex.getMessage());

 

 

Using slf4j with Simple logger

Create a Maven-based project and add this to your pom.xml:

<dependency>

<groupId>org.slf4j</groupId>

<artifactId>slf4j-api</artifactId>

<version>1.7.5</version>

</dependency>

Now you may use Logger in your Java code like this.

package deng;

import org.slf4j.*;

public class Hello {

static Logger LOGGER = LoggerFactory.getLogger(Hello.class);

public static void main(String[] args) {

for (int i = 0; i < 10; i++)

if (i % 2 == 0)

LOGGER.info("Hello {}", i);

else

LOGGER.debug("I am on index {}", i);

}

}

The above will get your program compiled, but when you run it, you will see this output:

bash> java deng.Hello

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".

SLF4J: Defaulting to no-operation (NOP) logger implementation

SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

What it's saying is that at runtime you are missing the logging "implementation" (the logger binding), so slf4j simply uses a "NOP" implementation, which does nothing. In order to see the output properly, you may try a simple implementation that does not require any configuration at all. Just go back to your pom.xml and add the following:

<dependency>

<groupId>org.slf4j</groupId>

<artifactId>slf4j-simple</artifactId>

<version>1.7.5</version>

</dependency>

Now you see logging output on STDOUT at INFO level. This simple logger by default shows any message at INFO level or higher. In order to see DEBUG messages, you need to pass the system property -Dorg.slf4j.simpleLogger.defaultLogLevel=DEBUG at Java startup.

Using slf4j with Log4j logger

Now we can experiment and swap different logger implementations, but your application code can remain the same. All we need is to replace slf4j-simple with another popular logger implementation, such as the Log4j.

<dependency>

<groupId>org.slf4j</groupId>

<artifactId>slf4j-log4j12</artifactId>

<version>1.7.5</version>

</dependency>

Again, we must configure logging for the implementation that we picked. In this case, we need a

src/main/resources/log4j.properties file:

log4j.rootLogger=DEBUG, STDOUT

log4j.logger.deng=INFO

log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender

log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout

log4j.appender.STDOUT.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n

src/main/resources/log4j.properties file for writing to both a file and stdout:

# Root logger option
log4j.rootLogger=INFO, file, stdout

# Direct log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=C:\\logging.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
 
# Direct log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

 

 

 

Re-run your program, and you should see similar output.

Using slf4j with JDK logger

The JDK actually comes with a logging package, and you can replace the dependency in pom.xml with this logger implementation.

<dependency>

<groupId>org.slf4j</groupId>

<artifactId>slf4j-jdk14</artifactId>

<version>1.7.5</version>

</dependency>

Now, the configuration for JDK logging is a bit difficult to work with. Not only do you need a config file, such as src/main/resources/logging.properties, but you also need to add the system property -Djava.util.logging.config.file=logging.properties in order to have it picked up. Here is an example to get you started:

 

.level=INFO

handlers=java.util.logging.ConsoleHandler

java.util.logging.ConsoleHandler.level=FINEST

deng.level=FINEST
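The same levels can also be set programmatically on java.util.logging, which avoids the config-file plus system-property dance while experimenting. A small sketch using only the JDK (no slf4j binding involved):

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class JulSetup {

    static Logger configure() {
        Logger logger = Logger.getLogger("deng");
        logger.setLevel(Level.FINEST);          // logger-level filter
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.FINEST);         // the handler must also allow FINEST
        logger.addHandler(handler);
        logger.setUseParentHandlers(false);     // don't double-log via the root handler
        return logger;
    }

    public static void main(String[] args) {
        Logger logger = configure();
        logger.info("visible at INFO");
        logger.finest("visible only because both logger and handler are FINEST");
    }
}
```

Both the logger and the handler have to allow a level before a record gets through, which is why the properties file above sets both ConsoleHandler.level and deng.level.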

Using slf4j with Logback logger

The logback logger implementation is a top quality implementation. If you intend to write serious code that goes into production, you may want to evaluate this option. Again, modify your pom.xml to replace the dependency with this:

<dependency>

<groupId>ch.qos.logback</groupId>

<artifactId>logback-classic</artifactId>

<version>1.0.13</version>

</dependency>

Here is a sample of configuration src/main/resources/logback.xml to get things started.

<configuration>

<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">

<encoder>

<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>

</encoder>

</appender>

<logger name="deng" level="DEBUG"/>

<root level="INFO">

<appender-ref ref="STDOUT" />

</root>

</configuration>

 

 

For DB logging with

slf4j-api-1.7.5.jar

the next step was to change the logback.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <!-- encoders are assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n
            </pattern>
        </encoder>
    </appender>
    <appender name="db" class="ch.qos.logback.classic.db.DBAppender">
        <connectionSource
            class="ch.qos.logback.core.db.DriverManagerConnectionSource">
            <driverClass>org.postgresql.Driver</driverClass>
            <url>jdbc:postgresql://localhost:5432/simple</url>
            <user>postgres</user>
            <password>root</password>
        </connectionSource>
    </appender>

    <!-- the level of the root level is set to DEBUG by default. -->
    <root level="TRACE">
        <appender-ref ref="stdout" />
        <appender-ref ref="db" />
    </root>
</configuration>

As seen here, I have created two appenders – a console appender and a database appender.
The database appender here requires the JDBC driver, the jdbc url and the db credentials. An additional property is the connectionSource which is actually the type of Connection wrapper that we would like to use. Logback provides a few options here and I went with the DriverManagerConnectionSource class.
The next step was to write a test class to test the code:

public class SampleTestDbAppender {

   private static final Logger logger = LoggerFactory.getLogger(SampleTestDbAppender.class);

   public SampleTestDbAppender() {
      logger.info("Class instance created at {}", DateFormat.getInstance().format(new Date()));
   }

   public void doTask() {
      logger.trace("In test doTask");
      logger.trace("doTask test complete");
   }

   public static void main(String[] args) {
      logger.warn("Running test code...");
      new SampleTestDbAppender().doTask();
      logger.debug("Test code execution complete.");
   }

}

 

Get the table-creation script from here:

 

https://github.com/qos-ch/logback/tree/master/logback-classic/src/main/resources/ch/qos/logback/classic/db/script
This resulted in three tables:

 

 

-- Logback: the reliable, generic, fast and flexible logging framework.
-- Copyright (C) 1999-2010, QOS.ch. All rights reserved.
-- See http://logback.qos.ch/license.html for the applicable licensing
-- conditions.
-- This SQL script creates the tables required by ch.qos.logback.classic.db.DBAppender.
-- The event_id column type was recently changed from INT to DECIMAL(40)
-- without testing.
DROP TABLE logging_event_property
DROP TABLE logging_event_exception
DROP TABLE logging_event
CREATE TABLE logging_event
(
timestmp DECIMAL(20) NOT NULL,
formatted_message VARCHAR(4000) NOT NULL,
logger_name VARCHAR(254) NOT NULL,
level_string VARCHAR(254) NOT NULL,
thread_name VARCHAR(254),
reference_flag SMALLINT,
arg0 VARCHAR(254),
arg1 VARCHAR(254),
arg2 VARCHAR(254),
arg3 VARCHAR(254),
caller_filename VARCHAR(254) NOT NULL,
caller_class VARCHAR(254) NOT NULL,
caller_method VARCHAR(254) NOT NULL,
caller_line CHAR(4) NOT NULL,
event_id DECIMAL(40) NOT NULL identity,
PRIMARY KEY(event_id)
)
CREATE TABLE logging_event_property
(
event_id DECIMAL(40) NOT NULL,
mapped_key VARCHAR(254) NOT NULL,
mapped_value VARCHAR(1024),
PRIMARY KEY(event_id, mapped_key),
FOREIGN KEY (event_id) REFERENCES logging_event(event_id)
)
CREATE TABLE logging_event_exception
(
event_id DECIMAL(40) NOT NULL,
i SMALLINT NOT NULL,
trace_line VARCHAR(254) NOT NULL,
PRIMARY KEY(event_id, i),
FOREIGN KEY (event_id) REFERENCES logging_event(event_id)
)

 

 

While the console appender wrote to stdout, the DB appender wrote entries to the tables.

Please check all three tables.

Myself

I'm a SharePoint Solutions/SSIS/BizTalk/SQL/ASP.NET developer currently working for Interpublic Group. I currently specialize in all the integration aspects of BizTalk, implementing EAI solutions, and in SharePoint DVWPs, creating web parts, and building things with jQuery. For 10+ years, I have developed software solutions that
add business value and create cost-saving opportunities. I am very
experienced in BizTalk Server 2006/2009/2010/2013 and am consistently recognized by
others for exceeding expectations in the delivery of quality solutions.
Education

MS Software Engineering: Texas University

Technical Certifications

MCTS BizTalk 2006/2010

Jin Thakur

 

How to use distinguished fields and promoted properties in a BizTalk Server project

Admin:
How to use distinguished fields and promoted properties in a BizTalk Server project?

When you use distinguished fields and promoted properties, consider the following points:

Use distinguished fields when you want to make decisions or to manipulate data in an orchestration. The pipeline disassembler will insert a Written property into the message context for items that are marked as a distinguished field.
Use promoted properties as criteria for message routing. However, notice that promoted properties are also available in an orchestration. The pipeline disassembler will insert a Promoted property into the message context for items that are marked as a promoted property.
Promoted properties are limited to 256 characters for performance reasons: the limit keeps comparison operations and storage operations fast.
Written properties do not have a size limit. However, large values that are written into the message context must still be processed by BizTalk Server. Therefore, performance may be affected.
A promoted property may not be available as a promoted property after you write a value into the message context. This situation can occur if the value that you write into the message context has the same name and namespace that was used to promote the property.
Properties that have a null value are not permitted in the message context. Therefore, if a null value is written into the message context, this value will be deleted.

New WCF schema and other schema with namespace does not work in same way as it was in Biztalk 2009

//CALLINGCARD or ./CALLINGCARD does not work in an XSLT for-each loop.

We have to match on the local name for elements and attributes, for example:

"*[local-name()='CALLINGCARD' and namespace-uri()='http://Microsoft.LobServices.Sap/2007/03/Rfc/']"
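The same local-name() trick works anywhere XPath 1.0 is evaluated. Here is a sketch using the JDK's built-in XPath engine against a namespaced fragment; the namespace URI mirrors the SAP one above, but the XML itself is made up for illustration:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class LocalNameXPath {

    static String callingCard(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // namespaces are why plain //CALLINGCARD fails
        Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(xml)));
        // Match by local name and namespace URI instead of by prefix.
        String expr = "//*[local-name()='CALLINGCARD' and "
                + "namespace-uri()='http://Microsoft.LobServices.Sap/2007/03/Rfc/']";
        return (String) XPathFactory.newInstance().newXPath()
                .evaluate(expr, doc, XPathConstants.STRING);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<r:TESTRFCMESSAGE xmlns:r='http://Microsoft.LobServices.Sap/2007/03/Rfc/'>"
                + "<r:CALLINGCARD>1234</r:CALLINGCARD></r:TESTRFCMESSAGE>";
        System.out.println(callingCard(xml)); // prints 1234
    }
}
```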

Previous code

<xsl:element name="ItemCollection">
<xsl:for-each select="//*[local-name()='Root' and namespace-uri()='http://schemas.microsoft.com/BizTalk/2003/aggschema']/*[local-name()='InputMessagePart_0' and namespace-uri()='']/*[local-name()='TESTRFCMESSAGE' and namespace-uri()='http://Microsoft.LobServices.Sap/2007/03/Rfc/']">

<xsl:element name="Item">

<xsl:element name="Value">
<xsl:attribute name="CssClass">table_item_info</xsl:attribute>
<xsl:attribute name="Text" >
<xsl:value-of select="//CALLINGCARD"/>
</xsl:attribute>
</xsl:element>

<xsl:element name="Value">
<xsl:attribute name="CssClass">table_item_info</xsl:attribute>
<xsl:attribute name="Text" >
<xsl:value-of select="//RESULT"/>
</xsl:attribute>
</xsl:element>

</xsl:element>
</xsl:for-each>