Monday, October 31, 2016

Microsoft CRM + Azure Service Bus, part 4 (two-way relaying and using Azure Event Hubs)

Integrating Microsoft Dynamics CRM with Microsoft Azure Service Bus, using queues, topics, relays and the Event Hub

In the previous posts we've looked at using endpoints in CRM, creating a simple workflow to push the execution context to the Azure Service Bus, using queues, and creating a simple one-way listener. In this post we'll focus on extending the code to enable two-way relaying, which allows us to return data to CRM. We'll also look at how we can integrate CRM with Azure Event Hubs, a service built on ASB that is designed for very high throughput.

Allow return value in the CRM Workflow step

The first thing we'll do is add some more logic to our workflow to enable a return value. The relay service returns a plain string, which means we can easily push JSON or base64 through the bus to get more meaningful feedback from the service. Keep in mind the maximum message size for your service bus, as well as the timeout values. I do not recommend holding a connection open for a long time to transfer large amounts of data end-to-end, but there are loads of scenarios where you'd like to get something back.

[Output("Return value")]
public OutArgument<string> ReturnValue { get; set; }
OK, so what we've done here is add an output argument to our workflow step. This allows us to take the return value and use it in later steps of our workflow.

protected override void Execute(CodeActivityContext executionContext)
{
    var context = executionContext.GetExtension<IWorkflowContext>();
    var serviceEndpointNotifySvc = executionContext
        .GetExtension<IServiceEndpointNotificationService>();

    var result = serviceEndpointNotifySvc.Execute(
        ServiceEndpoint.Get(executionContext), context
    );

    ReturnValue.Set(executionContext, result);
}

The code looks mostly the same as it did in part2 of this series, with a couple of additions. First, we assign the return value from the notification service's Execute method to a variable named "result". Then we set the new output argument "ReturnValue" to that result, using the execution context of the workflow.
Now just build and deploy the updated workflow, and we'll head into MSCRM. Navigate to the Settings -> Processes area, and open up the workflow created earlier. Deactivate the workflow to enable editing, and then add an update step after the custom workflow step.
I'm going to update the Description field on the Account to the ReturnValue from our custom workflow step.


Finally, go into the plugin registration tool and update the relay service we added in part3. Change it to a two-way relay, and optionally update the name and path if you want. Just make sure the service uses the same values.

Writing a two-way relay service

To enable two-way communication we first have to change our service behavior class: change the interface it implements and add a return value.

[ServiceBehavior]
public class RemoteService : ITwoWayServiceEndpointPlugin
{
    public string Execute(RemoteExecutionContext c)
    {
        Console.WriteLine(
            $"Entity: {c.PrimaryEntityName}, id: {c.PrimaryEntityId}");
        return "Message processed in two-way listener";
    }
}

As you can see we're now implementing the ITwoWayServiceEndpointPlugin interface, we've modified the Execute method to return a value, and we've added a return statement after printing to the console. This means that whenever our relay service is triggered it will print the message as before, but it will also return our magic string.
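And since the return type is just a string, nothing stops us from returning JSON instead of a magic string, as mentioned earlier. Here's a minimal sketch, assuming the Newtonsoft.Json package has been added to the listener project:

public string Execute(RemoteExecutionContext c)
{
    // Serialize a small, anonymous result object into the string return value.
    var payload = new
    {
        processedAt = DateTime.UtcNow,
        entity = c.PrimaryEntityName,
        id = c.PrimaryEntityId
    };
    return Newtonsoft.Json.JsonConvert.SerializeObject(payload);
}

The workflow on the CRM side can then store or parse that string in later steps.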

The only other change we have to make is in the typeof specification. Earlier we specified IServiceEndpointPlugin, so we have to change it to ITwoWayServiceEndpointPlugin. I've done a little dirty hack, because I've been switching back and forth between relay types while testing, so mine looks like this:

sh
    .AddServiceEndpoint(
        typeof(RemoteService).GetInterfaces().First(), 
        new WS2007HttpRelayBinding(), 
        AppSettings["SBusEndpoint"]
    ).Behaviors.Add(tcEndpointBehavior);
sh.Open();
Console.WriteLine("TwoWay-listener, press ENTER to close");
Console.ReadLine();

sh.Close();

While this reflection trick may seem like a good idea, it only works until you've built yourself a framework to reuse across different projects/customers, and suddenly your class implements several interfaces and everything stops working. For this demo it's OK, but I'd rather specify the interface type explicitly in production code.

Testing two-way relaying

Now that we've got that out of the way, it's time to test our new implementation. Start your new and improved service, then go into CRM and trigger the workflow. If everything goes as planned you'll be looking at the following window for your relay service:

 And if we go into CRM and check the Description field on our account we'll see the following:

So that's all it takes to have two-way communication working in a relay service. We've got CRM on one side, Azure Service Bus as the communication channel, and a simple console project on the other side.

What's a REST listener?

One option I haven't mentioned so far is the REST listener, which can also be specified in the plugin registration tool. This is simply a two-way relay which uses REST endpoints instead of the .NET binary format. It allows you to create and run a relay service in Node.js, Apache Tomcat, PowerShell, IIS, or whichever web server you want. Just to trigger all the popular buzzwords: this enables you to use MSCRM, Azure Service Bus, and Node.js relay services deployed in Docker containers.

Azure Event Hubs

Azure Event Hubs is a special kind of service bus which is designed to handle huge amounts of messages, we're talking millions of messages per second. It's probably not what you're looking at for general integration purposes, but there are several scenarios where it could benefit your company in a big way.
The first thing I thought of was using this for auditing. Just write a simple workflow or plugin which pushes all creates, updates and deletes as messages to the event hub. Then you can use Stream Analytics or some other technology to populate an auditing system with the actions performed by your users and integration systems. Anyone who's used the out-of-the-box auditing in MSCRM knows that processing the information is tedious at best, and more often than not it's close to impossible to get any useful data out of it. But if you push it into an external auditing system based on Azure services, you can use clever stuff like temporal tables to design a robust, maintainable system.
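As a rough sketch of that auditing idea — assuming the WindowsAzure.ServiceBus NuGet package, and with the hub name and connection string key as hypothetical placeholders — a plugin or listener could forward each operation to the hub like this:

using System;
using System.Configuration;
using System.Text;
using Microsoft.ServiceBus.Messaging;

public static class AuditForwarder
{
    public static void Forward(string entityName, string operation)
    {
        // "audithub" and "EventHubConnectionString" are placeholders for your own setup.
        var client = EventHubClient.CreateFromConnectionString(
            ConfigurationManager.AppSettings["EventHubConnectionString"], "audithub");

        // One small JSON document per audited operation.
        var json = $"{{\"entity\":\"{entityName}\",\"operation\":\"{operation}\",\"when\":\"{DateTime.UtcNow:o}\"}}";
        client.Send(new EventData(Encoding.UTF8.GetBytes(json)));
    }
}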

The second thing I thought of was predictive analysis. Push raw data to the event hub, transfer it onwards to an Azure data warehouse for real-time data crunching, and you have a great additional source of data for predictive analysis or (buzzword warning) machine learning.

There are probably a lot of cool things you can do with this, but I won't elaborate on all my ideas in this blog post. What I do want to stress is the price tag. It is incredibly cheap compared to legacy systems based on clusters of servers running different message queues with some expensive bus technology on top. And the performance is great no matter which size you pick: it doesn't matter if you're running hundreds, thousands, or hundreds of millions of messages per day. There's no entry-level cost, and the price scales with usage.


That's all for this blog series (at least for now). I might come back to visit later on when I've done some more in-the-field deployments with the ASB.

Sunday, October 30, 2016

Microsoft CRM + Azure Service Bus, part 3 (creating a relay service and endpoint)

Integrating Microsoft Dynamics CRM with Microsoft Azure Service Bus, using queues, topics, relays and the Event Hub

In this third part of my blog series on using the Azure Service Bus capabilities, I'm going to demonstrate how to set up a relay service and add the relay namespace as an endpoint in MSCRM.
Relaying allows us to have active listeners which can either just accept messages, or accept and reply to them. That makes it suitable for business critical scenarios like ticket reservations and receipts, where we have to ensure we're working on updated and valid data.
Click here for part1
Click here for part2
Click here for part4

What is a relay service?

A relay service uses the Service Bus as a kind of tunnel, with the relay acting as an active listener at the other end. Unlike queues and topics, where the message is "dropped off", using a one-way or two-way (or REST) relay requires that somebody picks up the message immediately. With one-way the sender is happy as long as somebody accepts the message, while in a two-way scenario the receiver has to return a value. In this post we'll start out with a one-way relay, and in the next one we'll look at how we can extend it into a two-way relay.

Creating a one-way relay

First off we write a service behavior class. This represents the code we run whenever a message is received. I'm just going to write something out to the console.

[ServiceBehavior]
public class RemoteService : IServiceEndpointPlugin
{
    public void Execute(RemoteExecutionContext c)
    {
        Console.WriteLine(
            $"Entity: {c.PrimaryEntityName}, id: {c.PrimaryEntityId}");
    }
}

So nothing magical happening here. It's an implementation of the IServiceEndpointPlugin interface, which writes the entity name and id to the console in its Execute method.
Next up we go to the main method and define a ServiceHost variable and a new endpoint behavior.

var sh = new ServiceHost(typeof(RemoteService));
var tcEndpointBehavior = new TransportClientEndpointBehavior(
    TokenProvider.CreateSharedAccessSignatureTokenProvider(
        AppSettings["SharedAccessKeyName"],
        AppSettings["SharedAccessKey"]
    )
);

OK, so here we've got ourselves a new ServiceHost variable, which we'll use to add a new service endpoint with our newly created RemoteService behavior as the type. Then there's the endpoint behavior: we create a new TransportClientEndpointBehavior with a shared access signature as the token provider.
NB! In the SDK and the MSDN article this is specified as a "Shared Secret Token Provider", but that's ACS, which is no longer supported in Azure. You have to use Shared Access Signature (SAS) authentication or it won't work.
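For reference, here's roughly what the appSettings these snippets read from could look like in the listener's app.config (the names and values are placeholders for your own):

<appSettings>
  <add key="SharedAccessKeyName" value="RootManageSharedAccessKey" />
  <add key="SharedAccessKey" value="your-shared-access-key" />
  <add key="SBusEndpoint" value="https://yournamespace.servicebus.windows.net/yourpath/" />
</appSettings>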

sh
    .AddServiceEndpoint(
        typeof(IServiceEndpointPlugin), 
        new WS2007HttpRelayBinding(), 
        AppSettings["SBusEndpoint"]
    ).Behaviors.Add(tcEndpointBehavior);

Here we add a service endpoint to the host. The type is IServiceEndpointPlugin, which is what the CRM async service sends to the Service Bus. We use WS2007HttpRelayBinding to match the source system, and we collect the endpoint address from the app.config (formatted like this: https://yournamespace.servicebus.windows.net/yourpath/ ).
What's important to note here is that the path specified should not be the same as an existing path. If you have a queue or topic on the same path, it will be overridden and you cannot recreate it in the Azure Portal afterwards. It's up to you which path you want to use for the relay, which also means you can modify it dynamically through the config file in advanced integration scenarios.
Finally we add the endpoint behavior we created earlier.

sh.Open();
Console.WriteLine("OneWay-listener, press ENTER to close");
Console.ReadLine();
sh.Close();

Finally, we open up the service host connection, which makes it start listening for new messages. This means we're ready to set up the service endpoint in the plugin registration tool and start sending messages.

Switching to relaying in MSCRM

First off we need to create a new shared access policy in Azure. It's the same procedure as in part1, except that this time we create it on the bus itself instead of on a queue.
Then we head over to the plugin registration tool, connect to our organization and add a new service endpoint. This time it'll look kind of like this when it's filled in:
What's important to note here is the namespace address and the path. The namespace address should be the complete path, and the path box should contain just a forward slash (the tool does not accept blank values). If you specify the path in the path box then the tool won't be able to find and connect to your relay service (that took me a while to figure out). In addition, this is an https endpoint, not an sb endpoint.
Next up we go into CRM to edit our workflow (for more info see part2 of this blog series). The only thing we need to do here is deactivate and update the service endpoint. Then save and activate it again. I've kept the workflow running synchronously, just to be able to verify that it works as expected.
Now, just to demonstrate how it looks if you've specified the wrong endpoint, or if the relay isn't running, here's the error message you get. It is the same error you'll get if you specify the path in the path box instead of in the namespace box.

Now, for the fun part. Start your relay service and wait for the host to be ready. Then go into CRM and trigger the workflow. If you've done everything right you'll see a command window looking something like this:

And that means we've successfully posted a message from our MSCRM system, through our Azure Service Bus, and out to the relay service. This example isn't very exciting, but just think of the possibilities you get if you put this service out in Azure (or on a web server, if you're still into that whole old school infrastructure stuff ;) ).

That's it for this blog post. In the next and (for now) final post in this series we'll look at how we can extend this into a two-way relay, as well as integrate with Azure Event Hubs, a service based on the ASB.

Wednesday, October 26, 2016

Microsoft CRM + Azure Service Bus, part 2 (creating a custom workflow and consuming service endpoints)

Integrating Microsoft Dynamics CRM with Microsoft Azure Service Bus, using queues, topics, relays and the Event Hub

In part two of this blog series we're going to look at how to create a custom workflow to post messages to the Azure Service Bus queue created in part1. I'm assuming you have basic knowledge of the C# language and that you're familiar with the custom workflow step and plugin concepts in MSCRM.

Creating a workflow

First off, I have to thank Jason Lattimer for all his contributions to CRM developers and customizers everywhere. He has made available a free version of the Dynamics CRM Developer Toolkit, which makes building and deploying code a breeze. Be sure to thank him if you ever run into him.

[Input("ServiceEndpoint")]
[ReferenceTarget("serviceendpoint")]
[RequiredArgument]
public InArgument<EntityReference> ServiceEndpoint { get; set; }

First off I've specified a simple input parameter for the workflow step. It takes an EntityReference of type (logical name) serviceendpoint, which is the type registered through the plugin registration tool.

protected override void Execute(CodeActivityContext executionContext)
{
    var context = executionContext.GetExtension<IWorkflowContext>();
    var serviceEndpointNotifySvc = executionContext
        .GetExtension<IServiceEndpointNotificationService>();

    serviceEndpointNotifySvc.Execute(
        ServiceEndpoint.Get(executionContext), context
    );
}

Next is the execution content of the workflow. As you can see there's no real magic here. I'm getting the workflow context and the service endpoint notification service from the CodeActivityContext. Then I call the Execute method of the notification service, which takes a service endpoint as an entity reference, and an execution context (here in the form of an IWorkflowContext) as input.
What the Execute method does is post the execution context to the provided service endpoint, which means the message received in the service bus queue contains a copy of the information in the execution context. This means you can add shared variables and images to supply additional information to whichever system ends up reading the message.
And that's it, you've got a working custom workflow step which can be built and uploaded to CRM.


Using workflows to send messages to ASB

The next step is to start using our new workflow step inside MSCRM. Just go into Settings -> Processes and hit New to create a new process. I like to start out with a synchronous workflow just to make sure everything works, and then switch to background once I know it's OK. Even though you can run this step synchronously, I wouldn't do it in production: the network latency alone is enough to make it a bad experience for users, so I'd put the effort into running it as a background workflow.

I've set the workflow to run on-demand, and then I add our custom workflow action as a step. On the properties page, search for the registered service endpoint and add it as the input to the workflow step.

Now that we have configured a workflow, go ahead and save and activate it, and we're ready to start populating the ASB with messages.
I went ahead and triggered the workflow 5 times, and as you can see from my Azure portal there are messages ready to be processed.

Processing messages

Now that we have messages ready for processing, we'll write a tiny application that lets us read them. I've created a simple console project in Visual Studio and added the CRM SDK through NuGet (search for Microsoft.CrmSdk), along with the Service Bus SDK (the WindowsAzure.ServiceBus package), which provides the MessagingFactory used below.

// Connect to the service bus namespace...
var queue = MessagingFactory.CreateFromConnectionString(
    ConfigurationManager.AppSettings["ServiceBusPath"]);
// ...and create a receiver for the queue, locking messages instead of deleting them.
var client = queue.CreateMessageReceiver(
    ConfigurationManager.AppSettings["EntityPath"], 
    ReceiveMode.PeekLock);
var message = client.Receive();
var context = message.GetBody<RemoteExecutionContext>();

The first thing I'm doing is creating a messaging factory using the Service Bus SDK; I've stored the connection string and queue path in the app settings. These strings are sensitive, so don't share them with anyone.
Next I'm instantiating a new message receiver, using the factory and the entity path, with the receive mode set to PeekLock. This lets me retrieve a message from the queue without deleting it, and then I can choose to delete the message (.Complete()) or return it to the queue (.Abandon()).
Then I use the Receive method to get the next available (unlocked) message from the queue, and finally retrieve the message body. The body is of type RemoteExecutionContext, a Microsoft CRM SDK type which the notification service creates from the workflow's execution context.
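Here's a minimal sketch of that Complete/Abandon choice, assuming the same "client" variable as above and that processing might throw:

var message = client.Receive();
try
{
    var context = message.GetBody<RemoteExecutionContext>();
    Console.WriteLine($"Got {context.PrimaryEntityName} ({context.PrimaryEntityId})");
    message.Complete();  // done: remove the message from the queue
}
catch (Exception)
{
    message.Abandon();   // failed: release the lock so the message can be retried
}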

If we put a breakpoint in our code we see that we have the familiar attributes available, like InputParameters, SharedVariables, ParentContext, PrimaryEntityId, etc.
By utilizing the shared variables we can add additional information in workflows or plugins, which allows us to build complex logic in the queue listeners, as sketched below.
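For example — with a hypothetical key name — the listener could pick out a shared variable set by a plugin or workflow upstream:

// "OrderNumber" is a hypothetical shared variable added on the CRM side.
if (context.SharedVariables.Contains("OrderNumber"))
{
    var orderNumber = (string)context.SharedVariables["OrderNumber"];
    Console.WriteLine($"Processing order {orderNumber}");
}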

That's it for this post. In the next one we'll look into relaying with Azure Service Bus, which allows us to send replies back to MSCRM.

Sunday, October 23, 2016

Microsoft CRM + Azure Service Bus, part 1 (creating queues and adding service endpoints)

Integrating Microsoft Dynamics CRM with Microsoft Azure Service Bus, using queues, topics, relays and the Event Hub

Today's post is kind of a wrap-up of the previous Oslo Dynamics 365 meetup, held on October 17th. We'll look into the native support for Azure Service Bus (ASB) in MSCRM and how we can use service endpoints inside our plugins and workflows. This post focuses on creating a bus, a queue and access keys in Azure, and on registering the endpoint in MSCRM using the plugin registration tool.
Click here for part2

What is Azure Service Bus (ASB)

ASB is a service bus used for sending messages between different endpoints, which allows you to build and manage interconnected applications and systems without having to worry about the communication channel (within reasonable limits).
ASB supports several different destination types and communication protocols, and in this blog series I'll focus on the ones supported by Dynamics CRM.

Creating a queue and access key

The first step to adding an ASB endpoint in MSCRM is to create it and generate access policies. We'll start by logging into the Azure Portal and adding a new resource. Simply search for Service Bus and you'll find the following available

Fill in the required information to create the new resource, and hit the create button to start provisioning your brand new queue.
TIP: If you're using CRM Online, optimize performance by creating the bus in the same location as your tenant. This minimizes latency, which is especially helpful in scenarios where you use Event Hubs for advanced auditing or similar high-output situations.

Now that we have a brand new service bus, it's time to add a queue to it. Navigate to Queues in the left hand navigation box and click on the [+ Queue] button. Give it a name and hit "Create" to get started.
Please note that the size option is the storage size of the queue, not the message size. In my tests the messages were typically between 13 and 60 kB, so a 1 GB queue would hold between 16k and 77k messages. Even if that seems like a lot (after all, messages are deleted after processing), remember to plan for system downtime and SLA response times. If you generate a total of 20k messages per day, you could be looking at data loss before anyone gets a chance to look at the problem. I highly recommend reading up on queues and how to build a robust system using ASB beyond this blog post. I'm just presenting a simple way to get it working, not a complete integration strategy.
Now, open up your queue and navigate to Shared Access Policies. By default there won't be any policies on a new queue (there is one for the parent bus, which I'll come back to in the post about relaying), so click Add to create a new shared access policy. You'll be asked to specify the claims for this policy: "send", "listen" and "manage". Manage automatically includes the other two, but you can add a "send" policy without "listen", and the other way around. The claims are pretty self-explanatory: listen allows an application to read and complete messages, send allows an application to send messages to the queue, and manage allows an application to make changes to the queue itself. I recommend a least-access policy, i.e. create separate keys for the systems that will listen and send messages, and don't reuse keys across multiple systems. For demo purposes, a full access key or a send & listen key is good enough.
Now you have a service bus, a queue, and an access key. You're ready to integrate MSCRM with Azure Service Bus.

Adding a service endpoint to MSCRM

To add a new service endpoint to MSCRM we have to use the plugin registration tool, found in the MSCRM SDK under tools. Run PluginRegistration.exe and connect to your MSCRM organization. Once connected you'll get a new tab for your organization with a set of actions you can perform. Click the register button, and then the "register new service endpoint" option. You'll be presented with two options: entering a connection string or starting with a blank connection window. I recommend pasting in the connection string from the Azure portal, which gives you a completely filled out connection settings window.
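If you're unsure what that connection string looks like, it follows this general format (with placeholder values):

Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=your-shared-access-key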

Message format

You have three different formats to choose from: .NETBinary, JSON and XML. This is simply the data representation of the message content. If you're planning to integrate with websites or other non-.NET technologies, or if you don't want a dependency on the CRM SDK in your processor applications, you can simply choose one of the other message formats. Just keep in mind that XML can be quite bloated in size, so if you expect to send messages near the size limit I would go for JSON (or even better, .NETBinary).

Take note that you can also choose to send the UserId as a parameter. This allows for additional authorization checks in your processing steps, and can be very useful in determining who did what.
Now hit save, and you're done! A new service endpoint is registered and you're ready to go.

In the next post in this series I'll demonstrate how to write a custom workflow step to use the service endpoints.


I also recommend reading up on the technologies. I'm just giving you a simple demo of how to actually do this, but there's a lot more to know and understand in order to plan and implement this successfully in your environment.

MSDN article on integrating Azure with CRM (NB: the samples for relay listeners are outdated as of 2016-10-23)

Wednesday, February 24, 2016

Using and mocking OrganizationServiceProxy (part 1)

How to use the OrganizationServiceProxy with Dynamics CRM, and mocking it

This is a two-part blog series about how to use the OrganizationServiceProxy class with MSCRM. I'll demonstrate how to connect using explicit, hard coded credentials as well as using the service credentials. I'll finish up by giving some tips on mocking the OrganizationServiceProxy to simplify unit testing in your projects.
In part 1 we'll look at utilizing the OrganizationServiceProxy and creating some code which allows us to easily and flexibly integrate with MSCRM.

Prerequisites

To follow the steps described in this post you'll need to have a version of Microsoft Visual Studio as well as the Windows Identity Foundation framework added to your operating system.
Visual Studio is available in a free (community) version found here
Windows Identity Foundation can be activated with the methods described here

Using OrganizationServiceProxy

Set up the project

I'm going ahead and creating a new Web Application project in Visual Studio. I'm not worrying about hosting and which functions I'll need, so I'll just set up a simple MVC project with defaults. I'm also going ahead and creating a unit test project at the same time, which will be used to demonstrate mocking a bit later on.


When the project has been created, open up the NuGet package manager and search for Microsoft.CrmSdk.CoreAssemblies. Add this package to both the MVC project and the Test project. You can also add it using the package manager console with the following command:
Install-Package Microsoft.CrmSdk.CoreAssemblies

Next, add a new, empty controller to your MVC project named CrmController. In the Index method we're gonna start by defining a new OrganizationServiceProxy with values as described:

public class CrmController : Controller
{
    // GET: Crm
    public ActionResult Index()
    {
        var crmUrl = new Uri(@"https://crmviking.crm4.dynamics.com");
        var authCredentials = new AuthenticationCredentials();
        authCredentials.ClientCredentials.UserName.UserName = "username@domain.com";
        authCredentials.ClientCredentials.UserName.Password = "password";

        var service = new OrganizationServiceProxy(uri: crmUrl, homeRealmUri: null, clientCredentials: authCredentials.ClientCredentials, deviceCredentials: null);

        return View();
    }
}

With just a few lines of code you've got a working service context which can be used to send to and retrieve from Dynamics CRM. Let me explain the different inputs used to create a new OrganizationServiceProxy:
uri: The URL to your Dynamics CRM instance.
homeRealmUri: The WS-Trust URL to your secondary ADFS server, for example if you're federating across domains. I'm not using it here, but it could be applicable in your case.
clientCredentials: The user credentials used to authenticate with CRM.
deviceCredentials: Used when you generate device credentials for your service.

Refactoring service generation

Now, the next logical step (to me) is moving this out into its own class, so we can reuse it for our other methods. What we're doing is generating a new service context based on predefined values, so we'll refactor it into its own CrmServiceFactory class. At the same time we'll extract the credential values and put them into our web.config file (how to store and use your credentials is a discussion better left for another post, but out of two evils, I'd rather specify them in the web.config than hard code them in a class).
Add the following lines to your web.config, inside the <configuration> element.
<appSettings>
  <add key="CrmUserName" value="name@domain.com" />
  <add key="CrmPassword" value="password" />
</appSettings>
<connectionStrings>
  <add name="CrmWebServer" connectionString="https://crmviking.crm4.dynamics.com" />
</connectionStrings>


Refactoring our code into a factory for generating a new OrganizationServiceProxy gives us the following factory code:

public static OrganizationServiceProxy GetCrmService()
{
    var crmUrl = new Uri(ConfigurationManager.ConnectionStrings["CrmWebServer"].ConnectionString);
    var authCredentials = new AuthenticationCredentials();
    authCredentials.ClientCredentials.UserName.UserName = ConfigurationManager.AppSettings["CrmUserName"];
    authCredentials.ClientCredentials.UserName.Password = ConfigurationManager.AppSettings["CrmPassword"];

    var service = new OrganizationServiceProxy(uri: crmUrl, homeRealmUri: null, clientCredentials: authCredentials.ClientCredentials, deviceCredentials: null);

    return service;
}

Now we can change the implementation in our controller to simply:
var service = CrmServiceFactory.GetCrmService();


Using service credentials

If we want to use service credentials we start by specifying which credentials will be used to run our application. For an MVC application we do that by specifying the user account settings in the IIS Application Pool. More information about setting the service credentials in IIS is described here (technet).
Next we need to change our code implementation as follows:

public static OrganizationServiceProxy GetCrmService()
{
    var crmUrl = new Uri(ConfigurationManager.ConnectionStrings["CrmWebServer"].ConnectionString);
    var authCredentials = new AuthenticationCredentials();
    authCredentials.ClientCredentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials;

    var service = new OrganizationServiceProxy(uri: crmUrl, homeRealmUri: null, clientCredentials: authCredentials.ClientCredentials, deviceCredentials: null);

    return service;
}

As you can see, we've replaced the explicit declaration of the username and password with the credentials our application is running under.
This way we're not relying on hard coded values, and we don't risk giving away our credentials if somebody snatches up our source code.

Using the organizationserviceproxy

First off, TechNet has a lot of information and examples on how to use the CRM components, and I highly recommend spending a fair amount of time reading up on them. There's a lot more to coding against CRM than using classes in .NET. Here's the URL to the OrganizationServiceProxy

Implementing a create method

OK. We'll just create a super simple class which will create an account. We'll name it AccountRepository.

public void Create()
{
    var service = CrmServiceFactory.GetCrmService();
    var account = new Entity(entityName: "account");
    account.Attributes["name"] = "Contoso";
    service.Create(account);
}

That was simple, good to go, right? Not quite. If I left it at that, the marvelous Hosk would throw harsh words my way. We have some basic principles we should adhere to, mainly dependency injection. We'll modify our code to take in the service through the constructor, and we'll take the name used to create the account as input to the Create method.

private readonly OrganizationServiceProxy service;
public AccountRepository(OrganizationServiceProxy service)
{
    this.service = service;
}
public Guid Create(string name)
{
    var account = new Entity(entityName: "account");
    account.Attributes["name"] = name;
    var accountId = service.Create(account);

    return accountId;
}


OK, that's a bit better: we can reuse the class in different projects without rewriting any logic, and we can create accounts with different names as well. In addition, we're returning the unique identifier (Guid) of the newly created account, which is useful in a number of different scenarios.

Implementing a retrieve method

Implementing a retrieve method is really simple. We'll just add a method to our existing class as follows:

public Entity Retrieve(Guid id)
{
    var account = service.Retrieve("account", id, new ColumnSet(true));

    return account;
}

That's easy and self-explanatory, but unfortunately it requires us to know the id of the account we're retrieving, and I for one do not go around memorizing the Guids of my accounts.
So we'll change the approach and query CRM for an account based on the account name, because that's a value we'll remember. The thing is, when we run a query we get a list of entities back, and querying by account name can potentially give us multiple accounts as a result. So we'll go ahead and create a new method, named RetrieveByName.

public EntityCollection RetrieveByName(string name)
{
    var query = new QueryExpression("account");
    query.ColumnSet = new ColumnSet(true);
    query.Criteria.AddCondition(new ConditionExpression("name", ConditionOperator.Equal, name));

    var accounts = service.RetrieveMultiple(query);
    return accounts;
}

Now we're retrieving a collection of entities. If we wanted, we could also return a generic list of entities, but I'd rather do that elsewhere in my code than implement logic here that makes the method more rigid and less reusable.
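Consuming the result elsewhere could then look something like this:

var accounts = accountRepository.RetrieveByName("Contoso");
foreach (var account in accounts.Entities)
{
    Console.WriteLine($"{account.Id}: {account.GetAttributeValue<string>("name")}");
}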

Implementing an update method

As you might expect, updating entities isn't much harder. I'll jump straight into implementing an Update method which updates an entity. It takes an Entity as input, which means we'll be doing the main manipulation in other classes. This might seem redundant in our example, because we're not doing anything we couldn't do by just calling the OrganizationServiceProxy's Update method directly. For most deployments that's probably all you need, but in some scenarios you might want to perform additional, mandatory manipulation every time an update happens. You might want to log whenever it's called, or you might have a date field which is supposed to be updated whenever an update occurs. Additionally, I like to handle all my organization service calls inside my repositories.

public void Update(Entity entity)
{
    service.Update(entity);
}

Easy peasy.
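If you do need that kind of mandatory manipulation, a variant could look like this (the new_lastsynced field name is hypothetical):

public void Update(Entity entity)
{
    // Hypothetical rule: stamp a custom date field on every update.
    entity["new_lastsynced"] = DateTime.UtcNow;
    service.Update(entity);
}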

Implementing a status update method

Updating the status of a record is a bit special in Dynamics CRM. Instead of simply updating the state and status using the Update method, you have to send a SetStateRequest.
Here's the code we'll implement.

public void UpdateStatus(Guid id, int state, int status)
{
    var stateRequest = new SetStateRequest();
    stateRequest.EntityMoniker = new EntityReference("account", id);
    stateRequest.State = new OptionSetValue(state);
    stateRequest.Status = new OptionSetValue(status);

    service.Execute(stateRequest);
}

There's no magic in this code either, but as you might notice it's quite generic. We're already taking in the entity id, the state value and the status value. The only parameter missing is the entity logical name; add that and we can reuse it across all entities, and that's exactly what we'll do. That means we'd be passing in four parameters, though, so to stay in Uncle Bob's good graces we'll create a model to pass in instead.

First off, here's our model

public class CrmStatusModel
{
    public Guid Id { get; set; }
    public string EntityName { get; set; }
    public int StateValue { get; set; }
    public int StatusValue { get; set; }
}


Next is our new, generic status update class. I went ahead and named it CrmStatusHandler. Like our repository, it takes an OrganizationServiceProxy in the constructor.

private readonly OrganizationServiceProxy service;
public CrmStatusHandler(OrganizationServiceProxy service)
{
    this.service = service;
}
public void UpdateStatus(CrmStatusModel model)
{
    var stateRequest = new SetStateRequest();
    stateRequest.EntityMoniker = new EntityReference(model.EntityName, model.Id);
    stateRequest.State = new OptionSetValue(model.StateValue);
    stateRequest.Status = new OptionSetValue(model.StatusValue);

    service.Execute(stateRequest);
}


Now we can use this handler to update the status for all our entities, and we've also got a model instead of four separate parameters.

Create additional entity repositories

Now we've seen how to implement a repository for the account entity. I'm gonna go ahead and create another repository for the contact entity. I'll implement the same methods as we did in the account repository, with the same input parameters, except for the query by name.

private readonly OrganizationServiceProxy service;

public ContactRepository(OrganizationServiceProxy service)
{
    this.service = service;
}
public Guid Create(string name)
{
    // Contacts don't have a "name" attribute, so we use "lastname" instead.
    var contact = new Entity("contact");
    contact.Attributes["lastname"] = name;
    var contactId = service.Create(contact);

    return contactId;
}

public Entity Retrieve(Guid id)
{
    var contact = service.Retrieve("contact", id, new ColumnSet(true));
    return contact;
}

public void Update(Entity entity)
{
    service.Update(entity);
}


As you can see, it's pretty much the same as the account repository, only for the contact entity. In addition, I'll create two methods for querying by values: one for querying by first name, and one for querying by last name.

public EntityCollection RetrieveByFirstName(string name)
{
    var query = new QueryExpression("contact");
    query.ColumnSet = new ColumnSet(true);
    query.Criteria.AddCondition(new ConditionExpression("firstname", ConditionOperator.Equal, name));

    var contacts = service.RetrieveMultiple(query);
    return contacts;
}

public EntityCollection RetrieveByLastName(string name)
{
    var query = new QueryExpression("contact");
    query.ColumnSet = new ColumnSet(true);
    query.Criteria.AddCondition(new ConditionExpression("lastname", ConditionOperator.Equal, name));

    var contacts = service.RetrieveMultiple(query);
    return contacts;
}

Create an adapter

Lastly, we'll create an adapter that utilizes our repositories. I'm going to simulate a situation where we always create a contact whenever an account is created, and we'll create a method that deactivates a company when a contact is deactivated. These aren't necessarily methods you'd want in a real environment, but they're a good example of where you'd want an adapter pattern to combine the usage of several repositories.

public void CreateCustomers(string accountName, string contactName)
{
    var service = CrmServiceFactory.GetCrmService();
    var accountRepository = new AccountRepository(service);
    var contactRepository = new ContactRepository(service);

    var accountId = accountRepository.Create(accountName);
    var contactId = contactRepository.Create(contactName);

    // Link the new contact to its parent account.
    var contact = contactRepository.Retrieve(contactId);
    contact.Attributes["parentcustomerid"] = new EntityReference("account", accountId);
    contactRepository.Update(contact);
}

public void DeactivateCustomers(Guid contactId)
{
    var service = CrmServiceFactory.GetCrmService();
    var accountRepository = new AccountRepository(service);
    var contactRepository = new ContactRepository(service);
    var statusHandler = new CrmStatusHandler(service);

    var contact = contactRepository.Retrieve(contactId);
    var accountReference = (EntityReference)contact.Attributes["parentcustomerid"];

    var contactStatus = new CrmStatusModel()
    {
        EntityName = "contact",
        Id = contactId,
        StateValue = 1,
        StatusValue = 2
    };
    var accountStatus = new CrmStatusModel()
    {
        EntityName = "account",
        Id = accountReference.Id,
        StateValue = 1,
        StatusValue = 2
    };

    statusHandler.UpdateStatus(contactStatus);
    statusHandler.UpdateStatus(accountStatus);
}

The first thing you might notice is that these two methods contain some redundant code, which is a good indication that there's potential for refactoring and improvement. There are some immediate changes we can make, mainly around the number of instantiated classes and, yet again, the dependency injection principle being broken. The first thing we'll do to reduce the redundancy and get better DI is add the OrganizationServiceProxy as an input to the public constructor of our adapter. In the constructor we'll set up our repositories as private readonly fields, so they're available to both of the methods inside the adapter. Another thing to note is that the second method also uses the status update handler we created earlier. Creating a new class instance is cheap, especially when we've already got the OrganizationServiceProxy for our adapter, so I'm going to instantiate the status handler in the constructor as well, even though we might not use it in every instance of the adapter class.

private readonly AccountRepository accountRepository;
private readonly ContactRepository contactRepository;
private readonly CrmStatusHandler statusHandler;

public CustomerAdapter(OrganizationServiceProxy service)
{
    accountRepository = new AccountRepository(service);
    contactRepository = new ContactRepository(service);
    statusHandler = new CrmStatusHandler(service);
}


As you can see this hasn't noticeably reduced the number of lines, but we've got control of the instances at the top of our class declaration, and we can easily change or manipulate them in the future without touching each method. We'll do some more with our code in the next part, which covers unit testing our new classes using Moq, so if you've got objections to the changes we just made, check that out first.
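For completeness, here's roughly how the CreateCustomers method from above could look once it uses the injected fields instead of creating everything itself:

public void CreateCustomers(string accountName, string contactName)
{
    var accountId = accountRepository.Create(accountName);
    var contactId = contactRepository.Create(contactName);

    // Link the new contact to its parent account.
    var contact = contactRepository.Retrieve(contactId);
    contact.Attributes["parentcustomerid"] = new EntityReference("account", accountId);
    contactRepository.Update(contact);
}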

Disposing your objects

Remember that the OrganizationServiceProxy creates new network connections, and you should always call its Dispose method when you're done (or instantiate it in a using statement). The network connections aren't managed by the CLR, so even in your MVC/Web API project, where controllers are instantiated and thrown away in milliseconds, the connections stay open until the idle timeout occurs.
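A minimal sketch of the using approach, reusing the factory from earlier:

using (var service = CrmServiceFactory.GetCrmService())
{
    var accountRepository = new AccountRepository(service);
    accountRepository.Create("Contoso");
} // the proxy is disposed here, closing the underlying connection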

Wrap-up

In this part we've looked at how we can utilize the OrganizationServiceProxy to integrate with Microsoft Dynamics CRM. We've created some repositories, a generic status handler and mixed all of our classes into a nice, extensible adapter class. In part 2 we'll look at unit testing these classes, and mocking the OrganizationServiceProxy using Moq. To do that we need to take a look at interfaces, and you'll understand the decisions made in this part even better.


Until then, happy CRM-ing!