Dynamically selecting one or more approvers as participants in a “Start a task process” action in Nintex workflows for Office 365

April 20, 2017

How can I dynamically select the participants/approvers in a parallel task approval process in Nintex Workflows for Office 365?

I recently created a solution where our customer wanted a task approval process to be sent to multiple users based on the user’s selection of an area of responsibility. For example, if the “Finance” department is selected, the approval task may be sent to Jon Smith and Jane Doe. However, if the “HR” department is selected, the approvals may go out to Fred Young and Joe Mandoza. The departments and their approvers exist in a SharePoint list. In this post I’ll describe how I solved this problem. I hope you find it useful and that it saves you much heartache.

Creating the Department Approvers list.

First, let’s create a SharePoint list that contains a column for the department (single line of text) and one for the approver (person).

I filled in some data so that if I select the Finance department, two approvers (Clements, Chris and Chris Clements) will be selected and added as participants in the task process.

Creating the primary list.

Next, let’s create a list to which we will attach our Nintex workflow. This list will have one lookup column that will be linked to our Department Approvers list. Let’s call our primary list Quarterly Reports.

Here is what the new item form looks like. Note the lookup column displaying the available departments from Department Approvers.

Creating the “Start Approval Task Process” workflow.

Start by creating a new list workflow on the Quarterly Reports list.  Call the workflow Get Approvals.

Step 1: Query the list to get the approvers based on the selected department.

Use the query list action with the following configuration:

We are going to select the approver column from the Department Approvers list where the department column matches the selected department on the Quarterly Reports list. Configure the action as shown below.

Note that we are selecting the “User Name” property of the approver column. In Office 365 this will be the person’s E-mail address. Also notice that I created a workflow variable of the collection type, colDepartmentApprovers, to store the results.

Step 2: Transform the colDepartmentApprovers variable into a dictionary

If you were to run the workflow now and observe the contents of the colDepartmentApprovers variable you would see that its format is that of a JSON object called “Approver” which contains an array of “UserName” key value pairs.

[{"Approver":[{"UserName":"Chris.Clements@mydomain.com"},{"UserName":"cleme_c@mydomain.com"}]}]

But look closely at the leading and trailing brackets [].  This means that this is an array of JSON approver objects each with its own array of key value pairs.

We need to crack open this string to get at the E-mail addresses inside, but it is going to take several steps to get there, so grab a beer and chill.

Let’s add the “Get Item from Collection” action.

Configure the action as shown below:

Notice that I hard-coded the 0 index counter. This means that I will always fetch the first item in the collection. I realize that if a user were to create a second record for the “Finance” department in the Department Approvers list, this workflow would not pull in the approvers listed in that row. To handle that case we would need to loop across the colDepartmentApprovers variable. However, to keep this post concise I will only handle one row of approvers per department.

In the output section of this action I created a workflow variable called “dicDepartmentApprovers”. This variable will hold whatever was in the first position (the 0 index) of the colDepartmentApprovers variable.

Let’s run the workflow again and check out the contents of dicDepartmentApprovers.

{"Approver":[{"UserName":"Chris.Clements@mydomain.com"},{"UserName":"cleme_c@mydomain.com"}]}

We were successful at pulling the first and only “Approver” row from the collection. The leading and trailing brackets “[]” are gone! Now we need to get at the array of key value pairs containing the E-mail addresses.

Step 3: Parse the Approver object into a dictionary

Let’s add the “Get an Item from Dictionary” action.

Configure the action as show below:

In this step I am selecting the “Approver” path from the dictionary called dicDepartmentApprovers; however, I insert the resulting data back into the collection variable. What’s going on here? Let’s run the workflow again and see what the data in the colDepartmentApprovers variable looks like.

[{"UserName":"Chris.Clements@mydomain.com"},{"UserName":"cleme_c@mydomain.com"}]

Hopefully you see that we are again dealing with an array/collection of data. Selecting the “Approver” value from the dictionary yields a collection of “UserName” values. This time we are going to loop across this collection and crack out the individual E-mails.

Step 4: Looping across the colDepartmentApprovers collection

Add the “For Each” action.

Configure the action as shown below:

I specified our collection variable, colDepartmentApprovers, as the input collection across which we are going to loop. For the output variable I created a new workflow variable called “dicUserName”. This variable holds the currently enumerated item in the collection. In other words, it is going to hold a single instance of the “UserName” key value pair each time through the loop. Its value will look something like this on the first trip through the loop: {"UserName":"Chris.Clements@mydomain.com"} and then like this: {"UserName":"cleme_c@mydomain.com"} on its second trip.

Given that the value of dicUserName is a dictionary, we can use the “Get an Item from Dictionary” action and specify the “UserName” path to arrive at the E-mail address. Let’s take a look.

Add the “Get an Item from Dictionary” action. Be sure to place this action inside of the for each looping structure.

Configure the action as shown below:

Again we use the Item name or Path setting to retrieve the value.  In this case we set “UserName” and save the results into a new workflow variable called txtApproverEmail.  Its contents look like this:

Chris.Clements@mydomain.com

VICTORY!  Well, almost. Remember, we want our approval task to go out to both E-mail addresses at once. We are going to need to concatenate our E-mail addresses into a format that is acceptable to the “Start a task process” action.

Let’s add the “Build String” action.

Configure the Action as shown below:

In this action I am stringing together our E-mail address with a new, empty workflow variable called txtCombinedApproverEmail. I take the result of the concatenation and store it back in the txtCombinedApproverEmail variable. It works like this:

The first time through the loop, txtApproverEmail = “chris.clements@mydomain.com” and txtCombinedApproverEmail is empty, so the result is:

chris.clements@mydomain.com;

The second time through the loop, txtApproverEmail = “cleme_c@mydomain.com” and txtCombinedApproverEmail = “chris.clements@mydomain.com;”, so the result is:

cleme_c@mydomain.com;chris.clements@mydomain.com;

PRO TIP: Notice that there is NO space between the E-mail addresses and the semicolon.  You are warned.
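If you find it easier to reason about this in code than in workflow actions, here is a rough C# equivalent of what the query, collection, dictionary, loop, and string-building actions accomplish. This is an illustrative sketch only (it uses Newtonsoft.Json and sample values) and is not part of the workflow itself:

using System;
using System.Text;
using Newtonsoft.Json.Linq;

class ApproverParsingSketch
{
    static void Main()
    {
        // The same shape of JSON that the Query List action returns (sample values).
        var json = "[{\"Approver\":[{\"UserName\":\"Chris.Clements@mydomain.com\"},"
                 + "{\"UserName\":\"cleme_c@mydomain.com\"}]}]";

        var collection = JArray.Parse(json);           // colDepartmentApprovers
        var firstRow = (JObject)collection[0];         // Get Item from Collection (index 0)
        var approvers = (JArray)firstRow["Approver"];  // Get an Item from Dictionary ("Approver")

        var combined = new StringBuilder();            // txtCombinedApproverEmail
        foreach (var item in approvers)                // For Each over the UserName pairs
        {
            // Get an Item from Dictionary ("UserName"), then Build String.
            // Note: no space after the semicolon.
            combined.Append((string)item["UserName"]).Append(";");
        }

        Console.WriteLine(combined.ToString());
        // Output: Chris.Clements@mydomain.com;cleme_c@mydomain.com;
    }
}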

Step 5: Beginning the Approval Task

Finally, we are at the last step: add the “Start a Task Process” action and assign our txtCombinedApproverEmail variable to the Participants field. Remember to create this action outside of the for each loop.

And the configuration:

All that I am really concerned with here is setting the participants field to my workflow variable txtCombinedApproverEmail.  Let’s run the workflow and see if we get our task assigned to two approvers at once based on our selected “Finance” department.

Ah, and there you have it:

This is the entire workflow I created in this post:

Closing Thoughts

I have to be honest.  This feels like it shouldn’t be this hard to do in a tool such as Nintex for Office 365.  Seven steps?  Really?  If I were a “citizen developer” there is no way I could have pulled off a task like this. Knowledge of JSON encoding is essential.

Chris

So, you want to delete users with the Azure AD Graph API? Good luck with that!

May 16, 2016

You might think that deleting users with the Azure AD Graph API would be pretty straightforward, right? You already have a registered application that succeeds in updating and creating new users. The reference documentation doesn’t provide any warnings about hidden dragons or secret pitfalls.

Rest assured, there is at least one gotcha that’s primed to eat your lunch when it comes to deleting users.  Fortunately for you, True Believers, I’m here to show you how you too can quickly overcome this less than obvious configuration issue.

According to the Azure AD Graph Reference, deleting a user is a simple operation. All you have to do is send the HTTP verb “DELETE” to the URL of the user you want to delete.

Example:

https://graph.windows.net/{tenant_id}/users/{user_id}[?api-version]

The user_id can be the UserPrincipalName. In other words, the E-mail address of the user.

As an example, I will delete a pesky AD user named “John Doe”.  This John Doe character has got to go!

[Screenshot: the John Doe user in Azure AD]

I use PostMan to get my API calls properly formatted. It also helps to ferret out problems with permissions or configurations. This helps me to *know* that it works before I write my first line of application code.

Note: I have an OAuth Bearer token specified in the request header. I won’t cover how I got this token in this post. If you want to know more about how I acquire tokens for console applications, send me an E-mail!

[Screenshot: the DELETE request in PostMan]

Assuming you have your tenant ID, user ID, and OAuth token all set correctly then all you need to do is click “Send”.  Your user is deleted as expected… right?

NOPE! You encounter the following JSON error response:

{
  "odata.error": {
    "code": "Authorization_RequestDenied",
    "message": {
      "lang": "en",
      "value": "Insufficient privileges to complete the operation."
    }
  }
}

Your first reaction may be to verify that your application registration is assigned the proper permissions on the AD Graph. However, there is no permission that allows you to delete. You can only get variations of Reading and Writing.

[Screenshot: the available Azure AD Graph permissions for the application]

What do you do? If you Google Bing around a bit you will find that your application needs to be assigned an administrative role in Azure; its ServicePrincipal must be added to that role. So, off you go searching the competing, overlapping portals of Azure trying to figure out how to assign an application a role within a resource. You may even be successful. We weren’t.

I had to use remote PowerShell to add my application to the appropriate role in order to delete users from AD.

REMOTE POWERSHELL TO AZURE AD

I used instructions from this MSDN article to download and install the Azure AD Module. First I downloaded the Microsoft Online Services Sign-In Assistant for IT Professionals RTW. Next, I grabbed the Active Directory Module for Windows PowerShell (64-bit version). Once I had my PowerShell environment up and running, I cobbled together a quick script to add my application registration to the “User Account Administrator” role. Here is how I did it!

THE CODEZ

# Log me into my MSDN tenant using an account I set up as "global admin".
$tenantUser = 'admin@mytenant.onmicrosoft.com'
$tenantPass = ConvertTo-SecureString 'Hawa5835!' -AsPlainText -Force
$tenantCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $tenantUser, $tenantPass

Connect-MsolService -Credential $tenantCreds

# Get the Object ID of the application I want to add as an SPN.
$displayName = "MyAppRegistrationName"
$objectId = (Get-MsolServicePrincipal -SearchString $displayName).ObjectId

# Set the role name and add the application as a member of the role.
$roleName = "User Account Administrator"
Add-MsolRoleMember -RoleName $roleName -RoleMemberType ServicePrincipal -RoleMemberObjectId $objectId

PLAY IT AGAIN SAM

If you execute the PowerShell above (and it’s successful) then you can attempt to invoke the API again.  Click Send!

[Screenshot: PostMan showing the 204 No Content response]

Notice that this time PostMan returns an HTTP status of 204 (No Content). This is the appropriate response for a DELETE. Let’s check our tenant to ensure Jon Snow is dead, or rather, John Doe is deleted.

[Screenshot: the user no longer appears in Azure AD]

He’s gone!  You are good to go.
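If you need to make the same call from application code rather than PostMan, the request looks roughly like this in C#. This is a sketch only: the tenant, user, and token values are placeholders, and acquiring the bearer token is not shown.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class DeleteUserSketch
{
    static async Task Main()
    {
        // Placeholders: substitute your tenant, the user's UPN, and a valid bearer token.
        var tenantId = "mytenant.onmicrosoft.com";
        var userPrincipalName = "John.Doe@mytenant.onmicrosoft.com";
        var accessToken = "<bearer token acquired elsewhere>";

        var requestUri = $"https://graph.windows.net/{tenantId}/users/{userPrincipalName}?api-version=1.6";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var response = await client.DeleteAsync(requestUri);

            // 204 No Content means the user was deleted; Authorization_RequestDenied
            // means the application's service principal still lacks the required role.
            Console.WriteLine((int)response.StatusCode);
        }
    }
}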

CONCLUSION

Azure is a dynamic, new technology.  Documentation is changing almost daily. It can be frustrating to navigate the changing landscape of marketing terms and portals.

All the information you need to sort out this error is out there. However, I found it to be scattered and not exactly applicable to what I was doing. The PowerShell snippets existed in parts: one to log in to a remote tenant, one to add the role. This post simply serves to bring the information together so you can quickly get past this problem and get on to writing more code.

Cheers!


Reading a SharePoint Online (Office 365) List from a Console Application (the easy way)

In a previous post I talked about our strategy of using scheduled console applications to perform tasks that are often performed by SharePoint timer jobs.

As we march “zealously” to the cloud we find ourselves needing to update our batch jobs so that they communicate with our SharePoint Online tenant. We must update our applications because the authentication flows for on-premises SharePoint 2013 and SharePoint Online are completely different.

Fortunately for us, we found the only change needed to adapt our list-accessing code was to swap instances of the NetworkCredential class for the SharePointOnlineCredentials class.

Imagine that this is your list reading code:

using (var client = new WebClient())
{
    client.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f");
    client.Credentials = _credentials;  // NetworkCredential
    client.Headers.Add(HttpRequestHeader.ContentType, "application/json;odata=nometadata");
    client.Headers.Add(HttpRequestHeader.Accept, "application/json;odata=nometadata");

    /* make the REST call */
    var endpointUri = $"{_site}/_api/web/lists/getbytitle('{_listName}')/Items({itemId})";
    var apiResponse = client.DownloadString(endpointUri);

    /* deserialize the result */
    return _deserializer.Deserialize(apiResponse);
}

The chances are your _credentials object is created like this:

_credentials = new NetworkCredential(username, password, domain);

Here, the username and password are those of a service account specifically provisioned for SharePoint list access.

In order to swap the NetworkCredential class for SharePointOnlineCredentials, you first need to download and install the latest version of the SharePoint Online Client Components SDK here (http://bit.ly/1rKS6N8).

Once the SDK is installed, add a reference to the Microsoft.SharePoint.Client and Microsoft.SharePoint.Client.Runtime libraries. Assuming a default installation, these binaries can be found here: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\.

Be certain to reference the 16.0.0.0 version of the DLLs. If you get the 15.0.0.0 version (which is currently the version in NuGet) your code may not work!

Now you can “new up” your _credentials like this:

_credentials = new SharePointOnlineCredentials(username, password);

But “TV Timeout!” (as a colleague likes to say after a couple of brews at the pub): the password argument is a SecureString rather than the garden-variety string. You will need a helper method to transform your plain old string into a SecureString. Here is how we do it:

public static SecureString GetSecureString(string myString)
{
    var secureString = new SecureString();
    foreach (var c in myString)
    {
        secureString.AppendChar(c);
    }
    return secureString;
}
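Putting the pieces together, the credential swap can be wrapped in a small factory method like this. This is a sketch only: the method name is hypothetical, and it reuses the GetSecureString helper above.

using System.Net;
using Microsoft.SharePoint.Client;

// Hypothetical factory method; it reuses the GetSecureString helper above.
public static ICredentials CreateSpoCredentials(string username, string password)
{
    var securePassword = GetSecureString(password);

    // SharePointOnlineCredentials implements System.Net.ICredentials, so the result
    // can be assigned to WebClient.Credentials exactly like NetworkCredential was.
    return new SharePointOnlineCredentials(username, securePassword);
}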

One last thing to note: the SharePointOnlineCredentials class implements the System.Net.ICredentials interface. That’s what allows us to simply swap one class for another.

Therefore,  if you are following the SOLID principles and using dependency injection then the extent of your code changes may look like this:

var securePassword = SecureStringService.GetSecureString(settings.SPOPassword);

container.Register<ICredentials>(() => new SharePointOnlineCredentials(username, securePassword));

Now that is cool!

Cheers and Happy Coding!



Applying the Concepts of the SharePoint App Model to SharePoint 2010

September 24, 2015

Legacy Code Is Still Out There

The SharePoint 2016 Preview was released in August and many companies are already moving toward the cloud and SharePoint Online. However, a good number of enterprises still have SharePoint 2010 (and perhaps older) farms hanging around. It’s likely those on-premises 2010 farms host highly customized SharePoint solutions and perhaps require occasional enhancements. This is the case in our organization.

Our development team was approached and asked to enhance a SharePoint 2010 solution so that our site could display news feeds from an external vendor.  The site must cache feeds so that the page displays correctly even if the remote site is unavailable at the time of page load.  Naturally, we asked our SharePoint 2010 developer to devise a solution to this problem.  A short while later the developer delivered a technical approach that is steeped in SharePoint tradition.

The SharePoint Way of Doing Things can be Expensive, Time Consuming and Disruptive

The solution proposes to provision content types, site columns, and lists in the usual way, via feature activation. Two lists would hold the remote URL (feed) and the fetched content from the remote feed. A timer job would read from the feed configuration list and fetch the data, storing the results in a second SharePoint list. Lastly, a custom (server-side) web part would be created to read and display the contents of the retrieved news feeds list on the page with all the appropriate sorting, formatting, and style our users expect.

On the surface, this seems like a perfectly reasonable solution for the task at hand.   The use of a full-trust deployed solution to create needed plumbing such as content-types and lists was how it should be done in those heady, salad days of SharePoint 2010.  The proposed solution can confidently claim that it adheres to the best practices of SharePoint 2010.

However, there are drawbacks to going with a traditional SharePoint-based solution.  Before the advent of the sand-boxed solution in 2010 it was very easy for a poorly written SharePoint solution to adversely affect the farm on which it was installed.  Custom code has caused many a SharePoint admin sleepless nights. We don’t want to introduce new code to the farm if it’s not completely necessary.

Our team employs both SharePoint developers and .NET developers. Our contract SharePoint developers command a higher hourly rate than our “run of the mill” .NET developers. As our industry is extremely cost sensitive right now, it would be great if we could avoid the use of specialized SharePoint developers for this one-off project.

This last bit could be unique to our organization and may not be applicable to yours. We have a stringent process for SharePoint deployments. Suffice it to say, from the first request to have code promoted to test, a minimum of two weeks must pass before the code is deployed to production. Content updates, such as adding web parts and editing pages, are not subject to this testing period. The ideal solution would avoid a formal SharePoint deployment.

Why the SharePoint App Model is Cool!

The SharePoint app model was introduced with Office and SharePoint 2013. With the app model, Microsoft no longer recommended that developers create solutions that are deployed directly on the SharePoint farm. Rather, developers create “apps” that are centrally deployed from an app catalog and run in isolation from SharePoint processes. SharePoint App Model code runs entirely on the client (browser) or in a separate web application on a remote server. Apps’ access to SharePoint internals is funneled through a restricted and constricted RESTful API.

The app model prevents even the worst-behaving application from affecting the SharePoint farm. This means the farm is more stable. Additionally, applications written using the App Model do not require a deployment to the farm, or at least not the type of deployment that would necessitate taking the farm into maintenance or resetting IIS. Under the App Model, SharePoint remains up even as new applications are made available. Customers are happy because you can quickly pound out their requests and make them available, and admins are happy because your custom code isn’t taking down their farm (allegedly).

Sadly, the app model doesn’t exist for SharePoint 2010, or does it?  While specific aspects of the App Model do not exist in SharePoint 2010 you can still embrace the spirit of the App Model!  The very heart of the SharePoint App Model concept is running custom code in isolation away from SharePoint.  In our case we really only need to interact with SharePoint at the list level. Fortunately, SharePoint 2010 provides a REST API for reading and writing to lists.

Let’s re-imagine our solution and apply App Model-centric concepts in place of traditional SharePoint dogma.

First let’s use PowerShell scripts to create our Site Columns, Content Types, and lists rather than having a solution provision these objects on feature activation.

Next, let’s replace the SharePoint timer job with a simple windows console application that can be scheduled as a Windows scheduled task or kicked off by an agent such as Control-M.  This console app will read a SharePoint list using the REST API, then run out to fetch the content from the Internet writing the results back to a second list using the REST API.

Finally, we can substitute our server-side web part with a Content Editor Web Part that uses JavaScript/jQuery to access our news feed list via, you guessed it, the REST API. The contents can then be styled with HTML and CSS and displayed to the user.
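To make the console-app piece described above a bit more concrete, here is a rough sketch of what such a scheduled job could look like. Everything in it is illustrative: the site URL, list names, and the ParseFeedUrls helper are hypothetical, and the write-back step is only noted in comments.

using System;
using System.Net;

// Sketch of a scheduled console job standing in for a SharePoint timer job.
// The site URL, list names, and ParseFeedUrls helper are all hypothetical.
class FeedFetcherSketch
{
    private const string SiteUrl = "http://sharepoint/sites/news";

    static void Main()
    {
        using (var client = new WebClient())
        {
            // A dedicated service account could be used here instead.
            client.Credentials = CredentialCache.DefaultNetworkCredentials;
            client.Headers[HttpRequestHeader.Accept] = "application/json";

            // 1. Read the feed configuration list via the SharePoint 2010 REST interface.
            var configJson = client.DownloadString(
                $"{SiteUrl}/_vti_bin/ListData.svc/FeedConfiguration");

            // 2. Fetch each configured feed so the page never depends on the external
            //    site being available at render time.
            foreach (var feedUrl in ParseFeedUrls(configJson))
            {
                var feedContent = client.DownloadString(feedUrl);

                // 3. Write the fetched content back to the cache list, again through
                //    ListData.svc (an HTTP POST with a JSON body) - omitted here.
                Console.WriteLine($"Fetched {feedContent.Length} characters from {feedUrl}");
            }
        }
    }

    // Placeholder for real parsing of the ListData.svc JSON response.
    static string[] ParseFeedUrls(string configJson)
    {
        return new string[0];
    }
}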

It’s worth mentioning that the UI aspect of this project could potentially suffer from the lack of a formal App Model, and this is where a true farm deployment may be superior. In a true App Model scenario, apps are deployed to a central App Catalog and can be deployed to sites across site collections. In order to deploy this Content Editor Web Part to multiple site collections we would need to manually upload the HTML, CSS, and JavaScript to the Style Library of each site collection. Imagine having dozens or even hundreds of site collections. An actual solution deployment would have afforded us the ability to place common files in the _layouts folder where they would be available across site collections. Fortunately for us the requirement is only for a single site collection.

By cobbling together a set of non-SharePoint components we have, essentially, created an App Model-like solution for SharePoint 2010; a poor-man’s App Model if you will.

In my opinion, this solution is superior to the SharePoint way of doing things in the following ways:

  • Ease of Maintenance / Confidence – Using PowerShell to create columns, content types, and lists is better because scripts can be tested and debugged easily. Deployments that provision sites are more complicated and time consuming. From the perspective of a SharePoint admin, PowerShell is likely a known entity. Admins can examine for themselves exactly what this code will be doing to their farm and perhaps gain a higher level of confidence in the new software being deployed.
  • Lower Development Cost / Ease of Maintenance – A Windows console app is superior to a timer job because you don’t need to pay an expensive SharePoint developer to create or support a solution on a deprecated platform (SP 2010). Maintaining a console application requires no specific SharePoint experience or knowledge. In our case, we have an entire team that ensures scheduled jobs have run successfully and can alert on failure as needed.
  • Reliability / Availability – There is no custom code running within the SharePoint process.  This means there is NO chance of unintended consequences of misbehaving code created for this solution affecting your Farm.
  • Standards Based – HTML, JavaScript, and CSS are basically universal skills among modern developers and standard technologies.
  • No Deployment Outage – This solution can be implemented without taking down the SharePoint farm for a deployment.  Adding a simple content editor web part does not interrupt business operations.
  • Ease of Portability / Migration – Our solution, using a console app, HTML, and JavaScript, works just as well on SharePoint 2013 and Office 365 as it does on SharePoint 2010, whereas a traditional SharePoint solution cannot be directly ported to the cloud.

Conclusion

There is a lot of legacy SharePoint 2010 out there, especially in large enterprises where the adoption and migration to newer platforms can take years. Occasionally, these older solutions need enhancements and support.  However, you want to spend as little time and money as possible on supporting outdated platforms.

We needed a solution that had the following characteristics:

  • We didn’t want to continue to write new server-side code for SharePoint 2010.
  • We wanted a solution that didn’t require an experienced SharePoint developer to create and maintain.
  • We wanted code that was modular and easily migrated to Office 365.
  • We wanted to avoid a formal SharePoint deployment and its associated outage.

A traditional SharePoint solution was not going to get us there.  Therefore, we took the best parts of the SharePoint App Model (isolation, unobtrusive client side code, and RESTful interfaces to SharePoint) and created a holistic solution that fulfilled the customers’ expectations.

-Chris


Uploading an Existing Local Git Repository to BitBucket.

September 24, 2015

I use BitBucket for all my recreational, educational, and at-home programming projects. I like the fact that you can have free, private repositories. BitBucket supports Git as well as Mercurial.

Typically, I will create a new BitBucket repository and then use the Git Bash shell or Visual Studio to clone the project from BitBucket and simply add files to the new local repository.  However, there are times when I will start a local repository first and later decide that I like the project and want to save it off to BitBucket.

This is the procedure I use to upload an existing local Git repository to BitBucket.

Step 1 – Create a New Git Repository on BitBucket.

[Screenshot: creating a new repository in BitBucket]

Step 2 – Open your Git Bash Shell and Navigate to the Location of your Git Repository

Note: The location of the .git folder is the path we are looking for.

$ cd Source/Repos/MyProject/

[Screenshot: navigating to the repository in the Git Bash shell]

Step 3 – Add the Remote Origin

Note: You will need the remote URL of the repository you created on BitBucket. You can find this URL on the Overview screen for your repository in the upper right corner of the page.

$ git remote add origin https://<username>@bitbucket.org/<username>/<repository>.git

[Screenshot: adding the remote origin]

Step 4 – Push your Repo and All Its References

$ git push -u origin --all

You will be prompted to enter your BitBucket password.

[Screenshot: pushing all branches]

Step 5 – Ensure all Tags get Pushed as Well

$ git push -u origin --tags

Again, you will be prompted to enter your BitBucket password.

[Screenshot: pushing tags]

If all goes well you will see the “Everything up-to-date” message displayed in the Git Bash shell.

The procedure above will move the entire repository. That means if you created local branches, those are moved up as well. It’s pretty cool, really. Once the remote origin is set you can commit changes locally and then use Visual Studio’s built-in Git support, or the Git Bash shell, to sync your changes “to the cloud”.

[Screenshot: the repository synced and visible in the source view]

Happy Coding!

Chris Clements
I am a senior software developer and development team lead in Houston Texas. I am passionate about the “art” of software development. I am particularly interested in software design patterns and the principles of SOLID object-oriented code. I am an evangelist for test driven development. I love to think and write about my day-to-day experiences in the trenches of enterprise IT. I relish the opportunity to share my experiences with others.

From the wire to the presentation, I am a holistic solutions guy. I have broad experience in client-side technologies such as JavaScript, Ajax, AngularJS, Knockout, and Bootstrap. I have extensive experience with MVC, MVVM, and ASP.NET Web Forms. I am strong in SQL databases, performance tuning, and optimization. I also have a background in network engineering, wide-area networking, and inter-networking.

This article has been cross-posted from jcclements.wordpress.com.