Salesforce API – authentication

Recently I spent some time in a free Salesforce developer sandbox.  I got to experimenting with their API, connecting it to my little custom app, and realised there were a few gotchas along the way that required some investigation.  This post is an aide-mémoire for me, but may help others in similar circumstances.

You need to set a few config settings in the UI in order to authenticate using the Salesforce API:

  • ensure ‘all users may self-authorise’
  • ensure IP range is set (or relaxed)

Then, under your name > Personal, select ‘Reset Security Token’ to have your security token emailed to you. Append this token to your password when authenticating with the API:

curl https://login.salesforce.com/services/oauth2/token -d "grant_type=password" -d "client_id=<YOUR_CLIENT_ID>" -d "client_secret=<YOUR_SECRET>" -d "username=<YOUR_USERNAME>" -d "password=<YOUR_PASSWORD><YOUR_TOKEN>"

…which returns:

{"access_token":"<YOUR_ACCESS_TOKEN>", "instance_url":"","id":"","token_type":"Bearer","issued_at":"1498045747558","signature":"Bz8JWOVDvr1hN1e8zd/wVwqwerbj3cDAcPcO7QrUmGo="}

You can then make requests like:

curl <YOUR_INSTANCE_URL>/services/data/ -H "Authorization: Bearer <YOUR_ACCESS_TOKEN>" -H "X-PrettyPrint:1"

Just remember to escape the ! character in your access token.
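To illustrate: at an interactive bash prompt, a ! inside double quotes triggers history expansion and mangles the token, so either backslash-escape it or single-quote the whole thing. A quick sketch with a made-up token value:

```shell
# Made-up token value for illustration only; single quotes keep the ! literal
token='00Dxx0000001gPz!AQ4AQFakeToken'

# Building the header from the variable is safe: the ! is already stored
# in $token rather than typed literally between double quotes
printf 'Authorization: Bearer %s\n' "$token"
```

Note this only bites at an interactive prompt; history expansion is off in non-interactive shells, so scripts are unaffected.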

Puphpet – vagrant provision failure on Ruby gem deep_merge

If you’re running into this error provisioning your puphpet vagrant box:

==> machine2: ERROR: Could not find a valid gem 'deep_merge' (= 1.0.1), here is why:
==> machine2: Unable to download data from - SSL_connect returned=1 errno=0 state=error: certificate verify failed (
==> machine2: ERROR: Could not find a valid gem 'activesupport' (= 4.2.6), here is why:
==> machine2: Unable to download data from - SSL_connect returned=1 errno=0 state=error: certificate verify failed (
// etc

Fix it quickly and easily with this helpful paste. Thank you junkystu!

Openshift Origin – secrets and namespaces

OK, I’ve figured out how the secrets and namespaces thing works in Openshift Origin.

Secrets are generated through the CLI client ‘oc’. In order to generate a secret, you must log in on the client (oc login), then switch to the project you want to generate the secret for (oc project <project-name>).

By selecting the project, you switch to the project’s namespace. You see, when you create a project, it is assigned to a namespace which just happens to be the name of the project.

Now, when you generate the secret, all three service accounts automatically generated for each project (default, builder and deployer) will have access to the secret for the given project in that namespace, but only once you’ve added the secret to one of the service accounts (oc secrets add serviceaccounts/<account-name> secrets/<secret-name>).

This makes total sense. Once you know how it works, it seems easy.

OpenShift Origin

While I wait for my invitation to join Openshift Online, I’ve been experimenting locally with Openshift Origin. To keep things easy, I opted for the Vagrant all-in-one vm.

Openshift is a distribution of Kubernetes that makes it easy to build and deploy Docker container environments. There’s plenty of info on their website, and the documentation is good.

Once I got up and running, I followed the CakePHP tutorial, as it’s mainly PHP web apps I’m interested in using this for.

CLI Client

As well as the Vagrant vm, I locally installed the ‘oc’ CLI client, which can be downloaded from here (latest release at time of writing).

If you don’t want to install the CLI client, you can SSH into the vm and it’s already available to you there.

Automated Builds

I want my Openshift project builds to be triggered automatically by pushes to my repos on GitHub, so I’m using webhooks to achieve this.

In order for GitHub to reach my local private installation of Openshift, I’m using Ultrahook, which you can integrate very easily by following along with an Openshift blog post.

Private Repository

If pulling from a private Github repository, you need to generate a secret to hold your GitHub credentials (either username and password or public SSH key).

The first step is to make sure you’re logged in via the CLI client (oc login). Then follow the instructions in the link above to generate a new basicauth or SSH secret.


Ignore the instructions to link the secret to the builder, however, as this command is obsolete. Instead, you can discover how to do it correctly by running oc secrets add -h, where you’ll find that the correct command is:
oc secrets add serviceaccounts/builder secrets/

The instructions on how to add the secret to your YAML file are correct.
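For reference, the relevant stanza of the build config looks roughly like this (a sketch from memory of the Origin docs of that era; the repo URL and secret name are placeholders):

```yaml
source:
  type: Git
  git:
    uri: "git@github.com:<user>/<repo>.git"
  sourceSecret:
    name: <secret-name>
```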

Service Accounts

I almost pulled my hair out trying to connect to a private repo. For some reason, the builder service account couldn’t reach my credentials secret so the build would hang in “pending” state. I couldn’t understand why, then I found a command that helped me solve it:

oc describe serviceaccount builder

Including the service account name (builder in this case) is optional. If you leave it out, all service accounts are listed.

The output of this command showed me that the builder service account was operating in a specific Namespace. I must have made some sort of error when following the cakephp tutorial and somehow associated my builder service account with a “cakephp-ex” namespace (a value I entered when I followed the tutorial to make that project). I still don’t know what I did wrong, but by editing the GitHub repo URL in the cakephp project (adding a private GitHub repo URL that I owned instead), I was able to successfully clone it using the authentication secret that I defined in the YAML build config file.

I need to read up some more on what’s going on here, because it’s bizarre that the builder can only see a secret if a build has already successfully occurred. I’m sure it’s a simple matter of EBKAC and I’ll post the answer when I’ve discovered it.

Next Steps

This is as far as I’ve got to date. I’m now trying to successfully build a Symfony web app with a database in a separate pod. I’ve got the MySQL pod up and running, but I need to work out how to either import a seed database, or at least execute the build schema command during a post-build hook so that Composer’s cache:clear command doesn’t fail and break the webserver build! Another blog post will follow when I’ve got this sorted out.

Doctrine 2 and Slim 3

I initialise my entityManager in the container:

$container['em'] = function ($c) {

    // assumes: use Doctrine\ORM\Tools\Setup;
    //          use Doctrine\ORM\EntityManager;

    $paths = array(__DIR__ . "/../../src/entity");
    $isDevMode = $c['settings']['devMode'];

    $db = $c['settings']['db'];
    $dbParams = array(
        'driver'   => $db['driver'],
        'user'     => $db['user'],
        'password' => $db['pass'],
        'dbname'   => $db['dbname'],
    );

    $config = Setup::createAnnotationMetadataConfiguration($paths, $isDevMode, null, null, false);
    $entityManager = EntityManager::create($dbParams, $config);

    return $entityManager;
};

I had trouble getting the entityManager to recognise my entity metadata.

I followed the Installation and Configuration instructions on Doctrine’s website. Then I found the solution on Stackoverflow.

With my entityManager initialised in this way, I can use Doctrine inside my app and from the CLI.

Datatables – refresh table on ajax success

I want users to be able to edit rows of a datatable via ajax. On success, I need the table to refresh to show the edit.

It’s a very small table (~20 rows max), so for speed I’m just going with a reload of the whole table. The table itself is loaded via ajax, so happily the solution is quite straightforward:

table.ajax.reload();

I call this in my edit ajax success method and voilà.

I hope this helps anyone who, like me, might have spent a good 30 minutes shaking their fist at the table.draw() approach.

DQL – treating 0 values as NULL using AVG()

I have a table with an ‘interval’ column (INT).  I want to calculate the average interval over a given found set.  Easy peasy, right?  Just use AVG(columnName) in my SELECT statement.

Not quite.  The resulting figure was significantly lower than I expected.

The reason?  I have several rows with an interval of 0 (zero).  Now, if these zero values were NULL I wouldn’t have a problem because NULL values are excluded by default.  So, in order to get the true average for my particular use case, I need a clean way of excluding the zero rows.

Turns out this is quite easy: NULLIF converts each zero to NULL, and AVG then ignores it:

AVG(NULLIF(l.interval, 0)) AS averageInterval,

Ta daa.
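To see the effect in miniature, here’s the same arithmetic in plain shell, with made-up interval values:

```shell
values='10 0 20 0 30'

# Zeros included, the misleadingly low figure a plain AVG(l.interval) gives:
printf '%s\n' $values | awk '{ s += $1; n++ } END { print s / n }'
# prints 12

# Zeros excluded, mirroring AVG(NULLIF(l.interval, 0)):
printf '%s\n' $values | awk '$1 != 0 { s += $1; n++ } END { print s / n }'
# prints 20
```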



Apply jeditable select field to jQuery datatable column without crashing JavaScript

I have a server-side jQuery datatable and I want an Ajax-populated select field in a single column of the table.  I’m using the jeditable plugin for that.

I was applying jeditable to the column’s td elements in the datatables draw event:

summaryTable.on('draw.dt', function (e) {

    // apply jeditable here

});
This worked fine for 500 rows or fewer, but as soon as I increased the result set above 500 rows, JavaScript came to a grinding halt.

To fix this, I am now applying jeditable in my datatables settings object:

"columnDefs": [

    // more column definitions here

    {
        "targets": [7],
        "width": "7%",
        "createdCell": function (cell, cellData, rowData, row, col) {

            var severitySrc = Routing.generate('my_editable_action');

            $(cell).editable(severitySrc, {
                indicator: 'Saving...',
                height: "14px",
                type: "select",
                loadurl: severitySrc,
                callback: function (sValue, y) {
                    var string = $.parseJSON(sValue);
                    // update the cell display from the response here
                },
                submitdata: function () {
                    return {
                        "row_id": this.parentNode.getAttribute('id')
                    };
                }
            });

            $(cell).on('change', 'select', function () {
                // submit the new value when the select changes
            });
        }
    }
]

Now I can load 3,000 rows and more without JavaScript crashing.

Prevent server-side datatable reloading when editing cell with jeditable

If you’re using server-side datatables and want to edit a cell or row without triggering an additional server call to reload the datatable, you can.

Instead of this:

table.cell(td).data(newValue).draw();

Use this:

table.cell(td).data(newValue);

The draw() method updates the datatable cache, and is important if your table source is DOM or Ajax.

If your table is server-side, however, the datatable cache is redundant.  This is because all actions (sort, search, etc) are performed on the server, not the client.  So you can just update the cell display and forgo calling draw in this instance.  Obviously you should still handle sending the updated value to the server via Ajax and persisting it.  The point is you don’t need to reload the whole datatable as well.

There is a relevant forum thread here (pay attention to Allan’s second comment).