
Power Automate: Sending Multiple Select Choice Field Options from Microsoft Forms to SharePoint List

We recently used Power Automate to create a workflow (which we’ll call the “flow” in this post) that automatically sends a Forms submission to a SharePoint list. For the most part, this was a straightforward process, since the Form and the List had columns that held the same type of data. The trigger is when a response is submitted for that particular Form; add an action to get the response details; then create the item in SharePoint, with the response details added into their corresponding list column.

The challenge came when we got to a choice field in the Form that allowed for multiple selections. There were two parts to this challenge: first, directly inputting the dynamic content from the Get Response Details action resulted in a string in the choice field in the format [\"Option A\",\"Option C\"] rather than corresponding to the preexisting Option A, Option C, etc. in the list; and second, we wanted the ability to add, remove, or change options in the choice field without needing to also update the flow.

When we looked up the process online, the answers being given were overly complicated and a bit nonsensical. This post is designed to walk through our process of getting all of the choices in a multi-select choice field from a Microsoft Form into a SharePoint list. We’ll talk about some of the snags we hit along the way.

In this post, we’ve added a few styles to help explain the process. Bolded items are action items; these are typically specific things on the screen you should click or select. Underlined items are what you should read on screen, but not necessarily directly interact with, such as the name of a field or section; use these to help orient yourself to where you should be looking. Courier New font is used to indicate specific things that should be typed.

The Process

First, make sure your Form and SharePoint list contain the same fields. In the case of the SharePoint list, ensure your choice field has each option listed and the Allow multiple selections option toggled On.

Create the Flow

Go to Power Automate and create a new Automated cloud flow. Give a name to your flow. For the trigger, select the Microsoft Forms “When a new response is submitted” trigger, then Create the flow.

In the Form Id field, select the corresponding Form. If it was created directly in the Microsoft Forms interface, it’ll be listed in the drop down menu. This was our first snag: we created our Form in Teams. If you created it via Teams like we did, you’ll need to manually find and input the form ID. Navigate to your Form in Teams, then right-click the tab with the Form, and click Go to website. When the Form appears in the browser, find FormId in the URL, then copy everything after the equals sign (=) – it should be a long string of characters. Go back to your flow and, for Form Id, click Enter custom value, then paste in the Form’s ID.

New step, then select Get response details. Add the form ID as you did above. For Response Id, select the dynamic content Response Id.

Parse JSON

New step, then select Parse JSON. In Content, select the dynamic content under Get response details for the multi-select choice field. To build the Schema, click Generate from sample, then type or paste in the following:

["Red","Blue"]

This won’t match the options in your form – that’s okay. This step simply teaches the Parse operation how the data will be structured, not what it will contain. Click Done. A JSON schema will automatically be generated. Importantly, you’ll see it generates an array.
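For reference, the schema generated from a simple sample like that should look roughly like the following – an array of strings:

```json
{
    "type": "array",
    "items": {
        "type": "string"
    }
}
```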

Convert Selection Array to String

New step, then select Initialize variable. This is the first of two variables we’ll create. It’ll be used to transform the array of selected options into a string. Give your variable a name. For Type, select String. Leave Value blank. For this example, we named it ColorInput to indicate the “raw” inputs into the field.

New step, then Apply to each. In Select an output from previous steps, select the dynamic content under Parse JSON for Body. Within the Apply to each step, Add an action, then select Append to string variable. In Name, select the name of the variable you just created. In Value, select Expression, then type in the following formula:

concat(item(),';#')

As you type, closing parentheses and apostrophes will be added automatically, so make sure you avoid duplicating characters. This function appends a ;# to the end of each choice, which SharePoint uses as the delimiter between the selected options. Then click OK.
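Outside of Power Automate, the effect of the loop can be sketched in Python; the `choices` list here is a stand-in for the parsed Form response:

```python
# Mirror of the Apply to each / Append to string variable steps:
# each selected choice gets ";#" appended, building one long string.
choices = ["Option A", "Option C"]  # stand-in for the parsed Form response

color_input = ""  # the ColorInput string variable, initialized blank
for item in choices:
    color_input += item + ";#"  # concat(item(), ';#')

print(color_input)  # "Option A;#Option C;#"
```

Note the trailing `;#` left after the last option – that is the "blank" item the condition in the next section trims off.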

Trim String Length

Outside of Apply to each, New step, then select Initialize variable. This is our second variable, which we will use to make our final adjustments to the first. This was another snag we found, since the flow won’t allow you to select a variable and then change that same variable. Give your new variable a name. For Type, select String. Leave Value blank. For this example, we named it ColorFinal to indicate that it would hold the final, readable transformation of the raw inputs.

New step, then select Condition. A quirk we found at this point is that the previous step will result in a “blank” option being added into the SharePoint choice field. We use this condition to check whether any option is selected; if so, it will trim the last couple of characters off the end, which removes the “blank” item. In the leftmost Choose a value, select Expression, then type the following formula. You’ll need to replace ColorInput (our first variable’s name) with the name of the first variable you created.

length(variables('ColorInput'))

Click OK. Click is equal to to open the menu, then select is greater than. In Choose a value, type 2.

Move to the Yes side of the condition. Add an action, then select Set variable. In Name, select the second variable created. In Value, select Expression, then type the following formula. Just like with the previous Expression, you’ll need to replace ColorInput (our first variable’s name) with the name of the first variable you created.

substring(variables('ColorInput'),0,sub(length(variables('ColorInput')),2))
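The condition and trim can be sketched in Python the same way, continuing from the appended string:

```python
color_input = "Option A;#Option C;#"  # output of the append loop

# Mirror of the Condition + Set variable steps: if anything was
# selected, drop the trailing ";#" left by the final append.
if len(color_input) > 2:  # length(variables('ColorInput')) is greater than 2
    # substring(variables('ColorInput'), 0, sub(length(variables('ColorInput')), 2))
    color_final = color_input[0:len(color_input) - 2]
else:
    color_final = ""

print(color_final)  # "Option A;#Option C"
```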

Add to SharePoint List

Beneath the condition, New step, then select Create item. In the Site address, select the SharePoint site containing your corresponding list. Then, in List Name, select the corresponding list. For each field except for the multi-select choice fields, you can use the dynamic content under Get response details to match each field in the Form to each field in the list.

Find the box that indicates the multi-select choice field of the list (it will be surrounded by a dashed line). It will read [Field] Value – 1. Click on it, and when the drop-down menu appears, select Enter custom value. Then select Dynamic content and, under Variables, choose the second variable you created – for our example, ColorFinal.

Save and test your flow. Any number of selected options in the Form’s multi-select choice field should now be correctly transferred into your multi-select field in the list.

Data Driven Apps – Flattening the Object with PowerApps, Flow, and SharePoint

The scenario we started to explore in Data Driven Apps – Early and Late Binding with PowerApps, Flow, and SharePoint was one of a test engine: we had tests in which we didn’t want to have to code every question as its own form – or even two different tests with two different applications. We left the problem with a set of rows in the table that each represented a separate answer to a question.

The User Experience

The need for flattening these multiple rows into a single record is to make it easier for users to work with. It’s not easy in the user tools – like Excel – to collect a series of rows and to generate an aggregate score. It’s easier to score a single record that has columns for the individual questions and then score that one record.

That’s OK, but the question becomes how do we take something that’s in a series of rows and collapse it into a single record using PowerApps, Flow, and SharePoint? The answer turns out to be a bit of JSON and a SharePoint HTTP request.

JSON

JavaScript Object Notation (JSON) is a way of representing a complex object, much like we used to use XML to represent complex object graphs. Using JSON, we can get one large string that contains the information in multiple rows. From our PowerApp, we can emit one large and complex JSON that contains the information for the test taker and for all the questions they answered.

This creates one item – but it doesn’t make it any easier for the user to process the values. For that, we’ll need to use Flow to transform the data.

We can write this JSON into a staging area in SharePoint and attach a Flow to the list, so any time a row is changed, the Flow is triggered. The Flow can then process the item, create the properly formatted row, and delete the item that triggered the event.

The SharePoint HTTP Request

The power to do this transformation comes from the ability for Flow to call SharePoint’s HTTP endpoints to perform operations. While most of the time, Flows use the built-in actions, the “Send an HTTP request to SharePoint” can be used to send an arbitrary (therefore late-binding) request to SharePoint to take an action. In this case, we’ll use it to put an item into a list. This request looks something like this when completed:

You’ll notice a few things here. First, it requires the SharePoint site URL. (Which you can get by following the instructions in this post.) In this example, the value comes from the SharePointSiteURL variable.

The next thing you’ll notice is that we’re using the HTTP method POST, because we’re adding a new item. URI (the endpoint we want to call) is coming from the variable ListAPIURI, which is being set to:

_api/web/lists/GetByTitle('Evaluations')/items

The title of the list we want to put the item into is ‘Evaluations’, thus the URL. It’s possible to refer to the endpoint a few different ways, including by the list’s GUID, but most of the time accessing the list by title works well, because it’s readable.

The next configuration is to set the headers, which are essential to making this work. You can see that odata-version is set to 3.0, and both accept and content-type are set to application/json;odata=verbose.

Finally, we have the JSON, which represents the item. This is largely a collection of the internal field names from SharePoint – but it has one challenging, additional field that’s required.

__metadata

In addition to the internal fields you want to set values for, you must also set an item “__metadata” to the collection of { "type": "SP.Data.ListItem" } – unless you’re using SharePoint content types. In that case, you’ll have to figure out what the API refers to the content type as. We’ll cover that in the next post.
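As a sketch, a complete request body might look something like this. The field names and values here are hypothetical – your list’s internal field names will differ:

```json
{
    "__metadata": { "type": "SP.Data.ListItem" },
    "Title": "Student A - Chemistry Test",
    "E1_x002e_1": "B"
}
```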

Internal Names of Fields

For the most part, we don’t pay much attention to the internal name of the field. It’s noise that SharePoint uses to handle its business. However, when you create a field, an internal name is created from the name you provide, with special characters encoded. Most people use spaces when they’re creating names, so “My Field” creates an internal name of My_x0020_Field. You can determine the field’s internal name by looking in the URL when you’re editing the field; the name parameter will be the field’s internal name. (With one exception: if you used a period in the name, it won’t show as encoded in the URL, but it will be encoded in the internal name as _x002e_.)
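The encoding pattern described above can be sketched in Python. This is only an approximation – SharePoint has additional rules (such as truncating long names) that this doesn’t cover:

```python
# Approximate sketch of SharePoint's internal field name encoding:
# non-alphanumeric characters become _xNNNN_ hex escapes.
def internal_name(display_name: str) -> str:
    encoded = []
    for ch in display_name:
        if ch.isalnum():
            encoded.append(ch)
        else:
            encoded.append("_x%04x_" % ord(ch))  # e.g. space -> _x0020_
    return "".join(encoded)

print(internal_name("My Field"))  # My_x0020_Field
print(internal_name("E1.1"))     # E1_x002e_1
```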

Processing the Questions

To get the JSON to send to SharePoint, we need to have three segments that we assemble. There’s the initial or starting point with the __metadata value, there’s a middle with our questions, and there’s an ending, which closes the JSON.

To make the construction easy, we’ll use the Compose data operation action to create a string and put it in a variable. The initial segment we’ll set and then assign to the variable (Set Variable). For the other two segments, we’ll use the Append to string variable action. The result will be a variable with the entire JSON we need.
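The three-segment assembly can be sketched in Python; the field names and values here are illustrative, not the actual list schema:

```python
# Start: Set variable with the __metadata opening segment.
json_target = '{ "__metadata": { "type": "SP.Data.ListItem" }, '

# Middle: one Append to string variable per answered question.
answers = {"E1_x002e_1": "B", "E1_x002e_2": "D"}  # hypothetical internal names
for field, answer in answers.items():
    json_target += '"%s": "%s", ' % (field, answer)  # fragment ends with a comma

# End: append one final field and the closing brace.
json_target += '"ObservationMasterID": "42" }'

print(json_target)
```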

So, the start looks something like:

After this, we can compose a specific field that we want to set. Once this action is done, we use its output to set our end variable, like this:

Now we get to the heart of the matter with JSON Parsing that we’ll use to do the flattening.

JSON Parsing

There’s a Data Operation called Parse JSON that allows us to parse JSON into records that we can process in a loop. We add this item, and then, generally, we click the ‘Use sample payload to generate schema’ to allow us to create a schema from the JSON. Flow uses this to parse the JSON into the values we can use. After pasting JSON in and allowing the action to create the schema, it should look something like:

Next, we can use a loop and use the questions variable from the parse operation as our looping variable and move directly into parsing the inner JSON for the question.

From here, we’ve got our answer, but it’s worth making one check. If, for some reason, they didn’t answer a question, we’ll create a problem, so we do a check with a condition:

length(body('ProcessQuestionJSON')?['Answer'])

If this doesn’t make sense, you may want to check out my quick expression guide, but the short version is that it’s checking to make sure the answer has something in it.

If there is something in the answer, we create a fragment for the field with another compose. In our case, we prefixed numeric question numbers with an E. Because the questions also had periods in them, we had to replace the period with _x002e_. The fragment ends with a comma, getting us ready for the next item. The fragment is then appended to the JSON target.

The Closing

We’re almost done. We just need to add an end. Because we ended with a comma before, we need to include at least one more field along with the closing for our JSON. In our case, we have an ObservationMasterID that we use – but it can literally be any field.

This just gets appended, and then we call our SharePoint HTTP that we started with, and we get an item in our list with all our questions flattened into the record.

Data-Driven Apps – Early and Late Binding with PowerApps, Flow and SharePoint

Most of the time, if I say, “early and late binding,” the response I get is “huh?” It doesn’t matter whether I’m talking to users, developers, or even architects. It has a design impact, but we usually don’t see that it’s happening any more than a fish knows it’s in water. In this post, I’m going to lay out what early and late binding is, and I’m going to explain a problem with designing data-driven apps with PowerApps, Flow, and SharePoint. In another post, I’ll lay out the solution.

The Data-Driven Application

Every organization has a set of forms that are largely the same. The actual questions are different, but the way the questions are responded to is the same. One way to think about it is to think of quizzes or tests. Most test questions fit the form of pick one of four answers. They are this way, because those are the easiest kind of questions to psychometrically validate. The content of the questions changes, but the format doesn’t.

Imagine for a moment you’ve got a test for chemistry and a test for physics. Would you want to create two different apps for them? Do you want to hardcode every screen? Or would it be better to have it be data-driven, with a table for questions and the tests they belong to, and then an application that reads this data and allows you to enter data for each of the questions based on the same template? Obviously, the latter is better, but it does create a data structure problem. Instead of having everything in one row, there will be numerous rows of data for the same quiz.

That makes it hard for you to score the quiz and record all the answers into one row for the student. In fact, because of early binding, you can’t directly create a PowerApp that will write to fields that are added after the PowerApp is published. That is, let’s say you have five questions with individual fields in a row for the answers. If you add a new, sixth question and column, you’ll have to go in and refresh your Data Connection in PowerApps and then write code to write the value out into that column. That’s okay if you don’t make many changes, but if the questions change frequently, this isn’t practical.

Early Binding

Languages like C# and Java are type-binding languages. The type of the variable is determined by the developer before the execution begins. Integers, Booleans, floats, and so forth are set out in the code, and the variables used make sure the data matches the type of the variable.

Though both languages now offer type-flexible variables that can accommodate many actual types, and both have always supported arbitrary name-value pairings, they demonstrate a bias towards knowing what you’re going to get and the compiler validating it.

In contrast, JavaScript is a type-less language. Everything is basically just an arbitrary collection of objects. The good news is that you can readily expand or extend JavaScript. The negative is that simple typos or capitalization errors can create problems that are very difficult to debug. The compiler won’t help you identify when one part of a program sends data that another part of the program doesn’t know how to deal with.

In short, type binding and early binding help us ensure that our programs are more reliable and reduce our cost to develop – most of the time.

Late Binding in an Early Binding Environment

If you’ve done much development with C# or Java, you realize that, even though the language fundamentally wants to do early binding, you can structure your code in a way that’s data-driven. You can use type-less variables (declared with var) and then use them however you would like. The compiler tries to infer what is going on based on method signatures and warns you when what you’re doing won’t work out well.

This is really a compiler-friendly way of doing something we could always do. We could always define our variables as object – so making sure that we’re managing how we use the variable is on us. This works fine for all sorts of objects but represents a challenge with primitive data types, because, fundamentally, they’re not processed as pointers like every other kind of object is.

Pointers

In the land before time, when dinosaurs first roamed the earth, we worked with languages that weren’t as friendly as the managed languages we deal with today. We had to manually allocate blocks of memory for our object and then keep track of pointers to that information. These pointers were how we got to the objects we needed. This is abstracted for us in managed languages, so that we don’t have to think about it – because it was easy to get wrong.

The net, however, is that every object we work with is really a pointer. (Or, even more technically, a pointer to a pointer.) So, whether I’m working with a string or a date, I’m most of the time really working with a pointer that points to the value. That’s fundamentally different than the way we work with integers, floats (including doubles), and Booleans. These are native types, and mostly we use them directly. This makes them more efficient for loops and control structures.

It’s technically possible to refer to an integer or Boolean through a pointer. .NET will handle this for you, but it’s called boxing and unboxing, and, relatively speaking, it’s slow. So, the idea of using an object (or var) for a variable works well when we’re dealing with objects, but not so much when we’re dealing with primitive data types. (To make things more efficient but more complicated, with var, the compiler will sometimes optimize this to the real type for us, saving us the performance hit.)

Arrays, Lists, and Collections

Another way that you can do late binding in an early binding language is to use arbitrary arrays, lists, or collections. The items in these collections aren’t explicitly referenced and are instead referenced by location. More flexibility comes from name-value pairs, called dictionaries, which allow items to be identified by name. In this case, you can look up an item by its name and set or retrieve values.

JavaScript transparently converts a period (.) into a lookup into the dictionary, because JavaScript only has arrays and dictionaries. So, while C# and Java look in a specific place inside the object for a value when you use a dotted notation, JavaScript is really looking up that value in the collection.

What JavaScript writes as object.valuea, C# expresses as something like dictionary.getValue("valuea").

What This Means

In most cases, we don’t think about early or late binding, because it’s just a part of the environment. However, it becomes important in the scenarios like the above, where we want to create one test engine or one form engine that can be data-driven. We don’t know the names of the questions when we start. We only know the names after we’ve begun. It’s still not an issue if we can store the data in the database as a set of records for each question – but that isn’t the way that people want to see the data to do analysis. To do analysis, users want to see one record for the entire test. To do that, we need a way to convert late binding into early binding. In short, we need to flatten the abstract questions into a concrete test.

Getting the SharePoint Site URL in a Microsoft Flow

Working with clients on deployments in the Microsoft world isn’t always easy. Some clients want to develop in one environment (or tenant) and then move that development to a new tenant. Invariably, there are steps that get missed. When they do, the production deployments break, which impacts users – and it’s frustrating.

One of the problems with Microsoft Flows is the need to update the source URL for the SharePoint operations. If you trigger based on some list event, and then want to update another list in the same site, you’ve got to enter that URL again.

The first solution to this is to define a variable at the top of the flow and then use that variable for the Site URL for every SharePoint operation. That helps, because it centralizes the setting of the site URL to a single place. However, it’s still easy to miss when you’re doing a deployment.

A way to enhance this is to use an expression to set the variable so that it’s always the right URL, but that’s not easy.

In your trigger inputs – trigger()['inputs'] – there’s a variable called path. The problem is that this path looks something like /datasets/https%253A%252F%252Fthorprojects.sharepoint.com%252Fsites%252FTEST/tables/f0de155c-08bc-4a46-9772-74dde8a1ece0/onupdateditems

To get the SharePoint path out of that, you need to do a few things. Initially, I tried stripping the last three levels in the URL iteratively, but that proved to be unwieldy. What I ultimately realized is that the lengths of the start and end of the URL are fixed, thus I can use hardcoded values for the lengths to trim rather than relying on the slash delimiters. For this URL, I can use the following statement to get the SharePoint portion.

substring(trigger()['inputs']['path'], 10, sub(length(trigger()['inputs']['path']), 69))

Of course, I still have some replacements to do to get my encoded characters back, so the statement becomes:

replace(replace(substring(trigger()['inputs']['path'], 10, sub(length(trigger()['inputs']['path']), 69)), '%252F', '/'), '%253A', ':')

That’s relatively clean. I just initialize my variable to this, and we’re good – if the trigger is an On Changed. It turns out the URL for On Created is slightly different. (It ends with onnewitems.) To cover both, I added an if statement to test whether it’s a new or changed event and ended up with:

if (greater(indexOf(trigger()['inputs']['path'], 'onupdateditems'),0),
replace(replace(substring(trigger()['inputs']['path'], 10, sub(length(trigger()['inputs']['path']), 69)), '%252F', '/'), '%253A', ':'),
replace(replace(substring(trigger()['inputs']['path'], 10, sub(length(trigger()['inputs']['path']), 65)), '%252F', '/'), '%253A', ':'))
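The same logic can be sketched in Python to see how the fixed-length trimming works, using the example path from above:

```python
def site_url_from_path(path: str) -> str:
    # 10 = len("/datasets/"); 69 (or 65 for onnewitems) additionally
    # accounts for the trailing "/tables/<guid>/on...items" segment.
    tail = 69 if "onupdateditems" in path else 65
    middle = path[10:10 + len(path) - tail]  # substring(path, 10, sub(length(path), tail))
    # Restore the double-encoded characters.
    return middle.replace("%252F", "/").replace("%253A", ":")

path = ("/datasets/https%253A%252F%252Fthorprojects.sharepoint.com"
        "%252Fsites%252FTEST/tables/f0de155c-08bc-4a46-9772-74dde8a1ece0"
        "/onupdateditems")
print(site_url_from_path(path))  # https://thorprojects.sharepoint.com/sites/TEST
```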

Now, I’ve got one formula that I can give developers to get the SharePoint Site URL for the current site, and I don’t have to worry whether the URL gets updated during a deployment or not.

Microsoft Flow and Azure Logic Apps – Math and If Function Workarounds

Microsoft Flow is built on Azure Logic Apps and uses the same workflow language functions. However, there are some limitations of the math functions included in Logic Apps that need some creative workarounds, so here’s a few ways to work around issues. I mentioned in my Quick Formula Expression Guide that you can’t use the arithmetic operators you would expect. Instead, you must use functions to perform operations, but there are some quirks.

Int() Doesn’t Accept Floating Point Numbers

Everyone expects that the int() function should return an integer based on a floating point input. However, if you provide int() with a number that has a decimal component, it will throw an exception. There’s a solution to this. Instead of int(variable), use div(variable, 1). The div() math function returns an integer (not a float, as you might expect – more on that in a moment): the number of whole times the divisor goes into the dividend. So, you can get an integer by dividing your floating point number by 1.
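Sketched in Python (assuming, for the positive values typical here, that div() drops the fractional part):

```python
import math

# div(variable, 1) does integer division, which is why it works
# where Logic Apps' int() fails on a value with a decimal component.
def logic_apps_div(a: float, b: float) -> int:
    return math.trunc(a / b)  # whole number of times b goes into a

print(logic_apps_div(3.75, 1))  # 3 -- the integer we wanted from 3.75
print(logic_apps_div(7, 2))    # 2 -- remainder dropped (see mod())
```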

Div() Returns an Integer

There are two kinds of division in math. Floating point division is what we normally expect – the kind of division we were taught in elementary school. There’s also integer division, which returns the whole number of times that the divisor goes into the dividend. The remainder is dropped (but can be fetched with the mod() function). In Logic Apps, div() performs integer division – not floating point division. However, there’s another way to solve the problem of division resulting in a floating point number.

Every division can be accomplished with a multiplication, and mul() returns a floating point number. As a simple example, if you want to divide by 100, you can multiply by 0.01 instead (i.e., div(variable, 100) becomes mul(variable, 0.01)). So, you can convert your divisions into multiplication by a fraction. Factors of 10 are obviously easy, but other numbers can be done as well: simply use a calculator to divide 1 by the number in question, and you’ll have the factor you need to multiply by. (Or enter the number you want to divide by and use the 1/x key available on some calculators.)

Truncating the Number of Decimals

When dealing with currency, it’s often important to truncate the number of decimal places in the calculation to two decimals. Given that there’s no round() function in Logic Apps, we have to settle for truncation. However, even this becomes a bit “interesting” due to the limitations. Take a look at this sequence:

mul(div(mul(variable, 100), 1), 0.01)

It multiplies the variable by 100 to make two decimal places into whole numbers, then uses div() to eliminate the fractional component, and finally uses mul() to shift the number back two decimal places to the right.
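A Python sketch of the same sequence makes the shifting visible. One caveat that applies to the Flow expression as well: because binary floating point can’t represent most decimals exactly, the multiplication by 100 can occasionally land just below a whole number and shave off a cent:

```python
def truncate_two_places(x: float) -> float:
    # mul(variable, 100) -> div(..., 1) -> mul(..., 0.01)
    shifted = x * 100            # 3.14159 -> 314.159
    whole = float(int(shifted))  # drop the fraction, as div(..., 1) does
    return whole * 0.01          # shift back: 314.0 -> 3.14

print(truncate_two_places(3.14159))
```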

If False, then True

Another quirk that exists in the Logic Apps language is that the true side of an if statement is always evaluated. Say we have the if statement:

if(false, div(0,0), 1)

This will throw an exception because the div(0,0) results in a division by zero error. You must convert the if statement so that the true side can always be executed:

if(true, 1, div(0,0))

In this case, the result will be 1 and no exception will be thrown.

While this isn’t the way other languages work – nor is it desirable – it is the way that it works today, and it might explain some odd exceptions your Microsoft Flows are throwing.

Book Review-Creativity: Flow and the Psychology of Discovery and Innovation

Creativity seems to have some mystical property to it. It seems like some people are creative and others are not. It’s like someone is born to be an artist, and another person is born to be an accountant. In Creativity, Mihaly Csikszentmihalyi dispels the notion that creativity is something that you’re born with and begins a journey with us about how creativity might be encouraged or discouraged. Creativity, it seems, is a pretty difficult thing to pin down, because it means different things to different people.

The real story of creativity is (as Csikszentmihalyi explains) “more difficult and strange than overly optimistic accounts have claimed.” Creativity is context-sensitive and sensitive to subtle environmental factors that are hard to detect and, in some cases, even harder to create. We’ve learned that creativity is hard to predict.

On Becoming Creator

When the first creation myths arose, we were primitive apes that could barely survive except through our work with one another, and even then with haltingly high numbers of casualties. However, over time, we’ve taken over the role of creator as we began to shape and control our world.

Our creativity wasn’t a thing when we were struggling for survival, but now that we’ve placed ourselves in the creator role, we feel the need to ensure that we’re always creating.

Catching Creative

Some of the most creative ideas ever thought have been lost to the sands of time. It’s not enough to have a really good idea. Every truly creative thought can only be defined as such once someone has decided that it’s creative. Everett Rogers, in Diffusion of Innovations, explains how an innovation can be spread in a community. He, at the same time, tacitly acknowledges that even a good idea might not diffuse its way into a community without the right conditions.

It takes more than just the next creative idea. It takes a recognition that the idea is creative and useful for it to possibly catch on. Creativity is validated by the domain that it’s creative in. That requires the right mix of different and the same – or at least acceptable.

Defining a Domain

Sometimes, new domains are created by creativity, but, much more frequently, creativity exists inside of a domain. A domain is defined by the symbols and routines that define it. That is, a domain is a set of agreements about how things will operate. There’s a language that is used, a way problems are approached, and a set of rituals.

A domain is the space, and a field is the people in that space. The people in the field understand the domain, including its rules, and choose to operate in it – at least sometimes. By operating in a domain and learning the semantic rules, the field has developed an enhanced schema for the information in the domain. (For more about schemas see Efficiency in Learning.)

A side effect of the enhanced internalized schema for the domain is that it can make it difficult for outsiders to penetrate the domain. Developing the baseline understanding or schema for the domain is difficult, because those in the field have “the curse of knowledge” and cannot – typically – understand what it’s like to not know the field. (See The Art of Explanation for more on “the curse of knowledge.”)

Crossing Boundaries

Creativity often is the crossing of boundaries in domains. Sometimes the boundary is the division between the domain and another domain. Simply leveraging marketing concepts in communication can be creative. So, too, can the person who brings manufacturing insights to creating websites. The point is that the creative person often has experiences beyond the domain and brings those experiences with them as they come back to participate in the domain again.

When we think of Edison, we think of electricity and the lightbulb. Few people recognize that Edison’s experience and expertise expanded well beyond these two simple things. He did substantial investigation into rubber in his later years and even created a failed voting machine in his early life. For the lightbulb, he employed and consulted with dozens of experts on gas lighting, metallurgy, and other topics to intentionally bring together different domains to try to infuse creativity into the very nature of his work. (Find out more in Originals: How Non-Conformists Move the World.)

The Medici family – whether intentionally or unintentionally – did the same thing in Florence. They brought together experts and masters and caused them to interact. (See The Medici Effect for more.) The result was the spawning of the Renaissance. It was the crossing of boundaries that fueled this period.

The ONE Thing

The ONE Thing, by Gary Keller, suggests that we should focus on only one thing and ignore the rest. Well, actually, you’re encouraged to find one thing in each area of your life – but focus is the key goal. Jim Collins, in Good to Great, encourages focus through the analogy of the fox and the hedgehog. Robert Pozen, in Extreme Productivity, explains how focus can help you get more results – and simultaneously acknowledges that much of his own life has been driven by serendipity – happy accidents.

The road to mastery of a domain seems to be driven by Anders Ericsson’s research on peak performance – as explained in Peak. It encourages the focused, purposeful practice that takes time – though he doesn’t simplify it to the level that Malcolm Gladwell does in Outliers with 10,000 hours. Whether it’s more or less than 10,000 hours, the point is the time investment required. The problem with this is that these kinds of time investments can’t often be made in multiple areas.

The draw of this research and approach is the simplicity of getting good at one thing. However, as the canal conductors learned, sometimes outside forces – like the railroad – can transform or eliminate your industry nearly overnight.

Lean Agility

An often-overlooked aspect of both agile software development and lean manufacturing is in the way that you make decisions. You’re encouraged to make reversible decisions where possible and, when not possible, delay decisions as much as possible. While lean is focused on the elimination of waste, it acknowledges that making decisions – particularly irreversible ones – too early yields more waste. (There’s more about lean in my review of The Heretic’s Guide to Management.)

Agile development is focused differently, towards a better end product, but the results are the same: you’re expected to explore, discover, and make investments that will yield the information necessary to make better decisions. Necessarily when you’re exploring, you’re going to backtrack and find new paths – you’ll be focused on more than one thing.

Creating Confidence

Tom and David Kelley have nurtured creativity in many through their company IDEO, the Stanford d.School (design school), and their books. Creative Confidence focuses on the barriers that prevent us from believing we’re creative – the lies we’ve been told that it’s not a part of us. The Art of Innovation instead focuses on the factors that help creativity (and therefore innovation) to grow. In it, Tom Kelley shares the idea of a Tech Box, which contains a set of random things that may be useful in building a prototype or just sparking an idea. The Tech Box contains a random set of things from many different industries. They’re just interesting objects.

If we want creativity to be driven from inside of us, we can’t drag along a Tech Box with us wherever we go. Instead, we must collect a set of experiences and learnings that we can draw upon as we’re confronted with new, novel, and interesting challenges.

Not in a Person

Creativity isn’t, Csikszentmihalyi explains, “an individual phenomenon.” Instead, it’s the environment that a person finds themselves in, including all the resources and barriers in the system. (If you need a primer on systems, Thinking in Systems is a good place to start.) The Difference pointed out that the diversity of thinking that comes from different backgrounds can sometimes propel groups to greater levels of productivity, creativity, and, ultimately, performance.

The environment itself can create opportunities for the creative person to decide to explore ideas further – or limit their ability to learn and grow in new directions that might be transformative in their domains.

Tension

Boiled down to a single word, creative people are complex. They exhibit characteristics that shouldn’t be compatible. They find ways to exist in the world by holding onto opposite ends of multiple spectrums and experience this without internal conflict. They can be at home in any environment, creative or not, because they can themselves exist across so many different places. The ten key dimensions that Csikszentmihalyi exposes are:

  1. Creative individuals have a great deal of physical energy, but they are also often quiet and at rest.
  2. Creative individuals tend to be smart, yet also naive at the same time.
  3. A third paradoxical trait refers to the related combination of playfulness and discipline, or responsibility and irresponsibility.
  4. Creative individuals alternate between imagination and fantasy at one end and a rooted sense of reality at the other.
  5. Creative people seem to harbor opposite tendencies on the continuum between extroversion and introversion.
  6. Creative individuals are also remarkably humble and proud at the same time.
  7. In all cultures, men are brought up to be “masculine” and to disregard and repress those aspects of their temperament that the culture regards as “feminine,” whereas women are expected to do the opposite. Creative individuals, to a certain extent, escape this rigid gender role stereotyping.
  8. Generally, creative people are thought to be rebellious and independent. Yet it is impossible to be creative without having first internalized a domain of culture.
  9. Most creative persons are very passionate about their work, yet they can be extremely objective about it as well.
  10. Finally, the openness and sensitivity of creative individuals often exposes them to suffering and pain yet also a great deal of enjoyment.

Tipping Scales

The question of what makes someone interested in something is a perplexing one. It doesn’t seem like an initial skill in something makes much difference. This is something that Carol Dweck’s work (in Mindset) and Angela Duckworth’s work (in Grit) would agree upon. The initial conditions aren’t nearly as interesting as the desire to work towards being better, but what causes someone to want to do better?

The answer may be found in looking at Judith Rich Harris’ work, No Two Alike. In it, she explains why no two children are alike. Even identical twins don’t always develop an interest in the same things. Sometimes, the advantage that one of the siblings has prevents the other from even trying. There’s a randomization to things and how one child may develop in one direction and the other child in a completely different direction.

Creativity has a similar aspect, it seems. You may develop an interest in a topic when your relative – but not absolute – skill is better. When you start to receive praise, recognition, and results, you’ll invest more and become better skilled. This process begins to feed back upon itself, and the changes can be substantial. Being able to get into and sustain the psychological state of flow can have huge impacts on long-term skill growth. (See Flow and Finding Flow for more on this.)

Inner Strength

There’s an understated theme that runs throughout Creativity. It echoes through the quotes and descriptions of creative people. There’s an inner strength to do what’s right – for them. Somewhere born out of a small advantage and hard work, there has developed an assurance that, in their chosen passion, being themselves and doing what they believe to be right and true is the only way to go.

In the description of E.O. Wilson, there’s a note about his favorite movie, High Noon, followed by, “I don’t mind a shoot-out, and I don’t mind throwing the badge down and walking away.” It’s a statement of confidence and inner strength that the person still retains the responsibility to do what they need to do. This does not, however, eliminate the need for compassion.

Compassion

Csikszentmihalyi explains, “Creative individuals are often considered odd—or even arrogant, selfish, and ruthless.” However, as was explained in the tension section above, there are often great tensions inside of creative folks. Sometimes, other people can’t quite figure out how these tensions can be resolved inside of someone else. Like looking at one of Escher’s works that looks fine in two dimensions but could never exist in the real world, they have trouble making the care and concern that a creative person often feels for his fellow man make sense.

Compassion is simply the awareness of the suffering of another and the desire to alleviate it. I’ve written about compassion repeatedly, but perhaps the best way to understand it better is to contrast it with related, but different, words as explained in Sympathy, Empathy, Compassion, and Altruism.

Creative people often become wrapped up in their awareness of the plight of humanity and the suffering of others and seek to leave their indelible mark on society by changing some corner of the world. They do this with a passion for the change and a detachment from whether they’ll ever accomplish the goal or not.

Detachment

To many members of the Western world, detachment is a bad thing. We’ve been conditioned that secure attachment is a good thing. However, our attachment to outcomes – particularly outcomes that we don’t control – is challenging. It leads us down a road of suffering, because we’re constantly shaken by the impermanence of life. Buddhists are taught that it’s attachment to our impermanent world that causes the cycle of reincarnation and suffering. (You can find a longer discussion of detachment in my review of Resilient.)

Creative people are able to stay compassionate about their world and detached enough to recognize that they won’t always succeed in alleviating the suffering of others – but that’s not the point. The point is that they should try. Maybe the thing that you should do is try to be a bit more Creative, whether you’re successful or not.

The Real Data Type for Dates and Times; and Microsoft Flow’s Perspective

It’s OK to call me old school. For me, dates are stored as floating-point numbers. It’s still that way in many applications, including Microsoft Excel. Sometimes I think of date and time as a series of seconds after a fixed time – so, a very large integer, because that’s another way we’ve classically represented dates and times in code. However, Microsoft Flow takes a different approach. To Microsoft Flow, a date/time is a string – a standard ISO timestamp.
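To make the three representations concrete, here’s a quick Python sketch. The specific moment chosen is arbitrary, and the Excel-style epoch of 1899-12-30 is that application’s convention, not Flow’s:

```python
from datetime import datetime, timezone

# An arbitrary moment, for illustration
moment = datetime(2019, 3, 15, 17, 0, 0, tzinfo=timezone.utc)

# 1. Seconds after a fixed time: the Unix epoch (1970-01-01 UTC)
epoch_seconds = int(moment.timestamp())

# 2. A floating-point serial number, Excel-style: whole days since
#    1899-12-30, with the time of day as the fractional part
excel_epoch = datetime(1899, 12, 30, tzinfo=timezone.utc)
serial = (moment - excel_epoch).total_seconds() / 86400

# 3. A string: the ISO timestamp form that Flow uses
iso_string = moment.isoformat()

print(epoch_seconds)  # 1552669200
print(iso_string)     # 2019-03-15T17:00:00+00:00
```

All three describe the same instant; the difference is only in how it’s encoded.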

Background

The next three sections provide some background on how we came to understand dates and times and how we represent them. If you’re just interested in the details of implementing dates in Flow, you can just know that they’re strings – and skip the background.

Tracking Dates and Times

Most of us take it for granted that we all agree it’s the same day – but until 1875, that wasn’t the case. That’s when everyone officially agreed upon the Gregorian calendar that we’re all familiar with. Similarly, few people need worry about the fact that the agreement only stabilized dates back to 1582.

Our relationship with time became more concrete around then. While we had chronometers – which were capable of relatively stable timekeeping – since 1764, it wasn’t until the introduction of the railroads that people officially standardized times to allow for train schedules that worked across large distances. Before that, “high noon” literally meant when the sun was directly overhead in the place where you are. Time zones started to be ratified across the globe, and by about 1885, we had a framework for the time zone system we use today.

The ISO 8601 Timestamp

Getting dates and times from one computer system to another is a classic problem. Storing dates as the number of days past a fixed point, as is done for both floating point and integer representations of a date, requires that everyone agree on what the fixed starting point is. The problem is that people didn’t. Sometimes they used January 1st, 1900, and other times other dates were used. This created problems where dates would seem to shift. As an attempt to resolve this issue, the ISO 8601 standard was first published in 1988.

It spells out the details of how to convey a date and time (as well as durations) in a standardized way. This addresses the difference between the folks in the US, who often represent dates in month-day orientation, and folks in Europe, who are more accustomed to day-month approaches. More than once, I’ve had to look at a date carefully to figure out what was intended because of the possibility of these transitions.
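The ambiguity is easy to demonstrate. “03/04/2019” could be March 4th or April 3rd, but the ISO form, with the largest unit first, can only be read one way – a small Python illustration:

```python
from datetime import datetime

# "03/04/2019" is March 4th in the US and April 3rd in much of Europe.
# The ISO 8601 form orders units largest-to-smallest, so there's no doubt.
parsed = datetime.fromisoformat("2019-03-04T14:30:00")

print(parsed.month, parsed.day)  # 3 4 - unambiguously March 4th
```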

Time Zones and Daylight Savings Time

One of the tricky challenges that happens in computer systems is addressing times across time zones, particularly dealing with daylight savings time. I’m based in Indiana, and for my formative years in computers, we didn’t observe daylight savings time. Though we do today, I vividly remember my joy at not having to adjust every clock in the organization I worked for twice a year. Today, this is mostly handled automatically, but back then it would have been a task for some poor human.

When we refer to our time, we do it conveniently. I refer to 5PM in my current time zone. Two people speaking of meeting at 5PM are fine if both are in the same time zone. If they’re not, one of the people may have to wait a while. Because of time zones, 5PM in one place isn’t the same time as 5PM in another place. It sounds obvious, but it leads to a challenge when sharing times across time zones.

The solution in aviation is that all times are referenced in Coordinated Universal Time (“UTC” came about as a compromise due to the different order of the words in French), which is also called Greenwich Mean Time or, in aircraft parlance, Zulu time. This is the time zone from which every other time zone is referenced. Eastern time – my home time zone – is UTC-5. Thus, if I want to meet someone at 5PM, I’d say 10 PM, or, more properly, 2200 Zulu. That is, I’d use a 24-hour clock instead of the more common 12-hour clock with AM and PM designators that we’re used to. Well, that’s not exactly correct.

If we were observing daylight savings time when I was trying to meet, I’d have to subtract one hour, because daylight savings time advances our clocks one hour. So, it would be 2100 Zulu. One would think that daylight savings time would be consistently observed, but it’s not. Each country chooses when to observe and stop observing daylight savings time. The United States Congress has changed the official dates of observance more than a few times. So, even today, trying to understand what time it is in another place on a given day can be challenging.

The result is that the ISO 8601 standard defines a time zone designator that can indicate the time added to or removed from UTC time to get to the time indicated. This keeps the time human-readable and ensures that it’s possible to get to the exact time.
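For example, 5PM at UTC-5 carries its offset right in the timestamp, so any system can recover the exact instant. A sketch using Python’s standard library:

```python
from datetime import datetime, timezone

# An ISO 8601 timestamp with a time zone designator: 5PM at UTC-5
local = datetime.fromisoformat("2019-01-15T17:00:00-05:00")

# The offset makes the exact instant recoverable anywhere
utc = local.astimezone(timezone.utc)

print(utc.isoformat())  # 2019-01-15T22:00:00+00:00
```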

Standard Date Problems

There is a set of problems when working with dates that we’ve grown accustomed to. Adding days is easy, but adding months isn’t. Because months don’t have a standard number of days, adding a month could mean adding 28, 29, 30, or 31 days to the current date. Similarly, adding a year to the current date could mean adding 365 or 366 days – though, in practice, few people worry about this problem. The good news is that Flow’s Date and Time activities allow you to address these standard problems.
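The variable length of “a month” is easy to see in code. In this sketch, the clamping rule – snapping January 31st plus a month to February 28th – is one common convention, not the only one:

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day to the target month's length."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

print(add_months(date(2019, 1, 31), 1))  # 2019-02-28 - 28 days later
print(add_months(date(2019, 3, 15), 1))  # 2019-04-15 - 31 days later
```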

There is a second cluster of problems that Flow’s activities don’t solve: the common requirement that dates be expressed in terms of business days. So, if you’re a five-day-per-week operation, the dates should be advanced five business days. This isn’t possible directly with the actions provided, but it’s possible to leverage the dayOfWeek() function to determine how to address weekends. However, holiday observance isn’t something that’s feasible today.
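The weekend-skipping logic that dayOfWeek() enables looks something like this Python sketch (holidays, as noted, are left out):

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance one calendar day at a time, counting only Monday-Friday."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0-4 are Monday through Friday
            remaining -= 1
    return current

# Friday, March 1st, 2019, plus five business days is the following Friday
print(add_business_days(date(2019, 3, 1), 5))  # 2019-03-08
```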

Times in Microsoft Flow

The good news is that all the challenging activities – like adding and subtracting from dates and converting from one time zone to another – are available as activities. You must remember that the dates are strings. If you want to see more about the out-of-the-box Flow activities for managing dates, take a look at Working with Dates and Times inside of your flows.

Microsoft Flow, SharePoint, 429, and Throttling a Workflow to Death

We’re here to mourn the death of many a workflow instance at the hands of SharePoint’s HTTP throttling. Except it’s not SharePoint’s throttling that is the true killer. SharePoint’s just the accomplice in this crazy dance that will get your workflows killed. Though it’s possible to protect your workflow instances from being throttled to death, it isn’t as easy as it might seem. In this post, we’ll talk about what happens every day and then what strategies you can use to protect your workflow.

Request Throttling

Before we can explain how a flow gets throttled to death, we first must understand a bit about throttling. Web servers are under constant assault from well-intended users, the code written by bumbling idiot developers (of which occasionally I am one), and malicious people. One of their defenses is to respond with “I’m too busy right now, come back later.” This comes back as HTTP status code 429. Sometimes these responses are kinder and will indicate when the person should come back. “Hey, I should be able to take care of that request for you after seven seconds” tells the program when it should retry the request – and, here’s the kicker, when it should expect that the request will be able to be serviced.

The problem is that, when you’re dealing with so many different users and so many variables, it’s sometimes difficult to predict when the server will be willing or able to service a request.

Automatic Retries

Because a server may respond with a 429, most programs know to retry the request. The strategies vary, but the default answer is an exponential interval. The first time you get a 429, you stop for a period of time – say four seconds – and then each time you retry, you square the interval. So, four seconds becomes sixteen on the second retry and 256 seconds on the third. Flow allows you to follow the default policy, which is described as exponential, or explicitly set the interval to exponential. To do this, click the ellipsis at the right of the action and select Settings.

The action’s space in the flow will change to the settings view, where you can explicitly set the retry policy to exponential – which will further change the view to provide spaces for a maximum retry count, an interval, minimum interval, and maximum interval.

The default settings, however, are supposed to do exponential waiting across four tries. So that seems like a good place to start. The default is implemented as one initial try plus three retries – that’s how they get to four.
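The retry pattern can be sketched in Python. To be clear, this illustrates the pattern, not Flow’s internal implementation – the send callable is hypothetical, and the numbers mirror the example above:

```python
import time

def request_with_retries(send, max_tries=4, base_delay=4.0):
    """One initial try plus retries, squaring the wait after each 429.

    `send` is any callable returning an object with a status_code
    attribute; the defaults mirror the four-try pattern described above.
    """
    delay = base_delay
    for attempt in range(max_tries):
        response = send()
        if response.status_code != 429:
            return response
        if attempt < max_tries - 1:
            time.sleep(delay)
            delay = delay ** 2  # 4 -> 16 -> 256 seconds
    return response  # still throttled after the last try
```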

Retry Schedule Conflicts

What happens when there is a conflict between the provided retry schedule and what the server responded with as a part of its 429 response? The good news is that Flow will use the larger of the two numbers. In theory, at least, you’ll never hit a retry more than once or twice. Sure, the server could make a mistake with its guidance once but surely not twice. Unfortunately, that’s not always the case. Sometimes the response from the server – and the default exponential interval – will be too small, and you’ll exhaust the three retries and end up failing your request. This typically happens when you have multiple flows running at the same time.

Parallelism

Any given Flow may not be making that many requests, but what happens when there are many Flow instances running at the same time? If you do work on a queue that information is dropped into, you can’t necessarily control how many items will come in at the same time. With the scalability of the Flow platform, what’s to stop you from running hundreds or thousands of Flows at the same time – against the same poor server that’s just trying to cope?

If you have 100 Flows all starting in a relatively short time and talking to the same back-end server, it may be getting thousands of requests from Flow every minute. Even if each Flow is told to retry later, the work of telling every consumer to retry may cause the server to push off work even further. The result is that every Flow gets throttled to death – until so few remain that they can be handled inside the capacity of the server.

Luckily, Flow offers a technique for limiting the number of active flows at any time. This can be done by going to the ellipsis and then Settings for the trigger. This changes the trigger to its settings display, which allows you to limit the number of concurrent Flow instances – or the degree of parallelism.
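The effect of that concurrency limit is the same as a semaphore capping simultaneous workers. A small Python sketch – the limit of two and the simulated work are illustrative:

```python
import threading
import time

limit = threading.Semaphore(2)  # like setting the degree of parallelism to 2
lock = threading.Lock()
active = 0
peak = 0

def flow_instance():
    """A stand-in for one running flow instance."""
    global active, peak
    with limit:              # blocks until one of the two slots frees up
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)     # simulated work against the back end
        with lock:
            active -= 1

threads = [threading.Thread(target=flow_instance) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never more than 2, no matter how many instances queue up
```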

SharePoint Request Throttling

Generic request throttling is fine, but how does my favorite server – SharePoint – do it? Well, the answer isn’t clear. Microsoft published an article “Avoid getting throttled or blocked in SharePoint Online”, which says that they won’t publish the rules – because they’re changing them.

At the same time, they make clear that some of the criteria being used to manage the workflow are things like user agents, accounts, and App IDs. However, when it comes to Flow, we have some limitations.

Flows always run as the user that created the Flow – not as the user initiating the request. So, from the point of view of SharePoint, one user is making all the requests – and they’re making them from the same application, Flow. This makes Flows a prime target for throttling, even when you consider that it’s a well-known and well-trusted application.

It turns out there’s more to it than that. There’s an interface layer that the connectors – including the connector to SharePoint – use.

Connectors and Infrastructure

Many of Microsoft’s new offerings like Flow, PowerApps, and PowerBI need access to the same service data inside of multiple tenants. As a result, the connectors use a common architecture that allows multiple services to interact with Microsoft’s online service offerings. This architecture has its own throttling built in. It’s designed to protect the back-end services and has its own rules for throttling requests, rules that are more aware of the uniqueness of each of the fixed number of internal Microsoft consumers.

One of the things that this infrastructure can use to manage throttling is the connection to the service. In the first figure, you’ll notice two connections in the settings menu – with the same identity. This is one way that you can help the infrastructure avoid throttling you.

Multiple Connections

When there aren’t many attributes to differentiate requests on, the connection is one of them. It’s got its own identifier. Because of that, it’s easy for the back end to see which connection a request is associated with – and throttle too many requests from a single connection. Thus, if you want to help your Flow avoid getting throttled, you can make multiple connections to the same data source. This allows your requests to get spread across different thresholds and for more to get through.
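Spreading work across connections is just round-robin assignment. A hypothetical sketch – the connection names are made up, and the real differentiation happens in the service, not in your code:

```python
from itertools import cycle

# Hypothetical connection identities to the same data source
connections = ["connection-a", "connection-b", "connection-c"]
next_connection = cycle(connections)

# Nine pending requests get dealt out evenly, so no single connection
# carries enough traffic to trip a per-connection threshold
assignments = {name: [] for name in connections}
for n in range(9):
    assignments[next(next_connection)].append(f"item-{n}")

for name, batch in assignments.items():
    print(name, len(batch))  # each connection carries 3 of the 9 requests
```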

The Fingerprints Match

For my case, the Flows were getting throttled not by SharePoint directly but through the infrastructure hub. I set the maximum degree of parallelism and assigned every action in the flow to a different connection, but it wasn’t enough. I didn’t set the retry settings manually, and the default settings continued to allow my poor Flows to be throttled to death by the infrastructure.

In the end, to spare more Flows from being throttled, we moved some of the data to Azure SQL. However, we saved many Flows just by adjusting the retry strategy and concurrency and creating multiple connections.


Microsoft Flow and Azure Logic Apps Quick Formula Expression Guide

Microsoft Flow and Azure Logic Apps are powerful tools for automation, with tons of connectors and the actions that you need to get work done. Microsoft Flow in particular makes the design of a workflow easy with a rich design experience – except when it comes to expressions. You’re expected to know some relatively arcane formula expression rules to make your steps work together. I’ve gathered up the things that I felt were the most important for writing expressions into this quick guide. Let’s get started with some basic math.

Arithmetic

If you want to get tripped up in a hurry, try to write your basic math statements with the operators that we’ve come to know and love: + – * /. They’ll make your expression fail – unless you swap them for their function call equivalents:

+ add()
- sub()
* mul()
/ div()

The parameters to each of the functions occur in the order you would expect. Thus, you can swap:

A*B/C+D with…

add(div(mul(variables('A'), variables('B')), variables('C')), variables('D'))

It’s not super easy to read, but once you get the pattern, you can create even more complex statements. For now, ignore the variables() function; we’ll come back to it in a bit.

Nulls

With basic math out of the way, next it’s important to know how to deal with nulls. There are two pieces to dealing with nulls. First, separating a property with a question mark (?) instead of a period (.) will automatically handle nulls for you – returning a null in the end instead of failing. This greatly reduces the amount of testing that you must do to ensure that you don’t accidentally reference a null value.

The corollary to this is what do you do to get a default when a value is null. Here we use the manipulation function coalesce(), which returns the first non-null object. So, if I were to do:

coalesce(variables('A'), variables('B'), 'Default')

I’d get A if it is non-null. If A is null, I’ll get B if it’s non-null. If both A and B are null, this will return the string ‘Default’. If you’re wanting to get a zero instead of a null, you can wrap the result in a float() or int() conversion, which will return zero when the value is null.

One Thing About Strings

There are many string functions, but one is particularly useful and necessary. That’s the concat() – short for “concatenate” – function. It takes two or more parameters and concatenates them into one large string. If I wanted to create an ODATA-compatible filter, I might write:

concat('SalesProID eq ', variables('A'))

If I needed to put a single quote around the values, I’d use the escape syntax, which is two single quotes together. If I wanted to filter by OrderType, the command might look something like:

concat('OrderType eq ''', variables('A'), '''')

That provides a string that has the quotes needed around the value.

Getting Values

That’s good basics, but how do you get the values from other parts of your Flow into your equation? You’ve already seen one approach. That is to use variables() to fetch one of the variables you’ve defined in your flow. To do this, you simply include the variable you want to use in single quotes like this:

variables('MyVariableNameHere')

This will return the value of the variable. However, it’s more likely that you want to get a variable from another step. Here, we start with the values from the trigger that are fetched with triggerBody(). If I were fetching the ID of the SharePoint item that triggered the flow (assuming that’s our trigger), I’d write:

triggerBody()?['ID']

This would return the ID of the item or null. If I had another step that fetched a different item, or had some other output, I could use the body() function with the parameter, which is the name of the step. So, if the step was named “Quote” and I wanted the symbol property, I could write:

body('Quote')?['Symbol']

There’s one caveat here. If the step has spaces in its name, then you must replace those with underscores. To get the Quote property from the Quote of the Day step, you’d write:

body('Quote_of_the_Day')?['Quote']

One other common condition is when you’re working on a set of items in a For Each step. If you’re in a For Each step, you can get the values from the current item with item(). If I wanted to get the ID of the current item, I’d write:

item()?['ID']

If you’re inside nested For Each steps, you can refer to the specific item you want to pull from by using the items() function and specifying the name of the For Each. If I had a For Each step called ‘Each Item’ inside of a For Each step called ‘Each Order’, I could write an expression to get the order number and item number in a string like this:

concat('Order Number :', items('Each_Order')?['OrderNumber'], ' Item Number :', items('Each_Item')?['ItemNumber'])

If you want to see the full list of functions, you can go to https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-workflow-definition-language – fair warning, there are lots of functions listed there, and you’ll have to search for exactly what you’re looking for.
