Object Hierarchy and Scripts in Unity

If you’re a traditional developer, Unity is more than a bit odd. The constructs, for the most part, make sense, but they require that you contort your thinking a bit. In this post, I’m going to walk through object hierarchy, scripting, and reuse.

Game Objects

Everything in Unity is built up from the concept of a game object. A game object is a container for components. Every game object has either a Transform or a Rect Transform component associated with it. 3D objects have a Transform – a 3D transformation containing the location, rotation, and scale along the X, Y, and Z dimensions. 2D objects have Rect Transform components, which are designed to work with 2D objects even if they exist in a 3D space. Game objects with a Transform don’t fundamentally have a size; their size is driven by the other components attached to the game object.
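For example, a script can read a game object’s Transform values directly – a minimal, hypothetical sketch:

    using UnityEngine;

    public class TransformPeek : MonoBehaviour
    {
        void Start()
        {
            // Every game object has a transform carrying position, rotation, and scale.
            Debug.Log($"Position: {transform.position}");
            Debug.Log($"Rotation: {transform.rotation.eulerAngles}");
            Debug.Log($"Scale: {transform.localScale}");
        }
    }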

Game objects can be organized in a hierarchy. This is useful when you want the objects to operate as a unit, as we’ll see when we build out the rotator script and use it to create a cluster of directional lights. However, for performance reasons, deep nesting of game objects is discouraged. Largely, this is because some operations are recursive, and a large number of recursive operations can drop the frame rate of the solution unacceptably low (below about 60 frames per second).
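To see why depth matters, consider a hypothetical sketch of the kind of recursive walk that a hierarchy operation implies – every additional level multiplies the work:

    using UnityEngine;

    public class HierarchyWalker : MonoBehaviour
    {
        // Counts every descendant of this object; the cost grows with depth and breadth.
        int CountDescendants(Transform node)
        {
            int count = 0;
            foreach (Transform child in node) // a Transform enumerates its children
            {
                count += 1 + CountDescendants(child);
            }
            return count;
        }

        void Start()
        {
            Debug.Log($"{name} has {CountDescendants(transform)} descendants.");
        }
    }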

Frame of Reference

In the post Creating A Spinning Cube with Unity and MRTK, I created a single object – called Nexus – which had child objects of lights. To Nexus, we attached a rotation script and rotated Nexus. This caused all the lights associated with Nexus to rotate and move. Because the lights’ positions were local to Nexus, whatever Nexus did, the light objects did as well.

This illustrates why we need to be careful with object hierarchy inside of Unity. When we do one operation, we’re impacting seven objects inside the hierarchy. The greater the depth of the hierarchy and the more objects, the more things need to be manipulated – or at least checked – for a single operation.

Scripts and Game Objects

In Unity, scripts are C# code associated with a game object. Placing a script in the project does nothing until it’s connected to an instance of a game object. Game objects themselves can be almost anything – including nothing but a container with a Transform component, like the Nexus game object. Adding the script as a component of a game object tells Unity to instantiate the class and connect the instance to that game object.

Including public properties in the script allowed us to set those properties in the editor and ultimately have one script act on multiple objects in different ways. Unity automatically took our camel-case names and added spaces before the capital letters to make them more readable. It also provided an editor for the complex Vector3 object. This makes it easy to create scripts that are reusable across objects and for different purposes.
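As a sketch, fields like these (the same ones our rotator script will use) are all it takes – Unity shows them in the Inspector as “Rotation XYZ” and “Rotation Rate Scale”:

    using UnityEngine;

    public class InspectorExample : MonoBehaviour
    {
        // Displayed as "Rotation XYZ" with an X/Y/Z editor in the Inspector.
        public Vector3 RotationXYZ = new Vector3(1.0f, 2.0f, 3.0f);

        // Displayed as "Rotation Rate Scale"; each instance can hold a different value.
        public float RotationRateScale = 1.0f;
    }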

HR Uprising Episode 66: Avoiding Burnout – Masterclass

A week ago, I joined Lucinda Carney on the HR Uprising Podcast to give a special masterclass on burnout – and how you can avoid it. In it, I discuss ways to identify the root causes of burnout. I talk about what our personal agency is in the context of burnout and what fills or drains our personal agency bathtub. I also review the two components of stress (the stressor itself and our assessment of the stressor) and strategies that can help you avoid burnout.

You can listen to the full podcast here: https://hruprising.com/avoiding-burnout-masterclass-with-robert-bogue/

Creating the Spinning Cube Program with Unity and MRTK

In this post, I’m going to walk through the creation of a spinning cube program in Unity. For this demo, we’ll build it with the Mixed Reality Toolkit (MRTK) and set it up so that you could deploy it to a HoloLens 2. A spinning cube is sort of the 3D equivalent of “Hello World.” So, it’s a good starting point for developing with Unity and using MRTK.

I’m going to use this as an opportunity to explain script reuse and how you can use the same script with parameters with multiple objects. I’m also going to use this as an opportunity to explain basic game object hierarchy and how you can rotate a parent object to cause all the child objects to rotate as well. Finally, we’ll use this project to show off the MRTK orbital component that will allow us to keep objects in the center of view.

To make this project work, we’re going to create one cube object, an empty game object, six directional lights, and one rotator script. We’re going to use the lights to change the appearance of our cube as it spins. The cube’s standard material will reflect the lights as we change their colors. By using the Orbital component in MRTK, we’ll give the user a way to inspect the object from different angles, even while things are rotating.

Getting the Project Ready

Rather than replicating the steps necessary to create a new project, configure it with MRTK, and set it to deploy to the HoloLens 2, I’m going to refer you to the first 16 steps in the post Building a Unity Project with Speech Recognition using MRTK for a HoloLens 2. The only change we’ll make is that we’ll name the project SpinningCube, since that’s what we’re doing.

Adding the Cube and Lights

[Note: We’ve intentionally continued the numbering throughout the entire post so that we can refer to unique steps in the post.]

Let’s get started by adding our cube and the lights.

  1. Click the GameObject menu and click the Create Empty option.

  2. Right-click the GameObject in the Hierarchy, select the Rename option, then type the new name: Nexus.

  3. Click and hold the Directional Light object and drag it into (under) the Nexus object.

  4. Right-click the Directional Light and click the Rename option in the menu. Enter the name Red Light for the light.
  5. With the Red Light selected, go to the Inspector panel, and enter a position of X 0, Y 10, Z 5 and a rotation of X 90, Y 0, Z 0. This points the light at a spot 5 meters ahead of the camera.

  6. In the light component, double-click the Color option. In the dialog that appears, set the Red (R) to 255, Green (G) to 0, Blue (B) to 0, and make sure that Alpha (A) is set to 255.

  7. Right-click the Red Light object and select Duplicate from the menu. Repeat this process four more times until you have five lights under your Nexus object.
  8. Using the preceding steps (4-7) and the following table, reconfigure the lights you duplicated under Nexus:
     

     Name          Position (X, Y, Z)   Rotation (X, Y, Z)   Color (R, G, B)
     Blue Light    10, 0, 5             0, -90, 0            0, 0, 255
     Yellow Light  -10, 0, 5            0, 90, 0             255, 255, 0
     Purple Light  0, 0, 10             0, -180, 0           255, 0, 255
     Cyan Light    0, 0, 0              0, 0, 0              0, 255, 255
     Green Light   0, -10, 5            -90, 0, 0            0, 255, 0
  9. Create the cube by clicking GameObject, 3D Object, Cube from the menu.

You’ll notice that the cube appears to have a different color on each side. If you check the material for the cube, you’ll see that the material is a neutral color. The colors that appear on the cube are coming from the different lights we added to the scene.
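If you’d rather script the setup than click through the editor, here’s a minimal sketch of creating one of the lights in code – the values mirror steps 3-6; the post itself does all of this in the editor:

    using UnityEngine;

    public class LightBuilder : MonoBehaviour
    {
        void Start()
        {
            // The equivalent of steps 3-6: a red directional light under Nexus.
            var go = new GameObject("Red Light");
            go.transform.SetParent(transform, false); // assumes this script sits on Nexus
            go.transform.localPosition = new Vector3(0f, 10f, 5f);
            go.transform.localEulerAngles = new Vector3(90f, 0f, 0f);

            var redLight = go.AddComponent<Light>();
            redLight.type = LightType.Directional;
            redLight.color = Color.red; // R 255, G 0, B 0
        }
    }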

Adding the Orbital Component

One way to make it possible to see what we’ve done is to add the Orbital component that comes as a part of MRTK. We can use it to keep an object (or collection of objects) in front of the camera. Let’s add the Orbital component to our cube and to our lights separately. If we weren’t going to rotate the cube separately from the lights later, we could move the cube under our Nexus object; but since we want different rotations, we’re going to keep these two objects separate.

  10. In the Hierarchy, select the object we named Nexus.

  11. In the bottom of the Inspector pane, click the Add Component button. Start typing orbital and select Orbital when it appears.
  12. Notice that Unity added a SolverHandler and set it to Head, in addition to the Orbital component. The SolverHandler is necessary so Orbital knows what it’s tracking. In this case, Head is the same as the primary camera.
  13. In the Orbital component, click the Orientation Type dropdown and select Unmodified. This will prevent Orbital from trying to orient the object.
  14. Also in the Orbital component, change the Local Offset’s Y value to zero and the Z value to 2, so the resulting offset is X 0, Y 0, Z 2.
  15. Select the Cube object in the hierarchy and repeat steps 11-14.
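For the curious, the same configuration can also be applied from code. This is a sketch against the MRTK 2.4 solver API as I understand it – treat the type and property names as assumptions to verify in your MRTK version:

    using UnityEngine;
    using Microsoft.MixedReality.Toolkit.Utilities;
    using Microsoft.MixedReality.Toolkit.Utilities.Solvers;

    public class OrbitalSetup : MonoBehaviour
    {
        void Start()
        {
            // Orbital requires a SolverHandler so it knows what to track.
            var handler = gameObject.AddComponent<SolverHandler>();
            handler.TrackedTargetType = TrackedObjectType.Head; // the primary camera

            var orbital = gameObject.AddComponent<Orbital>();
            orbital.OrientationType = SolverOrientationType.Unmodified; // don't re-orient
            orbital.LocalOffset = new Vector3(0f, 0f, 2f);              // two meters ahead
        }
    }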

Testing the App

Now we can test our solution. We can press the Play button to see our cube floating in front of us in the game tab. If we move in the play mode, we can see different sides of our cube. You can right-click in the game window and move your mouse to move around in the 3D game space.

That’s a good start, but let’s create a reusable rotation script.

Adding the Rotator Script

We want to set up two different objects to rotate. First, we want to rotate the cube that we added to the scene; second, we want to rotate the lights in the scene differently. To do that, we’re going to create a rotator script, and in the next section, we’ll attach the script to objects and configure them.

The script will define two public fields – RotationXYZ and RotationRateScale. The first will set the rotation along each axis, and the second will allow us to scale the rotation speed overall without modifying the individual values. The only other part of this script will be in Update() and will use the Transform component’s Rotate method to rotate the object.

Let’s get started.

  16. In the menu, click Assets, Create, and finally C# Script.

  17. Name the script Rotator and press Enter.
  18. Close the MRTK Project Configurator dialog, then double-click your Rotator script to launch Visual Studio.
  19. Inside the top of the class, define the two public variables:
    public Vector3 RotationXYZ = new Vector3(1.0f, 2.0f, 3.0f);
    public float RotationRateScale = 1.0f;
  20. Inside the Update method, add the following code to rotate the object:
    float totalScale = Time.deltaTime * RotationRateScale;
    Vector3 v3 = new Vector3(RotationXYZ.x * totalScale, RotationXYZ.y * totalScale, RotationXYZ.z * totalScale);
    gameObject.transform.Rotate(v3);
  21. Click the Save All button in the toolbar to save your changes to the script.

The script initializes the rotation to a default of X 1, Y 2, Z 3 degrees per second and an overall scale of 1. The Update() method uses Time.deltaTime – a property that reports what fraction of a second has elapsed since the last call to Update(). This is how we can scale the degree of rotation to the framerate Unity is achieving. The code gets the total scaling for the rotation by multiplying our overall rate scale by deltaTime. Then a new Vector3 is created with the needed rotation, and this is applied via the Transform component’s Rotate method.
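For reference, here’s the complete script assembled from the steps above:

    using UnityEngine;

    public class Rotator : MonoBehaviour
    {
        public Vector3 RotationXYZ = new Vector3(1.0f, 2.0f, 3.0f);
        public float RotationRateScale = 1.0f;

        void Update()
        {
            // Scale the per-axis rotation by frame time so speed is framerate-independent.
            float totalScale = Time.deltaTime * RotationRateScale;
            Vector3 v3 = new Vector3(RotationXYZ.x * totalScale,
                                     RotationXYZ.y * totalScale,
                                     RotationXYZ.z * totalScale);
            gameObject.transform.Rotate(v3);
        }
    }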

Attaching the Rotator Script to the Cube and the Nexus

Our rotator script is fully functional – but it’s not connected to anything. We connect it to our cube and to our Nexus object by selecting each object and dragging the script into a blank spot in the Inspector.

  22. Start by clearing the MRTK Project Configurator by clicking the X in the upper right-hand corner.
  23. Select the Cube object in the hierarchy.
  24. Drag the Rotator script into a gap between components in the Inspector pane and release.
  25. Select the Nexus object and repeat step 24.
  26. In the Inspector panel, in the Rotator component, change the Rotation Rate Scale to 10.

By setting the rotation scale to 10 for the lights, you’re causing the lights and the cube to rotate at different rates, and therefore the surfaces of the cube will turn different colors as different lights start acting on them.

Viewing the Final Solution

To view the final solution, simply press the Play button and watch how the Game tab shows your cube spinning – and with the lights spinning at different rates.

You can now go to Nexus and change the rate – or the individual rotation values for each of the X, Y, and Z components – to see different spinning effects. You can also navigate around the scene, and the cube and lights will follow you. You can use this to look at different parts of the cube in the scene.

If you want to get the completed project, it’s available at https://github.com/rlbogue/UnityPublic/tree/master/SpinningCube

Book Review-Reading the Room: Group Dynamics for Coaches and Leaders

Have you ever had that bewildering moment when you’re in a conversation and you suddenly realize that you have no idea what the conversation is about? You’re going along, disagreeing but still conversing, until you reach the moment when you’re aware that you’re not talking about the same thing as the other person, and you wonder exactly how you got here. Reading the Room: Group Dynamics for Coaches and Leaders is designed to help you better understand the dynamics that are in play in a room and be able to observe and react to them better.

I was first introduced to David Kantor’s work via Bill Isaacs’ work, Dialogue. The revelation that was shared was that people speak from three different points of view: power, meaning, and feeling (affect). When people are in the same conversation but speaking from these radically different perspectives, it’s often as if the people in the conversation are talking past each other, unable to even hear, much less process, what the other person is saying. As I was preparing the Confident Change Management course, I decided that I needed to dig a bit deeper into Kantor’s work. I’m glad I did.

Operating Systems

When a computer boots up, it starts the operating system, and thereafter it just becomes a part of the way the computer works. It’s largely unnoticed except when you go to launch a new program. Yet all the time, the operating system is coordinating and shaping your experience with the computer. The operating systems that we use in life are the same. They’re the default assumptions, ways of working, and background of our consciousness. Kantor explains that there are three operating system types:

  • Closed – We believe we have all the answers, and we must share them with the world so they can execute our great ideas.
  • Open – Collectively, we know more than anyone can know. We just need to bring everyone together in a conversation (or dialogue) to expose it.
  • Random – Insights come but only if we’re willing to ignore the structure and work through problems in the way that feels the most natural.

None of these are good or bad – just good or bad for the environment they’re used in.

Communications Domains

The power, meaning, and feeling (affect) I mentioned above make up the communication domains. Some people are concerned with the movement of power, some with the meaning of it all, and some with how it will make people feel. What is important here, as it was with operating systems, is that people don’t realize this is happening.

They have a style of communication that is their preferred style. It is the one they’ll fall back to most often, particularly when stressed. Communicating with people speaking in a different domain can be as foreign as speaking with someone in a foreign language. It takes great concentration and focus to understand what the other person is saying.

Action Modes

The third part of the model (or third layer, if you’d prefer) is the way that someone is responding in a conversation. Here, most of us have more flexibility, but still tend towards one of the following approaches:

  • Move – A drive towards action
  • Follow – Support of a previous move
  • Oppose – To move against the proposed move either by stopping or moving in another direction.
  • Bystand – Watch what is happening and observe, but don’t outwardly act.

Kantor says that people can only take one of these four stances in a conversation. I disagree, because I think it underplays the need for curiosity, inquiry, and understanding. Whether you want to use Motivational Interviewing, The Ethnographic Interview, or some other guide to understanding the position of another person, I believe that it’s essential to communication. So, while I support the attempt to identify archetypical moves, I’m not sure this is a comprehensive list.

Dialogue Mapping exposes the IBIS model of dialogue mapping that includes questions, ideas, pros, and cons. Fundamental to this approach is the question, which is both the central theme for discussion and a way that the problem can be clarified.

Seeing Ghosts

A key, and appropriate, observation of Kantor’s is that sometimes the conversations aren’t about the conversation happening in the room at that time but are instead ghosts left over from our childhood, the stories we told ourselves, and the patterns we were left with. We see reasons to trust where none should be given. (See Trust => Vulnerability => Intimacy, Revised for more on trust.)

Sometimes the challenge with the conversation that’s going on has nothing to do with the here and now. Instead, it’s some remnant of an experience from long ago that left an indelible mark on our psyche – a permanent fixture that we’ll spend the rest of our lives covering up or addressing.

In the Shadows

Shadows are places where the ghosts live. They’re the places that people don’t want to go. They’re spooky, scary, and frightening. However, the greatest risk to our conversations isn’t in not learning a new skill. The greatest risk is in not being able to address those things that hold us back.

Whether these barriers to success surface as ghosts or as undesirable consequences of our default operating modes, learning to shine light in the shadows of our psyche – and then having the discipline to do it – makes us more complete as humans, coworkers, and leaders. Unlike many popular psychology books today, Kantor invites us into the space of our weaknesses, so that we can discover them more fully and learn to address the most grievous issues that are causing us harm.

Courteous Compliance

In Radical Candor, Kim Scott describes a quadrant called “ruinous empathy,” where there’s caring for the other person but no willingness or ability to challenge directly. Kantor calls this “courteous compliance”: you disagree with the conversation but aren’t willing to speak up to have your voice heard. Kantor explains that we need other voices to test and check our perspectives. It’s the silence of dissenting voices that can prove disastrous to the person and to the organization. (See The Difference for more on being inclusive of all voices.)

Control

Often in organizations, you see compensating systems – systems whose purpose is to limit the negative effects of other individuals. Instead of confronting the person directly about the problem, you’re faced with a system designed to work around the limitations of a leader or group. The system might take the form of an additional review meeting, an extra sign-off, or additional activities, but ultimately the goal is to prevent the negative consequences from happening again.

The problem with these systems is that they are necessarily both wasteful and incomplete. They’re wasteful because, if you could correct the root problem – or even create a higher awareness – the system wouldn’t need to exist. They’re incomplete in that they’ll never cover every possible situation.

Chaos

Organizations exist through their ability to keep the chaos of the market and the world on the outside. They resist change, because it’s the status quo that keeps the organization together. People with random operating systems are a threat to the very nature of the organization. They’re called disruptors, and there’s a reason. They disrupt the carefully crafted control of the organization and replace it with just a little chaos.

Organizations resist the chaos that those with a random operating system bring, only to find themselves unresponsive to the broader world until it’s too late.

Model Building

Kantor continues with a conversation about model building – that is, building a way that the organization will function: a leadership model. Though he uses different terms, it’s the same following, fluent, detaching approach that I’ve discussed before that is the heart of the apprentice, journeyman, master trades model. (See Presentation Zen, The Heretic’s Guide to Management, and Story Genius for other places where it’s occurred and Apprentice, Journeyman, Master for a core conversation about the progression.)

He uses the words imitation, constraint, and autonomy for the progression, but the concepts are nearly identical to the following, fluent, and detaching that are more commonly used.

Perhaps it’s because I’ve been through the progression myself that I don’t get locked up with Kantor’s approach to everything. However, I feel as if I’ve gained some appreciation and skills for Reading the Room through reading the book.

The 6 Figure Developer Episode 155: Burnout and Change Management

I recently had an opportunity to join John Callaway on his podcast, The 6 Figure Developer. In this episode, we spend some time discussing the research for Extinguish Burnout. I explain how, in the classical definition of exhaustion, cynicism, and inefficacy, it’s inefficacy that seems to be key to heading toward (or getting out of) burnout. I also discuss the Bathtub Model, which describes our capacity for personal agency, the factors that pour into it, and the demands that draw from it.

We then move on to a brief conversation on change management. I talk about why change management is important even in the realm of technology and review the work that went into the change management course, including the 101 books referenced in the course and the custom programs developed in the process.

You can listen to the full podcast by going here: https://6figuredev.com/podcast/episode-155-burnout-change-mgmt-with-rob-bogue/

Now Available: A One Click Link to Start a OneDrive Sync for a SharePoint Library White Paper

Many companies are transitioning from file shares to SharePoint, especially with the ability to collaborate on files in real time from anywhere. During this transition, it can be useful to migrate the files to SharePoint, then set up OneDrive to synchronize libraries to the local machine. The files end up in a slightly different place, but it’s still familiar to users.

The problem is that, out of the box, there’s no way to get a one-click link to start the synchronization process. That’s why we developed this white paper, “A One Click Link to Start a OneDrive Sync for a SharePoint Library,” with a corresponding web part. We walk you through the little bit of setup it takes to get a one-click link to synchronize a library.

You can get the white paper right now by clicking the link below.

Get the One Click OneDrive Sync white paper

Building a Unity Project with Speech Recognition using MRTK for a HoloLens 2

In this post, I’m going to do a step-by-step walkthrough for building a project that will do speech recognition for a HoloLens 2. Most of these steps are the same anytime you want to use MRTK to recognize speech. In the few places where the settings are specific to the HoloLens 2, I’ll call out what’s specific. Let’s get started.

  1. Launch Unity Hub.

  2. Click the New button and select the version of Unity that you want to use for your project. We’re going to use 2019.4.5f1 for this walkthrough.

  3. Enter the project name and folder for your project, then click Create. We’re going to be using SpeechDemo as our project name.

  4. Once Unity has finished loading (which may take a while), go to Assets, Import Package, Custom Package…

  5. Locate the MRTK Foundation Package that you previously downloaded to your local machine and click OK. In this case, we’re using the 2.4.0 build of MRTK.

  6. When the assets list is displayed, click Import.

  7. After the assets are imported, on the MRTK Project Settings Dialog, click Apply.

  8. Close the MRTK Project Settings Dialog, then go to the Mixed Reality Toolkit menu and select the Add to Scene and Configure… option.

  9. In the File menu, go to Build Settings…

  10. Click the Add Open Scenes button to add the current open scene to the build.
  11. Change the build settings to Universal Windows Platform, and, if you’re using the HoloLens 2, set the Target Device to HoloLens and the Architecture to ARM64. For Build and Run on, select Remote Device. Click the Switch Platform button to complete the switch.

  12. After the platform is switched, the MRTK Project Settings dialog reappears. Click Apply to reapply the MRTK settings, then close the dialog.

  13. In the Build Settings dialog, click the Player Settings button in the lower left-hand corner to show the project’s player settings.
  14. Expand the Publishing Settings section of the Player settings and change the package name to match the name of the project. (This prevents deployments from overwriting other projects and vice versa.)

  15. Scroll down and expand the XR Settings area of the player. Click the plus button and select Windows Mixed Reality. Doing this will allow the project to open in 3D view on a HoloLens instead of in a 2D slate.

  16. After you apply the Windows Mixed Reality settings, the MRTK Project configuration dialog reappears; just close it.
  17. In the Hierarchy, verify that the MixedReality Toolkit object is selected.

  18. In the Inspector, click the Copy & Customize button to create a copy of the profile. This is done because you cannot make changes to the provided profiles. We’ll be making copies of the main profile, the input profile, and the speech profile over the next several steps.

  19. Click Clone to create the new MRTK profile.

  20. In the Inspector tab, go to Input, then click the Clone button.

  21. In the Clone Profile dialog, click Clone.

  22. Expand the Speech section of the input profile and click the Clone button.

  23. In the Clone Profile dialog, click the Clone button.

  24. Click the Add a New Speech Command button. In the new item that appears on the bottom, enter the word hello. This is the word we’re going to use to trigger our script.

  25. Go to the Assets menu, select Create, and C# Script.
  26. If you’ve never configured Unity to use Visual Studio as the editor for scripts, you may want to do that now by going to Edit and then Preferences... (If it’s already configured, you can skip to step 28.)

  27. Go to External Tools and set External Script Editor to Visual Studio – whatever version you have. We’re using Visual Studio 2019 (Enterprise) for this demo. It is functionally the same as the community version for the purposes of this walkthrough.

  28. Double-click the NewBehaviourScript.cs file that was created. Visual Studio appears with the file open.

  29. Add the interface IMixedRealitySpeechHandler to the class declaration, then right-click the squiggles and select Quick Actions to have Visual Studio suggest the using statement for Microsoft.MixedReality.Toolkit.Input. Click this entry or press Enter to accept the change.

  30. In the Start() method, add the line CoreServices.InputSystem?.RegisterHandler<IMixedRealitySpeechHandler>(this); to register this script to receive events. Right-click the squiggles and select Quick Actions to have Visual Studio suggest the using statement for Microsoft.MixedReality.Toolkit. Click this entry or press Enter to accept the change.

  31. Add a new method, OnSpeechKeywordRecognized(), to support the interface, as follows:

    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        switch (eventData.Command.Keyword.ToLower())
        {
            case "hello":
                Debug.Log("Hello World!");
                break;
            default:
                Debug.Log($"Unknown option {eventData.Command.Keyword}");
                break;
        }
    }

  32. Remove the Update() method from the file as it is not used. The completed script appears below:
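    (Reconstructed from steps 29-32 – the class name is Unity’s default – the completed script reads:)

    using UnityEngine;
    using Microsoft.MixedReality.Toolkit;
    using Microsoft.MixedReality.Toolkit.Input;

    public class NewBehaviourScript : MonoBehaviour, IMixedRealitySpeechHandler
    {
        void Start()
        {
            // Register this script with MRTK's input system to receive speech events.
            CoreServices.InputSystem?.RegisterHandler<IMixedRealitySpeechHandler>(this);
        }

        public void OnSpeechKeywordRecognized(SpeechEventData eventData)
        {
            switch (eventData.Command.Keyword.ToLower())
            {
                case "hello":
                    Debug.Log("Hello World!");
                    break;
                default:
                    Debug.Log($"Unknown option {eventData.Command.Keyword}");
                    break;
            }
        }
    }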

  33. Save the files.
  34. The MRTK Project Configuration Dialog will appear; just close it.
  35. Associate the script with the MixedReality Toolkit object in the game hierarchy by dragging it between two components in the object’s Inspector window.

  36. Run the application and say “hello.” In the debug window, you’ll see Hello World!

    If you want to get a copy of the final project, it’s available at https://github.com/rlbogue/UnityPublic/tree/master/SpeechDemo

Book Review-One Minute to Midnight

It was the closest that the world had ever come to a global nuclear war, and it started in America’s back yard. Metaphorically speaking, it was just one minute from the end of the atomic day. The clock advanced to just one minute before midnight, a whisper from the end of the world. Then slowly, magically, it receded to a spot where both sides stepped back from the abyss and found a way towards peace. It was a peace that would start the world on a track of lower risk of mutually-assured destruction.

The time spent one minute from midnight started on October 16th, 1962, when the President of the United States, John F. Kennedy, was notified that we had aerial reconnaissance confirmation of Soviet missiles in Cuba. The Cuban Missile Crisis had begun, and it had the effect of advancing the atomic clock to One Minute to Midnight.

The Story

In brief, the Soviets had worked with the Cuban leader, Fidel Castro, in a partnership that put medium-range ballistic missiles (MRBMs) on Cuban soil aimed at the United States. Castro had suffered intrusions into the Cuban state through US-sponsored incursions, most notably the Bay of Pigs. The relationship with the Soviet Union was a way of protecting himself from the US, and at the same time it allowed Nikita Khrushchev a way to give the US back some of what it was giving to Moscow. The US had deployed MRBMs to Turkey – roughly the same distance from Moscow as Cuba is from Washington, D.C.

The situation was ultimately resolved through a blockade and subsequent diplomacy, but not before having nearly two weeks of very tense moments. The missiles were removed from Cuba and the US agreed to remove the missiles from Turkey.

That’s the history lesson and the context of the book. However, in addition to the twists and turns the story takes, there’s a second story that’s told of how our world has changed and how it has stayed the same.

Communications

Perhaps the most striking observation was the change in communications from then to now. Commands relayed from Washington could take 6-8 hours to make it to the commanders of the Navy stationed in the Gulf of Mexico. Official communication to the Soviet Union could take 12 hours or more. Even before the red phone was installed to provide direct communication between the US and the Soviet Union, we had improved communications dramatically.

Today, we take for granted that we can reach out and contact anyone on the planet in a matter of minutes if not seconds. We have video calls with friends and colleagues half a world away. We expect that our messages will arrive nearly instantaneously and that everyone has access to the internet in one way or another. However, at the time, the internet wasn’t a thing. It wasn’t even a wish.

One of the major challenges for the Soviet submarine commanders was the requirement that they surface to communicate with Moscow each day. While the timing – midnight in Moscow – made perfect sense for conflicts centered around Moscow, it left them very vulnerable in the Atlantic, where it meant surfacing in daylight.

Time and Distance

Never had the Soviet Union deployed ships and troops in such quantities so far away. Even simple challenges like communications became onerous: they needed precise time signals, but Moscow’s were too weak to receive. Instead, they had to take their time signals from US sources – unbeknownst to the US Army.

Intelligence

It took nearly 30 hours for the US to notice that the Soviet ships on their way to Cuba had turned around and started heading home – after the initial awareness that the US knew of the missiles and Khrushchev started pulling back. Still, there was a spy providing the US with lots of useful information, including the technical manual for the missiles being deployed to Cuba. We also had a sophisticated (for the time) set of listening posts that made it possible to detect the location of Soviet submarines without their knowledge.

Spy planes, including the U-2, were used to gather aerial reconnaissance. (See The Complete Book of the SR-71 Blackbird for more about spy planes.) Where now we have satellites orbiting to safely photograph locations of interest, back then, we had to put people at risk to gather the photographic intelligence we needed to make decisions.

What we knew was mostly wrong – particularly as it pertained to the number of nuclear warheads that were in Cuba and the troop deployment. Moreover, we had dramatically overestimated the Soviet nuclear capacity. Where we underestimated the deployment strength, we vastly overestimated the total strength.

Missiles

The crisis wasn’t really about the ability to hit the US from Cuba. The truth was, as Kennedy was aware, that you were dead whether the nuclear warhead was delivered by an intercontinental ballistic missile (ICBM) or an MRBM. Kennedy never liked the Jupiter missiles deployed to Turkey, and he tried to remove them – but he was always blocked. His “ace in the hole” was the Minuteman ICBMs scattered throughout Montana, Wyoming, and the Dakotas. Where the Jupiter missiles were mounted above ground and took 15-30 minutes to fuel, the Minuteman missiles sat in underground silos and were ready to launch “within minutes.” Their farm configuration – which spread the missile silos over large areas of sparsely populated space – made them difficult for the Soviets to wipe out in an initial attack scenario.

The missiles in Cuba were a pawn of the much larger nuclear one-upmanship that the two superpowers had been playing. It was the case of American imperialism against communist solidarity. The missiles weren’t the point – the fact that the US was being threatened was.

Cuba’s Castro

Ninety percent of Cuba was owned in some way by United States companies or individuals before the revolution. Cuba’s liberation meant that the government seized the assets of foreign owners for state control – and even despite this grab of economic power, the country nearly collapsed. Castro’s revolution was a success – barely – but his economy was a wreck. He was intent on doing whatever it took to ensure that the economy survived, so that the country would survive under his leadership.

He was, however, a revolutionary at heart, and as such, he was willing to go to much greater extremes than either the US or his Soviet counterparts. Where the US soldier wouldn’t tolerate poor conditions and as much as one-third of the soldiers becoming ill, this was tolerable for the Soviet troops. The Soviets had done testing on their own people with regard to the impacts of nuclear radiation. Many died as a result of their radiation exposure. Castro knew the impacts of nuclear radiation and was willing to poison his country for decades to stop an invading US force.

The Soviets brought more with them than the MRBMs. They brought tactical nuclear weapons that would wipe out an invading force – but not without rather permanent and lasting damage to Cuba’s habitability. This didn’t seem to bother either the Soviet suppliers or the Cuban dictator, who seemed locked in his revolutionary ways and the belief that winning was all that mattered.

The Consequences of Nuclear War

Kennedy and Khrushchev were both painfully aware that there was no such thing as a limited nuclear war. They knew that once the first weapon was fired (even inadvertently), there would likely be no turning back. Where Castro seemed intent on using whatever means necessary, both leaders saw their roles in history differently. They felt that if they stepped too far forward, there would be nothing to step back to.

What does it mean to be the victor when the world is destroyed, they wondered. Victory is hollow when it is only to survive longer before inevitable death.

Communism

The threat to democracy was communism. There was a belief that it just might be a better system of government, and that the US’s democratic approach was bound to be buried by communist efficiency. Where Khrushchev made promises to crush the US economically, we now know that this was just bluster. That didn’t stop the inquiries at the time or the fear that our way of living might be changed by forces outside our control.

It’s interesting to me as I compare it to Microsoft’s response to Linux in the 1990s. Linux was a real threat to Microsoft’s Windows desktop market – only to be revealed to be a non-issue. Microsoft did lose some market share to Linux in the server market, but this was hardly as pervasive or as redefining as it was anticipated to be.

When you’re standing too close to the problem, you fail to put it into a proper perspective.

Kennedy

JFK is a hero. However, his image is much larger than the real-life person. His handling of the crisis, his push to the Moon, and his famous speeches anchored a place for him in the American psyche. Having been assassinated, he didn’t have to accept the messiness of the fall from grace. However, when you look deeper, you see parts of the man that don’t reflect the hero image.

His medical issues were a secret to me until One Minute to Midnight. I never realized all the care that he was receiving behind the scenes to remain functional. I recognize this host of problems as the result of stress and incongruency in his world – something that the doctors at the time didn’t appear to be aware of. However, the man that spoke for everyone in America was as fallible as any other man.

There are the stories that you hear about JFK and his infidelity. Marilyn Monroe’s relationship with him – including the alleged sexual relationship – is well known. His string of sexual encounters is also well established. However, the relationship with his former neighbor, who was also the former wife of a senior CIA official, was an aspect I had not previously been aware of.

I can only believe that these were different times for different people, when it was expected that men, particularly powerful men, would have affairs. I don’t understand it or how it would be acceptable to the wives, but it’s far from the last time that a politician – or sitting president – would have an indiscretion that the wife knew about and either condoned or concealed. (Think Bill Clinton.)

I don’t know that we’ll ever get to the same place that we were with the Cuban Missile Crisis. The Soviet Union’s attempts to keep pace with the US economy and defense spending broke it. Communism, it seems, wasn’t as great as it was made out to be. What I do remember from my history classes is that those who don’t know history are doomed to repeat it – and that applies to more than high school history. If for no other reason than avoiding the possibility of nuclear war, perhaps it’s time to give some thought to One Minute to Midnight.

Unity Packages Overwrite One Another On HoloLens and HoloLens 2

By default, Unity packages will overwrite one another as you deploy them through Visual Studio to the HoloLens and HoloLens 2. This is due to the package name defaulting to Template3D for all new projects created via the 3D Template project.

To fix the problem, go to Build Settings and press the Project Settings button in the lower-left corner, or go to Edit – Project Settings…

In the project settings dialog, navigate to Player (on the left), expand the Publishing Settings section, then change the value in the package name.

You’ll need to delete the directory that you created the build in (the directory you use when you press the Build button on the Build Settings dialog). Allow Unity to recreate the directory, compile, and deploy via Visual Studio as normal.

Book Review-Kirkpatrick’s Four Levels of Training Evaluation

Sometimes, you say a thing and it just catches on. It’s a moment of insight that gets frozen in time like a mosquito in amber, and later you realize just what you have. Kirkpatrick’s Four Levels of Training Evaluation is like this. It’s a simple framework for evaluating the efficacy of your training program. Don Kirkpatrick uttered the words: reaction, learning, behavior, and results. His son and his son’s wife take up these words, refine the meaning that the industry gave to them, and adjust them back towards their original intent.

The Levels

Despite the fact that Don never uttered the words as levels, others added the label to the descriptors, and eventually people began calling it the “Kirkpatrick Model.” It stuck. Today, professionals speak about the levels of evaluation like this:

  • Level 1: Reaction – Did the students report that they enjoyed the learning experience, including the materials, the instructor, the environment, and so on?
  • Level 2: Learning – Did the students learn the material that was delivered? This is the typical assessment process that we’re used to completing to report successful completion of a course, but it’s more than that. It’s a question of whether we learned anything we can retain after the class and the test are long over.
  • Level 3: Behavior – Ideally when we’re training, we’ve identified the key behaviors that we want to see changed. Level 3 is the measurement of the change in the behavior.
  • Level 4: Results – Did the change in behaviors create the desired outcome? Are we able, as training professionals, to demonstrate that what we’re doing has value to the organization in a real and tangible way?

The Process called ADDIE

Many instructional designers use a design process called ADDIE after the steps in the process:

  • Analysis – What results do we want, what behaviors need to change to support that, and what skills need to be taught to change the behaviors? (Here, I’d recommend looking at The Ethnographic Interview and Motivational Interviewing for tools you can use.)
  • Design – What kinds of instructional elements and approaches will be used to create the skills and behaviors that are necessary to accomplish the goal? (Here, Efficiency in Learning, Job Aids and Performance Support, The Art of Explanation, and Infographics are all good resources.)
  • Development – The long process of developing each of the individual elements of the course.
  • Implement – Implementation is the execution of the training, either instructor led or in a learning management system.
  • Evaluate – Assess the efficacy of the program – and, ideally, revise it.

If you’re unfamiliar with the course development process or you’d like to explore it in more detail, our white paper, “The Actors in Training Development,” can help you orient to the roles in the process and what they do.

The untold truth is that, in most cases, the process is rushed and hurried, and many of the steps are skipped or given insufficient attention. Rarely does an organization even have someone with instructional design training, much less the time to do the process right. There’s always more training that needs to be developed and never enough time. The Kirkpatricks are driving home an even more telling point. The evaluation process – how you’re going to assess the efficacy – needs to be planned for during the analysis and design phases. The development and implementation phases need to consider the conditions that will be necessary to get good evaluation results. Evaluation isn’t something that can be bolted on at the end with good results.

It’s sort of like Wile E. Coyote strapping a rocket to his back and hoping to catch the roadrunner. It always seems to end badly, because he never seems to think through the whole plan.

Leading and Lagging Indicators

I learned about the horrors of metrics through The Tyranny of Metrics but learned real tools for how to create metrics through How to Measure Anything. However, it was Richard Hackman who really got me thinking about leading and lagging indicators in his book, Collaborative Intelligence. He was focused not just on how to make teams effective in the short term but how to create teams where their performance remains good and keeps getting better. He was talking about the results as a lagging measure, an outcome from the creation of the right kind of team. Influencer picked up and reinforced the concept. We need to look not just at the outputs that we want but the behaviors that we believe will drive those outcomes.

It’s all too easy, as you’re working on developing the metrics for your training, to focus on the lagging metrics and say that you don’t have enough influence on them. After all, you can’t take responsibility for sales improvement. Some of that’s got to be up to the sales manager. And you certainly don’t want to say that your training sucked if sales dropped after salespeople took the course. As a result, training professionals too often shy away from the very metrics that are necessary to keep their place in the organization when there’s a downturn. Instead of being seen as an essential ingredient to success, they’re seen as overhead.

By focusing on a mixture of both leading indicators and lagging indicators, training professionals can get to an appropriate degree of accountability for end performance. Leading indicators are – or at least should be – behaviors. They should be the same behaviors that were identified as a part of the analysis phase as needing to be changed. These should be very highly impacted by the training. The lagging measures are the business outcomes that also should have been a part of the analysis process – but are further from the learning professionals’ control.

Waypower

While hope alone won’t produce good outcomes, there’s a bit we can learn from The Psychology of Hope with regard to training’s role in the process of changing behaviors. In The Psychology of Hope, Snyder explains that hope is made of two components: willpower and waypower. Willpower is what you’d expect. It’s the desire, perseverance, or grit to get things done. (See Willpower or Grit for more.)

Waypower is different. It’s the knowledge of how to get to the other side. It’s the knowledge of the how that learning professionals can help individuals with. It’s waypower that training professionals give to their students every day. This may be used for the purposes of some corporate objective, but in the end, it’s a way of creating hope in the minds of the students that they can get it done if only they try. (Here, a proper mindset is important, too, as explained in Mindset.)

Application

There’s nearly zero research on the relationship between overall performance on the job and well-trained, knowledgeable people. The problem is that we don’t really know how much training really matters. What we do know, however, is that the application of the skills and behaviors taught in the classroom doesn’t always happen. The problem is called “far transfer,” and it’s a relative secret that what we teach in classrooms doesn’t always get applied to the real world. (If you’re interested in some other relative secrets in the training industry, check out our white paper, “Measuring Learning Effectiveness.”)

There’s an absolutely essential need to consider how the skills being taught in the course can – and will – be applied by the student in the real world. Discussions, case studies, and conversations make for learning experiences that tend to still be applied long after the training has been completed.

About the Questions

The book wouldn’t be complete without some guidance on how to write actual evaluation questions, including avoiding superlatives and redundant adjectives when evaluating in a scale – and ensuring that the scale matches the type of question being asked. Question authors are encouraged to keep the questions focused on the learning experience rather than the instructor or environment to get better answers.

The real question for you is: will you read Kirkpatrick’s Four Levels of Training Evaluation and apply it to the way you evaluate your training?