
Most people believe that professional learning is one thing.  It’s not.  It is, in fact, at least two distinct things – each with its own objectives and needs.

Jobs to Be Done

Clayton Christensen made the idea of “Jobs to be done” popular in The Innovator’s DNA and Competing Against Luck.  Fundamentally, it’s the question of what a product or service is really being asked to do – what job it is being hired for.  It builds on the idea that consumers don’t really want ¼” drill bits – they want ¼” holes.  Christensen takes that further with the example of a milkshake.

In his research, he found two different uses for milkshakes – each requiring different characteristics.  In the morning, they’re used to stave off hunger until lunch, so the thicker they are, the better.  In the afternoon, they’re a treat for children – and because parents want to move on to the next thing, the thinner the shake, the quicker children can finish it.

On the surface, it’s one thing: a milkshake.  However, on closer examination, it’s two things: a food alternative and a treat.  When we look at training, we’re looking at two things with very different goals.

Degrees, Credentials, and Certificates

The first kind of learning that everyone is familiar with is the kind that leads to degrees, credentials, and certificates.  The point of these is to get a job, a better job, a promotion, or more money.  The truth is that the learning isn’t the goal; the job opportunities are.  It’s through that lens that we must view learning for professional gain and recognition.

What matters is the “paper.”  The learner will do what’s necessary to get it.  The learning is a requirement for it, so the learner does it – not necessarily because they’re interested in the learning, but because they’re interested in the opportunities the “paper” represents.

The point of the “paper” – whether a degree, credential, or certificate – is that the employers value it.  If they value it, they’re willing to hire, promote, and pay for it.  Protecting the value proposition for the “paper” is essential.  That’s why colleges are accredited by various bodies and why credentials are based on certifications and experience.

Generally speaking, degrees command the greatest reward because, in part, they’re seen as requiring greater effort and carrying greater validation.  Certifications are less work but are still validated with some sort of reasonable assessment that the learning objectives have been internalized.  Their lower effort, combined with validated knowledge, still commands a substantial – if smaller than a degree’s – response from employers.  Certificates meet a lower bar of simply having completed the work, without meaningful validation that the information has been learned – and as a result they command less pay and promotion than the other two.

What’s reasonable validation that the learning – which the company wants – has been accomplished?  It can be demonstrating the skills, but in most cases today, a multiple-choice, question-and-answer test is used as a proxy, because implementing testing of actual skills is too expensive and prone to biases.  A key distinction between the kind of testing performed for degrees and certifications is that these exams are proctored.  That is, someone is watching the student and validating their identity to eliminate any kind of cheating.  While certificates may have testing associated with them, the testing is largely perfunctory, and therefore the test isn’t proctored.

Most of the time, organizations are asking the “paper” to serve as a proxy for some sort of business skill.  Bachelor’s degrees are used as a proxy for the degree to which someone can learn on their own and manage themselves.  (This can either be explicit or, more often, implicitly expected.)  Credentials are a proxy for the industry-relevant skills that make the professional conversant and capable.  Certificates are often used to demonstrate a much smaller set of skills – with less validation of the learning.

Microlearning

All the rage in training and development circles for more than a decade, microlearning is a response to too much work being done on the kinds of learning that lead to better jobs, roles, and remuneration (pay).  The concept is that people don’t want to spend a week in class becoming expert information architects; they just want to know how to create a search page – something they might be taught in 10 or 15 minutes.  The focus is on “how do I get my immediate job done?” – rather than becoming competent in a new profession or developing a specialization within one.

Just as there are legitimate reasons why someone would need to learn a new profession, there are legitimate reasons that they would need to learn how to do a single, focused thing.  In today’s world, expediency is necessary for agility.  We can’t afford to overtrain someone who just needs a small piece of knowledge.

The key difference between microlearning and broader kinds of professional development is that microlearning has very tiny objectives for what is being learned.  In most cases, this learning is about how to do something and less about when and why.  The simple reason for this is the same observation that Benjamin Bloom and his colleagues made in 1956 when they created the taxonomy of hierarchical learning objectives (in the cognitive domain).  Objectives like recognition and recall are quick and easy to teach.  Objectives like building the capacity for analysis and synthesis in a space require substantially more effort.

Microlearning is often the correct thing when people just need to know how to do something and aren’t necessarily engaged as a part of evaluating when or why the task should be done.  This, of course, exposes them to the potential for using the “how” when the ethics of the situation might dictate something else.  Without the context to evaluate when and why, there’s no way to evaluate the ethical and moral considerations for the task.

In the end, the criticism of microlearning is that it denies the learner the broader context that they may need.  One of the proposed solutions to this is to stitch together different bits of microlearning into a learning path.

The Legitimacy of Learning Paths

The answer to the gap between these two learning objectives is supposed to be learning paths, but using them as the bridge oversimplifies the gap that exists between the two kinds of learning – and it introduces a problem of overlapping coverage.

Before I graduated high school, I was taking software development courses.  I thought that each new language I learned increased my marketability and improved my skills.  I learned COBOL and FORTRAN.  I learned BASIC and C.  This provided me with a great deal of syntactical knowledge about the various languages and how to write a program in them.  However, because I spent so much time learning the syntax, I didn’t learn the finer points of software design – regrettably, I wouldn’t learn that until much later in my career.

The point is that learning four languages isn’t the same as learning one language and the advanced techniques that can be used in that language to improve the probability of creating performant, reliable, and maintainable software.  Similarly, learning 20 ways to do something doesn’t help me learn when to do it or why.

Learning paths work – but only when they’re stitching together a set of individual “how-to” skills that can be leveraged in concert with one another.  Learning the basics of networking and the basics of databases helps if you’re building an application that needs both – but it doesn’t teach good design.

Another problem with learning paths is that they don’t really operate at the level of the skill.  Learning paths stitch together micro-courses.  These courses each have their own skills – and sometimes those skills overlap and sometimes there are gaps.  Consider the idea of network skills as an example.

One course might teach the basics of TCP/IP with emphasis on setting an IP address, subnet mask, and default gateway as a mechanism for basic communications.  Another course might teach how to wire together an ethernet network, including cable lengths and how to crimp on connectors.  Neither course covers how the physical addresses used in ethernet are converted to the logical addresses of TCP/IP.  Both may cover a bit of what the other covers – or they may leave gaps.
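To make that gap concrete, here’s a minimal Python sketch – my own illustration, with assumed addresses and an assumed function name, not material from either course – of the decision that sits between the two: given a host’s IP address, subnet mask, and default gateway, decide whether a destination is on the local subnet (in which case the destination itself must be resolved to a physical ethernet address, e.g., via ARP) or whether the packet should be handed to the default gateway instead.

import ipaddress

def next_hop(local_ip, subnet_mask, default_gateway, destination_ip):
    # Build the local network from the host's IP address and subnet mask.
    network = ipaddress.ip_network(f"{local_ip}/{subnet_mask}", strict=False)
    destination = ipaddress.ip_address(destination_ip)
    # On-subnet destinations are resolved to a physical address directly;
    # everything else is sent to the default gateway, which is what gets
    # resolved instead.
    return destination_ip if destination in network else default_gateway

# Illustrative (assumed) addresses.
print(next_hop("192.168.1.10", "255.255.255.0", "192.168.1.1", "192.168.1.42"))  # -> 192.168.1.42
print(next_hop("192.168.1.10", "255.255.255.0", "192.168.1.1", "8.8.8.8"))       # -> 192.168.1.1

The sketch only captures the routing decision; the actual conversion of a logical IP address to a physical ethernet address is ARP’s job – which is exactly the piece that neither course covers.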

The result is that the learner either isn’t given the pieces to connect one topic to the other or is given duplicate – and often slightly conflicting – coverage of the same topic.  Both are disorienting to the learner – either because they can’t navigate the gap or because they don’t know which course to “believe.”  Or, said less strongly, they may question which model of networking is better for them to use.

Learning paths are absolutely valuable – but they’re not a solution to every problem.  Another solution, badges, is used to deal with the motivation problem for non-essential microlearning.

We Don’t Need No Stinkin’ Badges

Building on the fact that people respond well to recognition, learning management systems have caused badges to proliferate.  Every course completion conveys a completion badge, as do completing multiple courses, completing a series of courses in a learning path, etc.  The result is that we’re devaluing the badges themselves.

In 1968, Frederick Herzberg published an article in Harvard Business Review, “One More Time: How Do You Motivate Employees?”  His results were clear.  Achievement and recognition topped the list of things to do to motivate employees.  As the most requested reprint of any HBR article, it’s a critical piece of what we know about human behavior in organizations – even if it’s a bit dated.

The result was that we started giving paper awards for all sorts of things, and eventually people’s cubicles became memorials to them.  When learning management systems made it easier to automate the process, we started adding badges to everything – and as a result, we made it impossible to tell what any given badge even meant.  Instead of individual badges having meaning, it became a game of collecting the most.

More importantly, because the badges were being given for taking a class – but not necessarily testing to prove competence or even recall of the material – they became participation trophies.  You didn’t have to learn or develop skills, you just had to show up, and that’s not what organizations need or are willing to pay for.

Confusing One for the Other

One of the challenges of the learning and development industry is understanding whether a learning objective falls into the former or the latter category.  When are we trying to prepare someone to be successful in a new space or profession, and when are we just trying to provide them with a missing skill?  In the rush to make things easier with fewer barriers, we sometimes swap in microlearning for situations where it isn’t the right answer.  Conversely, we have historically forced people through longer programs than they needed because we wanted them to have the complete picture – when that wasn’t what they needed.

When you’re offering training, make sure you know which you’re offering.
