The Tech Behind Pathfinder Labs


What we built and why we built it

The Rundown

There are more than a few statistics and engineering concepts ahead, so for those who want to skip to the pictures:


  • There is more to feedback loops than meets the eye, and they are worth our time. In addition to giving service providers testimonials and ideas for their programs, there is scientific evidence that written feedback - particularly with an organizational response - has a positive psychological effect on the writer.
  • Feedback loops also have issues: they often carry statistical bias and user-generated errors on the part of both the writer and the reader. These issues are simply part of being human, and they are nearly impossible to filter and properly analyze without the help of a machine.
  • We built that machine! We created a system of process algorithms, statistics, and some pretty cool (we think) programming.
    • Linguistic Inquiry and Word Count (LIWC) analysis, to learn about everything from motivation to what the writer really thinks about programs.
    • Natural Language Processing computations, to figure out elements of personality and program impact while minimizing errors and survey bias caused by us being human, like self-censoring and self-perception.
    • Mathematical Algorithms, centered around clinically validated research in psychology, to compute standardized scales for program impact.
    • Machine Learning, both to adjust and update the algorithms and to identify patterns leading to successful community development.
Taking qualitative input - feedback - and turning it into standardized, quantitative metrics can seem like witchcraft. We made this page to back up the magic with science.

Why we decided to bring tech into feedback analysis

Image of Pathfinder App

It Starts With a Feedback Loop

As detailed on the About page, everything Pathfinder Labs does is centered around the concept of closing feedback loops.

The purpose of Pathfinder's technology is to identify patterns and latent trends in feedback, helping providers and participants visualize and understand program impact and optimize their growth.

When an organization provides a service - holds an event, offers a treatment series, conducts training, or whatever its specialty may be - the best way to track program efficacy is to ask for participant feedback. That's nothing new; many products, services, restaurants, and even government policy writers ask for feedback to make sure they are meeting the needs of their customers.

Honest, thorough feedback offers benefits throughout the community:

  • The provider learns the participant's opinions and experiences, identifying elements of the program they might want to highlight or potentially change
  • The participant has an outlet for voicing their concerns, making them feel more connected to the provider
  • Others reading the feedback gain an idea of what to expect, enabling better decision-making
If the provider can respond to the feedback, acknowledging the voice of the participant, it provides validation that the organization is listening and values meeting the needs of those seeking services. This builds community and connection, and can both reinforce feelings of positive experiences and reduce the impact of negative ones.

Once the participant knows the feedback was considered and the organization knows areas where they excel and where they can improve, the loop is considered closed. As more feedback loops are closed, the provider begins to see patterns that help to prioritize what program changes might have the largest effect on opinion.

Additional benefits and psychological effects of feedback loops can be found in articles on our Research page.


Technology Makes Feedback Better, Faster, and Stronger

A closed feedback loop is a great start to guiding improvements in individuals, programs, and communities. What technology can do is reduce or eliminate problematic factors; the same opinions and emotions that make feedback valuable as an outlet and community builder also make it much more challenging to use as a tool for making efficient, objective decisions.

A trained, calibrated technology platform can help mitigate:

  • Writer (Emotional Influence) Bias. Writing feedback while emotional can skew the review toward whichever aspects - good or bad - created the emotion, omitting the opposite.
  • Reader (Perceived Emotion) Bias. Seeing a positive or negative rating creates emotion in the reader, leading them to place undue emphasis on the content.
  • Sample Errors. Humans cannot randomly select testimonials for analysis, nor can they ensure representative sample sizes from each demographic group.
  • Comparative Errors. Readers can only process the feedback through the lens of what else they have read. This may limit the amount of information they can identify.
  • Filter Errors. Humans are limited in how accurately and objectively they can sort information into groups. We risk making "apples to oranges" comparisons without the ability to choose the best filters.
  • Self-Report (Self-Perception) Bias. Asking someone to accurately perceive - and report - how they feel or behave invites both conscious and unconscious self-censoring. For instance:
    • We might be concerned our answers will be seen and make the reader unhappy or angry
    • Current mood can lead us to view ourselves in a different light on a different day, making results inconsistent
    • We might want to be viewed in a particular light, so we answer subjective questions (anything from pain to conscientiousness) in ways that skew the results
Bottom line? Feedback is awesome, but proper feedback - and its analysis - can get really complicated, really quickly. That's where we step in.

The Technology That Makes Us Tick


What it looks for

We are interested in impact: what effect does a program have on the different elements making up its participants' well-being? What motivates them to seek services, or return to events and appointments? Is the program having a lasting effect on their anxiety and resilience, or creating diligent habits?

This is all about personality. Personality prompts us to behave in certain ways; those behaviors generate emotions, and those emotions in turn shape our behaviors. These are all linked, but the one that is hardest to change, and at the base of it all, is personality. If a provider or series of providers can impact emotions and behaviors enough, they may be able to influence a shift in personality. Reduce stress often enough and you may eventually find you are less predisposed to anxiety. Your baseline is lower: the anxiety facet of emotional range (or neuroticism) is reduced.

We reasoned we couldn't very well ask someone to do a full personality test every time they provided feedback, and we wouldn't be likely to get accurate or standardized responses anyway (see the first section).

Computers and math and statistics and data engineering to the rescue! Yeah, we're nerds. It's cool.

What it is

There are three primary components that make up the Pathfinder Labs technology:

  • Linguistic Analytics. There are several systems out there for counting and categorizing words in a block of text to measure tone, analytical thinking, influence, and authenticity. We use a combined method, pioneered by psychologists and repeatedly tested against traditional assessment methods; it is peer reviewed and verified for objectively assessing individual and aggregated changes in these variables and their components. A minimal sketch of this kind of category counting appears just after this list.
  • Natural Language Processing (NLP) Technologies. Generally speaking, NLP is the examination of all components of speech, including syntax and connector (or "stop") words. Text is compared to a corpus - a large collection of text samples from a source already validated for the purpose at hand - and patterns are identified for measuring the probability of results similar to what was measured in the corpus. NLP can be found in phone predictive text, customer service bots, and - in our case - personality metrics.
  • Supervised and Unsupervised Machine Learning. Machine Learning is a broad term for software that is programmed to adapt without being recoded. If the program is designed to play checkers against a human, for instance, it might "learn" that a particular human opens by moving to the center 70% of the time and then fills the empty spot 55% of the time, and so on. Depending on whether the computer won or lost the game when the human played a particular sequence, the software can calculate and recalculate the best probable paths each time. In this way, the software "learns" to win more consistently while making fewer moves. This is supervised machine learning: the outcome (winning the game) is known and can be mapped based on a sample - or training - data set. Unsupervised machine learning is when the "best" outcome is not known, because the machine is learning on the fly through patterns. If the computer didn't know the basic rules of checkers, for instance, it might learn by noting the opponent always moves along a diagonal. Pathfinder technology uses both kinds of machine learning; the second sketch below shows the difference in a few lines of code. We can also play a mean game of checkers.

LIWC, NLP, and two types of machine learning show us a world of numbers from a block of text like feedback. Then it's time to mix in research, evidence and clinical studies, and our proprietary algorithms.
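To make the Linguistic Analytics piece a little more concrete, here is a minimal sketch of dictionary-based category counting in the spirit of LIWC. The categories and word lists are invented for illustration only; the real dictionaries are licensed, far larger, and backed by the validation work described above.

```python
import re
from collections import Counter

# Toy category dictionaries - illustrative only, not the licensed LIWC lexicons.
CATEGORIES = {
    "positive_emotion": {"great", "helpful", "welcoming", "hope", "proud"},
    "negative_emotion": {"stress", "anxious", "alone", "frustrated"},
    "social": {"friend", "group", "community", "family", "mentor"},
}

def categorize(text: str) -> dict:
    """Count how often each category's words appear, as a share of all words."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    counts = Counter()
    for word in words:
        for category, vocab in CATEGORIES.items():
            if word in vocab:
                counts[category] += 1
    # Express each category as a percentage of total words, LIWC-style.
    return {category: 100 * counts[category] / total for category in CATEGORIES}

feedback = "The group was welcoming and I felt less alone. My mentor gave me hope."
print(categorize(feedback))
```

Real systems also handle word stems, negations, and function words; this only shows the counting idea the scores are built on.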
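The difference between the two kinds of machine learning can be sketched just as briefly. The example below uses scikit-learn on made-up numeric features (imagine each row is a text block reduced to a few category scores); the labels, features, and model choices are placeholders, not our production pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Pretend each row is a text block reduced to three category scores.
rng = np.random.default_rng(0)
X = rng.random((60, 3))

# Supervised: the outcome for the training data is known (say, "returned for
# another appointment" yes/no), so the model learns a mapping to that label.
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)  # stand-in labels for the sketch
clf = LogisticRegression().fit(X, y)
print("predicted outcomes:", clf.predict(X[:5]))

# Unsupervised: no labels at all - the model just looks for structure, grouping
# similar text blocks so we can inspect what the clusters have in common.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignments:", km.labels_[:5])
```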

What it does

Hold onto your hats, because here's where it gets fun.

  1. Step 1: Distill the Feedback. As users contribute feedback, we break the text down in a variety of configurations. We write code to sort by demographics, survey responses, submission date, and a variety of other criteria, and once the text blocks are grouped and regrouped and grouped again, we run the LIWC and NLP software.
    Security Note: We need a few hundred words for valid NLP measurements, so we never run NLP analysis for a provider with just one individual's writing. This means no result shown to a provider will ever be representative of a single person, further protecting anonymity and individual results.
  2. Step 2: Standardize the Feedback. Our software examines how the language in the text block compares against over 200,000 writing samples, and generates scores for over 50 personality metrics and several hundred LIWC categorizations. All of the samples were scored using clinical, survey, or other reported methods. The scores generated are percentiles: for example, an extroversion score of 85 means the text might show indicators of the trait more often than 85% of the samples. Together, the percentile method and the very large sample size provide a standard measurement, minimizing self-report bias. (The first sketch after this list shows the percentile idea.)
  3. Step 3: Store the Data. We have a nonrelational database. Basically, that means all of the variables - like demographic data and probability of a trait - are sitting in a soup, not associated with each other or the original writer, until our code fishes them out. We store each LIWC and NLP result raw - they go straight from the software into the soup.
  4. Step 4: Check Out the Al Gore Rhythms. Algorithms - the computations and operations done to the variables to create the program outputs - process the data. Our formulas originate with clinically verified statistical models that measure aspects of personality such as resilience or optimism. We use these studies to determine how much one facet of personality might contribute to a particular trait, coming up with formulas that give weight to each facet. Our algorithms then pull the appropriate raw numbers from the data soup and do the math. (The second sketch after this list shows the weighting idea.)
  5. Step 5: Let the Machines Learn. Some of our systems are supervised: the program can adjust the weights if too many people come back and say our optimism score is too high, for instance. Some of our systems are unsupervised: the program may create new scales if it begins to see that Veterans who left the military more than five years ago are less diligent than their friends who are still serving, but also more diligent as a whole than the average person on the street.
  6. Step 6: Repeat and Grow. With each piece of feedback and each time the algorithms adjust, the system becomes a little more accurate and the final impact estimates a little more useful. Because it is always changing, the system and its outputs would be nearly impossible to replicate. The whole platform is constantly being retuned to better understand MilVets, their families, their providers, and their communities.
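To show what "standardize" means in Step 2, here is a small sketch of percentile scoring against a reference sample. The trait, raw score, and reference distribution are made up; the real system compares against the 200,000-plus scored writing samples described above.

```python
import numpy as np

def percentile_score(raw_score: float, reference_scores: np.ndarray) -> float:
    """Return the share of reference samples that score below the new text.

    A result of 85 means the text shows indicators of the trait more often
    than 85% of the reference samples - the same reading used in Step 2.
    """
    return 100.0 * np.mean(reference_scores < raw_score)

# Stand-in reference distribution for one trait (e.g., extroversion indicators).
reference = np.random.default_rng(1).normal(loc=0.5, scale=0.15, size=10_000)

print(percentile_score(0.72, reference))  # a high percentile for this stand-in data
```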
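Steps 4 and 5 can be sketched the same way: a composite trait score built as a weighted sum of facet percentiles, with a small supervised correction when user responses suggest the score is running high or low. The facet names, weights, and learning rate are placeholders, not the clinically derived values we actually use.

```python
# Placeholder facet weights for one composite trait (say, "optimism").
# Real weights come from clinically validated models; these are invented.
weights = {"positive_emotion": 0.5, "social": 0.3, "negative_emotion": -0.2}

def trait_score(facets: dict, weights: dict) -> float:
    """Combine facet percentiles (0-100) into one 0-100 composite score."""
    score = sum(weights[f] * facets[f] for f in weights)
    return max(0.0, min(100.0, score))

def adjust_weights(weights: dict, facets: dict, predicted: float,
                   reported: float, lr: float = 0.0001) -> dict:
    """Nudge each weight toward the score users actually report (Step 5)."""
    error = reported - predicted
    return {f: w + lr * error * facets[f] for f, w in weights.items()}

facets = {"positive_emotion": 80.0, "social": 65.0, "negative_emotion": 30.0}
predicted = trait_score(facets, weights)                    # model's estimate
weights = adjust_weights(weights, facets, predicted, reported=60.0)
print(round(predicted, 1), weights)
```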

The End Results

What Does All This Science Actually Create?

Graphs! And some other pretty pictures - all of which help visualize the data. Once a month, we share a little piece of what we learned in our User and Provider newsletters, but there are also dashboards, reports, and other results.

For Feedback Contributors

  • A mini-dashboard in the profile section that - once several reviews are contributed - tracks NLP results for several critical statistics like average resilience tendencies, probability of becoming lonely, need for affiliation or community, etc.
  • COMING SOON: Targeted recommendations for local organizations or programs to try out, using NLP to see where others like you are finding success.

For Providers and Programs

  • With Claimed Page Only (Free Access)
    1. Limited dashboard with ratings and feedback analysis (demographics of reviewing groups) and partial motivation results from LIWC.
    2. Other features as described on the Nonprofits Page under the Back Office.
  • With Analysis Agreement
    1. COMING SOON: Full dashboard for NLP and computed critical psychometrics, area statistics as available, and tactical recommendations.
    2. Periodic summary reports with targeted metrics, overall feedback analysis, and machine-trained suggestions for short-term improvements.
    3. Annual report with full-scope demographic and NLP categorized metrics, motivational drivers, and all levels of recommendations.

Where Can It Go From Here?

To the moon! Sort of.

The plan is to ultimately understand the different paths (see what we did there) we can take at different career stages that lead to a high probability of "success." So first we use unsupervised machine learning to let the machine figure out what success looks like. Then we can look at how users with different personalities, military career stages, and use of various providers found success. This lets us use supervised learning to find the patterns that led to their success.

For our providers, we are looking at how they find participants. We would like to help them target those who are most in need of their service, especially those who have room to grow in a particular attribute in which the provider specializes. We also want to match motivations, and steer people to providers that are a good personal fit.

IMPORTANT NOTE: We will never share an individual result with a provider or employer, only aggregated analysis, to avoid having personality probabilities become part of active selections for services (too much potential to be discriminatory). We will only advise on what a provider might look for during the recruiting process or what types of questions they might consider asking applicants.

We are also using this technology to help programs and services offer more to their community by examining overall area needs, and comparing those needs to the services available.

Science is pretty great! Learn more about what we do, get connected, and contribute feedback. Do more!