The Exchange team is constantly building new integrations and supporting many different types of third-party services, aiming to offer a good variety of fraud prevention, pre-fill, identity verification (IDV) and decisioning services.
We have noticed a few sensitive areas worth examining when evaluating a vendor’s performance and effectiveness within a user journey.
In this post, we will demonstrate how Insights can help you evaluate your chosen third-party vendor and fix issues quickly and easily.
But first, let’s talk about:
What could go wrong?
Here are some of the issues we’ve encountered:
- Applicant abandonment
  - Applicants start a process and, partway through the stages, choose to stop interacting with the third-party service. Unfortunately, it’s not always possible to tell at which stage of the process this happens, or why.
- Service outages
  - Some of the services are free, so we can’t rely on SLAs, and it’s hard to know how many errors are caused by service-provider issues.
  - These issues have a direct impact on the user experience and can increase application abandonment.
- Low engagement
  - Applicants choose not to use a particular third-party service that is part of the user journey.
- Technical issues
  - API changes, defects and misconfiguration are the obvious culprits and are usually identified at the testing stage; however, some issues are triggered by a new type of mobile device or a browser update, and those are much trickier to find.
How can we help?
Exchange packages have built-in Insights milestone events, uniquely designed for each vendor.
Insights can then automatically generate a visual representation that helps you track issues early and easily.
Using Exchange packages in conjunction with Insights will help you determine the effectiveness of the vendor on your user journey.
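To make the idea of milestone events concrete, here is a minimal sketch of what one might look like. The `MilestoneEvent` schema, the `log_milestone` helper and the event names are all hypothetical illustrations, not the real Exchange API; the point is simply that each step of a vendor’s flow is recorded as a named event with context, so the journey can be reconstructed and visualised later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MilestoneEvent:
    """One step in a third-party vendor's flow (hypothetical schema)."""
    journey: str    # e.g. "loan-application"
    vendor: str     # e.g. "iovation-fraudforce"
    milestone: str  # e.g. "check-started", "decision-received", "error"
    detail: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

events: list[MilestoneEvent] = []

def log_milestone(journey: str, vendor: str, milestone: str, **detail) -> None:
    """Record a milestone so the journey can be reconstructed later."""
    events.append(MilestoneEvent(journey, vendor, milestone, detail))

# As the applicant moves through the vendor's flow:
log_milestone("loan-application", "iovation-fraudforce", "check-started")
log_milestone("loan-application", "iovation-fraudforce",
              "decision-received", decision="accept")
```

Because every package emits milestones in this uniform shape, the same reporting can be generated for any vendor without custom instrumentation.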
Let’s see this in action!
For the purpose of this demonstration, we’ve chosen to use FraudForce – a device fraud prevention solution from iovation. FraudForce is a useful and reliable product which we often use in the user journeys we create.
So here’s the plan for this experiment:
1. We’ll create 40 application submissions which will result in all types of decisions and errors.
2. We’ll create an automated script to run all applications in our testing environment.
3. We’ll generate the User Journey View in Insights by adding Exchange Milestones and Segments.
4. And finally, we’ll analyse the results together.
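Step 2 of the plan can be sketched roughly as follows. The persona names and payload fields are hypothetical placeholders, not the real testing API; the idea is just to prepare a batch of submissions that exercises every decision path, including errors. (In the real script each payload would then be POSTed to the testing environment.)

```python
import random

# Hypothetical test personas, each designed to trigger a different outcome
# (accept, decline, review, or a service error) in the fraud check.
PERSONAS = ["clean-device", "risky-device", "review-device", "broken-payload"]

def build_submission(persona: str, index: int) -> dict:
    """One test application payload (fields are illustrative)."""
    return {"application_id": f"test-{index:03d}", "persona": persona}

random.seed(7)  # reproducible test runs
submissions = [build_submission(random.choice(PERSONAS), i) for i in range(40)]

print(len(submissions), "submissions prepared")
```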
Ready to see the results?
In the User Journey below, we can see that the iovation check ran on all devices that opened this application.
We can also see that only 73.4% of those applications received a final decision; in other words, 26.6% of the calls failed.
Now, this is just test data, but in production, if this number is high, it means that either there is an issue with the setup or the service quality is poor.
But what’s more interesting is to see what actually happened with the applications that did receive a final decision.
Look at the segments section in the User Journey below.
39.6% were accepted, 45.3% were declined, and 15.1% were referred for review.
In production, if any of these results is unexpectedly high or low, it is a clear indication that an investigation is required.
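The segment figures are simply each decision’s share of the applications that received a final decision. With hypothetical counts (chosen only for illustration, so the shares come out close to the figures above), the calculation looks like this:

```python
# Hypothetical decision counts for the applications that completed the
# check; each percentage is that count's share of the decided total.
decisions = {"accepted": 21, "declined": 24, "review": 8}
total = sum(decisions.values())  # 53 decided applications

segments = {k: round(100 * v / total, 1) for k, v in decisions.items()}
print(segments)  # {'accepted': 39.6, 'declined': 45.3, 'review': 15.1}
```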
iovation is an example of a vendor which helps us improve the success and integrity of an application.
You can now do this analysis for all the third-party integrations that are part of your application.
If you see too many declines, not enough engagement, or errors without a final result, you might want to consider a change in the application flow or replacing a vendor.
We hope you found this useful.
Stay tuned for more tips from the Exchange team.