Our customers’ data is everywhere. How can we safely and securely use intelligent machine-learning technologies to understand, adjust, and react to our customers’ needs in a seamless, auditable, process-centric fashion?
What it does
This next-generation CRM combines Appian with Amazon DeepLens to identify, recognize, and categorize customers, creating context-driven business processes that deliver highly focused customer service and, in turn, increased customer satisfaction. The solution uses the following high-level technologies:
- Customer Identification - We use Amazon DeepLens to translate raw video into images of faces.
- Customer Recognition - We use Amazon Lambda Functions, S3, Dynamo DB and Amazon Rekognition services to recognize existing customers stored in the database.
- Customer Categorization - We use Amazon Lambda Functions to categorize the customer as new or an existing customer.
- Context-driven Business Engagement - We use Appian to leverage the customer’s context to start relevant business processes, which are routed to the right people at the right time. This on-demand, on-time process execution improves customer service by delivering more relevant service, saving customers’ time so we can serve more customers, and reducing staff levels by eliminating redundant or duplicative customer interactions.
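The recognition and categorization steps above can be sketched in a few lines. This is a minimal, illustrative example, assuming the search response has the shape returned by Rekognition's SearchFacesByImage API; the function name and similarity threshold are our own choices, not part of the AWS API.

```python
# Hypothetical sketch: classify a visitor as new or existing based on a
# Rekognition SearchFacesByImage-style response. In the deployed Lambda
# this response would come from boto3's
# rekognition.search_faces_by_image(...); here we only show the
# categorization logic.

def classify_customer(search_response, similarity_threshold=90.0):
    """Return ('existing', face_id) when a stored face matches closely
    enough, otherwise ('new', None)."""
    for match in search_response.get("FaceMatches", []):
        if match["Similarity"] >= similarity_threshold:
            return ("existing", match["Face"]["FaceId"])
    return ("new", None)
```

The threshold trades false matches against missed recognitions; in a bank-lobby setting we would tune it against a labeled sample rather than hard-coding 90.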
Our solution allows for further tailoring of business processes and task management in real time as the customer enters the business, engages with service providers, completes their transactions, and finally exits. The framework maximizes data collection on each interaction, ensures we leverage all previous data and third-party data sources, and ensures that follow-up tasks are created.
This framework can be applied to many industries, including retail, banking, insurance, and similar businesses. We chose a bank as our use case to show how some clients are more valuable to the bank than others and how we can detect potential new clients through the power of Appian’s cloud platform.
How I built it
We brainstormed business problems we could successfully solve by expanding Appian’s existing capabilities. The team converged on building a next-generation Customer Relationship Management (CRM) tool that would exploit Appian’s industry-leading business process management and case management capabilities while using Amazon DeepLens to add machine learning and face recognition. We defined the high-level system architecture, the target state, and the APIs that would need to be built to integrate the DeepLens and Rekognition services with Appian.
Next, we split the team in two. One group focused on investigating facial recognition; the other focused on the Appian process development that would marry the information we already had in Appian with the face detected. Early on it was clear we would use RESTful calls to communicate between the face-recognition cloud service and Appian. When it came time to test, we got together as a team and started experiencing for ourselves the accuracy of the system and how we could improve it.
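The RESTful handoff between the recognition service and Appian can be sketched roughly as follows. This is a hedged, illustrative sketch: the endpoint path (`/suite/webapi/customer-arrived`), the payload fields, and the API-key header are assumptions for this example, since a real Appian Web API defines its own URL and expected body.

```python
import json
import urllib.request

def build_recognition_payload(face_id, similarity, camera_id):
    """Assemble the JSON body sent to the Appian Web API.
    Field names here are illustrative, not an Appian contract."""
    return {
        "faceId": face_id,
        "similarity": similarity,
        "cameraId": camera_id,
    }

def notify_appian(payload, base_url, api_key):
    """POST the recognition event to a (hypothetical) Appian Web API
    endpoint, authenticating with an API key."""
    req = urllib.request.Request(
        url=base_url + "/suite/webapi/customer-arrived",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Appian-API-Key": api_key,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

On the Appian side, the Web API receiving this call starts the appropriate process model with the payload as process parameters.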
DeepLens Amazon FaceRecognition Service
AWS DeepLens is a wireless video camera and API that shows you how to use the latest artificial intelligence (AI) tools and technology to develop computer vision applications. The device uses deep convolutional neural networks (CNNs) to analyze visual imagery.
DeepLens works with the following AWS services:
- Amazon SageMaker, for model training and validation
- AWS Lambda, for running inference against CNN models
- AWS Greengrass, for deploying updates and functions to your device
AWS DeepLens produces two output streams:
- Device Stream - The video stream passed through without processing.
- Project Stream - The results of the model’s processing of video frames.
- The inference Lambda function receives unprocessed video frames.
- The inference Lambda function passes the unprocessed frames to the project’s deep learning model, where they are processed.
- The inference Lambda function receives the processed frames from the model and passes the processed frames on in the project stream.
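The middle of that loop - turning raw model output into detections worth forwarding - can be sketched as a pure function. This is an illustrative sketch only: the detection format (label, probability, bounding box) mirrors what an SSD-style DeepLens model produces, but the exact keys vary by model, and the numeric face label here is a made-up assumption.

```python
# Illustrative post-processing for the project stream: keep only
# confident face detections from one frame's model output. On the
# device, the raw detections would come from awscam's model inference;
# here we only show the filtering step.

FACE_LABEL = 1  # hypothetical numeric label for "face" in the model

def filter_detections(raw_detections, min_prob=0.6):
    """Keep detections of the face label above the confidence threshold."""
    return [
        d for d in raw_detections
        if d["label"] == FACE_LABEL and d["prob"] >= min_prob
    ]
```

Frames that survive this filter are the ones worth cropping and sending on to Rekognition, which keeps cloud traffic (and cost) down.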
Challenges I ran into
Learning Curve on DeepLens: Because we were working with the beta version of DeepLens, support was limited. While the DeepLens forum is mostly helpful, there aren’t many established Q&A resources, and the community available to help troubleshoot issues is still very small. This resulted in a relatively long implementation cycle for DeepLens.
Debugging and Deploying: It’s difficult to debug a Lambda function because an important library called “awscam” is only available on the device, so we had to wait for the lengthy deployment process to finish before we could debug. The team also faced a lot of challenges deploying pre-built projects and had to restart the Greengrass service for every redeployment.
Project stream shows a corrupted image: After deployment, the project stream would show a corrupted image. After some research, we found the DeepLens support team’s recommendation to use the action-recognition Lambda as a template for deploying the model.
Accomplishments that I'm proud of
- Expanding Appian’s already vast number of real-life, business-related capabilities.
- Exercising multiple cutting-edge technologies - face recognition, video capture, machine-learning libraries, etc. - all in a very short time window.
- Fully understanding and appreciating the potential of Appian’s new integration capabilities, including Web APIs and the Integration Smart Service.
- And finally, completing the project, successfully testing the integrations in real time, and experiencing the power of Appian’s low-code platform.
What I learned
Appian is truly a great platform for business process automation. In our experience, business users today mostly start processes manually in response to human-observed events. With DeepLens integration, business processes don’t have to wait for human intervention: IoT devices can be “taught” to start the right business processes, involve the right set of stakeholders, and collect and display the most relevant information. The team now has a whole new spectrum of use cases where we can leverage different AI techniques and integrate them into the Appian platform.
What's next for Next Gen CRM
Customer satisfaction is always hard to capture, and irritated clients get even more irritated when you send them a survey. As part of the ‘customer exit’ step, Next Gen CRM is looking to leverage video footage further by performing sentiment analysis on the client - detecting smiles, frowns, and other body language. This data can then be fed into Appian BPM to kick off processes that mitigate any negative experiences.
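The planned sentiment step could start from the emotions Rekognition already returns. The sketch below is a hedged illustration, assuming a response shaped like Rekognition's DetectFaces output (emotions are included when the call requests all face attributes); the function names and the set of "negative" emotions are our own assumptions.

```python
# Sketch of the planned sentiment step: pick the dominant emotion from a
# Rekognition DetectFaces-style response and flag interactions that
# should trigger a mitigation process in Appian. The response would come
# from boto3's rekognition.detect_faces(..., Attributes=["ALL"]).

NEGATIVE_EMOTIONS = {"ANGRY", "SAD", "DISGUSTED", "FEAR"}  # our choice

def dominant_emotion(detect_faces_response):
    """Return the highest-confidence emotion type for the first face,
    or None when no face (or no emotion data) was detected."""
    faces = detect_faces_response.get("FaceDetails", [])
    if not faces:
        return None
    emotions = faces[0].get("Emotions", [])
    if not emotions:
        return None
    top = max(emotions, key=lambda e: e["Confidence"])
    return top["Type"]

def needs_follow_up(detect_faces_response):
    """True when the exit footage suggests a negative experience."""
    return dominant_emotion(detect_faces_response) in NEGATIVE_EMOTIONS
```

A positive `needs_follow_up` result is exactly the kind of event that would start an Appian mitigation process instead of sending yet another survey.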