
Customer Payment Trend Analysis based on Clustering for Predicting the Financial Risk of Business Organizations

©2016 Textbook 72 Pages

Summary

With the opening of the Indian economy, many multinational corporations are shifting their manufacturing base to India, either by setting up greenfield projects or by acquiring established Indian business firms. The reach of these business units is expanding globally. As the variety and size of the customer base grow, the business risk related to bad debts increases. Close monitoring and analysis of payment trends helps to predict customer behavior and to assess customer financial strength.
Present-day manufacturing companies generate and store tremendous amounts of data, so much that manual analysis is impractical. This creates a great demand for data mining to extract the useful information buried within these data sets. One of the major concerns affecting companies' investments and profitability is bad debts; these can be reduced by identifying past customer behavior and settling on suitable payment terms. The clustering and prediction module was implemented in WEKA, a free, open-source software suite written in Java. The study's model can be extended into a general-purpose software package to predict the payment trends of customers in any organisation.

Excerpt

The cluster mean for Cluster 2 is also high for the 61-90 days bucket, and so is its standard deviation. The inference is that most of the customers belonging to this credit type tend to pay after 60 days, and the organization can consider changing the credit terms with these customers. The high standard deviation implies that the number of invoices varies widely across customers, ranging from 0 to 50.
The cluster mean for Cluster 3 is high for the 0-30 days bucket, as is its standard deviation. The inference of the study is that customers on this credit term (I030RRT, within 30 days of receipt) are the most reliable, the least risky, and highly suitable to the organization in terms of payment risk.

TABLE OF CONTENTS
1. INTRODUCTION ... 9
1.1 Problem Formulation ... 9
1.2 Objective ... 10
1.3 Scope of the Study ... 11
2. REVIEW OF LITERATURE ... 12
2.1 Introduction to Clustering ... 12
2.2 Definition ... 12
2.3 Need for Data Mining in Organizations ... 13
2.4 Data Mining Methods ... 16
2.5 Types of Business Models ... 18
2.6 Classification of Clusters and Clustering Techniques ... 23
2.7 Similarity and Distance Measures ... 24
2.8 Representation of Clustering ... 25
2.9 Goal of Clustering ... 26
2.10 Examples of Clustering Application ... 27
3. METHODOLOGY ... 28
3.1 Partitioning Method ... 28
3.2 K-means Algorithm for Partitioning ... 29
3.3 Platform Specification ... 32
4. ANALYSIS AND DESIGN ... 36
4.1 Learning the Application Domain ... 36
4.2 Creating Target Data Set ... 37
4.3 Data Cleaning and Preprocessing ... 38
4.4 Data Reduction and Transformation ... 39

4.5 Identification of the Fields for the Study ... 41
4.6 K-Means Algorithm ... 42
5. IMPLEMENTATION RESULTS AND DISCUSSION ... 43
6. CONCLUSION AND SCOPE FOR FURTHER ENHANCEMENTS ... 69
BIBLIOGRAPHY ... 71

Chapter 1
INTRODUCTION
1.1 Problem Formulation
This study was done at FCI's manufacturing base at Kochi, Kerala. This manufacturing base has an annual turnover of 250 million Indian rupees and a customer base of 1,030 customers. FCI is one of the largest electronic component manufacturers in the world, with 40 manufacturing bases spread across France, Spain, Italy, the UK, Ireland, Australia, the USA, Canada, Mexico, Japan, Taiwan, China and India.
For different customers, the payment terms are selected from the existing 30 types. These are initially fixed based on the organization's general reputation, industry feedback and so on. Many of the customers are on other continents, and reliable information on the financial condition of these customers may reach FCI only after a delay. Continuous monitoring and analysis of customer payments, to check each customer's adherence to the agreed payment terms, will help to foresee the risk and take corrective actions.
Some customers will adhere to the payment terms and some will deviate from them. It is necessary to identify the class of customers who generally deviate from the payment terms, so that the specific payment term is given to future customers only in cases of absolute necessity, and so that action can be taken to change the payment terms of existing customers who fall into that class.
This can be analyzed by clustering techniques. Clustering based on partitioning methods was found suitable for solving this problem. Hence the K-means algorithm, which uses the partitioning method, was used in this study.
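The study itself used WEKA's K-means implementation; as a rough sketch of the underlying algorithm, the following minimal one-dimensional K-means in Python clusters hypothetical payment delays into k = 2 groups (the data, the feature choice and the value of k are illustrative assumptions, not the study's dataset):

```python
import random

def kmeans(points, k, iterations=100, seed=42):
    """Minimal 1-D K-means: partition payment delays (days) into k clusters."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[nearest].append(p)
        # Update step: recompute each centroid as its cluster's mean
        # (keep the old centroid if a cluster happens to be empty).
        new_centroids = [sum(c) / len(c) if c else centroids[j]
                         for j, c in enumerate(clusters)]
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids, clusters

# Hypothetical payment delays (days past invoice) for a set of customers.
delays = [5, 8, 12, 10, 7, 65, 70, 80, 75, 68]
centroids, clusters = kmeans(delays, k=2)
```

On data this well separated the algorithm converges to one centroid near the prompt payers and one near the late payers, regardless of which two points the seed picks as initial centroids.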
1.2 Objective
The study was done at FCI Ltd., Kochi. The organization has 30 payment terms, agreed upon and created over the last several years of business. Some terms are as old as 20 years, and some are irrelevant in the present business scenario; 1000VPM (against VPP) is an example. Some are very vague and cannot be monitored or controlled correctly; 1000CNC (after completion) is another example. It does not specify a target date and leaves the payment receipt date open. The payment term 1000COD (cheque on delivery) is also very vague and open, as it does not specify the date to be written on the cheque. Customers can take this for granted and issue cheques dated as they prefer.
Many payment terms were created for a single customer. Some payment terms are highly risky, and there are many examples of bad debts in the past. The risk of a payment term can be found by analyzing past data. The objective of this study is to analyze the payment terms of customers in general and to find the payment terms to which most customers do not adhere.
1.3 Scope of the Study
The scope of this study extends to analyzing the customer payment terms set by the company and the risks related to them. The study will help the organization avoid offering risky payment terms to new customers in the future. It will also help management initiate action, or encourage existing customers, to opt for other payment terms.

Chapter 2
REVIEW OF LITERATURE
2.1 Introduction to Clustering
Clustering is the classification of objects into different groups or, more precisely, the partitioning of a data set into subsets (clusters), so that the data in each subset (ideally) share some common trait, often proximity according to some defined distance measure. The distance between points within a cluster (the intra-cluster distance) is less than the distance between a point in the cluster and any point outside it (the inter-cluster distance). Clustering is similar to 'database segmentation' [1].
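This distance property can be checked directly. The sketch below uses hypothetical 2-D points (not data from the study) to compare the largest distance inside a cluster with the smallest distance from the cluster to an outside point:

```python
def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Two hypothetical, well-separated groups of 2-D points.
cluster = [(1.0, 1.0), (1.5, 2.0), (2.0, 1.5)]
outside = [(8.0, 9.0), (9.0, 8.5)]

# Largest distance between any two points inside the cluster ...
max_intra = max(dist(a, b) for a in cluster for b in cluster)
# ... versus the smallest distance from a cluster point to an outside point.
min_inter = min(dist(a, b) for a in cluster for b in outside)
```

For a well-formed cluster, `max_intra` comes out smaller than `min_inter`, which is exactly the property stated above.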
Clustering has been used in many application domains including
biology, medicine, anthropology, marketing and economics.
Clustering applications include plant and animal classification,
disease classification, image processing, pattern recognition and
document retrieval.
2.2 Definition
The clustering problem is stated as follows. Assume that the number of clusters to be created is given as an input value, k. The actual content of each cluster Kj, 1 <= j <= k, is determined as a result of the function definition. Without loss of generality, we view the result of solving a clustering problem as a set of clusters: K = {K1, K2, ..., Kk}.

Given a database D = {t1, t2, ..., tn} of tuples and an integer value k, the clustering problem is to define a mapping f : D -> {1, 2, ..., k} where each ti is assigned to one cluster Kj, 1 <= j <= k. A cluster Kj contains precisely those tuples mapped to it; that is, Kj = {ti | f(ti) = j, 1 <= i <= n and ti in D} [2].
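The definition leaves the mapping f abstract. One concrete choice, the one K-means makes, is nearest-centroid assignment; the sketch below (hypothetical 1-D tuples and centroids, illustrative only) shows how such an f induces the partition K1, ..., Kk:

```python
def f(t, centroids):
    """Map tuple t to the 1-based index j of its nearest centroid.

    Nearest-centroid assignment is one concrete realization of the
    mapping f : D -> {1, ..., k}; the definition itself leaves f abstract.
    """
    return min(range(len(centroids)), key=lambda j: abs(t - centroids[j])) + 1

D = [3, 4, 35, 40, 100]        # hypothetical 1-D tuples
centroids = [5.0, 38.0, 99.0]  # k = 3 given cluster centres

# Each tuple is assigned to exactly one cluster, so K1, ..., Kk partition D.
K = {j: [t for t in D if f(t, centroids) == j] for j in range(1, 4)}
```

Because f assigns every tuple to exactly one index, each tuple lands in exactly one Kj, matching the set definition above.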
2.3 Need for Data Mining in Organizations
Today's business environment is more competitive than ever. The
difference between survival and defeat often rests on a thin edge of
higher efficiency than the competition. The advantage is often the
result of better information technology providing the basis for
improved business decisions. The problem of how to make such
business decisions is therefore crucial. One answer is through
better analysis of data. Data mining is a methodology to assess the
value of the data and to leverage that value as an asset to provide
large returns on the analytic investment.
The problem that often confronts researchers new to the field is choosing among the variety of data mining techniques available. Some are more difficult to use than others, and they differ in other, superficial ways; most importantly, the underlying algorithms differ, and the nature of these algorithms is directly related to the quality of the results obtained and the ease of use.
Some estimates hold that the amount of information in the world doubles every twenty years. In 1989, the total number of databases in the world was estimated at five million, most of which were small local computer files [4]. Today the automation of business transactions produces a large amount of data, because even simple transactions like telephone calls, shopping trips, medical tests and consumer product warranty registrations are recorded in a computer. Scientific databases are also growing rapidly. NASA, for example, has more data than it can analyze. The 2000 US census data of over a billion bytes contains an untold quantity of hidden patterns that describe the lifestyles of the population [5]. Most of this data will never be seen by human beings and, even if viewed, could not be analyzed by hand.
Byte magazine reported that some companies have reaped returns
on investment of as much as 1000 times their initial investment on
a single project. More and more companies are realizing that the
massive amounts of data that they have been collecting over the
years can be their key to success. With the proliferation of data
warehouses, this data can be mined to uncover the hidden nuggets
of knowledge. Data mining tools are fast becoming a business
necessity. The Gartner Group has predicted that data mining will be one of the five hottest technologies in the early years of the new century.
There are currently several data mining techniques available, and not all are equally effective. Data mining is widely used in business, science and military applications. It can allow an organization to market to customers on an individual or household basis, selecting those who are most likely to be responsive and suggesting targeted creative messages.
Data mining is the process of using analytic methods to explore data to discover meaningful patterns that enable organizations to operate in a more cost-effective manner. It brings out linear relationships and can handle noisy or incomplete data. It can also model large numbers of variables, which is useful in modeling purchase transactions, clickstream data or gene problems. Data mining uses powerful rule-induction technology to make relationships in both numeric and non-numeric data explicit.
There are also non-data-mining methods, such as query tools with graphical components. Some tools support a degree of multidimensionality, such as cross-tab reporting, time-series analysis, drill-down, slice, dice and pivoting. These tools are sometimes a good adjunct to data mining tools in that they give the analyst an opportunity to get a feel for the data. They can help to determine the quality of the data and which variables might be relevant for a data mining project to follow, and they are useful for further exploring the results supplied by true data mining tools. These approaches have several limitations: querying is effective only when the investigation is limited to a relatively small number of known questions.

2.4 Data Mining Methods
a) Statistical methods
Several statistical methods widely used in science and industry are applied in data mining projects; they provide excellent features for describing and visualizing large chunks of data. Methods commonly used include regression analysis, correlation, discriminant analysis, hypothesis testing and prediction [6]. This is a good first step toward understanding the data. These methods deal well with numerical data where the underlying probability distributions are known; they are not as good with nominal or binary data [7].
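As a small illustration of one of these methods, the sketch below computes the Pearson correlation coefficient between two hypothetical variables; the data and the variable names are illustrative assumptions, not figures from the study:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: invoice value vs. observed payment delay.
invoice_value = [10, 20, 30, 40, 50]
payment_delay = [12, 25, 33, 41, 55]
r = pearson(invoice_value, payment_delay)
```

A value of r close to +1 would indicate a strong linear relationship between the two variables, the kind of numerical pattern these statistical methods capture well.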
Statistical methods require statistical expertise, or the heavy involvement of a project member well versed in statistics. Such methods rely on statistical assumptions that are difficult to verify, and they do not deal well with non-numeric data. They also suffer from black-box aversion syndrome: non-technical decision makers, those who will either accept or reject the results of the study, are often unwilling to make important decisions based on a technology that gives them answers but does not explain how it got them. Telling a non-statistician CEO that he or she must make a crucial business decision because of a favourable R statistic is not usually well received.
Another problem is that statistical methods are valid only if certain assumptions about the data are met. Some of these assumptions are linear relationships between pairs of variables, non-multicollinearity, normal probability distributions and independence of samples. If we do not validate these assumptions, because of time limitations or unfamiliarity with them, our analysis may be faulty and therefore the results may not be valid.
b) Neural Networks
This is a popular technology, particularly in the financial
community. This method was originally developed in the 1940s to
model biological nervous systems in an attempt to mimic thought
processes. The end result of a neural net project is a mathematical
model of the process. It deals primarily with numerical attributes
but not as well with nominal data [8].
There is still much controversy regarding the efficacy of neural nets. One major objection to the method is that developing a neural net model is partly an art and partly a science, in that the results often depend on the individual who builds the model: the network topology of the model may differ from one researcher to another for the same data. There is also the frequent problem of overfitting, which yields good predictions on the data used to build the model but poor results on new data.
c) Decision Trees
Decision tree methods are techniques for partitioning a training file into a tree representation [9]. The starting node is called the root node. Depending upon the result of a test, this node is partitioned into two or more subsets, and each node is then further partitioned until a tree is built. This tree can be mapped into a set of rules. Decision trees are fairly fast, and the results can be presented as rules.
The most important negative for decision trees is that they are forced to make decisions along the way based on limited information, which implicitly leaves the vast majority of potential rules in the training file out of consideration. This approach may leave valuable rules undiscovered, since decisions made early in the process will preclude some good rules from being discovered later.
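A one-level tree (a decision stump) already shows the partitioning mechanism: a single test at the root splits the training file into two subsets. The sketch below, with hypothetical payment-delay data and labels (not the study's training file), searches for the threshold that misclassifies the fewest rows:

```python
def best_split(rows, labels):
    """Try every candidate threshold on one numeric attribute and return
    the (threshold, errors) pair that misclassifies the fewest rows."""
    best = None
    for thr in sorted(set(rows)):
        # Tentative rule: value <= thr -> class 0, value > thr -> class 1.
        errors = sum((x <= thr) != (y == 0) for x, y in zip(rows, labels))
        errors = min(errors, len(rows) - errors)  # also allow the flipped rule
        if best is None or errors < best[1]:
            best = (thr, errors)
    return best

# Hypothetical training file: payment delay in days, and a risk label
# (0 = reliable payer, 1 = risky payer).
delays = [5, 10, 12, 30, 65, 70, 80]
risk   = [0,  0,  0,  0,  1,  1,  1]
threshold, misclassified = best_split(delays, risk)
```

The chosen test ("delay <= threshold?") becomes the root node; a full tree method would recurse on each resulting subset, and each root-to-leaf path maps to one rule.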
2.5 Types of Business Models
There are several business models in data mining that are used in industry.
a) Claims Fraud Models
The number of challenges facing the property and casualty
insurance industry seems to have grown geometrically during the
past decade. In the past, poor underwriting results and high loss
ratio were compensated by excellent returns on investments.
However, the performance of financial markets today is not
sufficient to deliver the level of profitability that is necessary to
support the traditional insurance business model. In order to
survive in the bleak economic conditions that dictate the terms of
today's merciless and competitive market, insurers must change
the way they operate to improve their underwriting results and
profitability. An important element in defining the strategies essential to ensuring the success and profitability of insurers is the ability to forecast the new directions in which claims management should be developed. This endeavor has become a crucial and challenging undertaking, given the dramatic events of the past years in the insurance industry worldwide. Claims can be checked as they arrive and scored as to the likelihood that they are fraudulent, which can result in large savings for the insurance companies that use these technologies [10].
b) Customer Clone Models
The process for selectively targeting prospects for your acquisition
efforts often utilizes a sophisticated analytical technique called
"best customer cloning." These models estimate which prospects
are most likely to respond based on characteristics of the
company's "best customers". To this end, we build the models or demographic profiles that allow you to select only the best prospects, or "clones", for your acquisition programs. In a retail environment, we can even identify the best prospects close to your stores or distribution channels. Customer clone models are appropriate when insufficient response data is available, providing an effective prospect-ranking mechanism when response models cannot be built [10].
c) Response Models
The best method for identifying the customers or prospects to
target for a specific product offering is through the use of a model
developed specifically to predict response. These models are used

to identify the customers most likely to exhibit the behavior being
targeted. Predictive response models allow organizations to find
the patterns that separate their customer base so the organization
can contact those customers or prospects most likely to take the
desired action. These models contribute to more effective
marketing by ranking the best candidates for a specific product
offering thus identifying the low hanging fruit [10].
d) Revenue and Profit Predictive Models
Revenue and Profit Prediction models combine response/non-
response likelihood with a revenue estimate, especially if order
sizes, monthly billings, or margins differ widely. Not all responses
have equal value, and a model that maximizes responses doesn't
necessarily maximize revenue or profit. Revenue and profit
predictive models indicate those respondents who are most likely
to add a higher revenue or profit margin with their response than
other responders [10].
These models use a scoring algorithm specifically calibrated to
select revenue-producing customers and help identify the key
characteristics that best identify better customers. They can be used
to fine-tune standard response models or used in acquisition
strategies.
e) Cross-Sell and Up-Sell Models
Cross-sell/up-sell models identify customers who are the best
prospects for the purchase of additional products and services and
for upgrading their existing products and services. The goal is to

Details

Pages
Type of Edition
First edition
Year
2016
ISBN (PDF)
9783960676041
ISBN (Softcover)
9783960671046
File size
7.3 MB
Language
English
Publication date
2016 (November)
Grade
First Class with Distinction
Keywords
Economy in India; Payment trend; Customer behavior; Data mining; WEKA