# Data mining problems and solutions for response modeling in CRM

@article{Cho2006DataMP,
  title   = {Data mining problems and solutions for response modeling in CRM},
  author  = {Sungzoon Cho and Hyunjung Shin and Ha K Yu E and Douglas L. MacLachlan},
  journal = {Entrue Journal of Information Technology},
  year    = {2006},
  volume  = {5},
  pages   = {55--64}
}

This paper presents three data mining problems that are often encountered in building a response model. [...] A real-world data set from the Direct Marketing Educational Foundation (DMEF4) is used to show the effectiveness of the proposed methods, which were found to solve the problems in a practical way.


#### References

Showing 1-10 of 22 references

Feature Subset Selection Using a Genetic Algorithm

- Computer Science
- IEEE Intell. Syst.
- 1998

The authors' approach uses a genetic algorithm to select subsets of attributes or features to represent the patterns to be classified, achieving multicriteria optimization in terms of generalization accuracy and costs associated with the features.
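The idea summarized above can be sketched in a few lines. This is a minimal toy, not the authors' implementation: the fitness function below is invented for illustration (it rewards overlap with a hypothetical set of informative features and charges a small cost per selected feature, mimicking the accuracy-versus-cost trade-off), and the GA uses simple truncation selection, one-point crossover, and bit-flip mutation.

```python
import random

random.seed(0)

# Hypothetical setup: features 0, 2, 5 are the "informative" ones.
TRUE_FEATURES = {0, 2, 5}
N_FEATURES = 8

def fitness(mask):
    """Accuracy proxy (overlap with informative set) minus a feature-cost term."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & TRUE_FEATURES) - 0.1 * len(chosen)

def crossover(a, b):
    """One-point crossover of two bit masks."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ (random.random() < rate) for bit in mask]

def ga_select(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = ga_select()
selected = sorted(i for i, bit in enumerate(best) if bit)
```

In a real application the fitness call would wrap a full train/validate cycle of the classifier on the candidate feature subset, which is exactly what makes GA-based selection expensive.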

Issues and problems in applying neural computing to target marketing

- Computer Science
- 1997

This article discusses several of the issues of applying a neural network to the targeting and prediction problems in target marketing, as applied to solo mailings, and offers remedies to some and discusses possible solutions to others.

Fast Pattern Selection Algorithm for Support Vector Classifiers: Time Complexity Analysis

- Computer Science
- IDEAL
- 2003

A fast preprocessing algorithm is proposed that selects only the patterns near the decision boundary; its time complexity is much smaller than that of the naive M² algorithm.

Bagging predictors

- Computer Science
- Machine Learning
- 2004

Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
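The core mechanism, bootstrap resampling plus majority voting, fits in a short sketch. This is a toy illustration under invented assumptions (1-D Gaussian data and a trivial stump that thresholds at the midpoint of the class means), not the tree-based experiments from the paper.

```python
import random
import statistics

random.seed(1)

def train_stump(data):
    """Fit a 1-D decision stump: threshold at the midpoint of the class means."""
    xs0 = [x for x, y in data if y == 0]
    xs1 = [x for x, y in data if y == 1]
    return (statistics.mean(xs0) + statistics.mean(xs1)) / 2

def bagged_predict(thresholds, x):
    """Majority vote over the ensemble of stumps."""
    votes = sum(x >= t for t in thresholds)
    return int(votes > len(thresholds) / 2)

# Toy data: class 0 clusters near 0, class 1 near 1, with Gaussian noise.
data = [(random.gauss(y, 0.4), y) for y in [0, 1] * 50]

# Bagging: each stump is trained on a bootstrap resample (sampling with
# replacement, same size as the original set), then predictions are aggregated.
stumps = [train_stump([random.choice(data) for _ in range(len(data))])
          for _ in range(25)]
```

The variance reduction that drives bagging's accuracy gains only pays off for unstable base learners; a stump this simple barely varies across resamples, whereas the trees used in the paper vary a lot.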

The Random Subspace Method for Constructing Decision Forests

- Mathematics, Computer Science
- IEEE Trans. Pattern Anal. Mach. Intell.
- 1998

A method to construct a decision tree based classifier is proposed that maintains highest accuracy on training data and improves on generalization accuracy as it grows in complexity.
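The random subspace idea, training each ensemble member on a random subset of the features rather than a resample of the rows, can be sketched as follows. This is a toy under invented assumptions: nearest-centroid base classifiers instead of the paper's decision trees, and synthetic data where only the first three of six features carry signal.

```python
import random

random.seed(2)

N_FEATURES = 6

def make_point(label):
    # Features 0-2 are informative (mean shifted by the label); 3-5 are noise.
    return [random.gauss(label if i < 3 else 0, 1.0) for i in range(N_FEATURES)]

data = [(make_point(y), y) for y in [0, 1] * 40]

def train_centroid(data, feats):
    """Nearest-centroid classifier restricted to the feature subset `feats`."""
    cents = {}
    for c in (0, 1):
        pts = [x for x, y in data if y == c]
        cents[c] = [sum(p[f] for p in pts) / len(pts) for f in feats]
    return feats, cents

def predict_one(model, x):
    feats, cents = model
    def dist(c):
        return sum((x[f] - cf) ** 2 for f, cf in zip(feats, cents[c]))
    return min((0, 1), key=dist)

# Random subspace method: each base learner sees a random half of the features.
models = [train_centroid(data, random.sample(range(N_FEATURES), 3))
          for _ in range(15)]

def ensemble_predict(x):
    votes = sum(predict_one(m, x) for m in models)
    return int(votes > len(models) / 2)
```

Unlike bagging, every base learner here sees all of the training rows; diversity comes purely from the feature subspaces, which is why the method helps even when the full training set is small relative to the dimensionality.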

Extracting Support Data for a Given Task

- Mathematics, Computer Science
- KDD
- 1995

It is observed that three different types of handwritten digit classifiers construct their decision surface from strongly overlapping small subsets of the data base, which opens up the possibility of compressing data bases significantly by disposing of the data which is not important for the solution of a given task.

Fast training of support vector machines using sequential minimal optimization

- Mathematics, Computer Science
- Advances in Kernel Methods
- 1999

SMO breaks the large quadratic programming (QP) problem of SVM training into a series of smallest-possible QP subproblems, avoiding a time-consuming numerical QP optimization as an inner loop; hence SMO is fastest for linear SVMs and sparse data sets.

Neural networks and the multinomial logit for brand choice modelling: a hybrid approach

- Economics, Computer Science
- 2000

It is shown that a feedforward neural network with softmax output units and shared weights can be viewed as a generalization of the multinomial logit model, and the network is used as a diagnostic and specification tool for the logit, which provides interpretable coefficients and significance statistics.

Incremental and Decremental Support Vector Machine Learning

- Computer Science, Mathematics
- NIPS
- 2000

An on-line recursive algorithm for training support vector machines, one vector at a time, is presented, and an interpretation of decremental unlearning in feature space sheds light on the relationship between generalization and the geometry of the data.

Properties of Support Vector Machines

- Medicine, Mathematics
- Neural Computation
- 1998

It is shown that the decision surface can be written as the sum of two orthogonal terms, the first depending only on the margin vectors (which are SVs lying on the margin), the second proportional to the regularization parameter for almost all values of the parameter.