Creation of the model is generally not the end of the project. Deployment is the process of using the discovered models and insights to improve the organization's behavior through new decisions. This phase typically involves planning and monitoring the deployment of the results, as well as completing wrap-up tasks.
Adapting this phase to handle context changes and model reuse involves:
- We need to determine how the versatile model is to be kept, used, evaluated and maintained for long-term use.
- We may need to monitor possible changes in the context distribution, or check whether its range remains as expected.
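As an illustration of such a distribution check, the sketch below computes a population stability index (PSI) between the context values seen at training time and those observed in deployment. This is one common drift heuristic, not the methodology's prescribed test, and the bin count and thresholds in the comments are illustrative assumptions:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Bins are derived from the range of the expected (training-time)
    sample; counts are smoothed to avoid division by zero. Values of
    `actual` below the training range are silently dropped in this sketch.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch deployment values above the training range

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        return [(c + 0.5) / (n + 0.5 * bins) for c in counts]  # smoothed fractions

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift.
train = [0.1 * i for i in range(100)]        # context values seen at training time
same = [0.1 * i for i in range(100)]         # no drift
shifted = [5 + 0.1 * i for i in range(100)]  # distribution has moved

assert psi(train, same) < 0.1
assert psi(train, shifted) > 0.25
```

A PSI above the chosen threshold would signal that the context distribution no longer matches what the model was trained on, triggering the review or reframing steps described below.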
- Task: Depending on the results of the review of the initial data mining process, the project team decides how to proceed. The team decides whether to (a) continue to the deployment phase, (b) go back and refine or replace the models, thus initiating further iterations, or (c) set up new data mining projects. This task includes an analysis of the remaining resources and budget, which may influence the decision. If the results satisfactorily meet the data mining and business goals, the deployment phase begins.
- Deployment plan: This task takes the evaluation results and determines a strategy for deployment. If a general procedure has been identified to create the relevant model(s) and integrate them within your database systems, this procedure is documented here as a step-by-step plan for later deployment, including integration and technical details, the benefits of monitoring, potential deployment problems, etc. Furthermore, create a plan to disseminate the relevant information to strategy makers.
- Model selection w.r.t. the context: Determine how the pool of models is going to be kept, and how the appropriate model is selected according to the current context.
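Keeping a pool of models and selecting one by context can be sketched as a simple registry. The class and method names below are hypothetical, `predict` stands in for whatever interface the deployed models actually expose, and the fallback behavior for an unseen context is an assumption:

```python
class ContextModelPool:
    """Keeps one model per context value and selects it at prediction time.

    Hypothetical sketch: a context is assumed to be a hashable label
    (e.g. a region or a season), and every model exposes predict(x).
    """

    def __init__(self, fallback):
        self._models = {}
        self._fallback = fallback  # used when no model matches the context

    def register(self, context, model):
        self._models[context] = model

    def predict(self, x, context):
        return self._models.get(context, self._fallback).predict(x)


class ConstantModel:
    """Stand-in for a trained model, for illustration only."""

    def __init__(self, value):
        self.value = value

    def predict(self, x):
        return self.value


pool = ContextModelPool(fallback=ConstantModel(0))
pool.register("winter", ConstantModel(1))
pool.register("summer", ConstantModel(2))

assert pool.predict(None, "winter") == 1
assert pool.predict(None, "autumn") == 0  # unseen context falls back to the default model
```

In practice the registry would also record when each model was trained and evaluated, so the maintenance tasks below can decide which pool entries to refresh or retire.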
- Task: Since your data mining work may be ongoing, monitoring and maintenance are important issues. In such cases, the model(s) will likely need to be evaluated periodically to ensure their effectiveness and to make continuous improvements.
- Monitoring and maintenance plan: Summarize the monitoring and maintenance strategy: which factors or influences need to be tracked, how the validity and accuracy of each model are checked, when a model expires, etc.
- Context change monitoring: Determine the pace at which context values are captured or estimated, and check whether the observed context distribution remains within the expected range.
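The periodic evaluation called for above can be sketched as a rolling check on logged accuracy, assuming an accuracy figure is recorded after each evaluation cycle. The window size and tolerance here are illustrative assumptions, not values prescribed by the methodology:

```python
def needs_retraining(recent_accuracies, baseline, tolerance=0.05, window=5):
    """Flag a deployed model for retraining when its rolling accuracy
    drops more than `tolerance` below the acceptance baseline.

    Returns False while fewer than `window` evaluations are available,
    since there is not yet enough evidence of degradation.
    """
    if len(recent_accuracies) < window:
        return False
    rolling = sum(recent_accuracies[-window:]) / window
    return rolling < baseline - tolerance


# Accuracy logged after each periodic evaluation (illustrative figures):
history = [0.91, 0.90, 0.89, 0.84, 0.82, 0.80, 0.79]

assert not needs_retraining(history[:3], baseline=0.90)  # too few evaluations yet
assert needs_retraining(history, baseline=0.90)          # sustained degradation
```

When the flag fires, the team would follow the review task above: reframe or retrain the affected model, or fall back to another model in the pool.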
- Task: At the end of the project, the project team writes up a final report to communicate the results.
- Final report: The final report brings all the threads together. It should include a thorough description of the original business problem, the process used to conduct the data mining, how well the initial data mining goals have been met, which (versatile) models are reused repeatedly, the budget and costs (what did reframing and retraining cost, and how significant has context been?), deviations from the original plan, a summary of the data mining results, an overview of the deployment process, the recommendations and insights discovered, etc.
- Final presentation: In addition to the final report, there may be a meeting at the end of the project at which the results are presented to the customer or management.
- Task: This is the final step of the CASP-DM methodology. In it, we assess what went right and what went wrong (and what needs to be improved), the final impressions, the lessons learned, etc.
- Experience documentation: Summarize the important experiences gained during the project (overall impressions, pitfalls, misleading approaches, etc.). Have the contexts been well identified? How much model reuse has been performed? Were the models sufficiently flexible to be reframed? Should we change the definition of context? Can we build more versatile models?
Legend of the different representations of original and new/enhanced tasks and outputs: