Section: 16 Free Style: Prediction
16.1 Introduction
What is a prediction model?
A prediction model is a set of rules which, given the values of independent variables (predictors), determines the value of the predicted (dependent) variable. Here are examples of such rules:
If score > 80 and participation > 0.6 then grade = "A"
If score > 60 and score < 70 and major = "Psychology" and Ask_questions = "always" then grade = "B"
If score < 50 and score > 40 and Doze_off = "always" then grade = "F"
By freestyle prediction we mean building a prediction model without R library functions such as rpart() and other machine learning packages. In freestyle prediction one develops models from scratch, on the basis of plots as well as exploratory queries. Freestyle prediction is important for two reasons. First, building prediction models from scratch allows an aspiring data scientist to "feel the data", as opposed to the often blind, direct application of library functions. Second, even when one uses prediction models based on library functions, the best models are often created by combining several such models. These combinations often arise from skillful subsetting of datasets and applying different models to different subsets.
As our prediction challenge competitions indicate, the winning prediction models (the ones with the least error) are predominantly combinations of different models applied to different subsets of the data. Thus, freestyle prediction is almost always part of building a prediction model. We start by showing an example of a simple freestyle prediction model.
16.2 Example of a simple freestyle prediction model
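As a minimal sketch (a hypothetical illustration, not the book's actual snippet), consider a small made-up data frame of student scores. The "model" is nothing more than hand-written rules in the style shown above, encoded with ifelse():

```r
# Hypothetical training data; column names are assumptions for illustration.
grades <- data.frame(
  score = c(85, 92, 65, 45, 78),
  participation = c(0.8, 0.9, 0.5, 0.2, 0.7)
)

# A freestyle prediction model: hand-written rules, no library functions.
predict_grade <- function(score, participation) {
  ifelse(score > 80 & participation > 0.6, "A",
         ifelse(score > 60, "B", "F"))
}

grades$predicted <- predict_grade(grades$score, grades$participation)
print(grades$predicted)   # "A" "A" "B" "F" "B"
```

The entire model is a nested ifelse(); refining it means adjusting the thresholds until the predictions match the training data well.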
16.3 How to build a freestyle (your own code) prediction model?
The key idea behind building freestyle prediction models is to subset the data and select the most frequent value of the predicted variable as the prediction. Of course, we are interested in finding highly discriminative subsets of the data with one highly dominant (most frequent) value, since choosing such a very frequent value as the prediction will lead to a small error. But how do we find data subsets with such dominant most frequent values? It is a bit of a trial and error process. As we show below in snippet 2, the programmer can rely on a sequence of one-line exploratory queries. Later, in the next section, we show how rpart() generates such discriminative subsets of data automatically, through recursive partitioning.
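The kind of one-line exploratory queries meant here can be sketched as follows, on a hypothetical data frame (the column names are assumptions for illustration):

```r
# Hypothetical training data for illustration.
d <- data.frame(
  score = c(85, 72, 66, 44, 91, 63, 47),
  major = c("CS", "Psychology", "Psychology", "CS", "CS", "Psychology", "CS"),
  grade = c("A", "B", "B", "F", "A", "B", "F")
)

# One-line exploratory queries: how is the target distributed in a subset?
table(d$grade[d$score > 80])                            # subset by one predictor
table(d$grade[d$score > 60 & d$major == "Psychology"])  # combine predictors

# A subset where one value clearly dominates is discriminative:
# predicting that most frequent value inside the subset gives a small error.
most_frequent <- names(which.max(table(d$grade[d$score > 60])))
most_frequent   # "B" dominates among scores above 60 in this toy data
```

Each query is cheap to run, so trial and error over many candidate subsets is practical.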
16.4 One-step cross-validation
How do we know if our prediction model is any good? After all, we may easily build a model which is close to perfect on the training data set but performs miserably on the new, testing data. This is a nightmare for every prediction model builder, and it is called a Kaggle surprise. A Kaggle surprise happens quite often during our prediction competitions: students build models which overfit the data and give them a false sense of a great, low error, only to do the opposite on the testing data and yield a miserably high error.
To avoid this, or at least to protect against it, cross-validation is needed. We illustrate cross-validation in the next snippet. We split the training data into a real training part and a testing part, which is the remaining portion of our training data set; thus we use part of the training data as testing data. We do this by randomly splitting the data set. Although we show here just one step of cross-validation, it should be repeated multiple times. This lets us observe how our model behaves on different random subsets of the training data and spot inconsistent results (high variance of the error), which is a warning sign of a future Kaggle surprise.
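One step of such a random split can be sketched as below (the data frame and the 70/30 split ratio are assumptions for illustration):

```r
# One step of cross-validation: randomly split the training data into a
# "real" training part and a held-out testing part.
set.seed(42)  # fix the seed so the random split is reproducible

train_full <- data.frame(score = runif(100, 0, 100),
                         grade = sample(c("A", "B", "F"), 100, replace = TRUE))

idx   <- sample(nrow(train_full), size = 0.7 * nrow(train_full))  # 70% of rows
train <- train_full[idx, ]    # build the model on this part
test  <- train_full[-idx, ]   # evaluate the model on this part

nrow(train)   # 70
nrow(test)    # 30
```

Rerunning this snippet with different seeds and recomputing the error each time is what reveals a high-variance model.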
We use selected data puzzles from section 4 in the prediction challenges. Given a data puzzle (such as 4.1), we separate it into a training data subset and a testing data subset. The training data is given to students to build and cross-validate their prediction models. Then we use Kaggle to evaluate their models on the testing subset of the data puzzle. Each prediction challenge is structured as a competition, and Kaggle ranks students' models by prediction accuracy: for categorical variables it is the fraction of values which are predicted correctly; for numerical variables it is the MSE (mean squared error).
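The two scoring rules can be computed in one line each; the vectors below are made up purely for illustration:

```r
# Categorical target: accuracy = fraction of values predicted correctly.
actual_grade    <- c("A", "B", "B", "F")
predicted_grade <- c("A", "B", "F", "F")
accuracy <- mean(actual_grade == predicted_grade)
accuracy   # 3 out of 4 correct, i.e. 0.75

# Numerical target: mean squared error.
actual_score    <- c(85, 70, 60)
predicted_score <- c(80, 75, 60)
mse <- mean((actual_score - predicted_score)^2)
mse        # (25 + 25 + 0) / 3
```

A lower MSE or a higher accuracy means a better rank on the Kaggle leaderboard.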
16.5 General Structure of the Prediction Challenges
Submissions take place on Kaggle, which organizes these prediction challenges online, validates submissions, enforces submission deadlines, calculates the prediction scores, and ranks all submissions.
The datasets provided for each prediction challenge are as follows:
Training Dataset
- It is used for training and cross-validation in the prediction challenge. This data set has all the training attributes along with the values of the attribute which is predicted (the so-called target attribute).
- Models for prediction are to be trained using this dataset only.
- The training data set is the one used when you build your prediction model, since it is the only data set which has all values of the target attribute.
Testing Dataset
- It is used for applying your prediction model to new data. You do this only when you are finished building your prediction model.
- The testing data set consists of all the attributes that were used for training, but it does not contain any values of the target attribute.
- It is disjoint from the training data set: it contains new data and is missing the target variable.
Submission Dataset
- After making predictions on the testing dataset, to submit on Kaggle we must copy the predicted attribute column into this submission dataset, which has only 2 columns: an index column (e.g. ID or name) and the predicted attribute column. Remember that after copying the predicted attribute column into this dataset, one should also save it to the submission dataset file, which can then be uploaded to Kaggle.
To read the datasets use the read.csv() function, and to write a dataset to a file use the write.csv() function. Often, while writing a data frame from R to a csv file, people make the mistake of also writing the row names, which results in an error upon submission of the file to Kaggle. To avoid this, add the parameter row.names = F to the write.csv() call, e.g. write.csv(*dataframe*, *fileaddress*, row.names = F).
16.5.1 Preparing submission.csv for Kaggle
Data League: https://data101.cs.rutgers.edu/?q=node/155
Kaggle competition: https://www.kaggle.com/competitions/predictive-challenge-2-2022/overview
Kaggle submission instructions: https://data101.cs.rutgers.edu/?q=node/150
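Putting the pieces together, the whole submission workflow can be sketched as below. The column names (ID, grade), the freestyle rule, and the file name submission.csv are assumptions for illustration, not the actual challenge files:

```r
# Hypothetical testing data: all training attributes, no target column.
test_data <- data.frame(ID = 1:3, score = c(85, 65, 40))

# Apply a freestyle rule to the testing data to produce predictions.
test_data$grade <- ifelse(test_data$score > 80, "A",
                          ifelse(test_data$score > 60, "B", "F"))

# The submission keeps only the index column and the predicted column.
submission <- data.frame(ID = test_data$ID, grade = test_data$grade)

# row.names = F prevents R from writing row names, which Kaggle rejects.
write.csv(submission, "submission.csv", row.names = F)
```

The resulting submission.csv, with exactly two columns and no row names, is the file uploaded to Kaggle.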