Good stuff. You should try testing which of the four models is most effective by taking a subset of the books you have already rated as the training set, using the rest to generate ratings with each model, and seeing how those match up to your actual ratings (for the size of your sample, maybe an 80%-20% split). I'd be interested to see which model matches your ratings most accurately. (Also try rotating which books are in the 80 and which are in the 20 to see if that makes a difference.)
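The "rotate which books land in the 80 and which in the 20" idea is essentially k-fold cross-validation. A minimal sketch of it with scikit-learn, using made-up placeholder features and ratings (the real feature set would come from the article's data):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.random((100, 4))          # placeholder book features (hypothetical)
y = rng.integers(1, 6, size=100)  # placeholder 1-5 star ratings (hypothetical)

# With cv=5, every book sits in the held-out 20% exactly once,
# so you get five accuracy scores instead of one.
scores = cross_val_score(KNeighborsClassifier(), X, y, cv=5)
print(scores.mean(), scores.std())
```

The mean and spread of the five scores tell you whether a single 80/20 result was lucky or typical.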

Thanks! So I had done a train-test split with that 80/20 ratio (and it's still in the code), but I didn't include the accuracy results in the article. The classifiers were the most accurate: Log Reg got 0.705 and KNN Class got 0.655, followed by KNN Reg with 0.649 and Lin Reg with 0.552.
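For reference, the four-model comparison described above can be sketched like this; the feature matrix and ratings here are random stand-ins (hypothetical), not the article's actual book data, so the printed scores won't match the numbers quoted:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 5))          # stand-in book features (hypothetical)
y = rng.integers(1, 6, size=200)  # stand-in 1-5 star ratings (hypothetical)

# The 80/20 split mentioned in the thread
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "Log Reg": LogisticRegression(max_iter=1000),
    "KNN Class": KNeighborsClassifier(),
    "KNN Reg": KNeighborsRegressor(),
    "Lin Reg": LinearRegression(),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    # .score() is accuracy for the classifiers, R^2 for the regressors
    scores[name] = model.score(X_test, y_test)
    print(f"{name}: {scores[name]:.3f}")
```

Note that `.score()` means different things for the two model families (accuracy vs. R^2), so the classifier and regressor numbers aren't directly comparable on the same scale.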

Very interesting experience. Thanks for sharing.

Thank you so much, it really means a lot! I'm glad you liked it!