The ability to evaluate different algorithms in terms of accuracy and speed on shared data sets, or even against models created by others, is great! When our own model isn't working well enough and we have no idea what it should be doing, these results give us a baseline and some guidance, which is nice. I personally dislike that there isn't an easy way to compare your own algorithm against the provided ones; maybe that's just me, though. I would also like more options when creating my test setup (e.g. the number and type of tests) instead of having everything pre-configured, which sometimes makes things harder than necessary without much benefit.
There really needs to be an option for access control, where one user decides who gets access to his account and which fields each person sees, e.g. Model Name vs. Algorithm used. This would help reduce confusion between two users trying out something similar, and would also help them understand what is going wrong, rather than both being confused.