We study an open problem posed by Kale at COLT 2014 ([1]): to find an efficient online sparse classification algorithm with sublinear regret, under the squared loss, relative to the best sparse linear predictor. The classifier is sparse in the sense that it may depend on at most $k$ features. We show that no efficient algorithm can achieve regret $O(T^{1-\delta})$ for any constant $\delta > 0$, where $T$ is the number of iterations, if the dependence of the regret on the other parameters is polynomial, assuming $\mathrm{NP} \not\subseteq \mathrm{BPP}$, an open problem that is widely believed to hold. This result appeared in the milestone; here we clarify the proof. Zolghadr et al. ([2]) studied a related problem, in which each feature has a specified cost and there is no limit on the number of features; we prove similar hardness results for that setting as well. Finally, we extend the result on the problem posed by Kale by showing that the same hardness holds even when $k$ is restricted to be at most an inverse polynomial in the number of features.
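As a sketch of the claim being made, the hardness statement can be written formally as follows. The notation here ($d$ for the number of features, $x_t, y_t$ for the example and label at round $t$, $\hat{y}_t$ for the algorithm's prediction) is assumed, since the abstract does not fix symbols:

```latex
% Sketch of the hardness statement (assumed notation: d = number of
% features, k = sparsity bound, T = number of iterations).
% Unless NP is contained in BPP, no polynomial-time online algorithm
% guarantees, for any constant delta > 0, a regret bound of the form
\[
  \mathrm{Regret}(T)
  \;=\;
  \sum_{t=1}^{T} \left(\hat{y}_t - y_t\right)^2
  \;-\;
  \min_{\substack{w \in \mathbb{R}^d \\ \|w\|_0 \le k}}
  \sum_{t=1}^{T} \left(w^\top x_t - y_t\right)^2
  \;\le\;
  \mathrm{poly}(d)\, \cdot T^{1-\delta}.
\]
```

Here $\|w\|_0 \le k$ encodes the sparsity constraint: the comparator is the best linear predictor supported on at most $k$ features.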