Explainability is a crucial property of models that ensures their reliable use by both engineers and end-users. However, explainability depends on the user and the model's usage context, making it an important dimension for user personalization. In this article, we explore the personalization of opaque-box image classifiers using an interactive hyperparameter tuning approach, in which the user iteratively rates the quality of explanations for a selected set of query images. Using a multi-objective Bayesian optimization (MOBO) algorithm, we optimize for both the classifier's accuracy and the perceived explainability ratings. In our user study, we found Pareto-optimal parameters for each participant that significantly improved the explainability ratings of queried images while minimally impacting classifier accuracy. Furthermore, this improved explainability with tuned hyperparameters generalized to held-out validation images, with the extent of generalization depending on the variance within the queried images and the similarity between the query and validation images. This MOBO-based method can, in general, be used to jointly optimize any machine learning objective alongside any human-centric objective. The Pareto front produced by the interactive hyperparameter tuning can be useful during deployment, allowing desired trade-offs between the objectives (if any) to be chosen by selecting the appropriate parameters. Additionally, user studies like ours can assess whether commonly assumed trade-offs, such as accuracy versus explainability, actually exist in a given context.
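To illustrate the deployment step described above, selecting an operating point from a Pareto front, the following minimal sketch (not the paper's implementation; the candidate `(accuracy, rating)` pairs are purely hypothetical) filters a set of evaluated hyperparameter settings down to the non-dominated ones, from which a user could pick a desired accuracy-versus-explainability trade-off:

```python
from typing import List, Tuple


def pareto_front(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the non-dominated points when maximizing both objectives.

    A point p is dominated if some other point q is at least as good on
    both objectives and strictly better on at least one.
    """
    front = []
    for p in points:
        dominated = any(
            q != p
            and q[0] >= p[0] and q[1] >= p[1]
            and (q[0] > p[0] or q[1] > p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return front


# Hypothetical (classifier accuracy, mean explainability rating) pairs,
# one per evaluated hyperparameter configuration.
candidates = [(0.90, 3.0), (0.85, 4.5), (0.95, 2.0), (0.80, 4.0)]
front = pareto_front(candidates)
# (0.80, 4.0) is dominated by (0.85, 4.5); the other three remain.
```

In a real MOBO loop the candidate evaluations would come from the Bayesian optimizer's queries, but the same non-dominated filter yields the front offered to the user at deployment time.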