


Status: Submitted
Categories: Database
Created by: Guest
Created on: Mar 30, 2020

Add a "Limit" to Delete and Bulk Delete operations

Deleting tens of millions of documents can have a big impact on cluster performance, even with Bulk Delete. A "Limit" option should be added to Delete and Bulk Delete so we can cap the number of operations and avoid killing the cluster's performance.
  • For Delete, this would ensure we only delete n documents.
  • For Bulk Delete, this would likewise cap the delete at n documents, or alternatively limit the number of batches/groups of documents to be deleted.
Right now the only workaround is a hack: query the documents with a limit and a projection to get their IDs, then delete only those. That means running large queries/projections followed by large delete operations. This workaround is inefficient, and it is only a stopgap we have to use until MongoDB supports a proper solution: a limit on deletes.
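The workaround described above, fetching a limited batch of IDs and then deleting exactly those, can be sketched as a generic loop. This is a minimal illustration, not MongoDB API: `delete_in_batches` and the two callables are assumptions for the example, with an in-memory set standing in for a collection. In a real deployment, `fetch_ids` would be a `find()` with an `_id` projection and `.limit()`, and `delete_ids` a `deleteMany({_id: {$in: ids}})`.

```python
def delete_in_batches(fetch_ids, delete_ids, batch_size=1000):
    """Batched-delete pattern: repeatedly fetch up to batch_size matching
    ids, delete exactly those, and stop when nothing matches.

    fetch_ids(limit) -> list of matching ids (capped at `limit`)
    delete_ids(ids)  -> number of documents actually deleted
    Returns the total number of documents deleted.
    """
    total = 0
    while True:
        ids = fetch_ids(batch_size)
        if not ids:
            break
        total += delete_ids(ids)
    return total

# Demonstration against a plain in-memory set standing in for a collection.
docs = set(range(10))
fetch = lambda limit: sorted(docs)[:limit]

def delete(ids):
    for i in ids:
        docs.discard(i)
    return len(ids)

print(delete_in_batches(fetch, delete, batch_size=3))  # prints 10 (4 passes)
```

The point of the pattern is that each pass touches at most `batch_size` documents, so no single operation can monopolize the cluster; the cost is the extra query/projection round-trip per batch that the request complains about.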
  • Guest
    Aug 30, 2022
    Thinking of multiple threads: I would say throttle, e.g. using a % of IOPS, queueing, or spreading deletes over a longer period so as not to overload the system.
  • Guest
    Sep 21, 2020
    Please allow the caller to throttle or limit remove(). The syntax for remove includes a filter query much like find(), but there is NO limit. So if a caller wants to delete all documents older than date X, but only delete 1MM of them at a time, there is NO good way to do that. It would be really nice to allow finer control of remove().
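The date-threshold case from this comment can be combined with the batching workaround and a pause between passes, in the spirit of the throttling suggestion above. A minimal sketch, where `purge_older_than`, the callables, and the pacing delay are assumptions for illustration, not MongoDB features:

```python
import time

def purge_older_than(fetch_old_ids, delete_ids, batch_size=1_000_000, pause_s=1.0):
    """Delete documents older than some threshold, at most batch_size per
    pass, sleeping between passes to spread the load over time."""
    total = 0
    while True:
        ids = fetch_old_ids(batch_size)
        if not ids:
            return total
        total += delete_ids(ids)
        time.sleep(pause_s)  # throttle: spread deletes over a longer period

# In-memory stand-in: documents keyed by id, each with a "created" stamp.
store = {i: {"created": i} for i in range(10)}
cutoff = 5  # delete everything "older" than 5
fetch_old = lambda limit: [i for i, d in store.items() if d["created"] < cutoff][:limit]

def delete_batch(ids):
    for i in ids:
        store.pop(i, None)
    return len(ids)

deleted = purge_older_than(fetch_old, delete_batch, batch_size=2, pause_s=0.0)
print(deleted)  # prints 5, removed in batches of 2
```

With a real pause between batches, the delete traffic is smeared across minutes or hours instead of arriving as one burst, which is the "spread deletes over a longer period" idea from the earlier comment.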