javascript – E11000 duplicate key error index when creating a unique index
I am running the query below in Robomongo, and it gives the error shown below. I am really trying to remove the duplicate entries in the url field with this query. Is there something wrong with my query?
db.dummy_data.createIndex({"url":1},{unique:true},{dropDups:true})
The error I get is:
E11000 duplicate key error index: mydb.dummy_data.$url_1 dup key: { "some url" }
Solution:
So when your syntax is corrected from the erroneous usage to:
db.dummy_data.ensureIndex({ "url": 1 }, { "unique": true, "dropDups": true })
You report that you still receive an error message, but a new one:
{ "connectionId" : 336, "err" : "too may dups on index build with dropDups=true", "code" : 10092, "n" : 0, "ok" : 1 }
There is this message on Google Groups which leads to the suggested approach:
Hi Daniel,
The assertion indicates that the number of duplicates met or exceeded 1000000. In addition, there’s a comment in the source that says, “we could queue these on disk, but normally there are very few dups, so instead we keep in ram and have a limit.” (where the limit == 1000000), so it might be best to start with an empty collection, ensureIndex with {dropDups: true}, and reimport the actual documents.
Let us know if that works better for you.
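Before going that route, it may be worth checking how close you actually are to that limit. The aggregation below is not part of the original answer, just a rough sketch run against the same dummy_data collection, counting how many url values repeat and how many documents a dropDups build would have to discard:
db.dummy_data.aggregate([
    // Count how many documents share each url.
    { "$group": { "_id": "$url", "count": { "$sum": 1 } } },
    // Keep only urls that occur more than once.
    { "$match": { "count": { "$gt": 1 } } },
    // Total the repeated urls and the documents a dropDups build would drop.
    { "$group": {
        "_id": null,
        "duplicateKeys": { "$sum": 1 },
        "docsToDrop": { "$sum": { "$subtract": [ "$count", 1 ] } }
    } }
], { "allowDiskUse": true })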
So the suggestion is to create a new collection and import everything into it. The basic premise:
// Build the unique index on the new, empty collection first.
db.newdata.ensureIndex({ "url": 1 }, { "unique": true, "dropDups": true });
// Copy documents across; inserts whose url already exists are rejected by the unique index.
db.dummy_data.find().forEach(function(doc) {
    db.newdata.insert(doc);
});
Or better yet:
db.newdata.ensureIndex({ "url": 1 }, { "unique": true, "dropDups": true });

var bulk = db.newdata.initializeUnorderedBulkOp();
var counter = 0;
db.dummy_data.find().forEach(function(doc) {
    counter++;
    bulk.insert( doc );
    // Send the queued inserts to the server in batches of 1000.
    if ( counter % 1000 == 0 ) {
        bulk.execute();
        bulk = db.newdata.initializeUnorderedBulkOp();
    }
});
// Flush any remaining queued inserts.
if ( counter % 1000 != 0 )
    bulk.execute();
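Assuming the copy completes cleanly, the remaining step, which the answer above does not spell out, would presumably be to swap the de-duplicated collection back into place, along these lines:
// Replace the old collection with the de-duplicated copy; the unique index on url comes with it.
db.dummy_data.drop();
db.newdata.renameCollection("dummy_data");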
However you approach migrating from one collection to another, with a large number of duplicates on a unique key this seems to be the only way to handle it at present.
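As a side note that is not part of the original answer: the dropDups option was removed in MongoDB 3.0, so on newer servers the index builds above would simply fail when duplicates exist. A hedged sketch of one alternative, assuming MongoDB 3.4+ (for $replaceRoot) and that keeping an arbitrary single document per url is acceptable:
// Keep one document per url and write the result to a new collection.
db.dummy_data.aggregate([
    { "$group": { "_id": "$url", "doc": { "$first": "$$ROOT" } } },
    { "$replaceRoot": { "newRoot": "$doc" } },
    { "$out": "newdata" }
], { "allowDiskUse": true });
// The unique index can then be built without dropDups.
db.newdata.createIndex({ "url": 1 }, { "unique": true });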
Tags: javascript, indexing, mongodb, robo3t  Source: https://codeday.me/bug/20190628/1318774.html