
python – Bulk upsert with SQLAlchemy


See the related English answers:
> SQLAlchemy – performing a bulk upsert (if exists, update, else insert) in postgresql (1 answer)
> How to UPSERT (MERGE, INSERT … ON DUPLICATE UPDATE) in PostgreSQL? (6 answers)
I'm using SQLAlchemy 1.1.0b to bulk-insert a large amount of data into PostgreSQL, and I'm running into duplicate key errors.

from sqlalchemy import *
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.automap import automap_base

import pg

engine = create_engine("postgresql+pygresql://" + uname + ":" + passw + "@" + url)

# reflectively load the database.
metadata = MetaData()
metadata.reflect(bind=engine)
session = sessionmaker(autocommit=True, autoflush=True)
session.configure(bind=engine)
session = session()
base = automap_base(metadata=metadata)
base.prepare(engine, reflect=True)

table_name = "arbitrary_table_name" # this will always be arbitrary
mapped_table = getattr(base.classes, table_name)
# col and col2 exist in the table.
chunks = [[{"col":"val"},{"col2":"val2"}],[{"col":"val"},{"col2":"val3"}]]

for chunk in chunks:
    session.bulk_insert_mappings(mapped_table, chunk)
    session.commit()

When I run it, I get this:

sqlalchemy.exc.IntegrityError: (pg.IntegrityError) ERROR:  duplicate key value violates unique constraint <constraint>

I can't seem to instantiate mapped_table as a Table() object properly.

I'm working with time-series data, so I'm grabbing data in bulk with some overlap across time ranges. I want to do a bulk upsert to ensure data consistency.

What's the best way to do a bulk upsert with a large dataset? I now know that PostgreSQL supports upserts, but I'm not sure how to do it in SQLAlchemy.
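SQLAlchemy 1.1 (the version used above) exposes PostgreSQL's ON CONFLICT clause through its postgresql dialect, which is one way to express the upsert directly in SQL. A minimal sketch, assuming a Table object (e.g. metadata.tables[table_name]), a known list of primary-key column names, and row dicts that all carry the same columns; the helper name upsert_chunk and the "id" key in the usage line are illustrative, not from the original post:

from sqlalchemy.dialects.postgresql import insert

def upsert_chunk(session, table, rows, pk_cols):
    # Multi-row INSERT ... ON CONFLICT (pk) DO UPDATE, updating every
    # non-key column from the incoming (EXCLUDED) values.
    stmt = insert(table).values(rows)
    update_cols = {
        c.name: stmt.excluded[c.name]
        for c in table.columns
        if c.name not in pk_cols
    }
    stmt = stmt.on_conflict_do_update(index_elements=pk_cols, set_=update_cols)
    session.execute(stmt)

# e.g.: upsert_chunk(session, metadata.tables[table_name], chunk, pk_cols=["id"])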

Answer:

https://stackoverflow.com/a/26018934/465974

After I found this command, I was able to perform upserts, but it is
worth mentioning that this operation is slow for a bulk “upsert”.
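If, as the slowness caveat suggests, the command in question is Session.merge(), the per-row version looks roughly like the sketch below: merge() emits a SELECT by primary key for every object and then an INSERT or UPDATE on flush, which is exactly why it is slow in bulk. This assumes each row dict carries its primary-key value and that the session is a regular (non-autocommit) one:

for chunk in chunks:
    for row in chunk:
        # SELECT by primary key, then INSERT or UPDATE on flush
        session.merge(mapped_table(**row))
    session.commit()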

The alternative is to get a list of the primary keys you would like to
upsert, and query the database for any matching ids:
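A sketch of that approach, assuming a single primary-key column named id (the column name and the bulk_upsert helper are illustrative, not from the original post):

def bulk_upsert(session, mapped_table, rows):
    # Find which incoming primary keys already exist in the table.
    incoming_ids = [row["id"] for row in rows]
    existing_ids = {
        pk for (pk,) in session.query(mapped_table.id)
                               .filter(mapped_table.id.in_(incoming_ids))
    }
    # Route each row to an UPDATE batch or an INSERT batch accordingly.
    to_update = [row for row in rows if row["id"] in existing_ids]
    to_insert = [row for row in rows if row["id"] not in existing_ids]
    if to_update:
        session.bulk_update_mappings(mapped_table, to_update)
    if to_insert:
        session.bulk_insert_mappings(mapped_table, to_insert)
    session.commit()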

Tags: python, postgresql, upsert, sqlalchemy
Source: https://codeday.me/bug/20190702/1353461.html