12/3/2023

Postgres create unique index

However, the underlying unique index is big and inefficient for many and/or wide columns. I would still consider an index on a hash value, as outlined below.

Alternative solutions (original answer)

Do you need all columns to make rows unique? Typically, combining just a few should suffice. Bank data should have plenty of NOT NULL columns. To make it work including a single nullable column, you could use a partial index, but that gets impractical quickly with more than one nullable column.

With multiple nullable columns, a simple solution would be a unique expression index with COALESCE like:

```sql
CREATE UNIQUE INDEX bank_uni_idx ON bank
(col1, col2, COALESCE(col3, ''), col4, col5, col6, COALESCE(col7, ''), col8);
```

That's assuming col3 & col7 are nullable string-type columns, where the empty string ('') and NULL are semantically equivalent. The same can be used for a single nullable column as well, obviously. You need a safe replacement for NULL that won't conflict with other legal values (the empty string in my example).

The downside of all solutions so far (including your original) is the large index on so many columns. This leads me to the answer I really want to give:

Efficient solution

Create a UNIQUE index or constraint based on a cheap and sufficiently unique hash value of the row (reduced to defining columns).

Postgres 14

Postgres 14 comes with a built-in hash function for records (including anonymous records!), which is substantially cheaper than my custom function below:

hash_record_extended(record, bigint) -> bigint

It belongs to the same family of functions as hashtextextended() (details below). Now, an expression index seems more attractive than a generated column. So just:

```sql
CREATE UNIQUE INDEX bank_hash_uni ON bank
(hash_record_extended((col1, col2, col3, col4, col5, col6, col7, col8), 0));
```

Custom function for older versions (original answer)

Assuming all text columns: store the hash value in a generated column and create a UNIQUE constraint on that. See: Computed / calculated / virtual / derived columns in PostgreSQL.

```sql
CREATE OR REPLACE FUNCTION public.f_bank_bighash(col1 text, col2 text, col3 text, col4 text
                                               , col5 text, col6 text, col7 text, col8 text)
  RETURNS bigint
  LANGUAGE sql IMMUTABLE COST 25 PARALLEL SAFE AS
'SELECT hashtextextended(textin(record_out(($1,$2,$3,$4,$5,$6,$7,$8))), 0)';

COMMENT ON FUNCTION public.f_bank_bighash(text, text, text, text, text, text, text, text)
IS 'Fast, practically unique signature for the set of defining columns in table bank.';
```
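To round off the generated-column route, here is a minimal sketch of how the custom hash function could be wired into a table. The table definition and the constraint name are my own illustrative assumptions, not part of the original answer; it presumes the f_bank_bighash() function above already exists:

```sql
-- Sketch only (illustrative table and constraint names), assuming the
-- f_bank_bighash() function defined above is already in place.
CREATE TABLE bank (
  col1 text, col2 text, col3 text, col4 text
, col5 text, col6 text, col7 text, col8 text
  -- Generated column holding the hash over the defining columns.
  -- Works because f_bank_bighash() is declared IMMUTABLE; generated
  -- columns must be STORED in Postgres 12+.
, bighash bigint GENERATED ALWAYS AS
    (f_bank_bighash(col1, col2, col3, col4, col5, col6, col7, col8)) STORED
  -- The UNIQUE constraint sits on the single bigint, so its underlying
  -- index stays small regardless of how many/wide the columns are.
, CONSTRAINT bank_bighash_uni UNIQUE (bighash)
);
```

Inserting a second row with the same values in all defining columns then raises a unique violation on bighash, while the index itself only stores one bigint per row.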