Just a quick tip: If you need to import CSV data into a Postgres table, use
COPY. If the table doesn't already exist, you'll have to create it first with
CREATE TABLE; the COPY statement will then populate it. If the table already exists and is non-empty,
COPY will append the data, assuming it doesn't violate any constraints (e.g. uniqueness).
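For example, a table to receive a two-column CSV might be created like this (the column names and types are invented for illustration; match them to your actual file):

CREATE TABLE MyTable ("Column1" varchar, "Column2" integer);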
In its simplest form, here's what it looks like:
COPY MyTable FROM '/path/to/my/file' DELIMITER ',' CSV;
If the first row of your file contains a header, you can tell Postgres like so:
COPY MyTable FROM '/path/to/my/file' DELIMITER ',' CSV HEADER;
Explicitly specifying columns
If your table has an auto-incrementing column (e.g. a row id), you may need to tell Postgres which columns are contained in your file or it may complain that it can't find this column. The syntax is as follows:
COPY MyTable ("Column1", "Column2", ... "ColumnN") FROM '/path/to/my/file' DELIMITER ',' CSV;
Importing multiple files
You could copy-paste COPY statements, but there's a neater way to do it using a
FOREACH loop (requires PostgreSQL 9.1 or later). The idea is as follows. I haven't actually tested this, so it may require some modification!
DO $BODY$
DECLARE
  files varchar[] := array['/path/1', '/path/2'];
  f varchar;
BEGIN
  FOREACH f IN ARRAY files LOOP
    -- format's %L quotes the path as a SQL string literal
    EXECUTE format('COPY MyTable FROM %L DELIMITER '','' CSV', f);
  END LOOP;
END
$BODY$ LANGUAGE plpgsql;
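One caveat worth knowing: COPY ... FROM '/path' reads the file on the database server, with the server's permissions. If your files live on the client machine instead, psql's \copy meta-command takes the same options but reads the file client-side; a sketch:

\copy MyTable FROM '/path/to/my/file' DELIMITER ',' CSV

Note that \copy is a psql feature, so it works from a psql session or a psql -c invocation, not from within a plpgsql block.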