# Summary

This document contains [DuckDB's official documentation and guides](https://duckdb.org/) in a single-file, easy-to-search form.
If you find any issues, please report them [as a GitHub issue](https://github.com/duckdb/duckdb-web/issues).
Contributions are very welcome in the form of [pull requests](https://github.com/duc...
# Connect

## Connect

### Connect or Create a Database

To use DuckDB, you must first create a connection to a database. The exact syntax varies between the [client APIs](#docs:api:overview) but it typically involves passing an argument to configure persistence.
### Persistence

DuckDB can operate in both persistent mode, where the data is saved to disk, and in in-memory mode, where the entire data set is stored in main memory.

> **Tip.** Both persistent and in-memory databases use spilling to disk to facilitate larger-than-memory workloads (i.e., out-of-core processing).

#### Persis...
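As a minimal sketch of persistent mode from SQL (the file name `my_database.duckdb` is a hypothetical example), a database file can be attached, used, and detached:

```sql
-- Attach a persistent database file (created if it does not exist).
ATTACH 'my_database.duckdb' AS persistent_db;

-- Tables created in it are saved to disk.
CREATE TABLE persistent_db.people (id INTEGER, name VARCHAR);

-- Detach when done; the data remains in the file.
DETACH persistent_db;
```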
## Concurrency

### Handling Concurrency

DuckDB has two configurable options for concurrency:

1. One process can both read and write to the database.
2. Multiple processes can read from the database, but no processes can write ([`access_mode = 'READ_ONLY'`](#docs:configuration:overview::configuration-reference)).

When using option 1, DuckDB supports multi...
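A minimal sketch of option 2, assuming a hypothetical `my_database.duckdb` file; read-only access can also be requested when attaching a database file:

```sql
-- Open an existing database file in read-only mode;
-- multiple processes may do this concurrently.
ATTACH 'my_database.duckdb' (READ_ONLY);
```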
### Concurrency within a Single Process

DuckDB supports concurrency within a single process according to the following rules. As long as there are no write conflicts, multiple concurrent writes will succeed. Appends will never conflict, even on the same table. Multiple threads can also simultaneously update separate tables or separate subsets of the same tab...
### Writing to DuckDB from Multiple Processes

Writing to DuckDB from multiple processes is not supported automatically and is not a primary design goal (see [Handling Concurrency](#::handling-concurrency)).
If multiple processes must write to the same file, several design patterns are possible, but they would need to be implemented in application logic. For example, ...
### Optimistic Concurrency Control

DuckDB uses [optimistic concurrency control](https://en.wikipedia.org/wiki/Optimistic_concurrency_control), an approach generally considered to be the best fit for read-intensive analytical database systems as it speeds up read query processing. As a result, any transactions that modify the same rows at the same time wi...
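A minimal sketch of a write-write conflict, assuming a hypothetical `people` table and a second connection running concurrently:

```sql
-- Connection 1:
BEGIN TRANSACTION;
UPDATE people SET name = 'Mark II' WHERE id = 1;
-- If a concurrent transaction on another connection modifies the
-- same row before this one commits, one of the two transactions
-- fails with a transaction conflict error.
COMMIT;
```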
# Data Import

## Importing Data

The first step to using a database system is to insert data into that system.
DuckDB can directly connect to [many popular data sources](#docs:data:data_sources) and offers several data ingestion methods that allow you to easily and efficiently fill up the database.
On this page, we provide an overview of thes...
### `INSERT` Statements

`INSERT` statements are the standard way of loading data into a database system. They are suitable for quick prototyping, but should be avoided for bulk loading as they have significant per-row overhead.

```sql
INSERT INTO people VALUES (1, 'Mark');
```

For a more detailed description, see the [page on the `INSERT ...
### CSV Loading

Data can be efficiently loaded from CSV files using several methods. The simplest is to use the CSV file's name:

```sql
SELECT * FROM 'test.csv';
```

Alternatively, use the [`read_csv` function](#docs:data:csv:overview) to pass along options:

```sql
SELECT * FROM read_csv('test.csv', header = false);
```

Or use...
### Parquet Loading

Parquet files can be efficiently loaded and queried using their filename:

```sql
SELECT * FROM 'test.parquet';
```

Alternatively, use the [`read_parquet` function](#docs:data:parquet:overview):

```sql
SELECT * FROM read_parquet('test.parquet');
```

Or use the [`COPY` statement](#docs:sql:statements:copy::copy--...
### JSON Loading

JSON files can be efficiently loaded and queried using their filename:

```sql
SELECT * FROM 'test.json';
```

Alternatively, use the [`read_json_auto` function](#docs:data:json:overview):

```sql
SELECT * FROM read_json_auto('test.json');
```

Or use the [`COPY` statement](#docs:sql:statements:copy::copy--from): ...
### Appender

In several APIs (C, C++, Go, Java, and Rust), the [Appender](#docs:data:appender) can be used as an alternative for bulk data loading.
This class can be used to efficiently add rows to the database system without using SQL statements.
## Data Sources

DuckDB supports several data sources, including file formats, network protocols, and database systems:

* [AWS S3 buckets and storage with S3-compatible API](#docs:extensions:httpfs:s3api)
* [Azure Blob Storage](#docs:extensions:azure)
* [Cloudflare R2](#docs:guides:network_cloud_storage:cloudflare_r2_import)
* [CSV](#...
## CSV Files

### Examples

The following examples use the [`flights.csv`](https://duckdb.org/data/flights.csv) file.

Read a CSV file from disk, auto-infer options:

```sql
SELECT * FROM 'flights.csv';
```

Use the `read_csv` function with custom options:

```sql
SELECT *
FROM read_csv('flights.csv',
    delim = '|',
    header = true,
    columns = {
        '...
```
### CSV Loading

CSV loading, i.e., importing CSV files to the database, is a very common, and yet surprisingly tricky, task. While CSVs seem simple on the surface, there are a lot of inconsistencies found within CSV files that can make loading them a challenge. CSV files come in many different varieties, are often corrupt, and do not ...
### Parameters

Below are the parameters that can be passed to the CSV reader. All of them are accepted by the [`read_csv` function](#::csv-functions), but not all are accepted by the [`COPY` statement](#docs:sql:statements:copy::copy-to).

| Name | Description | Type | Default |
|:--|:-----|:-|:-|
| `all_varchar` | Optio...
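For illustration, options from this table are passed to `read_csv` as named arguments; a minimal sketch using the `all_varchar` option:

```sql
-- Disable type detection and read every column as VARCHAR.
SELECT * FROM read_csv('flights.csv', all_varchar = true);
```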
### CSV Functions

The `read_csv` function automatically attempts to figure out the correct configuration of the CSV reader using the [CSV sniffer](https://duckdb.org/2023/10/27/csv-sniffer). It also automatically deduces types of columns. If the CSV file has a header, it will use the names found in that header to name the columns. Otherwise, the...
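To inspect what the sniffer deduced, the result can be described; a minimal sketch:

```sql
-- Show the column names and types inferred for the file.
DESCRIBE SELECT * FROM read_csv('flights.csv');
```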
### Writing Using the `COPY` Statement

The [`COPY` statement](#docs:sql:statements:copy::copy-to) can be used to load data from a CSV file into a table. This statement has the same syntax as the one used in PostgreSQL. To load the data using the `COPY` statement, we must first create a table with the correct schema (which matches the order of the columns in...
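A minimal sketch, assuming `flights.csv` has the four columns used in the examples on this page:

```sql
CREATE TABLE ontime (
    flightdate DATE,
    uniquecarrier VARCHAR,
    origincityname VARCHAR,
    destcityname VARCHAR
);
COPY ontime FROM 'flights.csv';
```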
### Reading Faulty CSV Files

DuckDB supports reading erroneous CSV files. For details, see the [Reading Faulty CSV Files page](#docs:data:csv:reading_faulty_csv_files).
### Limitations

The CSV reader only supports input files using UTF-8 character encoding. For CSV files using different encodings, use e.g., the [`iconv` command-line tool](https://linux.die.net/man/1/iconv) to convert them to UTF-8. For example:

```bash
iconv -f ISO-8859-2 -t UTF-8 input.csv > input-utf-8.csv
```
### Order Preservation

The CSV reader respects the `preserve_insertion_order` [configuration option](#docs:configuration:overview).
When `true` (the default), the order of the rows in the result set returned by the CSV reader is the same as the order of the corresponding lines read from the file(s).
When `false`, there is no guarantee that th...
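A minimal sketch of changing the option for the current session:

```sql
-- Allow DuckDB to reorder rows for better parallelism.
SET preserve_insertion_order = false;
```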
### CSV Auto Detection

When using `read_csv`, the system tries to automatically infer how to read the CSV file using the [CSV sniffer](https://duckdb.org/2023/10/27/csv-sniffer).
This step is necessary because CSV files are not self-describing and come in many different dialects. The auto-detection works roughly as follows:

* Detect the di...
### `sniff_csv` Function

It is possible to run the CSV sniffer as a separate step using the `sniff_csv(filename)` function, which returns the detected CSV properties as a table with a single row.
The `sniff_csv` function accepts an optional `sample_size` parameter to configure the number of rows sampled.

```sql
FROM sniff_csv('my_file.csv');...
```
### Detection Steps

#### Dialect Detection {#docs:data:csv:auto_detection::dialect-detection}

Dialect detection works by attempting to parse the samples using the set of considered values. The detected dialect is the dialect that has (1) a consistent number of columns for each row, and (2) the highest number of columns for each row.
T...
##### Overriding Type Detection

The detected types can be individually overridden using the `types` option. This option takes one of two forms, as sketched below:

* A list of type definitions (e.g., `types = ['INTEGER', 'VARCHAR', 'DATE']`). This overrides the types of the columns in order of occurrence in the CSV file.
* Alternatively, `types` takes a `name` → ...
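A minimal sketch of the list form, assuming `flights.csv` has exactly four columns:

```sql
SELECT *
FROM read_csv('flights.csv',
    types = ['DATE', 'VARCHAR', 'VARCHAR', 'VARCHAR']);
```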
### Header Detection

Header detection works by checking if the candidate header row deviates from the other rows in the file in terms of types. For example, in [`flights.csv`](https://duckdb.org/data/flights.csv), we can see that the header row consists of only `VARCHAR` columns – whereas the values contain a `DATE` value for the `FlightDa...
### Reading Faulty CSV Files

CSV files can come in all shapes and forms, with some presenting many errors that make the process of cleanly reading them inherently difficult. To help users read these files, DuckDB supports detailed error messages, the ability to skip faulty lines, and the possibility of storing faulty lines in a temporary table to ...
### Structural Errors

DuckDB supports the detection and skipping of several different structural errors. In this section, we will go over each error with an example.
For the examples, consider the following table:

```sql
CREATE TABLE people (name VARCHAR, birth_date DATE);
```

DuckDB detects the following error types:

* `CAST`: Castin...
### Using the `ignore_errors` Option

There are cases where CSV files may have multiple structural errors, and users simply wish to skip these and read the correct data. Reading erroneous CSV files is possible by utilizing the `ignore_errors` option. With this option set, rows containing data that would otherwise cause the CSV parser to generate an error w...
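A minimal sketch, assuming a hypothetical `faulty.csv` whose malformed rows should be skipped:

```sql
SELECT *
FROM read_csv('faulty.csv',
    columns = {'name': 'VARCHAR', 'birth_date': 'DATE'},
    ignore_errors = true);
```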
### Retrieving Faulty CSV Lines

Being able to read faulty CSV files is important, but for many data cleaning operations, it is also necessary to know exactly which lines are corrupted and what errors the parser discovered on them. For scenarios like these, it is possible to use DuckDB's CSV Rejects Table feature.
By default, this feature creates two ...
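A minimal sketch, again assuming a hypothetical `faulty.csv`; the exact contents of the rejects tables depend on the file:

```sql
-- Skip errors and record them in the default rejects tables.
SELECT *
FROM read_csv('faulty.csv', store_rejects = true);

-- Inspect the errors the parser recorded.
FROM reject_errors;
```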
### Parameters

The parameters listed below are used in the `read_csv` function to configure the CSV Rejects Table.

| Name | Description | Type | Default |
|:--|:-----|:-|:-|
| `store_rejects` | If set to true, any errors in the file will be skipped and stored in the default rejects temporary table...
### CSV Import Tips

Below is a collection of tips to help when attempting to import complex CSV files. In the examples, we use the [`flights.csv`](https://duckdb.org/data/flights.csv) file.
### Override the Header Flag if the Header Is Not Correctly Detected

If a file contains only string columns, the `header` auto-detection might fail. Provide the `header` option to override this behavior.

```sql
SELECT * FROM read_csv('flights.csv', header = true);
```
### Provide Names if the File Does Not Contain a Header

If the file does not contain a header, names will be auto-generated by default. You can provide your own names with the `names` option.

```sql
SELECT * FROM read_csv('flights.csv', names = ['DateOfFlight', 'CarrierName']);
```
### Override the Types of Specific Columns

The `types` flag can be used to override the types of only certain columns by providing a struct of `name` → `type` mappings.

```sql
SELECT * FROM read_csv('flights.csv', types = {'FlightDate': 'DATE'});
```
### Use `COPY` When Loading Data into a Table

The [`COPY` statement](#docs:sql:statements:copy) copies data directly into a table. The CSV reader uses the schema of the table instead of auto-detecting types from the file. This speeds up the auto-detection, and prevents mistakes from being made during auto-detection.

```sql
COPY tbl FROM 'test.csv';
```
### Use `union_by_name` When Loading Files with Different Schemas

The `union_by_name` option can be used to unify the schema of files that have different or missing columns. For files that do not have certain columns, `NULL` values are filled in.

```sql
SELECT * FROM read_csv('flights*.csv', union_by_name = true);
```
## JSON Files

### JSON Overview

DuckDB supports SQL functions that are useful for reading values from existing JSON and creating new JSON data.
JSON is supported through the `json` extension, which is shipped with most DuckDB distributions and is auto-loaded on first use.
If you would like to install or load it manually, please consult the [“Installing a...
### About JSON

JSON is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values).
While it is not a very efficient format for tabular data, it is very commonly used, especially as a data interc...
### Indexing

> **Warning.** Following [PostgreSQL's conventions](#docs:sql:dialect:postgresql_compatibility), DuckDB uses 1-based indexing for its [`ARRAY`](#docs:sql:data_types:array) and [`LIST`](#docs:sql:data_types:list) data types but [0-based indexing for the JSON data type](https://www.postgresql.org/docs/17/functions-json...
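A minimal sketch contrasting the two conventions:

```sql
SELECT
    json_extract('["a", "b", "c"]', '$[0]') AS json_first, -- 0-based: "a"
    (['a', 'b', 'c'])[1] AS list_first;                      -- 1-based: 'a'
```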
### Examples

#### Loading JSON {#docs:data:json:overview::loading-json}

Read a JSON file from disk, auto-infer options:

```sql
SELECT * FROM 'todos.json';
```

Use the `read_json` function with custom options:

```sql
SELECT *
FROM read_json('todos.json',
    format = 'array',
    columns = {userId: 'UBIGINT',
               id: 'UBIGINT',
               title: '...
```
### JSON Creation Functions

The following functions are used to create JSON.

| Function | Description |
|:--|:----|
| `to_json(any)` | Create `JSON` from a value of `any` type. Our `LIST` is converted to a JSON array, and our `STRUCT` and `MAP` are converted to a JSON object. |
| `json_quote(any)` | Alias for ...
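A minimal sketch of `to_json` on a list and a struct literal:

```sql
SELECT
    to_json([1, 2, 3]) AS json_array,      -- [1,2,3]
    to_json({'duck': 42}) AS json_object;  -- {"duck":42}
```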
### Loading JSON

The DuckDB JSON reader can automatically infer which configuration flags to use by analyzing the JSON file. This will work correctly in most situations, and should be the first option attempted. In rare situations where the JSON reader cannot figure out the correct configuration, it is possible to manually configure th...
### JSON Read Functions

The following table functions are used to read JSON:

| Function | Description |
|:---|:---|
| `read_json_objects(filename)` | Read a JSON object from `filename`, where `filename` can also be a list of files or a glob pattern. |
| `read_ndjson_objects(filename)` | Alias for `read_json_objects` with parameter `format...
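A minimal sketch using the [`records.json`](https://duckdb.org/data/records.json) example file shown later on this page:

```sql
-- Each JSON object in the file becomes one row with a single json column.
SELECT * FROM read_json_objects('records.json');
```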
### `FORMAT JSON`

When the `json` extension is installed, `FORMAT JSON` is supported for `COPY FROM`, `COPY TO`, `EXPORT DATABASE` and `IMPORT DATABASE`. See the [`COPY` statement](#docs:sql:statements:copy) and the [`IMPORT` / `EXPORT` clauses](#docs:sql:statements:export).
By default, `COPY` expects newline-delimited JSON. If you pr...
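A minimal sketch, assuming the `todos` table created in the `COPY` Statement section below and a file whose top level is a JSON array:

```sql
-- Read a JSON file that contains a top-level array rather than
-- newline-delimited objects.
COPY todos FROM 'todos.json' (FORMAT json, ARRAY true);
```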
### `COPY` Statement

The `COPY` statement can be used to load data from a JSON file into a table. For the `COPY` statement, we must first create a table with the correct schema to load the data into. We then specify the JSON file to load from plus any configuration options separately.

```sql
CREATE TABLE todos (userId UBIGINT, id UBIGINT...
```
### Parameters

| Name | Description | Type | Default |
|:--|:-----|:-|:-|
| `auto_detect` | Whether to automatically detect the names of the keys and the data types of the values | `BOOL` | `false` |
| `columns` | A struct that specifies the key names and value types contained within the JSON file (e.g., `{key1: 'INTEGER', ke...
### The `read_json` Function

The `read_json` function is the simplest method of loading JSON files: it automatically attempts to figure out the correct configuration of the JSON reader. It also automatically deduces types of columns.

```sql
SELECT *
FROM read_json('todos.json')
LIMIT 5;
```

| userId | id | title ...
### Writing JSON

The contents of tables or the result of queries can be written directly to a JSON file using the `COPY` statement. See the [`COPY` statement](#docs:sql:statements:copy::copy-to) for more information.
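A minimal sketch, assuming the `todos` table from above (the output file name is a hypothetical example):

```sql
-- Write query results as newline-delimited JSON.
COPY (SELECT * FROM todos) TO 'todos-out.json' (FORMAT json);
```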
### JSON Type

DuckDB supports JSON via the `JSON` logical type.
The `JSON` logical type is interpreted as JSON, i.e., parsed, in JSON functions rather than interpreted as `VARCHAR`, i.e., a regular string (modulo the equality-comparison caveat at the bottom of this page).
All JSON creation functions return values of this type.
W...
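A minimal sketch of creating a `JSON` value with a cast:

```sql
SELECT '{"duck": 42}'::JSON AS j;
```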
### JSON Extraction Functions

There are two extraction functions, which have their respective operators. The operators can only be used if the string is stored as the `JSON` logical type.
These functions support the same two location notations as [JSON Scalar functions](#::json-scalar-functions).

| Function | Alias | Operator | Description |
|:-...
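A minimal sketch of the two operators:

```sql
SELECT
    j -> 'family' AS json_value,   -- "anatidae" (JSON)
    j ->> 'family' AS text_value   -- anatidae (VARCHAR)
FROM (SELECT '{"family": "anatidae"}'::JSON AS j);
```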
### JSON Scalar Functions

The following scalar JSON functions can be used to gain information about the stored JSON values.
With the exception of `json_valid(json)`, all JSON functions produce an error when invalid JSON is supplied.
We support two kinds of notations to describe locations within JSON: [JSON Pointer](https://datatracker.ietf.or...
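A minimal sketch of both notations with `json_extract`:

```sql
SELECT
    json_extract('{"duck": [1, 2]}', '/duck/0') AS json_pointer, -- 1
    json_extract('{"duck": [1, 2]}', '$.duck[1]') AS json_path;  -- 2
```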
### JSON Aggregate Functions

There are three JSON aggregate functions.

| Function | Description |
|:---|:----|
| `json_group_array(any)` | Return a JSON array with all values of `any` in the aggregation. |
| `json_group_object(key, value)` | Return a JSON object with all `key`, `value` pairs in the aggregation....
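A minimal sketch of aggregating values into a JSON array:

```sql
-- Produces [0,1,2]
SELECT json_group_array(x) AS arr FROM range(3) t(x);
```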
### Transforming JSON to Nested Types

In many cases, it is inefficient to extract values from JSON one-by-one.
Instead, we can “extract” all values at once, transforming JSON to the nested types `LIST` and `STRUCT`.

| Function | Description |
|:---|:---|
| `json_transform(json, structure)` | Transform `json` according t...
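A minimal sketch of `json_transform`, with the target structure given as a JSON string of type names:

```sql
SELECT json_transform('{"duck": 42, "goose": "honk"}',
                      '{"duck": "INTEGER", "goose": "VARCHAR"}') AS s;
```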
### JSON Format Settings

The JSON extension can attempt to determine the format of a JSON file when setting `format` to `auto`.
Here are some example JSON files and the corresponding `format` settings that should be used.
In each of the below cases, the `format` setting was not needed, as DuckDB was able to infer it correctly, but it is incl...
##### Format: `newline_delimited`

With `format = 'newline_delimited'`, newline-delimited JSON can be parsed.
Each line is a JSON object.
We use the example file [`records.json`](https://duckdb.org/data/records.json) with the following content:

```json
{"key1":"value1", "key2": "value1"}
{"key1":"value2", "key2": "value2"}
{"key1":"value3", "key2": "value3"...
```
##### Format: `array`

If the JSON file contains a JSON array of objects (pretty-printed or not), `format = 'array'` may be used.
To demonstrate its use, we use the example file [`records-in-array.json`](https://duckdb.org/data/records-in-array.json):

```json
[
    {"key1":"value1", "key2": "value1"},
    {"key1":"value2", "key2": "value2"},
    {"key...
```
##### Format: `unstructured`

If the JSON file contains JSON that is not newline-delimited or an array, `unstructured` may be used.
To demonstrate its use, we use the example file [`unstructured.json`](https://duckdb.org/data/unstructured.json):

```json
{
    "key1":"value1",
    "key2":"value1"
}
{
    "key1":"value2",
    "key2":"value2"
}
{
    "key1":"value3",
    "...
```
Data Import | JSON Files | Installing and Loading the JSON extension | The `json` extension is shipped by default in DuckDB builds; otherwise, it will be transparently [autoloaded](#docs:extensions:overview::autoloading-extensions) on first use. If you would like to install and load it manually, run:
```sql
INSTALL json;
LOAD json;
``` | 66 | [
Data Import | JSON Files | SQL to/from JSON | The `json` extension also provides functions to serialize and deserialize `SELECT` statements between SQL and JSON, as well as executing JSON serialized statements.
| Function | Type | Description |
|:------|:-|:---------|
| `json_deserialize_sql(json)` | Scalar | Deserialize one or many `json` serialized statements back to equivalent SQL strings. |
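As a sketch of the round trip (the exact JSON output varies by version):
```sql
-- serialize a statement to JSON ...
SELECT json_serialize_sql('SELECT 1 + 2');

-- ... and execute the serialized form directly
SELECT *
FROM json_execute_serialized_sql(json_serialize_sql('SELECT 1 + 2'));
```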
Data Import | JSON Files | Equality Comparison | > **Warning. ** Currently, equality comparison of JSON files can differ based on the context. In some cases, it is based on raw text comparison, while in other cases, it uses logical content comparison.
The following query returns true for all fields:
```sql
SELECT
    a != b, -- Space is part of physical JSON content...
```
Data Import | Multiple Files | Reading Multiple Files | DuckDB can read multiple files of different types (CSV, Parquet, JSON files) at the same time using either the glob syntax, or by providing a list of files to read.
See the [combining schemas](#docs:data:multiple_files:combining_schemas) page for tips on reading files with different schemas. | 68 | [
Data Import | Multiple Files | CSV | Read all files with a name ending in `.csv` in the folder `dir`:
```sql
SELECT *
FROM 'dir/*.csv';
```
Read all files with a name ending in `.csv`, two directories deep:
```sql
SELECT *
FROM '*/*/*.csv';
```
Read all files with a name ending in `.csv`, at any depth in the folder `dir`:
```sql
SELECT *
FROM 'dir/**/*.csv';
```
Data Import | Multiple Files | Parquet | Read all files that match the glob pattern:
```sql
SELECT *
FROM 'test/*.parquet';
```
Read three Parquet files and treat them as a single table:
```sql
SELECT *
FROM read_parquet(['file1.parquet', 'file2.parquet', 'file3.parquet']);
```
Read all Parquet files from two specific folders:
```sql
SELECT *
FROM read_parquet(['folder1/*.parquet', 'folder2/*.parquet']);
```
Data Import | Multiple Files | Multi-File Reads and Globs | DuckDB can also read a series of Parquet files and treat them as if they were a single table. Note that this only works if the Parquet files have the same schema. You can specify which Parquet files you want to read using a list parameter, glob pattern matching syntax, or a combination of both.
#### List Parameter {#docs:data:multiple_files:overview::list-parameter}
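For example, list and glob syntax can be combined (hypothetical paths):
```sql
SELECT *
FROM read_parquet(['dir1/*.parquet', 'dir2/file*.parquet']);
```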
Data Import | Multiple Files | Filename | The `filename` argument can be used to add an extra `filename` column to the result that indicates which row came from which file. For example:
```sql
SELECT *
FROM read_csv(['flights1.csv', 'flights2.csv'], union_by_name = true, filename = true);
```
<div class="narrow_table"></div>
| FlightDate | OriginCityName | ... |
Data Import | Multiple Files | Glob Function to Find Filenames | The glob pattern matching syntax can also be used to search for filenames using the `glob` table function.
It accepts one parameter: the path to search (which may include glob patterns).
Search the current directory for all files.
```sql
SELECT *
FROM glob('*');
```
<div class="narrow_table"></div>
| file |
|------|
Data Import | Multiple Files | Examples | Read a set of CSV files combining columns by position:
```sql
SELECT * FROM read_csv('flights*.csv');
```
Read a set of CSV files combining columns by name:
```sql
SELECT * FROM read_csv('flights*.csv', union_by_name = true);
``` | 61 | [
Data Import | Multiple Files | Combining Schemas | When reading from multiple files, we have to **combine schemas** from those files. That is because each file has its own schema that can differ from the other files. DuckDB offers two ways of unifying schemas of multiple files: **by column position** and **by column name**.
By default, DuckDB reads the schema of the first file and unifies the columns of the remaining files by position.
Data Import | Multiple Files | Union by Position | By default, DuckDB unifies the columns of these different files **by position**. This means that the first column in each file is combined together, as well as the second column in each file, etc. For example, consider the following two files.
[`flights1.csv`](https://duckdb.org/data/flights1.csv):
```csv
FlightDate...
```
Data Import | Multiple Files | Union by Name | If you are processing multiple files that have different schemas, perhaps because columns have been added or renamed, it might be desirable to unify the columns of different files **by name** instead. This can be done by providing the `union_by_name` option. For example, consider the following two files, where `flights...
Data Import | Parquet Files | Examples | Read a single Parquet file:
```sql
SELECT * FROM 'test.parquet';
```
Figure out which columns/types are in a Parquet file:
```sql
DESCRIBE SELECT * FROM 'test.parquet';
```
Create a table from a Parquet file:
```sql
CREATE TABLE test AS
SELECT * FROM 'test.parquet';
```
If the file does not end in `.parquet`, use the `read_parquet` function:
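For example, with a hypothetical file named `test.parq`:
```sql
SELECT *
FROM read_parquet('test.parq');
```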
Data Import | Parquet Files | Parquet Files | Parquet files are compressed columnar files that are efficient to load and process. DuckDB provides support for both reading and writing Parquet files in an efficient manner, as well as support for pushing filters and projections into the Parquet file scans.
> Parquet data sets differ based on the number of files, the size of individual files, the compression algorithm used, and the row group size; these have a significant effect on performance.
Data Import | Parquet Files | `read_parquet` Function | | Function | Description | Example |
|:--|:--|:-----|
| `read_parquet(path_or_list_of_paths)` | Read Parquet file(s) | `SELECT * FROM read_parquet('test.parquet');` |
| `parquet_scan(path_or_list_of_paths)` | Alias for `read_parquet` | `SELECT * FROM parquet_scan('test.parquet');` |
If your file ends in `.parquet`, the `read_parquet` syntax is optional: the file can be queried directly, as in `SELECT * FROM 'test.parquet';`.
Data Import | Parquet Files | Partial Reading | DuckDB supports projection pushdown into the Parquet file itself. That is to say, when querying a Parquet file, only the columns required for the query are read. This allows you to read only the part of the Parquet file that you are interested in. This will be done automatically by DuckDB.
DuckDB also supports filter pushdown: filters in the query are compared against the row group statistics stored in the Parquet file, so row groups that cannot contain matching rows are skipped entirely.
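A sketch of both optimizations in one query, using hypothetical file and column names: only the two referenced columns are read, and row groups whose statistics rule out the filter are skipped.
```sql
SELECT l_orderkey, l_shipdate
FROM read_parquet('lineitem.parquet')
WHERE l_shipdate >= DATE '1998-01-01';
```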
Data Import | Parquet Files | Inserts and Views | You can also insert the data into a table or create a table from the Parquet file directly. This will load the data from the Parquet file and insert it into the database:
Insert the data from the Parquet file into the table:
```sql
INSERT INTO people
SELECT * FROM read_parquet('test.parquet');
```
Create a table directly from the Parquet file:
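A minimal sketch, reusing the `people` table name from above:
```sql
CREATE TABLE people AS
SELECT * FROM read_parquet('test.parquet');
```
A view works the same way and avoids materializing the data (the view name is illustrative):
```sql
CREATE VIEW people_view AS
SELECT * FROM read_parquet('test.parquet');
```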
Data Import | Parquet Files | Writing to Parquet Files | DuckDB also has support for writing to Parquet files using the `COPY` statement syntax. See the [`COPY` Statement page](#docs:sql:statements:copy) for details, including all possible parameters for the `COPY` statement.
Write a query to a snappy compressed Parquet file:
```sql
COPY
(SELECT * FROM tbl)
TO 'result-snappy.parquet' (FORMAT PARQUET);
```
Data Import | Parquet Files | Encryption | DuckDB supports reading and writing [encrypted Parquet files](#docs:data:parquet:encryption). | 22 | [
Data Import | Parquet Files | Installing and Loading the Parquet Extension | Support for Parquet files is enabled via the `parquet` extension, which is bundled with almost all clients. However, if your client does not bundle it, the extension must be installed separately:
```sql
INSTALL parquet;
``` | 55 | [
Data Import | Parquet Files | Parquet Metadata | The `parquet_metadata` function can be used to query the metadata contained within a Parquet file, which reveals various internal details of the Parquet file such as the statistics of the different columns. This can be useful for figuring out what kind of skipping is possible in Parquet files, or even to obtain a quick overview of what a file contains.
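A minimal call (the file name is illustrative):
```sql
SELECT *
FROM parquet_metadata('test.parquet');
```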
Data Import | Parquet Files | Parquet Schema | The `parquet_schema` function can be used to query the internal schema contained within a Parquet file. Note that this is the schema as it is contained within the metadata of the Parquet file. If you want to figure out the column names and types contained within a Parquet file it is easier to use `DESCRIBE`.
Fetch the column names and column types:
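```sql
DESCRIBE SELECT * FROM 'test.parquet';
```
Fetch the internal schema as stored in the file's metadata (a minimal sketch):
```sql
SELECT *
FROM parquet_schema('test.parquet');
```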
Data Import | Parquet Files | Parquet File Metadata | The `parquet_file_metadata` function can be used to query file-level metadata such as the format version and the encryption algorithm used:
```sql
SELECT *
FROM parquet_file_metadata('test.parquet');
```
Below is a table of the columns returned by `parquet_file_metadata`.
<div class="narrow_table monospace_table"... | 145 | [
Data Import | Parquet Files | Parquet Key-Value Metadata | The `parquet_kv_metadata` function can be used to query custom metadata defined as key-value pairs:
```sql
SELECT *
FROM parquet_kv_metadata('test.parquet');
```
Below is a table of the columns returned by `parquet_kv_metadata`.
<div class="narrow_table monospace_table"></div>
| Field | Type |
|-------|------|
Data Import | Parquet Files | Parquet Encryption | Starting with version 0.10.0, DuckDB supports reading and writing encrypted Parquet files.
DuckDB broadly follows the [Parquet Modular Encryption specification](https://github.com/apache/parquet-format/blob/master/Encryption.md) with some [limitations](#::limitations). | 58 | [
Data Import | Parquet Files | Reading and Writing Encrypted Files | Using the `PRAGMA add_parquet_key` function, named encryption keys of 128, 192, or 256 bits can be added to a session. These keys are stored in-memory:
```sql
PRAGMA add_parquet_key('key128', '0123456789112345');
PRAGMA add_parquet_key('key192', '012345678911234501234567');
PRAGMA add_parquet_key('key256', '01234567891123450123456789112345');
```
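These keys can then be referenced when writing and reading encrypted files; a sketch with a hypothetical `tbl` table:
```sql
-- write an encrypted Parquet file using the named key
COPY tbl TO 'tbl.parquet' (ENCRYPTION_CONFIG {footer_key: 'key256'});

-- read it back with the same key
SELECT *
FROM read_parquet('tbl.parquet', encryption_config = {footer_key: 'key256'});
```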
Data Import | Parquet Files | Limitations | DuckDB's Parquet encryption currently has the following limitations.
1. It is not compatible with the encryption of, e.g., PyArrow, until the missing details are implemented.
2. DuckDB encrypts the footer and all columns using the `footer_key`. The Parquet specification allows encryption of individual columns with different keys, which is not currently supported.
Data Import | Parquet Files | Performance Implications | Note that encryption has some performance implications.
Without encryption, reading/writing the `lineitem` table from [`TPC-H`](#docs:extensions:tpch) at SF1, which is 6M rows and 15 columns, from/to a Parquet file takes 0.26 and 0.99 seconds, respectively.
With encryption, this takes 0.64 and 2.21 seconds, both approximately 2.2 to 2.5 times slower.
Data Import | Parquet Files | Parquet Tips | Below is a collection of tips to help when dealing with Parquet files. | 15 | [
Data Import | Parquet Files | Tips for Reading Parquet Files | #### Use `union_by_name` When Loading Files with Different Schemas {#docs:data:parquet:tips::use-union_by_name-when-loading-files-with-different-schemas}
The `union_by_name` option can be used to unify the schema of files that have different or missing columns. For files that do not have certain columns, `NULL` values are filled in.
Data Import | Parquet Files | Tips for Writing Parquet Files | Using a [glob pattern](#docs:data:multiple_files:overview::glob-syntax) upon read or a [Hive partitioning](#docs:data:partitioning:hive_partitioning) structure are good ways to transparently handle multiple files.
#### Enabling `PER_THREAD_OUTPUT` {#docs:data:parquet:tips::enabling-per_thread_output}
If the final number of Parquet files is not important, writing one file per thread can significantly improve performance:
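A sketch with a hypothetical `tbl` table and target directory:
```sql
COPY (FROM tbl)
TO 'output_directory'
(FORMAT PARQUET, PER_THREAD_OUTPUT);
```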
Data Import | Partitioning | Examples | Read data from a Hive partitioned data set:
```sql
SELECT *
FROM read_parquet('orders/*/*/*.parquet', hive_partitioning = true);
```
Write a table to a Hive partitioned data set:
```sql
COPY orders
TO 'orders' (FORMAT PARQUET, PARTITION_BY (year, month));
```
Note that the `PARTITION_BY` options cannot use expressions. You can produce columns on the fly using the following syntax:
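A sketch with a hypothetical `services` table carrying a `timestamp` column:
```sql
COPY (SELECT *, year(timestamp) AS year, month(timestamp) AS month FROM services)
TO 'test' (PARTITION_BY (year, month));
```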
Data Import | Partitioning | Hive Partitioning | Hive partitioning is a [partitioning strategy](https://en.wikipedia.org/wiki/Partition_(database)) that is used to split a table into multiple files based on **partition keys**. The files are organized into folders. Within each folder, the **partition key** has a value that is determined by the name of the folder.
Below is an example of a Hive partitioned file hierarchy.
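In this sketch, the table is partitioned on the keys `year` and `month`:
```text
orders
├── year=2021
│   ├── month=1
│   │   ├── file1.parquet
│   │   └── file2.parquet
│   └── month=2
│       └── file3.parquet
└── year=2022
    ├── month=11
    │   └── file4.parquet
    └── month=12
        └── file5.parquet
```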
Data Import | Partitioning | Examples | Write a table to a Hive partitioned data set of Parquet files:
```sql
COPY orders TO 'orders' (FORMAT PARQUET, PARTITION_BY (year, month));
```
Write a table to a Hive partitioned data set of CSV files, allowing overwrites:
```sql
COPY orders TO 'orders' (FORMAT CSV, PARTITION_BY (year, month), OVERWRITE_OR_IGNORE);
```
Data Import | Partitioning | Partitioned Writes | When the `PARTITION_BY` clause is specified for the [`COPY` statement](#docs:sql:statements:copy), the files are written in a [Hive partitioned](#docs:data:partitioning:hive_partitioning) folder hierarchy. The target is the name of the root directory (in the example above: `orders`). The files are written in-order in the file hierarchy.
Data Import | Appender | The Appender can be used to load bulk data into a DuckDB database. It is currently available in the [C, C++, Go, Java, and Rust APIs](#::appender-support-in-other-clients). The Appender is tied to a connection, and will use the transaction context of that connection when appending. An Appender always appends to a single table in the database file.
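A minimal C++ sketch of the Appender workflow (table and values are illustrative):
```cpp
#include "duckdb.hpp"

using namespace duckdb;

int main() {
	DuckDB db(nullptr); // in-memory database
	Connection con(db);
	con.Query("CREATE TABLE people (id INTEGER, name VARCHAR)");

	Appender appender(con, "people");
	appender.AppendRow(1, "Mark");
	appender.AppendRow(2, "Hannes");
	appender.Close(); // flushes any remaining buffered rows to the table
	return 0;
}
```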
Data Import | Appender | Date, Time and Timestamps | While numbers and strings are rather self-explanatory, dates, times and timestamps require some explanation. They can be directly appended using the methods provided by `duckdb::Date`, `duckdb::Time` or `duckdb::Timestamp`. They can also be appended using the internal `duckdb::Value` type, however, this adds some additional overhead and should be avoided if possible.
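For example, appending a `DATE` via `duckdb::Date` (a sketch continuing the C++ example above, with an open `Connection con`):
```cpp
con.Query("CREATE TABLE dates (d DATE)");
Appender appender(con, "dates");
appender.AppendRow(Date::FromDate(1992, 1, 1)); // constructs a date_t directly
appender.Close();
```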
Data Import | Appender | Commit Frequency | By default, the appender performs a commit every 204,800 rows.
You can change this by explicitly using [transactions](#docs:sql:statements:transactions) and surrounding your batches of `AppendRow` calls by `BEGIN TRANSACTION` and `COMMIT` statements. | 57 | [
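A sketch of batching appends inside an explicit transaction (row count and table are illustrative):
```cpp
con.Query("BEGIN TRANSACTION");
Appender appender(con, "people");
for (int i = 0; i < 1000000; i++) {
	appender.AppendRow(i, "name");
}
appender.Close(); // flush buffered rows before committing
con.Query("COMMIT");
```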
Data Import | Appender | Handling Constraint Violations | If the Appender encounters a `PRIMARY KEY` conflict or a `UNIQUE` constraint violation, it fails and returns the following error:
```console
Constraint Error: PRIMARY KEY or UNIQUE constraint violated: duplicate key "..."
```
In this case, the entire append operation fails and no rows are inserted. | 63 | [