| instance_id | repo | language | pull_number | title | body | created_at | problem_statement | hints_text | resolved_issues | base_commit | commit_to_review | reference_review_comments | merged_commit | merged_patch | metadata |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
voxel51__fiftyone-2353@02e9ba1 | voxel51/fiftyone | Python | 2,353 | Provide custom task name for CVAT | ## What changes are proposed in this pull request?
Closes #1753
1. A custom task name can be passed when labeling data in CVAT, e.g. `dataset.annotate("anno_key", backend="cvat", task_name="Custom task name", ...)`
2. The default CVAT task name now includes the annotation key, e.g. `FiftyOne_{dataset_name}_{anno... | 2022-11-28T09:18:12Z | [FR] Allow task names to be provided when annotating with CVAT
Currently, when annotating with the CVAT integration, the names of the generated tasks are hardcoded as `FiftyOne_{dataset_name}`, which is not ideal when launching multiple annotation runs on the same dataset. There should be an argument allowing t... | [
{
"body": "Currently, when annotating with the CVAT integration, the name for the tasks that get generated are hardcoded as `FiftyOne_{dataset_name}` which is not ideal when launching multiple annotation runs on the same dataset. There should be an argument allowing the user to provide one or more task names to... | 0d0f1b51326a7859dea7c655e06c528aa775e02c | {
"head_commit": "02e9ba17a750b4f3193c54bff01b1dde443af821",
"head_commit_message": "provide custom task name when uploading to cvat",
"patch_to_review": "diff --git a/fiftyone/utils/cvat.py b/fiftyone/utils/cvat.py\nindex 4c33cfc741a..a0986607094 100644\n--- a/fiftyone/utils/cvat.py\n+++ b/fiftyone/utils/cvat.py... | [
{
"diff_hunk": "@@ -4290,8 +4293,18 @@ def upload_samples(self, samples, backend):\n project_id = self.create_project(project_name, cvat_schema)\n project_ids.append(project_id)\n \n- _dataset_name = samples_batch._dataset.name.replace(\" \", \"_\")\n- ... | 12208fbe141ad664e23f2b51c48d2f6d3a4414f1 | diff --git a/docs/source/integrations/cvat.rst b/docs/source/integrations/cvat.rst
index e7b1b38f7ea..b094454d2b3 100644
--- a/docs/source/integrations/cvat.rst
+++ b/docs/source/integrations/cvat.rst
@@ -492,6 +492,7 @@ provided:
otherwise a new project is created. By default, no project is used
- **project_id... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
} | |
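For context, a minimal sketch of the call pattern this row's PR enables, assuming a loaded dataset and configured CVAT credentials (the annotation key and label field below are illustrative):

```python
# Hedged sketch: pass a custom CVAT task name instead of the
# auto-generated default; "my_anno_key" and "ground_truth" are
# illustrative names
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")

dataset.annotate(
    "my_anno_key",
    backend="cvat",
    label_field="ground_truth",
    task_name="Custom task name",
)
```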
voxel51__fiftyone-1793@dbbc6d9 | voxel51/fiftyone | Python | 1,793 | adding filename exception logging for failed xml parsing | Currently when an invalid or malformed xml file is parsed an `ExpatError` will be raised. This change logs the filename that produced this error before re-raising it to the calling code. | 2022-05-26T16:12:10Z | [BUG] Importing VOCDetectionDataset from disk fails due to bad XML - but what file?!
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: MacOS 10.15.7
- **FiftyOne installed from (pip or source)**: pip
- **FiftyOne version (run `fiftyone --version`)**: FiftyOne v0.15.1, Voxe... | Definitely agree that a more informative error that includes the offending filename is called for here 💪 | [
{
"body": "### System information\r\n\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: MacOS 10.15.7\r\n- **FiftyOne installed from (pip or source)**: pip\r\n- **FiftyOne version (run `fiftyone --version`)**: FiftyOne v0.15.1, Voxel51, Inc.\r\n- **Python version**: Python 3.6.8\r\n\r\n#... | 1a9c9bf0fe3f1a5a332629e2f711ab8d4358a7c4 | {
"head_commit": "dbbc6d949de59509441e459ca034d28ce23e23e5",
"head_commit_message": "adding filename exception logging for failed xml parsing",
"patch_to_review": "diff --git a/fiftyone/core/utils.py b/fiftyone/core/utils.py\nindex cbc6462d98f..f30e145b95d 100644\n--- a/fiftyone/core/utils.py\n+++ b/fiftyone/core... | [
{
"diff_hunk": "@@ -19,15 +14,24 @@\n import logging\n import ntpath\n import os\n-import posixpath\n import platform\n+import posixpath\n import signal\n import struct\n import subprocess\n import timeit\n import types\n import zlib\n \n+from base64 import b64decode",
"line": null,
"original_line": 26,... | cbd342bb38a25805450d635dcec6a213f2ceba47 | diff --git a/fiftyone/core/utils.py b/fiftyone/core/utils.py
index cbc6462d98f..3bec6a035cf 100644
--- a/fiftyone/core/utils.py
+++ b/fiftyone/core/utils.py
@@ -26,6 +26,7 @@
import subprocess
import timeit
import types
+from xml.parsers.expat import ExpatError
import zlib
try:
@@ -662,8 +663,11 @@ def load_xml_... | {
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
} |
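The pattern this row's patch applies, sketched as a standalone helper (the real change is in `fiftyone/core/utils.py`; the use of `xmltodict` here is an assumption):

```python
# Log the offending filename before re-raising the XML parse error
import logging
from xml.parsers.expat import ExpatError

import xmltodict  # assumption: the real loader parses XML into a dict

logger = logging.getLogger(__name__)


def load_xml(xml_path):
    """Loads an XML file as a dict, logging the path on parse failure."""
    try:
        with open(xml_path, "rb") as f:
            return xmltodict.parse(f.read())
    except ExpatError:
        logger.error("Failed to parse XML file '%s'", xml_path)
        raise
```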
voxel51__fiftyone-1878@33ca8a8 | voxel51/fiftyone | Python | 1,878 | Maintain active dataset fields | Resolves #1852 | 2022-06-13T19:26:55Z | [BUG] Changing session.view resets field visibility choices
On `fiftyone==0.16.2`, updating `session.view` resets any field visibility toggles I may have set (e.g., unselecting all label fields), and forces the defaults (all label fields visible). I don't think this used to be the case though?
This came up when I was t... | [
{
"body": "On `fiftyone==0.16.2`, updating `session.view` resets any field visibility toggles I may have set (eg, unselected all label fields), and forces the defaults (all label fields visible). I don't think this used to be the case though?\r\n\r\nThis came up when I was trying to work with an interactive plo... | 9fb2226edb77fd0eb34a7f70d8621da5c84cd7ce | {
"head_commit": "33ca8a8acdc169b7f202a9256ee5ff89ceb44dc3",
"head_commit_message": "reset active field only on dataset change",
"patch_to_review": "diff --git a/app/packages/app/src/Root/Datasets/Dataset.tsx b/app/packages/app/src/Root/Datasets/Dataset.tsx\nindex cce7f8843d3..a0f86b00407 100644\n--- a/app/packag... | [
{
"diff_hunk": "@@ -97,17 +98,24 @@ export const Dataset: Route<DatasetQuery> = ({ prepared }) => {\n const update = useStateUpdate();\n \n useEffect(() => {\n- update(({ reset }) => {\n+ update(({ reset, get }) => {\n reset(filters);\n- reset(_activeFields({ modal: false }));\n reset... | 81b0953addd1b6b11854df796bb64acce028989d | diff --git a/app/packages/app/src/Root/Datasets/Dataset.tsx b/app/packages/app/src/Root/Datasets/Dataset.tsx
index cce7f8843d3..d0085586cdb 100644
--- a/app/packages/app/src/Root/Datasets/Dataset.tsx
+++ b/app/packages/app/src/Root/Datasets/Dataset.tsx
@@ -1,18 +1,17 @@
import { Route, RouterContext } from "@fiftyone/... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
voxel51__fiftyone-1283@8c0bc4a | voxel51/fiftyone | Python | 1,283 | Relax `opencv-python-headless` version requirement | Removes the pin on `opencv-python-headless`. The original reason for this was to prevent building wheels from source for new installs during the few hours the source dist was available, but binary dists were not.
`opencv-python-headless` is a popular package, only updated a few times a year, and therefore a loose requ... | 2021-09-22T17:12:28Z | [FR] update opencv-python-headless
### Proposal Summary
Currently this repo requires `opencv-python-headless<=4.4.0.46`. To cut a long story short, there are no wheels available for Python 3.9 and I am unable to install fiftyone (I am using docker `image: jupyter/scipy-notebook:latest`). However version `4.5.3.56` is avai... | [
{
"body": "### Proposal Summary\r\nCurrently this repo requires opencv-python-headless<=4.4.0.46. To cut a long story short there are no wheels available for python3.9 and I am unable to install fiftyone (I am using docker `image: jupyter/scipy-notebook:latest`). However version `4.5.3.56` is available for inst... | 634f707fe4c02f925906efced047d681e6f2d1ca | {
"head_commit": "8c0bc4a90ff7dde7f03929b8883ba4441b6d876c",
"head_commit_message": "rm opencv-python-headless pin",
"patch_to_review": "diff --git a/setup.py b/setup.py\nindex 84cb5a72929..ed5c33b62d7 100644\n--- a/setup.py\n+++ b/setup.py\n@@ -78,6 +78,7 @@ def get_version():\n \"mongoengine==0.20.0\",\... | [
{
"diff_hunk": "@@ -78,6 +78,7 @@ def get_version():\n \"mongoengine==0.20.0\",\n \"motor>=2.3,<3\",\n \"numpy\",\n+ \"opencv-python-headless>=4.4,<5\",",
"line": null,
"original_line": 81,
"original_start_line": null,
"path": "setup.py",
"start_line": null,
... | 275d1bf698d882303b53a8011e958c7f6423aa09 | diff --git a/setup.py b/setup.py
index 84cb5a72929..7df55703079 100644
--- a/setup.py
+++ b/setup.py
@@ -78,6 +78,7 @@ def get_version():
"mongoengine==0.20.0",
"motor>=2.3,<3",
"numpy",
+ "opencv-python-headless",
"packaging",
"pandas",
"Pillow>=6.2",
@@ -96... | {
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Dependency Updates & Env Compatibility"
} | |
voxel51__fiftyone-4236@2af48de | voxel51/fiftyone | Python | 4,236 | Lazily connect to database when needed | Closes #182
Closes #1964
Closes #1804
Closes #3189
## What changes are proposed in this pull request?
Lazily connect to database so you can import `fiftyone` without a database connection.
Most of the work was testing.
## How is this patch tested? If it is not, please explain why.
### Lazy connection te... | 2024-04-05T15:38:30Z | Lazily start DB service to reduce import time?
Currently, running `fiftyone config`, which simply loads and prints one's FO config, takes ~2 seconds to execute on my machine, because `import fiftyone` triggers a DB service to be started, among other things.
Can we adopt a lazy initialization strategy where... | Related: the DB is also spinning up unnecessarily (?) when connecting to a remote session.
@tylerganter do you have any thoughts on this? I don't have a good understanding of what operations should cause the DB to spin up. Another reason (besides import time) why lazily starting the DB would be good is that MongoDB log... | [
{
"body": "Currently, running `fiftyone config`, which simply loads and prints one's FO config, takes ~2 seconds to execute on my machine, because `import fiftyone` currently triggers a DB service to be started, among other things.\r\n\r\nCan we adopt a lazy initialization strategy where the DB service is only ... | 85339fcecf03978435fed6fd94565d1c63f58dd6 | {
"head_commit": "2af48de0332c1254bea0e992ff1b8067e4387b37",
"head_commit_message": "Revert \"reorder\"\n\nThis reverts commit 3b26be28c9456042be521fa31cb53ab7a0f22bca.",
"patch_to_review": "diff --git a/docs/generate_docs.bash b/docs/generate_docs.bash\nindex 058c58d14dc..8ec30aedc68 100755\n--- a/docs/generate_... | [
{
"diff_hunk": "@@ -25,9 +25,15 @@\n from fiftyone.__public__ import *\n \n import fiftyone.core.logging as _fol\n-import fiftyone.migrations as _fom\n \n-_fol.init_logging()\n+# The old way of doing things, migrating database on import. If we\n+# REALLY need to do this, for example doc build, we can.\n+if (\... | 5cdb08027d2acd1f7fa591916439d96a1ddd44b5 | diff --git a/docs/source/conf.py b/docs/source/conf.py
index 49b1f2279b8..bee158f640e 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -75,6 +75,7 @@
"inherited-members": True,
"member-order": "bysource",
"autosummary": True,
+ "exclude-members": "objects",
}
autodoc_inherit_docstrings ... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Performance Optimizations"
} |
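A generic lazy-initialization sketch of the strategy this row describes (illustrative only, not FiftyOne's actual internals):

```python
# Defer both the driver import and the connection until first use, so
# that importing the package stays cheap
_client = None


def get_db_client():
    global _client
    if _client is None:
        import pymongo  # deferred import keeps module import fast

        _client = pymongo.MongoClient()

    return _client
```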
voxel51__fiftyone-4351@5e91389 | voxel51/fiftyone | Python | 4,351 | Gracefully handle None-valued tag fields | ## Change log
Resolves #3546
By convention, all non-required FO fields should be nullable, but the implementation of `tag_samples()`, `untag_samples()`, `tag_labels()`, and `untag_labels()` uses the `$addToSet` operator, which gracefully handles missing fields but unfortunately cannot handle `null` fields. So if... | 2024-05-05T19:13:15Z | [BUG] Clearing the tags field and then tagging samples raises error
To reproduce:
```python
import fiftyone.zoo as foz
dataset = foz.load_zoo_dataset("quickstart").clone()
dataset.clear_sample_field("tags")
dataset.tag_samples("test")
# ValueError: Cannot apply $addToSet to non-array field. Field named 'tags'... | [
{
"body": "To reproduce:\r\n```python\r\nimport fiftyone.zoo as foz\r\n\r\ndataset = foz.load_zoo_dataset(\"quickstart\").clone()\r\ndataset.clear_sample_field(\"tags\")\r\ndataset.tag_samples(\"test\")\r\n\r\n# ValueError: Cannot apply $addToSet to non-array field. Field named 'tags' has non-array type null\r\... | ec20c512099e97a2ee012442dca62560696f48e6 | {
"head_commit": "5e913890deef4191a6b42fecd41302e5c286265c",
"head_commit_message": "handle None-valued tag fields",
"patch_to_review": "diff --git a/fiftyone/core/collections.py b/fiftyone/core/collections.py\nindex 43098224227..ab15faf4f27 100644\n--- a/fiftyone/core/collections.py\n+++ b/fiftyone/core/collecti... | [
{
"diff_hunk": "@@ -1778,7 +1778,7 @@ def clear_sample_field(self, field_name):\n The field will remain in the dataset's schema, and all samples will\n have the value ``None`` for the field.\n \n- You can use dot notation (``embedded.field.name``) to clone embedded\n+ You can use d... | e08fb57295bba4b9263436be43c9d6015e99517c | diff --git a/fiftyone/core/collections.py b/fiftyone/core/collections.py
index 43098224227..ab15faf4f27 100644
--- a/fiftyone/core/collections.py
+++ b/fiftyone/core/collections.py
@@ -1611,7 +1611,27 @@ def tag_samples(self, tags):
# We only need to process samples that are missing a tag of interest
... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
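One null-safe way to express the underlying update in MongoDB, as a hedged illustration of the problem (not necessarily the exact pipeline FiftyOne uses): `$addToSet` tolerates *missing* fields but not `null` ones, so the field must be coalesced first.

```python
# Aggregation-pipeline update (MongoDB 4.2+): coalesce a null "tags"
# field to [] before unioning in the new tag
null_safe_update = [
    {
        "$set": {
            "tags": {"$setUnion": [{"$ifNull": ["$tags", []]}, ["test"]]}
        }
    }
]

# e.g. with pymongo: collection.update_many({}, null_safe_update)
```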
voxel51__fiftyone-1601@84c3032 | voxel51/fiftyone | Python | 1,601 | Keypoint updates | Resolves #1581
Requires https://github.com/voxel51/eta/pull/556
## Change log
- [x] Implement keypoint skeletons as per #1563
- [x] Implement skeleton rendering in `draw_labels()`
- [x] Add a `filter_keypoints()` view stage that applies per-`point` filters to keypoint objects
- [x] Remove `visible` usage in... | 2022-02-16T15:19:16Z | [FR] Add support for per-point confidence/visibility
### Proposal Summary
Add support for per-point confidence/visibility - currently the `Keypoint` label class (which represents a set of points associated with an instance) is a flat list of (x, y) that doesn't provide additional per-point information. In addition, Coco... | Thanks for the FR!
One option we have for representing non-visible points is to insert `(NaN, NaN)` for those points. The app allows this and won't render the points. In this case, we would not need a separate way to store visibility flags.
However, `visible=True/False` is a more generic concept that we could introduce for... | [
{
"body": "### Proposal Summary\r\n\r\nAdd support for per-point confidence/visibility - currently Keypoint label class (which represents a set of points associated with an instance) is a flat list of (x, y) which doesn't provide additional per-point information. In addition, Coco import skips over the point wh... | 53e0372fcc16068b10a32b6b9b1ad2e0edd941ac | {
"head_commit": "84c3032afbf73f11ca034edc4985d43b7178cae6",
"head_commit_message": "skeleton work",
"patch_to_review": "diff --git a/app/packages/app/src/components/Actions/Options.tsx b/app/packages/app/src/components/Actions/Options.tsx\nindex 6b2d6d8e3cd..4c8ae926eba 100644\n--- a/app/packages/app/src/compone... | [
{
"diff_hunk": "@@ -341,3 +343,115 @@ def classification_to_detections(sample_collection, in_field, out_field):\n image[out_field] = fol.Detections(detections=[detection])\n \n sample.save()\n+\n+\n+def filter_keypoints(sample_collection, field, expr=None, labels=None):",
"line": null,
... | 4ae35d90723bd5c4a2e0c52e1bff52c480c38796 | diff --git a/app/packages/app/src/components/Actions/Options.tsx b/app/packages/app/src/components/Actions/Options.tsx
index 6b2d6d8e3cd..4c8ae926eba 100644
--- a/app/packages/app/src/components/Actions/Options.tsx
+++ b/app/packages/app/src/components/Actions/Options.tsx
@@ -45,25 +45,25 @@ export const RefreshButton ... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
} |
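A hedged usage sketch of the per-point filtering added in this row, matching the signature visible in the diff hunk (the module path and the `confidence` attribute are assumptions):

```python
from fiftyone import ViewField as F
import fiftyone.utils.labels as foul  # assumed home of the helper

# `dataset` is an existing dataset with a "predictions" keypoints field;
# keep only points whose per-point confidence exceeds 0.9
foul.filter_keypoints(dataset, "predictions", expr=F("confidence") > 0.9)
```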
voxel51__fiftyone-322@5739367 | voxel51/fiftyone | Python | 322 | Reorganize installation docs | Closes #272
I think I cleaned up all of the cross-references between the two docs, but an extra set of eyes would be appreciated. | 2020-07-31T19:24:44Z | Migrate virtualenv setup instructions to separate docs page
I like @lethosor's suggestion below. We can provide pip-only instructions first and then put virtualenv stuff on a separate page that users would refer to if they want help setting up their own dedicated environment:
> @lethosor Maybe we should move the vir... | [
{
"body": "I like @lethosor's suggestion below. We can provide pip-only instructions first and then put virtualenv stuff on a separate page that users would refer to if they want help setting up their own dedicated environment:\r\n\r\n> @lethosor Maybe we should move the virtual environment guide to to a separa... | 3c21edf7840178286d03462c47fa8c0b345a93a7 | {
"head_commit": "5739367f563d94fec967cd5bfafed627d222cb25",
"head_commit_message": "making link more obvious",
"patch_to_review": "diff --git a/docs/source/getting_started/install.rst b/docs/source/getting_started/install.rst\nindex a11b6d0dd00..974cecf2919 100644\n--- a/docs/source/getting_started/install.rst\n... | [
{
"diff_hunk": "@@ -0,0 +1,136 @@\n+\n+.. _virtualenv-guide:\n+\n+Virtual Environment Setup\n+=========================\n+\n+.. default-role:: code\n+\n+.. toctree::",
"line": null,
"original_line": 9,
"original_start_line": null,
"path": "docs/source/getting_started/virtualenv.rst",
"start_... | c031b717f13294e9e7c4e2f03d61de184d88759c | diff --git a/docs/source/getting_started/install.rst b/docs/source/getting_started/install.rst
index a11b6d0dd00..e1fd3af0829 100644
--- a/docs/source/getting_started/install.rst
+++ b/docs/source/getting_started/install.rst
@@ -3,13 +3,10 @@ FiftyOne Installation
.. default-role:: code
-This page describes how to... | {
"difficulty": "low",
"estimated_review_effort": 1,
"problem_domain": "Documentation Updates"
} | |
voxel51__fiftyone-459@c311516 | voxel51/fiftyone | Python | 459 | View stage enhancements | Closes #465 and #466.
And adds array slicing! | 2020-08-26T22:01:38Z | View bar does not handle default values appropriately
If I try to create an instance of a view stage in the App with a parameter with a default value, like the `reverse` parameter of the `SortBy` stage, which is supposed to default to `False`, I get an error.
I can make it function by changing the backend param vali... | [
{
"body": "If I try to create an instance of a view stage in the App with a parameter with a default value, like the `reverse` parameter of the `SortBy` stage, which is supposed to default to `False`, I get an error.\r\n\r\nI can make it function by changing the backend param validation as follows:\r\n\r\n```py... | 4976c9cf20f04ffe6df547ed15633bf625bbed88 | {
"head_commit": "c311516945ebc853dbdc1d6dd537665b98393663",
"head_commit_message": "include private fields",
"patch_to_review": "diff --git a/electron/app/components/CheckboxGrid.tsx b/electron/app/components/CheckboxGrid.tsx\nindex 2a55e5627f2..5c50fe7cf2f 100644\n--- a/electron/app/components/CheckboxGrid.tsx\... | [
{
"diff_hunk": "@@ -1002,7 +1002,7 @@ class SelectFields(ViewStage):\n \"\"\"\n \n def __init__(self, field_names=None):\n- default_fields = default_sample_fields(include_private=False)\n+ default_fields = default_sample_fields(include_private=True)",
"line": null,
"original_line":... | ac73b19754c59941553b12491c82746de18a7742 | diff --git a/docs/source/cli/index.rst b/docs/source/cli/index.rst
index 005a50d6f75..18a59dbc9c3 100644
--- a/docs/source/cli/index.rst
+++ b/docs/source/cli/index.rst
@@ -645,9 +645,10 @@ View datasets in the FiftyOne App without persisting them to the database.
.. code-block:: text
- fiftyone app view [-h] [... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} | |
voxel51__fiftyone-1078@d9a17d0 | voxel51/fiftyone | Python | 1,078 | Adding support for customizing dataset imports and exports | Resolves #707, and deprecates #1050.
Adds a variety of new syntaxes for importing and exporting datasets that allow, among other things, independent control of the data and labels locations for many standard formats via new `data_path`, `labels_path`, and `export_media` parameters (plus additional parameters on a pe... | 2021-06-23T05:20:08Z | [FR] Flag to avoid copying data when exporting Dataset labels
I updated labels for a dataset in FiftyOne and want to use them in my own scripts to train a new model. The images are the same, and I don't want to have to copy 118K images when I can simply link back to them from the new labels.
It's easy enough to take ... | yep makes sense, I like it. | [
{
"body": "I updated labels for a dataset in FiftyOne and want to use them in my own scripts to train a new model. The images are the same and I don't want to have to copy 118K images that I can just link back to them using the new labels. \r\n\r\nIt's easy enough to take an `Exporter` and remove the step where... | 99d7c9686f93691088642a311d1a5c8817b27c7d | {
"head_commit": "d9a17d05d34f66dfdc4fcd6b424a610911288d21",
"head_commit_message": "improving label_field for zoo datasets",
"patch_to_review": "diff --git a/docs/source/integrations/lightning_flash.rst b/docs/source/integrations/lightning_flash.rst\nindex 7c83cd8e56c..f87fe3b64de 100644\n--- a/docs/source/integ... | [
{
"diff_hunk": "@@ -47,13 +47,50 @@ a |DatasetView| into any format of your choice via the basic recipe below.\n \n # Export the dataset!\n dataset_or_view.export(\n- export_dir=export_dir, dataset_type=dataset_type, label_field=label_field\n+ export_dir=export_dir,\n+ ... | b576a8decc277316957ffdbc3b7610994d95ef19 | diff --git a/docs/source/integrations/lightning_flash.rst b/docs/source/integrations/lightning_flash.rst
index 7c83cd8e56c..f87fe3b64de 100644
--- a/docs/source/integrations/lightning_flash.rst
+++ b/docs/source/integrations/lightning_flash.rst
@@ -39,6 +39,30 @@ In order to use the Lightning Flash integration, you'll ... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
} |
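A sketch of the decoupled, labels-only export that this row's new parameters enable (the dataset type and paths are illustrative):

```python
import fiftyone.types as fot

# `dataset_or_view` is an existing dataset or view; export_media=False
# writes labels that reference the existing media instead of copying
# the 118K images mentioned in the issue
dataset_or_view.export(
    dataset_type=fot.COCODetectionDataset,
    labels_path="/path/for/labels.json",
    label_field="ground_truth",
    export_media=False,
)
```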
tobymao__sqlglot-5284@3f57fcd | tobymao/sqlglot | Python | 5,284 | feat(databricks): GROUP_CONCAT to LISTAGG | Fixes #5281
**DOCS**
[Databricks LISTAGG](https://docs.databricks.com/aws/en/sql/language-manual/functions/listagg) | 2025-06-25T17:09:10Z | Use STRING_AGG insted of GROUP_CONCAT in databricks
sqlglot==26.25.3
```sql
SELECT
    test,
    STRING_AGG(email, '') AS Email
FROM organizations
GROUP BY
    test
```
The above query gets converted to
```sql
SELECT
    test,
    GROUP_CONCAT(email, '') AS Email
FROM organizations
GROUP BY
    test
```
When passed through the below code,
`
expr... | [
{
"body": "sqlglot==26.25.3\n\n`\nSELECT\n test,\n STRING_AGG(email, '') AS Email\nFROM organizations\nGROUP BY\ntest\n`\n\nThe above query gets converted to\n`\nSELECT\n test,\n GROUP_CONCAT(email, '') AS Email\nFROM organizations\nGROUP BY\ntest\n`\n\nWhen passed through the below code,\n`\nexpression_tre... | 9a95af1c725cd70ffa8206f1d88452a7faab93b2 | {
"head_commit": "3f57fcd44964a60de94d6f044dac2db1d8c69f85",
"head_commit_message": "fix validate identity",
"patch_to_review": "diff --git a/sqlglot/dialects/databricks.py b/sqlglot/dialects/databricks.py\nindex f13b4ca447..b93b0529ca 100644\n--- a/sqlglot/dialects/databricks.py\n+++ b/sqlglot/dialects/databrick... | [
{
"diff_hunk": "@@ -87,6 +88,7 @@ class Generator(Spark.Generator):\n e.this,\n ),\n exp.DatetimeTrunc: timestamptrunc_sql(),\n+ exp.GroupConcat: lambda self, e: groupconcat_sql(self, e),",
"line": null,
"original_line": 91,
"original_start_line": n... | b676df92bb07f1d1da23237c8ba942d2dfc66c88 | diff --git a/sqlglot/dialects/databricks.py b/sqlglot/dialects/databricks.py
index f13b4ca447..8d40758bcb 100644
--- a/sqlglot/dialects/databricks.py
+++ b/sqlglot/dialects/databricks.py
@@ -9,6 +9,7 @@
build_date_delta,
timestamptrunc_sql,
build_formatted_time,
+ groupconcat_sql,
)
from sqlglot.dia... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
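A sketch of the transpilation this row targets (the read dialect is chosen for illustration): the aggregation should now reach Databricks as `LISTAGG` rather than the unsupported `GROUP_CONCAT`.

```python
import sqlglot

sql = "SELECT test, STRING_AGG(email, '') AS Email FROM organizations GROUP BY test"

# With the fix, the Databricks output should use LISTAGG
print(sqlglot.transpile(sql, read="postgres", write="databricks")[0])
```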
tobymao__sqlglot-5323@ec47ab1 | tobymao/sqlglot | Python | 5,323 | fix(parser): avoid CTE values ALIAS gen, when ALIAS exists | Fixes #5318 | 2025-07-01T12:30:40Z | Bug: Databricks/Spark loses column-list for `VALUES … AS alias(col1, …)` inside a CTE
#### Environment
| | |
|---|---|
| **sqlglot version** | `latest` |
| **Python** | 3.12.8 (on macOS arm64) |
| **Read dialect** | `databricks` (identical with `spark`) |
| **Write dialect** | `databricks` |
---
#### Fully reproduc... | [
{
"body": "#### Environment \n| | |\n|---|---|\n| **sqlglot version** | `latest` |\n| **Python** | 3.12.8 (on macOS arm64) |\n| **Read dialect** | `databricks` (identical with `spark`) |\n| **Write dialect** | `databricks` |\n\n---\n\n#### Fully reproducible snippet\n\n```python\nfrom sqlglot import parse_one\... | d2f7c41f9f30f4cf0c74782be9be0cc6e75565f3 | {
"head_commit": "ec47ab184fc494a81ba39779a3f2e17f45e80273",
"head_commit_message": "fix(parser): avoid CTE values ALIAS gen, when ALIAS exists",
"patch_to_review": "diff --git a/sqlglot/parser.py b/sqlglot/parser.py\nindex 775ff5a1a3..6c42d03c38 100644\n--- a/sqlglot/parser.py\n+++ b/sqlglot/parser.py\n@@ -3394,... | [
{
"diff_hunk": "@@ -3394,8 +3394,9 @@ def _parse_cte(self) -> t.Optional[exp.CTE]:\n comments=comments,\n )\n \n- if isinstance(cte.this, exp.Values):\n- cte.set(\"this\", exp.select(\"*\").from_(exp.alias_(cte.this, \"_values\", table=True)))\n+ values = cte.this\n+... | dbe4d9adbc232c6178138cf9425899a0194b35b4 | diff --git a/sqlglot/parser.py b/sqlglot/parser.py
index 775ff5a1a3..35b7bacd0c 100644
--- a/sqlglot/parser.py
+++ b/sqlglot/parser.py
@@ -3394,8 +3394,12 @@ def _parse_cte(self) -> t.Optional[exp.CTE]:
comments=comments,
)
- if isinstance(cte.this, exp.Values):
- cte.set("this... | {
"difficulty": "medium",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
} | |
tobymao__sqlglot-5320@37af9ea | tobymao/sqlglot | Python | 5,320 | fix(spark)!: distinguish STORED AS from USING | Fixes #5317
This PR distinguishes `STORED AS` from `USING`.
Spark and Databricks support Hive format when creating a table. This means that if we keep the previous approach of generating `STORED AS` as `USING`, the generated SQL can be wrong.
Example:
```CREATE TABLE student (id INT, name STRING, age I... | 2025-07-01T11:13:57Z | Spark incorrectly roundtrips `STORED AS` -> `USING`
**Before you file an issue**
- Make sure you specify the "read" dialect eg. `parse_one(sql, read="spark")`
- Make sure you specify the "write" dialect eg. `ast.sql(dialect="duckdb")`
- Check if the issue still exists on main
we are going from spark to spark in this ca... | Looks like the both get parsed as the same AST but they probably shouldnt?
```
>>> sqlglot.parse_one('CREATE TABLE student (id INT) STORED AS ORC')
Create(
this=Schema(
this=Table(
this=Identifier(this=student, quoted=False)),
expressions=[
ColumnDef(
this=Identifier(this=id, quoted=False)... | [
{
"body": "**Before you file an issue**\n- Make sure you specify the \"read\" dialect eg. `parse_one(sql, read=\"spark\")`\n- Make sure you specify the \"write\" dialect eg. `ast.sql(dialect=\"duckdb\")`\n- Check if the issue still exists on main\nwe are going from spark to spark in this case\n\n**Fully reprodu... | d2f7c41f9f30f4cf0c74782be9be0cc6e75565f3 | {
"head_commit": "37af9ea1744724a074d2b3268afafd9944e851aa",
"head_commit_message": "add input output format",
"patch_to_review": "diff --git a/sqlglot/dialects/spark2.py b/sqlglot/dialects/spark2.py\nindex afb1a1c537..b1a771e025 100644\n--- a/sqlglot/dialects/spark2.py\n+++ b/sqlglot/dialects/spark2.py\n@@ -284,... | [
{
"diff_hunk": "@@ -284,7 +284,10 @@ class Generator(Hive.Generator):\n # (DAY_OF_WEEK(datetime) % 7) + 1 is equivalent to DAYOFWEEK_ISO(datetime)\n exp.DayOfWeekIso: lambda self, e: f\"(({self.func('DAYOFWEEK', e.this)} % 7) + 1)\",\n exp.DayOfYear: rename_func(\"DAYOFYEAR\"... | ce7ff6f224ad6b3f6057d428226f741cc52610e5 | diff --git a/sqlglot/dialects/hive.py b/sqlglot/dialects/hive.py
index 5300b0148b..966de4dc84 100644
--- a/sqlglot/dialects/hive.py
+++ b/sqlglot/dialects/hive.py
@@ -550,8 +550,6 @@ class Generator(generator.Generator):
e: f"CAST(DATE_FORMAT({self.sql(e, 'this')}, {Hive.DATEINT_FORMAT}) AS INT)",
... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-5208@c2b7c77 | tobymao/sqlglot | Python | 5,208 | Feat(spark): support ALTER ADD PARTITION | Fixes #5204
References:
- https://docs.databricks.com/aws/en/sql/language-manual/sql-ref-partition#partition
- https://spark.apache.org/docs/latest/sql-ref-syntax-ddl-alter-table.html
| 2025-06-11T10:14:02Z | Alter table does not handle `ADD PARTITION`
I would expect this syntax to do a roundtrip successfully:
```
>>> sqlglot.parse_one("ALTER TABLE foo ADD PARTITION(event='click')").sql()
"ALTER TABLE foo PARTITION(event = 'click')"
```
but it drops the `ADD` portion. This is valid syntax in spark (where I ran into it) so i... | What is the input dialect?
@georgesittas the code is a repro as is in a clean session.
but if you want a dialect then as I said in the comment, use `spark`
Ah apologies, I missed it. Will take a look, thanks. | [
{
"body": "I would expect this syntax to do a roundtrip successfully:\n```\n>>> sqlglot.parse_one(\"ALTER TABLE foo ADD PARTITION(event='click')\").sql()\n\"ALTER TABLE foo PARTITION(event = 'click')\"\n```\nbut it drops the `ADD` portion. This is valid syntax in spark (where I ran into it) so it at least needs... | ad8a4e73e1a9e4234f0b711163fb49630acf736c | {
"head_commit": "c2b7c7726d8fc250929c3931326f2b195f8ce232",
"head_commit_message": "Feat(spark): support ALTER ADD PARTITION",
"patch_to_review": "diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py\nindex 1003ae5faa..f1a927a7c1 100644\n--- a/sqlglot/expressions.py\n+++ b/sqlglot/expressions.py\n@@ -492... | [
{
"diff_hunk": "@@ -7368,13 +7368,20 @@ def _parse_drop_partition(self, exists: t.Optional[bool] = None) -> exp.DropPart\n )\n \n def _parse_alter_table_add(self) -> t.List[exp.Expression]:\n- def _parse_add_column_or_constraint():\n+ def _parse_add_alteration() -> t.Optional[exp.Expre... | 434b86cc6e932f27c499e5ef2665c831a29000a4 | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index 1003ae5faa..4f879753a6 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -4926,6 +4926,10 @@ class AddConstraint(Expression):
arg_types = {"expressions": True}
+class AddPartition(Expression):
+ arg_types = {"this": True, "... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-5189@e49917a | tobymao/sqlglot | Python | 5,189 | fix: Issue 5188 eliminate_join_marks has multiple issues | ```
select t1.a, t2.b
from t1, t2
where t1.id = case when t2.id (+) = "n/a" then null else t2.id (+) end
```
becomes a broken CASE statement
```
SELECT
t1.a,
t2.b
FROM t1
LEFT JOIN t2
ON t2.id = "n/a" AND t1.id = CASE WHEN THEN NULL ELSE t2.id END
```
```
select t1.a, t2.b
from t1, t2
wh... | 2025-06-08T22:56:53Z | eliminate_join_marks has multiple issues
```
select t1.a, t2.b
from t1, t2
where t1.id = case when t2.id (+) = "n/a" then null else t2.id (+) end
```
becomes a broken CASE statement
```
SELECT
t1.a,
t2.b
FROM t1
LEFT JOIN t2
ON t2.id = "n/a" AND t1.id = CASE WHEN THEN NULL ELSE t2.id END
```
```
select t1.a,... | Moving the discussion here: https://github.com/tobymao/sqlglot/pull/5189. | [
{
"body": "```\nselect t1.a, t2.b\nfrom t1, t2\nwhere t1.id = case when t2.id (+) = \"n/a\" then null else t2.id (+) end\n```\n\nbecomes a broken CASE statement\n\n```\nSELECT\n t1.a,\n t2.b\nFROM t1\nLEFT JOIN t2\n ON t2.id = \"n/a\" AND t1.id = CASE WHEN THEN NULL ELSE t2.id END\n```\n\n\n```\nselect t1.a... | 696150dcb3337e328290434debbb28055233b2f8 | {
"head_commit": "e49917a943e519e86279738477f657adfc746747",
"head_commit_message": "fix for parentheses",
"patch_to_review": "diff --git a/sqlglot/transforms.py b/sqlglot/transforms.py\nindex 57059d9d52..3bee5f5158 100644\n--- a/sqlglot/transforms.py\n+++ b/sqlglot/transforms.py\n@@ -842,113 +842,123 @@ def stru... | [
{
"diff_hunk": "@@ -842,113 +842,123 @@ def struct_kv_to_alias(expression: exp.Expression) -> exp.Expression:\n \n \n def eliminate_join_marks(expression: exp.Expression) -> exp.Expression:\n- \"\"\"\n- Remove join marks from an AST. This rule assumes that all marked columns are qualified.\n- If this d... | 813db2cbef8f06b3e7db5e7a9e44ee55181717fb | diff --git a/sqlglot/transforms.py b/sqlglot/transforms.py
index 57059d9d52..4c62b51949 100644
--- a/sqlglot/transforms.py
+++ b/sqlglot/transforms.py
@@ -842,113 +842,124 @@ def struct_kv_to_alias(expression: exp.Expression) -> exp.Expression:
def eliminate_join_marks(expression: exp.Expression) -> exp.Expression... | {
"difficulty": "high",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-5230@65342a1 | tobymao/sqlglot | Python | 5,230 | Fix(optimizer)!: resolve table "columns" in bigquery that produce structs | Fixes #5207
I've described the issue in this comment: https://github.com/tobymao/sqlglot/issues/5207#issuecomment-2961924796. Let me know what you think. | 2025-06-16T09:51:04Z | Optimizer fails in some specific cases
Sqlglot version: 26.25.3
Reading dialect: BigQuery
Writing dialect: N/A
**Fully reproducible code snippet**
```python
from loguru import logger
from sqlglot import Expression
from sqlglot.dialects.dialect import Dialect
from sqlglot.errors import OptimizeError
from sqlglot.opti... | Thanks, this is a legit bug. An easier way to see what's happening is:
```python
from sqlglot import parse_one
from sqlglot.optimizer.qualify import qualify
ast = parse_one("with t as (select 1) select to_json_string(t) from t", dialect="bigquery")
qualify(ast, dialect="bigquery")
# sqlglot.errors.OptimizeError: Colu... | [
{
"body": "Sqlglot version: 26.25.3\n\nReading dialect: BigQuery\nWriting dialect: N/A\n\n**Fully reproducible code snippet**\n\n```python\nfrom loguru import logger\nfrom sqlglot import Expression\nfrom sqlglot.dialects.dialect import Dialect\nfrom sqlglot.errors import OptimizeError\nfrom sqlglot.optimizer.qu... | 1b4c083fff8d7c44bf1dbba28c1225fa1e28c4d2 | {
"head_commit": "65342a1161488e779d124d81cca59153d9b981b5",
"head_commit_message": "Refactor",
"patch_to_review": "diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py\nindex 21db7f8fba..8521d8cb76 100644\n--- a/sqlglot/expressions.py\n+++ b/sqlglot/expressions.py\n@@ -7045,6 +7045,12 @@ class Semicolon(... | [
{
"diff_hunk": "@@ -290,9 +290,48 @@ def annotate_scope(self, scope: Scope) -> None:\n elif isinstance(source.expression, exp.Unnest):\n self._set_type(col, source.expression.type)\n \n+ if isinstance(self.schema, MappingSchema):\n+ for pseudocolumn in scope... | 6b66e6da320360ff204640bff6b2a7eb97c9ec40 | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index 21db7f8fba..0b685a64a6 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -7045,6 +7045,12 @@ class Semicolon(Expression):
arg_types = {}
+# BigQuery allows SELECT t FROM t and treats the projection as a struct value. This expr... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-5180@854e0f7 | tobymao/sqlglot | Python | 5,180 | fix(parser)!: virtual column with AS(expr) as ComputedColumnConstraint | Fixes #5173 | 2025-06-06T12:24:50Z | Snowflake Parser: Virtual Column AS (<expression>) Incorrectly Parsed as Column Constraint
sqlglot Version: 26.25.3
Dialect: Snowflake
Problem Description:
When parsing Snowflake DDL, sqlglot does not correctly interpret the AS (<expression>) syntax used for defining virtual (generated) columns. Instead of recognizing... | [
{
"body": "sqlglot Version: 26.25.3 \nDialect: Snowflake\nProblem Description:\nWhen parsing Snowflake DDL, sqlglot does not correctly interpret the AS (<expression>) syntax used for defining virtual (generated) columns. Instead of recognizing this as the definition of the column's generation expression and pop... | bc001cef4c907d8fa421d3190b4fa91865d9ff6c | {
"head_commit": "854e0f7686d76f37a817c35624c750a7aa743eef",
"head_commit_message": "fix(parse)!: virtual column with computation as ComputedColumnConstraint",
"patch_to_review": "diff --git a/sqlglot/parser.py b/sqlglot/parser.py\nindex 07f8e10e66..c86f124161 100644\n--- a/sqlglot/parser.py\n+++ b/sqlglot/parser... | [
{
"diff_hunk": "@@ -5897,11 +5897,15 @@ def _parse_column_def(\n )\n ):\n self._advance()\n- constraints.append(\n- self.expression(\n- exp.ColumnConstraint,\n- kind=exp.TransformColumnConstraint(this=self._parse_dis... | 22798f86f74ccf922b5c2f57cc981398350a1466 | diff --git a/sqlglot/dialects/risingwave.py b/sqlglot/dialects/risingwave.py
index 97a68a59a6..7a1775d38d 100644
--- a/sqlglot/dialects/risingwave.py
+++ b/sqlglot/dialects/risingwave.py
@@ -1,5 +1,6 @@
from __future__ import annotations
from sqlglot.dialects.postgres import Postgres
+from sqlglot.generator import Ge... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
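A hedged roundtrip sketch of the DDL shape at issue (the column names and expression are illustrative): the `AS (<expr>)` clause defines a virtual column and should survive parsing as a computed-column constraint.

```python
import sqlglot

ddl = "CREATE TABLE t (id INT, doubled INT AS (id * 2))"
print(sqlglot.parse_one(ddl, read="snowflake").sql(dialect="snowflake"))
```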
tobymao__sqlglot-5179@056b21b | tobymao/sqlglot | Python | 5,179 | feat(postgres)!: Add support for ANY_VALUE for versions 16+ | Fixes https://github.com/TobikoData/sqlmesh/issues/4674
Postgres 16 (released 24 Sep 2023) added support for `ANY_VALUE`. As this may be too recent for some Postgres workloads, I thought it'd make for another use case of `Version`, but if we want to simplify we can instead always preserve the roundtrip.
Docs
-... | 2025-06-06T10:12:14Z | Support any_value aggregate function for PostgreSQL dialect
Description:
Currently, the `any_value()` aggregate function is not supported in the Postgres dialect. This function is useful when any non-deterministic non-null value from a group is acceptable. It is currently being replaced by `max()`, which cannot be use... | [
{
"body": "Description:\n\nCurrently, the `any_value()` aggregate function is not supported in the Postgres dialect. This function is useful when any non-deterministic non-null value from a group is acceptable. It is currently being replaced by `max()`, which cannot be used with some data types (e.g., UUID).\n\... | 83de4e11bc1547aa22b275b20c0326dfbe43b2b8 | {
"head_commit": "056b21b4b62ed9323e2b244f147146a518fa00a9",
"head_commit_message": "feat(postgres): Add support for ANY_VALUE",
"patch_to_review": "diff --git a/sqlglot/dialects/postgres.py b/sqlglot/dialects/postgres.py\nindex 949bea5cc3..8ddfa22946 100644\n--- a/sqlglot/dialects/postgres.py\n+++ b/sqlglot/dial... | [
{
"diff_hunk": "@@ -1436,3 +1436,21 @@ def test_json_extract(self):\n \"clickhouse\": \"SELECT JSONExtractString(foo, '12')\",\n },\n )\n+\n+ def test_any_value(self):\n+ self.validate_all(\n+ \"SELECT ANY_VALUE(1) AS col\",\n+ ... | 1ae80f7e7f392fa75f5acbb792fe152a550e9d88 | diff --git a/sqlglot/dialects/postgres.py b/sqlglot/dialects/postgres.py
index 949bea5cc3..8ddfa22946 100644
--- a/sqlglot/dialects/postgres.py
+++ b/sqlglot/dialects/postgres.py
@@ -36,6 +36,7 @@
strposition_sql,
count_if_to_sum,
groupconcat_sql,
+ Version,
)
from sqlglot.generator import unsupport... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
} | |
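A sketch of the behavior change in this row (per the PR, gated on Postgres 16+ via the dialect's `Version` mechanism): `ANY_VALUE` should round-trip instead of being rewritten to `MAX()`, which fails for types like UUID.

```python
import sqlglot

print(sqlglot.transpile("SELECT ANY_VALUE(1) AS col", read="postgres", write="postgres")[0])
```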
tobymao__sqlglot-5069@b563cf4 | tobymao/sqlglot | Python | 5,069 | fix!: unqualify UNNEST only the left most part of a column | Fixes #5062
In the previous implementation, `unqualify_unnest` stripped qualifying prefixes from every aliased `UNNEST` expression when the column path matched the `table` or `db` name. This behavior was intended to support dialects like BigQuery and Redshift, because the optimizer, via `qualify`, adds aliases to t... | 2025-05-13T14:55:25Z | Missing Struct field wrapped in UNNEST when transpiling to BigQuery dialect
**Before you file an issue**
I have a SQL that tries to unnest a list field of a struct. However, if the table alias of the unnest table factor is the same as the field name in the struct, it will be removed after transpiling to BigQuery dialec... | Hey @goldmedal, can you provide some more details on what you're trying to achieve? Looks like you're transpiling from another dialect to BigQuery, but it's not clear what that is.
> Hey [@goldmedal](https://github.com/goldmedal), can you provide some more details on what you're trying to achieve? Looks like you're tra... | [
{
"body": "**Before you file an issue**\nI have a SQL that tries to unnest a list field of a struct. However, if the table alias of the unnest table factor is the same as the field name in the struct, it will be removed after transpiling to BigQuery dialect.\n\n**Fully reproducible code snippet**\nThe following... | 07bf71bae5d2a5c381104a86bb52c06809c21174 | {
"head_commit": "b563cf4c2ad08c6261b6bfd92a378452e9aaefe8",
"head_commit_message": "fix!: unqualify UNNEST only the left most part of a column",
"patch_to_review": "diff --git a/sqlglot/transforms.py b/sqlglot/transforms.py\nindex 84e4c45dc9..da462732bf 100644\n--- a/sqlglot/transforms.py\n+++ b/sqlglot/transfor... | [
{
"diff_hunk": "@@ -1684,6 +1684,39 @@ def test_bigquery(self):\n \"EXPORT DATA WITH CONNECTION myproject.us.myconnection OPTIONS (URI='gs://path*.csv.gz', FORMAT='CSV') AS SELECT * FROM all_rows\"\n )\n \n+ self.validate_all(\n+ \"SELECT * FROM t1, UNNEST(`t1`) AS `col`\",... | 92c88a9905481a87dd4640bfcd4f4ad88a0cff8c | diff --git a/sqlglot/transforms.py b/sqlglot/transforms.py
index 84e4c45dc9..da462732bf 100644
--- a/sqlglot/transforms.py
+++ b/sqlglot/transforms.py
@@ -309,10 +309,9 @@ def unqualify_unnest(expression: exp.Expression) -> exp.Expression:
}
if unnest_aliases:
for column in expression.fin... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-4941@346eae2 | tobymao/sqlglot | Python | 4,941 | fix(sqlite): transpile double quoted PRIMARY KEY | Fixes #4938
**DOCS**
[SQLite double quotes identifier](https://www.sqlite.org/quirks.html#double_quoted_string_literals_are_accepted) | 2025-04-04T13:34:16Z | Crash while reformatting a valid Sqlite statement
This piece of code used to run perfectly for months, at least until `sqlglot 26.2.1`.
After a recent update of our `sqlglot` version, it fails. Bisection shows the issue appears specifically with `sqlglot 26.3.0` (and later versions).
The code to reproduce the error i... | Further investigation shows sqlglot running fine if the PRIMARY KEY column name is quoted with single quotes rather than double quotes. This is, however, not standard: SQLite's documentation states that column names shall be double quoted, and that this comes from the SQL standard:
https://www.sqlite.org/quirks.html#doub... | [
{
"body": "This piece of code used to run perfectly for months, at least until `sqlglot 26.2.1`.\n\nAfter a recent update of our `sqlglot` version, it fails. Bisection shows the issue appears specifically with` sqlglot 26.3.0` (and upper versions).\n\nThe code to reproduce the error is:\n\n```\nimport sqlglot\n... | 09882e32f057670a9cbd97c1e5cf1a00c774b5d2 | {
"head_commit": "346eae200c95cbdb5794f9ffd685e3f95d598fa3",
"head_commit_message": "fix test",
"patch_to_review": "diff --git a/sqlglot/dialects/sqlite.py b/sqlglot/dialects/sqlite.py\nindex 659ba64492..9d67dd223e 100644\n--- a/sqlglot/dialects/sqlite.py\n+++ b/sqlglot/dialects/sqlite.py\n@@ -44,7 +44,9 @@ def _... | [
{
"diff_hunk": "@@ -44,7 +44,9 @@ def _transform_create(expression: exp.Expression) -> exp.Expression:\n primary_key = e\n \n if primary_key and len(primary_key.expressions) == 1:\n- column = defs[primary_key.expressions[0].name]\n+ expr = primary_key.expressions[0]... | a0012f6ba196a14d759cefc784c5092735d46229 | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index 4a2531bd5f..531b740fd0 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -2660,6 +2660,10 @@ class Sort(Order):
class Ordered(Expression):
arg_types = {"this": True, "desc": False, "nulls_first": True, "with_fill": False}
+ ... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
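An illustrative reconstruction of the failure in this row (the issue's exact DDL is truncated above): a double-quoted PRIMARY KEY column previously crashed the SQLite reader, while a single-quoted one did not.

```python
import sqlglot

ddl = 'CREATE TABLE t ("id" INTEGER, PRIMARY KEY ("id"))'
print(sqlglot.transpile(ddl, read="sqlite")[0])
```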
tobymao__sqlglot-4596@36e8b31 | tobymao/sqlglot | Python | 4,596 | Fix!: exp.Merge condition for Trino/Postgres | Fixes #4595
Further building upon https://github.com/tobymao/sqlglot/pull/3940, there was a gap whereby the UPDATE part of a MERGE query needed to be fully qualified if it was used within a function.
The specific example we had was when we were concatenating arrays but I've substituted for a coalesce in the unit test. | 2025-01-10T16:13:08Z | Trino - when matched ambiguous column when using a function
- Read dialect Trino
- Write dialect Trino
- Still exists in main
**Fully reproducible code snippet**
```sql
MERGE INTO table_a AS target USING(
SELECT
pk,
my_array
FROM table_b
) AS source ON source.pk = target.pk
WHEN MATCHED THEN UP... | [
{
"body": "- Read dialect Trino\r\n- Write dialect Trino\r\n- Still exists in main\r\n\r\n**Fully reproducible code snippet**\r\n```sql\r\nMERGE INTO table_a AS target USING(\r\n SELECT\r\n pk,\r\n my_array\r\n FROM table_b\r\n) AS source ON source.pk = target.pk\r\nWHEN MATCHED THEN UPDATE SET target.m... | eb04a1fa9af3200ea2b15878a6b4eb69f6949775 | {
"head_commit": "36e8b317ca29b9d2e6488c6a1223e7958cec726d",
"head_commit_message": "address PR review comment",
"patch_to_review": "diff --git a/sqlglot/dialects/dialect.py b/sqlglot/dialects/dialect.py\nindex 5afa059608..fe93f62aea 100644\n--- a/sqlglot/dialects/dialect.py\n+++ b/sqlglot/dialects/dialect.py\n@@... | [
{
"diff_hunk": "@@ -1570,19 +1570,25 @@ def normalize(identifier: t.Optional[exp.Identifier]) -> t.Optional[str]:\n targets.add(normalize(alias.this))\n \n for when in expression.args[\"whens\"].expressions:\n- # only remove the target names from the THEN clause\n- # theyre still valid... | 4db59cae0c404f52aa60fa6b632cb581cda190f1 | diff --git a/sqlglot/dialects/dialect.py b/sqlglot/dialects/dialect.py
index 5afa059608..532ab854c5 100644
--- a/sqlglot/dialects/dialect.py
+++ b/sqlglot/dialects/dialect.py
@@ -1570,19 +1570,25 @@ def normalize(identifier: t.Optional[exp.Identifier]) -> t.Optional[str]:
targets.add(normalize(alias.this))
... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
tobymao__sqlglot-4876@79f9aa6 | tobymao/sqlglot | Python | 4,876 | fix(hive)!: support STRUCT(*) and MAP(*) | Fixes #4871
This PR adds support for `STRUCT(*)` and `MAP(*)` for hive, spark, spark2, databricks dialects. | 2025-03-12T17:17:40Z | Spark support for map(*) and struct(*)
There are a few ways to convert columns to JSON in Spark depending on the use case, typically using `map` or `struct`. While it's common to specify the schema, that's not required and sometimes not desired.
```python
import sqlglot
sql = """
select
to_json(map(*)),
to_json(struct(... | [
{
"body": "Theres a few ways to covert columns to json in spark depending on the use case, typically using map or struct. While its common to specify the schema, thats not required and sometimes not desired. \n```python\nimport sqlglot\n\nsql = \"\"\"\nselect\n to_json(map(*)),\n to_json(struct(*))\nfrom ... | 038da09f620cf057e4576b719c4e2f6712cbb804 | {
"head_commit": "79f9aa6cd11ad94edbe7df7a4bb50aea73b178af",
"head_commit_message": "fix(hive)!: support STRUCT(*) and MAP(*)",
"patch_to_review": "diff --git a/sqlglot/dialects/hive.py b/sqlglot/dialects/hive.py\nindex 587fee3482..2d702c068a 100644\n--- a/sqlglot/dialects/hive.py\n+++ b/sqlglot/dialects/hive.py\... | [
{
"diff_hunk": "@@ -446,6 +446,9 @@ def _parse_parameter(self) -> exp.Parameter:\n return self.expression(exp.Parameter, this=this, expression=expression)\n \n def _to_prop_eq(self, expression: exp.Expression, index: int) -> exp.Expression:\n+ if isinstance(expression, exp.Star):"... | 78bd9031dbfe0737185f7916aa92ea9b2d8f01a1 | diff --git a/sqlglot/dialects/hive.py b/sqlglot/dialects/hive.py
index 587fee3482..048907ac42 100644
--- a/sqlglot/dialects/hive.py
+++ b/sqlglot/dialects/hive.py
@@ -446,6 +446,9 @@ def _parse_parameter(self) -> exp.Parameter:
return self.expression(exp.Parameter, this=this, expression=expression)
... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
tobymao__sqlglot-4431@d945f91 | tobymao/sqlglot | Python | 4,431 | Fix(optimizer)!: fix datetime coercion in the canonicalize rule | Fixes #4429 | 2024-11-20T10:02:44Z | Bigquery annotate_types incorrectly sets TIMESTAMP as DATETIME even when schema type is provided
```python
import sqlglot
import sqlglot.errors
from sqlglot.optimizer import optimize
dialect = "bigquery"
schema = {"test": {"products": {"created_at": "TIMESTAMP"}}}
expression = sqlglot.parse_one("SELECT * FROM t... | ```
diff --git a/sqlglot/optimizer/canonicalize.py b/sqlglot/optimizer/canonicalize.py
index fd002622..4382dc70 100644
--- a/sqlglot/optimizer/canonicalize.py
+++ b/sqlglot/optimizer/canonicalize.py
@@ -138,7 +138,7 @@ def _coerce_date(a: exp.Expression, b: exp.Expression) -> None:
and b.type
... | [
{
"body": "```python\r\nimport sqlglot\r\nimport sqlglot.errors\r\nfrom sqlglot.optimizer import optimize\r\n\r\ndialect = \"bigquery\"\r\nschema = {\"test\": {\"products\": {\"created_at\": \"TIMESTAMP\"}}}\r\nexpression = sqlglot.parse_one(\"SELECT * FROM test.products WHERE created_at > '2024-10-01'\", read=... | 69d4a8ccdf5954f293acbdf61c420b72dde5b8af | {
"head_commit": "d945f917796e91e4ad563dbe21b02ef810394c30",
"head_commit_message": "Fix(optimizer): fix datetime coercion in the canonicalize rule",
"patch_to_review": "diff --git a/sqlglot/optimizer/canonicalize.py b/sqlglot/optimizer/canonicalize.py\nindex fd0026228b..f1c4ba573b 100644\n--- a/sqlglot/optimizer... | [
{
"diff_hunk": "@@ -138,7 +138,7 @@ def _coerce_date(a: exp.Expression, b: exp.Expression) -> None:\n and b.type\n and b.type.this in exp.DataType.TEXT_TYPES\n ):\n- _replace_cast(b, exp.DataType.Type.DATETIME)\n+ _replace_cast(b, a.type)",
"line": null,... | 37e1ddcc30fdc538e47f883d07d4bd20553fa9a0 | diff --git a/sqlglot/dialects/dialect.py b/sqlglot/dialects/dialect.py
index 38a3723f24..2fbd4e6864 100644
--- a/sqlglot/dialects/dialect.py
+++ b/sqlglot/dialects/dialect.py
@@ -397,6 +397,13 @@ class Dialect(metaclass=_Dialect):
ARRAY_AGG_INCLUDES_NULLS: t.Optional[bool] = True
"""Whether ArrayAgg needs to ... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-4317@fe998b2 | tobymao/sqlglot | Python | 4,317 | feat!: Support MEDIAN() function | Fixes #4315
This PR adds support for `MEDIAN(<expr>)` across major dialects (DuckDB, Spark3, Databricks, Snowflake, Redshift, Clickhouse, Oracle). For the dialects that don't support it, the transpilation remains as `PERCENTILE_CONT(<expr>, 0.5)`, which is a best-effort attempt.
Note: BigQuery supports `PERCENTILE... | 2024-10-30T13:25:32Z | duckdb `median` parses as `percentile_cont` which produces `quantile_cont`, making some median calls invalid
DuckDB `median` is not equivalent to `quantile_cont`.
`median` allows string inputs, while `quantile_cont` does not:
```
v1.1.2 f680b7d08f
Enter ".help" for usage hints.
Connected to a transient in-memo... | Here's the sqlglot parsing and emitting:
```
In [1]: import sqlglot as sg
In [2]: sg.parse_one("select median(x)", read="duckdb").sql('duckdb')
Out[2]: 'SELECT QUANTILE_CONT(x, 0.5)'
```
For context, I think this was based on DuckDB's [docs](https://duckdb.org/docs/sql/functions/aggregates.html#medianx) for `m... | [
{
"body": "DuckDB `median` is not equivalent to `quantile_cont`.\r\n\r\n`median` allows string inputs, while `quantile_cont` does not:\r\n\r\n```\r\nv1.1.2 f680b7d08f\r\nEnter \".help\" for usage hints.\r\nConnected to a transient in-memory database.\r\nUse \".open FILENAME\" to reopen on a persistent database.... | 50a1c919d0d46384e3bd9ba1d45c24dd07efe6d2 | {
"head_commit": "fe998b23de97ee1a94ee6616ca85bdea4faea66e",
"head_commit_message": "feat: Add support for MEDIAN() function",
"patch_to_review": "diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py\nindex 7d86075d12..d3700066be 100644\n--- a/sqlglot/dialects/clickhouse.py\n+++ b/sqlglot/... | [
{
"diff_hunk": "@@ -433,6 +433,9 @@ class Generator(metaclass=_Generator):\n # Whether CONVERT_TIMEZONE() is supported; if not, it will be generated as exp.AtTimeZone\n SUPPORTS_CONVERT_TIMEZONE = False\n \n+ # Whether MEDIAN(expr) is supported; if not, it will be generated as PERCENTILE_CONT(expr, 0... | 60b7a88355d8a94321b51c6030863f19bb638df7 | diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py
index 7d86075d12..fa86e3ac66 100644
--- a/sqlglot/dialects/clickhouse.py
+++ b/sqlglot/dialects/clickhouse.py
@@ -431,6 +431,7 @@ class Parser(parser.Parser):
**parser.Parser.FUNCTION_PARSERS,
"ARRAYJOIN": lambda self... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-4274@2d7f19e | tobymao/sqlglot | Python | 4,274 | fix(optimizer)!: Fix chained exp.SetOperation type annotation | Fixes #4261
Consider this repro:
```Python
>>> sql = """
WITH t AS (
SELECT NULL AS col
UNION ALL
SELECT 'a' AS col
UNION ALL
SELECT NULL AS col
)
SELECT * FROM t;
"""
>>> optimized = sqlglot.optimize(sql)
>>> optimized.selects[0].type
DataType(this=Type.NULL)
```
To annotate this query, `annot... | 2024-10-21T09:47:25Z | Optimizer relies on subquery's order to infer correct expression type with a multiple tables query
The Optimizer cannot infer the correct type for a given expression when analyzing a query with a `UNION ALL` clause with multiple tables (3 or more). It relies on the order of the `SELECT`.
### SQL
The following cod... | Appreciate the detailed report, we'll take a look shortly. | [
{
"body": "The Optimizer cannot infer the correct type for a given expression when analyzing a query with a `UNION ALL` clause with multiple tables (3 or more). It relies on the order of the `SELECT`.\r\n\r\n### SQL \r\nThe following code will result in a `DataType(this=Type.NULL)` for the `some_id_feature` ex... | 4543fb3cd052dfb20428f5a6254b38def9e756ee | {
"head_commit": "2d7f19e29d125ad4aab29ce08c681f8e5114ecda",
"head_commit_message": "Cache result of SetOperation subtrees",
"patch_to_review": "diff --git a/sqlglot/optimizer/annotate_types.py b/sqlglot/optimizer/annotate_types.py\nindex 052b74229c..be3e2e1240 100644\n--- a/sqlglot/optimizer/annotate_types.py\n+... | [
{
"diff_hunk": "@@ -192,6 +192,10 @@ def __init__(\n # Caches the ids of annotated sub-Expressions, to ensure we only visit them once\n self._visited: t.Set[int] = set()\n \n+ # Maps an exp.SetOperation's id (e.g. UNION) to it's annotated selected columns. This is done if the exp.SetOpera... | 6ece0413d4dd59cf15c0e9c1e2591384ad40a419 | diff --git a/sqlglot/optimizer/annotate_types.py b/sqlglot/optimizer/annotate_types.py
index 052b74229c..c75d48ac32 100644
--- a/sqlglot/optimizer/annotate_types.py
+++ b/sqlglot/optimizer/annotate_types.py
@@ -192,6 +192,11 @@ def __init__(
# Caches the ids of annotated sub-Expressions, to ensure we only visi... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-4266@9a86984 | tobymao/sqlglot | Python | 4,266 | fix(optimizer): Fix merge_subqueries.py::rename_inner_sources() | Fixes #4245
Consider this minimal repro for the optimizer rules `[qualify, eliminate_subqueries, merge_subqueries]`:
```SQL
WITH tbl AS (select 1 as id)
SELECT
id
FROM (
SELECT OTBL.id
FROM (
SELECT OTBL.id
FROM (
SELECT OTBL.id
FROM tbl AS OTBL
LEFT OUTER JOIN tbl AS I... | 2024-10-18T12:09:14Z | Optimizer fails when trying to optimize a query with joins and reused alias
Optimizer fails to run when a table alias is reused:
Read dialect: trino
SQL:
```
SELECT
COUNT(DISTINCT (
"order_detail.count"
)) AS "C1"
FROM (
SELECT
"OTBL"."order_detail.count"
FROM (
SELECT
"OTBL"."o... | [
{
"body": "Optimizer fails to run when a table alias is reused:\r\nRead dialect: trino\r\n\r\nSQL:\r\n```\r\nSELECT\r\n COUNT(DISTINCT (\r\n \"order_detail.count\"\r\n )) AS \"C1\"\r\nFROM (\r\n SELECT\r\n \"OTBL\".\"order_detail.count\"\r\n FROM (\r\n SELECT\r\n \"OTBL\".\"order_detail.menu_i... | 7a5c7e036fa84eb30bcae75829f3cb94503fa99e | {
"head_commit": "9a8698484a23afb5560d157d17175f067bac964a",
"head_commit_message": "fix(optimizer): Fix rename_inner_sources for merge_subqueries rule",
"patch_to_review": "diff --git a/sqlglot/optimizer/merge_subqueries.py b/sqlglot/optimizer/merge_subqueries.py\nindex 603f5df0c4..bd268f63f3 100644\n--- a/sqlgl... | [
{
"diff_hunk": "@@ -227,10 +227,13 @@ def _rename_inner_sources(outer_scope, inner_scope, alias):\n inner_scope (sqlglot.optimizer.scope.Scope)\n alias (str)\n \"\"\"\n- taken = set(outer_scope.selected_sources)\n- conflicts = taken.intersection(set(inner_scope.selected_sources))\n+ ... | 1343aff781c5c012dcc773fd132980d6e3f22d84 | diff --git a/sqlglot/optimizer/merge_subqueries.py b/sqlglot/optimizer/merge_subqueries.py
index 603f5df0c4..866f78c239 100644
--- a/sqlglot/optimizer/merge_subqueries.py
+++ b/sqlglot/optimizer/merge_subqueries.py
@@ -227,10 +227,13 @@ def _rename_inner_sources(outer_scope, inner_scope, alias):
inner_scope (s... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
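A sketch of running only the three optimizer rules named in PR #4266 above; the query here is illustrative (the exact repro is truncated), reusing the outer alias `OTBL` so it conflicts with an inner source when the subquery is merged:
```python
import sqlglot
from sqlglot.optimizer import optimize
from sqlglot.optimizer.qualify import qualify
from sqlglot.optimizer.eliminate_subqueries import eliminate_subqueries
from sqlglot.optimizer.merge_subqueries import merge_subqueries

sql = """
WITH tbl AS (SELECT 1 AS id)
SELECT id
FROM (
  SELECT OTBL.id
  FROM tbl AS OTBL
  LEFT OUTER JOIN tbl AS ITBL ON OTBL.id = ITBL.id
) AS OTBL
"""
# Renaming the conflicting inner OTBL used to go wrong; with the fix the rule
# chain completes and the result round-trips.
rules = (qualify, eliminate_subqueries, merge_subqueries)
print(optimize(sqlglot.parse_one(sql), rules=rules).sql())
```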
tobymao__sqlglot-4426@9891635 | tobymao/sqlglot | Python | 4,426 | fix!: Tokenize hints as comments | Fixes #4425
Currently, hints are not tokenized as a single entity like we do for the comments (i.e. through `_scan_comments()`), but instead the following token stream is produced:
```
>>> tokens = tokenize("SELECT /*+ hint */ * FROM t")
>>> for token in tokens:
... print(token)
<Token token_type: TokenTy... | 2024-11-19T17:15:59Z | TokenError: Error tokenizing 'select /*+ ORDERED */* from dual'
The tokenizer fails to parse a hint followed by a `*` if it is not separated by a space:
/*+ ... */* from dual
```
import sqlglot
sqlglot.parse_one(
'''
select /*+ ORDERED */* from dual
'''
)
```
sqlglot.errors.T... | [
{
"body": "The tokenizer fails to parse a hint followed by a `*` if it is not separated by a space:\r\n\r\n /*+ ... */* from dual\r\n\r\n```\r\nimport sqlglot\r\n\r\nsqlglot.parse_one(\r\n '''\r\n select /*+ ORDERED */* from dual \r\n '''\r\n)\r\n```\r\n\r\n sqlglot.errors.TokenError: Error ... | fc591ae2fa80be5821cb53d78906afe8e5505654 | {
"head_commit": "989163584f121c283f24098556fe96c137d0c985",
"head_commit_message": "fix: Tokenize & parse hints as comments",
"patch_to_review": "diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py\nindex 1404f0fc32..434a212917 100644\n--- a/sqlglot/dialects/mysql.py\n+++ b/sqlglot/dialects/mysql.... | [
{
"diff_hunk": "@@ -1230,6 +1235,9 @@ def _scan_comment(self, comment_start: str) -> bool:\n self._advance(alnum=True)\n self._comments.append(self._text[comment_start_size:])\n \n+ if comment_start == \"/*+\":",
"line": null,
"original_line": 1238,
"original_start... | 50533e61ad7dad56209ad8e4854d58669bf9643f | diff --git a/sqlglot/dialects/oracle.py b/sqlglot/dialects/oracle.py
index 1025a075a6..cda13e7897 100644
--- a/sqlglot/dialects/oracle.py
+++ b/sqlglot/dialects/oracle.py
@@ -15,7 +15,6 @@
from sqlglot.helper import seq_get
from sqlglot.parser import OPTIONS_TYPE, build_coalesce
from sqlglot.tokens import TokenType
... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} | |
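A quick check of the tokenizer fix in PR #4426 above (assuming a build that includes it): the hint no longer swallows an adjacent `*`.
```python
import sqlglot

# Previously raised TokenError because "*/*" confused the hint scanner.
ast = sqlglot.parse_one("SELECT /*+ ORDERED */* FROM dual", read="oracle")
print(ast.sql(dialect="oracle"))
```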
tobymao__sqlglot-4173@0e09ed1 | tobymao/sqlglot | Python | 4,173 | fix(redshift): Add unsupported warnings for UNNEST | Fixes #4169
Redshift supports [implicit unnest of an array](https://stackoverflow.com/a/72870096) but not an `UNNEST` function as other dialects. This PR adds unsupported warnings for the following cases:
- If an `exp.Unnest` is used with a _known_ non-array type
- If an `exp.Unnest` is not used under a `FROM` or ... | 2024-09-27T15:47:40Z | incorrect transpilation of unnest for redshift
running:
`sqlglot.transpile("select unnest({\"a\": 123}) as x;", read="duckdb", write="redshift")[0]`
actual:
`'SELECT UNNEST(STRUCT(123 AS a)) AS x'`
however redshift doesn't support unnest. there is no perfect equivalent in redshift so i'm curious if this is a situ... | I think the highest ROI for now is to use `self.unsupported` if `UNNEST` is being used as a projection or something, @VaggelisD could you take a look?
@bjabes how complicated would it be to convert the DuckDB query into an equivalent Redshift one?
Yeah this isn't blocking anything on my side, it was an exploration f... | [
{
"body": "running:\r\n`sqlglot.transpile(\"select unnest({\\\"a\\\": 123}) as x;\", read=\"duckdb\", write=\"redshift\")[0]`\r\n\r\nactual:\r\n`'SELECT UNNEST(STRUCT(123 AS a)) AS x'`\r\nhowever redshift doesn't support unnest. there is no perfect equivalent in redshift so i'm curious if this is a situation wh... | 7af33a2f74dd1300bcd45f1974b7fd28abe66b8e | {
"head_commit": "0e09ed12fb6667dc5820e39efe4f2306204c75b2",
"head_commit_message": "fix(redshift): Add unsupported warnings for UNNEST",
"patch_to_review": "diff --git a/sqlglot/dialects/redshift.py b/sqlglot/dialects/redshift.py\nindex 77e54e3f36..66847f0f8e 100644\n--- a/sqlglot/dialects/redshift.py\n+++ b/sql... | [
{
"diff_hunk": "@@ -388,13 +389,28 @@ def unnest_sql(self, expression: exp.Unnest) -> str:\n args = expression.expressions\n num_args = len(args)\n \n- if num_args > 1:\n+ if not num_args == 1:\n self.unsupported(f\"Unsupported number of arguments in... | 92cd91c7de12ec8f7f122073975d370e9a8b628a | diff --git a/sqlglot/dialects/redshift.py b/sqlglot/dialects/redshift.py
index 77e54e3f36..dc4f334ccb 100644
--- a/sqlglot/dialects/redshift.py
+++ b/sqlglot/dialects/redshift.py
@@ -184,6 +184,7 @@ class Generator(Postgres.Generator):
exp.DateDiff: date_delta_sql("DATEDIFF"),
exp.DistKeyPrope... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
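A sketch of the behaviour PR #4173 above introduces: by default the generator logs "unsupported" warnings and still emits best-effort SQL (the exact message text may vary by version).
```python
import sqlglot

# Redshift has no UNNEST function, so this logs a warning instead of silently
# producing invalid SQL.
out = sqlglot.transpile("SELECT UNNEST([1, 2, 3])", read="duckdb", write="redshift")
print(out[0])
```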
tobymao__sqlglot-4199@89c288b | tobymao/sqlglot | Python | 4,199 | feat(duckdb): Add more Postgres operators | Fixes #4189
The `^@` operator is an alias of the `STARTS_WITH(a, b)` function:
```SQL
──────────────┬────────────────────┬─────────┬──────────────────┬──────────────────┬──────────┬──────────────┬────────────────────────┬────────────┐
│ database_name │ database_oid │ schema_name │ function_name │ function_type ... | 2024-10-02T09:40:58Z | DuckDB `!~~`, `!~~*`, `&&`, `**`, `<@`, `@>`, `^@` operators are not parsed
When using SQLGlot's DuckDB dialect, it fails to parse several valid SQL statements containing operators that are supported by DuckDB. The issue appears to stem from SQLGlot not recognizing these operators.
#### Example Queries (Valid in Du... | Hey @rustyconover,
Thanks once again for these issues, they really help increase coverage. You mentioned that these tests were done automatically, do you think these specific operators are widely used? I personally haven't come across these before, and the first two do not seem documented with a description yet.
... | [
{
"body": "When using SQLGlot's DuckDB dialect, it fails to parse several valid SQL statements containing operators that are supported by DuckDB. The issue appears to stem from SQLGlot not recognizing these operators.\r\n\r\n#### Example Queries (Valid in DuckDB but Failing in SQLGlot):\r\n\r\n```sql\r\nSELECT ... | 6a659736f3a176e335c68fdd07d8265c3d0421dc | {
"head_commit": "89c288be31e3ab40b1e7405b179d58c8a5d522b2",
"head_commit_message": "Remove tokens from base tokenizer",
"patch_to_review": "diff --git a/sqlglot/dialects/duckdb.py b/sqlglot/dialects/duckdb.py\nindex 8eb766b3c3..13b5fa1312 100644\n--- a/sqlglot/dialects/duckdb.py\n+++ b/sqlglot/dialects/duckdb.py... | [
{
"diff_hunk": "@@ -290,6 +291,9 @@ class Tokenizer(tokens.Tokenizer):\n **tokens.Tokenizer.KEYWORDS,\n \"//\": TokenType.DIV,\n \"**\": TokenType.DSTAR,\n+ \"^@\": TokenType.CARET_AMP,",
"line": null,
"original_line": 294,
"original_start_line": null,
... | 9d3ce376a91942bd8c0a2d2f29f8f31a5ce7d15b | diff --git a/sqlglot/dialects/duckdb.py b/sqlglot/dialects/duckdb.py
index 8eb766b3c3..c32dd083c0 100644
--- a/sqlglot/dialects/duckdb.py
+++ b/sqlglot/dialects/duckdb.py
@@ -38,6 +38,7 @@
)
from sqlglot.helper import seq_get
from sqlglot.tokens import TokenType
+from sqlglot.parser import binary_range_parser
DAT... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
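A sketch of the operator support added in PR #4199 above; `^@` parses into the same node as `STARTS_WITH`, so the function spelling can be generated for dialects without the operator (output may vary by version):
```python
import sqlglot

ast = sqlglot.parse_one("SELECT 'abc' ^@ 'ab'", read="duckdb")
print(ast.sql(dialect="duckdb"))  # e.g. SELECT STARTS_WITH('abc', 'ab')
```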
tobymao__sqlglot-4113@29236ad | tobymao/sqlglot | Python | 4,113 | fix(optimizer): Enable USING expansion with multiple joins | fixes tobymao/sqlglot#4112
Also added a test for this case | 2024-09-13T09:33:04Z | `optimizer.qualify.qualify` raises "Cannot automatically join" for multiple joins with `USING` and `ON` clauses
When there are two or more `JOIN` operations, and a `JOIN` with an `ON` condition precedes a `JOIN` with a `USING` clause, the `optimizer.qualify.qualify` method throws the error: **"Cannot automatically join... | [
{
"body": "When there are two or more `JOIN` operations, and a `JOIN` with an `ON` condition precedes a `JOIN` with a `USING` clause, the `optimizer.qualify.qualify` method throws the error: **\"Cannot automatically join\"**.\r\n\r\n\r\n## Steps to Reproduce\r\n\r\n```python\r\nfrom sqlglot import parse_one\r\n... | 44579b496db7195eb08271ed00b4ea43e928770e | {
"head_commit": "29236ad380dc71ec6bdcf3a6dedffdf87b2e3a0f",
"head_commit_message": "Fix for #4112 optimizer.qualify._expand_using now correctly process multiple joins with USING and ON clauses",
"patch_to_review": "diff --git a/sqlglot/optimizer/qualify_columns.py b/sqlglot/optimizer/qualify_columns.py\nindex c7... | [
{
"diff_hunk": "@@ -170,6 +167,10 @@ def _expand_using(scope: Scope, resolver: Resolver) -> t.Dict[str, t.Any]:\n \n source_table = ordered[-1]\n ordered.append(join_table)\n+\n+ if not using:\n+ continue\n+",
"line": null,
"original_line": 173,
"original_start_line... | 8daad20a4968e9c3b59dfd590dc0b9abee41fb30 | diff --git a/sqlglot/optimizer/qualify_columns.py b/sqlglot/optimizer/qualify_columns.py
index c7f3cda0a4..bffd0ab5b8 100644
--- a/sqlglot/optimizer/qualify_columns.py
+++ b/sqlglot/optimizer/qualify_columns.py
@@ -153,23 +153,22 @@ def _expand_using(scope: Scope, resolver: Resolver) -> t.Dict[str, t.Any]:
column_... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
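A minimal sketch of the scenario fixed by PR #4113 above: a `JOIN ... ON` followed by a `JOIN ... USING`, which previously raised "Cannot automatically join" (the schema below is hypothetical):
```python
import sqlglot
from sqlglot.optimizer.qualify import qualify

sql = "SELECT a.id FROM a JOIN b ON a.id = b.id JOIN c USING (id)"
schema = {"a": {"id": "int"}, "b": {"id": "int"}, "c": {"id": "int"}}
print(qualify(sqlglot.parse_one(sql), schema=schema).sql())
```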
tobymao__sqlglot-4023@646fe97 | tobymao/sqlglot | Python | 4,023 | fix(starrocks): exp.Create transpilation | Fix #3997
> We closed #3998 because it was too convoluted, so I split out StarRocks' own CREATE TABLE transpile fix here.
Supports StarRocks CREATE TABLE parsing and generation
This PR adds support for the following:
- Allows `DISTRIBUTED BY HASH|RANDOM`, `DUPLICATE KEY`, `PROPERTIES` to be parsed correctly
- ... | 2024-08-31T12:00:12Z | Incorrect transformation of CREATE TABLE using StarRocks
**Before you file an issue**
sqlglot.transpile(sql, read="starrocks", write="hive") # or write="spark"
**Fully reproducible code snippet**
```
CREATE TABLE if not exists `sample_table` (
`tenantid` varchar(1048576) NULL COMMENT "",
`create_day` dat... | This is low priority for us right now, so I'll go ahead and close the ticket. Regardless, feel free to work on it. Make sure to check out the comments I left in #3998. | [
{
"body": "**Before you file an issue**\r\nsqlglot.transpile(sql, read=\"starrocks\", write=\"hive\") # or write=\"spark\"\r\n\r\n**Fully reproducible code snippet**\r\n```\r\nCREATE TABLE if not exists `sample_table` (\r\n `tenantid` varchar(1048576) NULL COMMENT \"\",\r\n `create_day` date NOT NULL COM... | 1108426a0eb23bbcaec8bed946f1dae6682bc1dd | {
"head_commit": "646fe972c9d65739e840e15c85c04731c995823e",
"head_commit_message": "fix(starrocks): exp.Create transpilation\n\nFix #3997\n\nSupports starrocks create table parse and generator\n\nThis PR adds support for the following:\n- Supports `DISTRIBUTED BY HASH|RANDOM`, `DUPLICATE KEY`, `PROPERTIES` to be p... | [
{
"diff_hunk": "@@ -13,11 +13,58 @@\n )\n from sqlglot.dialects.mysql import MySQL\n from sqlglot.helper import seq_get\n+from sqlglot.tokens import TokenType\n+\n+\n+def _duplicate_key_sql(self, expression: exp.DuplicateKeyProperty) -> str:\n+ expressions = self.expressions(expression, flat=True)\n+ opti... | e2c95927762707b467476307bd6c2a320a74bb82 | diff --git a/sqlglot/dialects/dialect.py b/sqlglot/dialects/dialect.py
index 294a329ff8..488ef413b6 100644
--- a/sqlglot/dialects/dialect.py
+++ b/sqlglot/dialects/dialect.py
@@ -1039,6 +1039,10 @@ def no_map_from_entries_sql(self: Generator, expression: exp.MapFromEntries) ->
return ""
+def property_sql(self:... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
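A sketch (with a hypothetical DDL, assuming a build that includes PR #4023 above) of round-tripping the clauses the PR adds:
```python
import sqlglot

ddl = """
CREATE TABLE IF NOT EXISTS sample_table (
  tenantid VARCHAR(1048576),
  create_day DATE NOT NULL
)
DUPLICATE KEY (tenantid)
DISTRIBUTED BY HASH (tenantid) BUCKETS 10
PROPERTIES ("replication_num" = "1")
"""
print(sqlglot.parse_one(ddl, read="starrocks").sql(dialect="starrocks", pretty=True))
```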
tobymao__sqlglot-3945@6a91477 | tobymao/sqlglot | Python | 3,945 | Fix(parser): Support sqls with DESCRIBE partition | Fix https://github.com/tobymao/sqlglot/issues/3941 | 2024-08-21T00:46:58Z | Failed to parse a statement with desc partition
**Before you file an issue**
- Make sure you specify the "read" dialect eg. `parse_one(sql, read="spark")`
- Make sure you specify the "write" dialect eg. `ast.sql(dialect="duckdb")`
- Check if the issue still exists on main
**Fully reproducible code snippet**
Plea... | low priority, feel free to make a pr | [
{
"body": "**Before you file an issue**\r\n- Make sure you specify the \"read\" dialect eg. `parse_one(sql, read=\"spark\")`\r\n- Make sure you specify the \"write\" dialect eg. `ast.sql(dialect=\"duckdb\")`\r\n- Check if the issue still exists on main\r\n\r\n**Fully reproducible code snippet**\r\nPlease includ... | a84a21aaef0e65754e67ecebdfcbf7136c77acc7 | {
"head_commit": "6a914773f9309e8cdadb5a38de6aab9f579929c5",
"head_commit_message": "Update generator.py",
"patch_to_review": "diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py\nindex 54f75087f1..00a54a04d1 100644\n--- a/sqlglot/expressions.py\n+++ b/sqlglot/expressions.py\n@@ -1437,7 +1437,13 @@ class... | [
{
"diff_hunk": "@@ -1117,7 +1117,11 @@ def clone_sql(self, expression: exp.Clone) -> str:\n def describe_sql(self, expression: exp.Describe) -> str:\n style = expression.args.get(\"style\")\n style = f\" {style}\" if style else \"\"\n- return f\"DESCRIBE{style} {self.sql(expression, '... | dceb8bf491b9f707877b1e28b5fdce124f15142f | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index 54f75087f1..00a54a04d1 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -1437,7 +1437,13 @@ class Clone(Expression):
class Describe(Expression):
- arg_types = {"this": True, "style": False, "kind": False, "expressions": False}... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
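A minimal sketch of the syntax PR #3945 above adds (Spark-style `DESCRIBE ... PARTITION`):
```python
import sqlglot

ast = sqlglot.parse_one("DESCRIBE my_table PARTITION (ds = '2024-01-01')", read="spark")
print(ast.sql(dialect="spark"))  # round-trips instead of raising ParseError
```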
tobymao__sqlglot-3887@5f4322a | tobymao/sqlglot | Python | 3,887 | fix: Fix COLLATE's RHS parsing | Fixes #3880
`COLLATE` should have a higher precedence than constructs such as `LIKE` but it's not possible to override its parsing. For this reason, this PR unrolls `_parse_term` and transforms `COLLATE`'s RHS into an `exp.Var` (instead of `exp.Column`) if it's an unquoted identifier.
SQLGlot versions: tested with 25.9.0 and 25.8.1
When parsing a code block, I'm finding that the parsed query is returning the COLLATE collation argument as a column.
Example SQL:
```
select
sql_text
, some_id
from
my_catalog.my_schema.my_tab... | [
{
"body": "SQLGlot versions: tested with 25.9.0 and 25.8.1\r\n\r\n\r\nWhen parsing a code block, I'm finding that the parsed query is returning the COLLATE collation argument as a column.\r\n\r\nExample SQL:\r\n```\r\nselect\r\n sql_text\r\n , some_id\r\nfrom\r\n my_catalog.my_schema.my_table a\r\n ... | 2ad9bfef71ae707b83f604f16b47aa583d082c3b | {
"head_commit": "5f4322a677ec45751e87b1de54bd97f2e547ddbe",
"head_commit_message": "fix: Fix COLLATE's RHS parsing",
"patch_to_review": "diff --git a/sqlglot/parser.py b/sqlglot/parser.py\nindex d43a916e33..56fda369e3 100644\n--- a/sqlglot/parser.py\n+++ b/sqlglot/parser.py\n@@ -4360,7 +4360,25 @@ def _parse_bit... | [
{
"diff_hunk": "@@ -4360,7 +4360,25 @@ def _parse_bitwise(self) -> t.Optional[exp.Expression]:\n return this\n \n def _parse_term(self) -> t.Optional[exp.Expression]:\n- return self._parse_tokens(self._parse_factor, self.TERM)\n+ this = self._parse_factor()\n+\n+ while self._mat... | ac4e06cbe00266d0aa5f713162df7c8ea1887375 | diff --git a/sqlglot/parser.py b/sqlglot/parser.py
index d43a916e33..23e6a60469 100644
--- a/sqlglot/parser.py
+++ b/sqlglot/parser.py
@@ -4360,7 +4360,26 @@ def _parse_bitwise(self) -> t.Optional[exp.Expression]:
return this
def _parse_term(self) -> t.Optional[exp.Expression]:
- return self._par... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
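A sketch of the behaviour PR #3887 above fixes: the collation name stays a bare identifier and `LIKE` binds less tightly than `COLLATE`.
```python
import sqlglot
from sqlglot import exp

ast = sqlglot.parse_one(
    "SELECT * FROM t WHERE sql_text COLLATE Latin1_General_BIN LIKE '%foo%'",
    read="tsql",
)
collate = ast.find(exp.Collate)
print(type(collate.expression).__name__)  # Var rather than Column, with the fix
```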
tobymao__sqlglot-3883@566e669 | tobymao/sqlglot | Python | 3,883 | feat(duckdb): Transpile Snowflake's CONVERT_TIMEZONE 3-arg version | Fixes #3875
This PR attempts to transpile Snowflake's `CONVERT_TIMEZONE(source_tz, target_tz, timestamp_ntz)` 3-arg version to DuckDB.
Although there isn't a 1:1 mapping, the same result can be acquired by generating nested `TIMEZONE()` calls in DuckDB:
```SQL
TIMEZONE(target_tz, TIMEZONE(source_tz, timestamp... | 2024-08-07T08:18:11Z | transpile from snowflake to duckdb with convert timezone is not working
**Fully reproducible code snippet**
The following snowflake query is transpile into a duckdb and fails
```
SELECT date_trunc('month', CONVERT_TIMEZONE('UTC', 'Europe/Berlin', anchor_hour)) AS ANCHOR,
SUM(sum_paid_usd) AS SU... | Hey @milonimrod,
Thanks for reporting this. From what I see, in Snowflake `CONVERT_TIMEZONE` has two versions depending on the parameter count, with the 3-arg version being `CONVERT_TIMEZONE(source_tz, target_tz, timestamp_ntz)`; This means that Snowflake can take a `TIMESTAMP` without timezone information and trans... | [
{
"body": "**Fully reproducible code snippet**\r\n\r\nThe following snowflake query is transpile into a duckdb and fails\r\n```\r\nSELECT date_trunc('month', CONVERT_TIMEZONE('UTC', 'Europe/Berlin', anchor_hour)) AS ANCHOR,\r\n SUM(sum_paid_usd) AS SUM_PAID,\r\n SUM(CASE WHEN... | 3e4fcf7e8f6a322c14470de6c5dbba152bc9b2fe | {
"head_commit": "566e6692f2555205ff441a9ab595257d8e1d4c93",
"head_commit_message": "Remove unnecessary arg",
"patch_to_review": "diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py\nindex 2fa355e58f..145a7d1cb2 100644\n--- a/sqlglot/dialects/mysql.py\n+++ b/sqlglot/dialects/mysql.py\n@@ -302,6 +30... | [
{
"diff_hunk": "@@ -4079,3 +4082,20 @@ def arrayconcat_sql(self, expression: exp.ArrayConcat, name: str = \"ARRAY_CONCAT\n rhs = self.expressions(expression)\n \n return self.func(name, expression.this, rhs)\n+\n+ def converttimezone_sql(self, expression: exp.ConvertTimezone) -> str:\n+ ... | e00b50a8561a9b628684a0cdd00ba6cbaf7095fb | diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py
index 2fa355e58f..145a7d1cb2 100644
--- a/sqlglot/dialects/mysql.py
+++ b/sqlglot/dialects/mysql.py
@@ -302,6 +302,9 @@ class Parser(parser.Parser):
FUNCTIONS = {
**parser.Parser.FUNCTIONS,
+ "CONVERT_TZ": lambda args:... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
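A sketch of the transpilation PR #3883 above adds; per the description the 3-arg form becomes nested `TIMEZONE()` calls in DuckDB (exact output may differ slightly by version):
```python
import sqlglot

print(sqlglot.transpile(
    "SELECT CONVERT_TIMEZONE('UTC', 'Europe/Berlin', anchor_hour)",
    read="snowflake",
    write="duckdb",
)[0])
# roughly: SELECT TIMEZONE('Europe/Berlin', TIMEZONE('UTC', anchor_hour))
```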
tobymao__sqlglot-3766@4dee1d7 | tobymao/sqlglot | Python | 3,766 | fix(clickhouse): Allow TokenType.SELECT as a Tuple field identifier | Fixes #3763
Docs
-------
[Clickhouse Tuple](https://clickhouse.com/docs/en/sql-reference/data-types/tuple) | 2024-07-15T09:54:48Z | ClickHouse tuple field named `select` fails to parse without quotes
```
In [7]: import sqlglot as sg, sqlglot.expressions as sge
In [8]: sg.__version__
Out[8]: '25.5.1'
In [9]: sg.parse_one('Tuple(select Int64)', into=sge.DataType, dialect="clickhouse")
```
Unfortunately I don't have control over the input ... | [
{
"body": "```\r\nIn [7]: import sqlglot as sg, sqlglot.expressions as sge\r\n\r\nIn [8]: sg.__version__\r\nOut[8]: '25.5.1'\r\n\r\nIn [9]: sg.parse_one('Tuple(select Int64)', into=sge.DataType, dialect=\"clickhouse\")\r\n```\r\n\r\nUnfortunately I don't have control over the input here, this is coming directly... | 321051aef30f11f2778444040a2078633e617144 | {
"head_commit": "4dee1d7612eab3f56db37e00f71d579c70aa1e0d",
"head_commit_message": "fix(clickhouse): Allow TokenType.SELECT as a Tuple identifier",
"patch_to_review": "diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py\nindex 3841ea6480..adf7b446ea 100644\n--- a/sqlglot/dialects/clickho... | [
{
"diff_hunk": "@@ -332,6 +332,8 @@ class Parser(parser.Parser):\n TokenType.SET,\n }\n \n+ RESERVED_TOKENS = {*parser.Parser.RESERVED_TOKENS} - {TokenType.SELECT}",
"line": null,
"original_line": 335,
"original_start_line": null,
"path": "sqlglot/dialects/clickhouse.p... | 356dab863bb38007a3503736e7c85874dfd31f66 | diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py
index 3841ea6480..5adc3cf8a5 100644
--- a/sqlglot/dialects/clickhouse.py
+++ b/sqlglot/dialects/clickhouse.py
@@ -332,6 +332,8 @@ class Parser(parser.Parser):
TokenType.SET,
}
+ RESERVED_TOKENS = parser.Parser.RE... | {
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
} | |
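The repro from the issue above, which parses cleanly once `SELECT` is removed from ClickHouse's reserved set:
```python
import sqlglot
from sqlglot import exp

dtype = sqlglot.parse_one("Tuple(select Int64)", into=exp.DataType, dialect="clickhouse")
print(dtype.sql(dialect="clickhouse"))
```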
tobymao__sqlglot-3682@27d2f89 | tobymao/sqlglot | Python | 3,682 | Fix(parser): handle another edge case in struct field type parser | fixes #3680 | 2024-06-20T14:59:44Z | sqlglot fails to parse BigQuery type cast
**Fully reproducible code snippet**
```python
import sqlglot
sqlglot.parse_one(sql="select sources::STRUCT<list ARRAY<STRUCT<element STRUCT<property STRING, dataset STRING, record_id STRING, confidence FLOAT64>>>> from bar", dialect="bigquery")
```
Produces
```
sql... | [
{
"body": "**Fully reproducible code snippet**\r\n\r\n```python\r\nimport sqlglot\r\nsqlglot.parse_one(sql=\"select sources::STRUCT<list ARRAY<STRUCT<element STRUCT<property STRING, dataset STRING, record_id STRING, confidence FLOAT64>>>> from bar\", dialect=\"bigquery\")\r\n```\r\n\r\nProduces\r\n\r\n```\r\nsq... | ac0e89c4401f2f278d32c3e956670b262ab21ce7 | {
"head_commit": "27d2f89b5d2c71ca3e47e356a0f2702b198432fd",
"head_commit_message": "Fix(parser): handle another edge case in struct field type parser",
"patch_to_review": "diff --git a/sqlglot/parser.py b/sqlglot/parser.py\nindex 0d2cc2b9e9..a27a47f013 100644\n--- a/sqlglot/parser.py\n+++ b/sqlglot/parser.py\n@@... | [
{
"diff_hunk": "@@ -4486,10 +4486,22 @@ def _parse_types(\n \n def _parse_struct_types(self, type_required: bool = False) -> t.Optional[exp.Expression]:\n index = self._index\n- this = (\n- self._parse_type(parse_interval=False, fallback_to_identifier=True)\n- or self._p... | e1e8665e10653b73e53a8eabf821a4277ef98d3b | diff --git a/sqlglot/parser.py b/sqlglot/parser.py
index 0d2cc2b9e9..0aae2e4088 100644
--- a/sqlglot/parser.py
+++ b/sqlglot/parser.py
@@ -4486,10 +4486,22 @@ def _parse_types(
def _parse_struct_types(self, type_required: bool = False) -> t.Optional[exp.Expression]:
index = self._index
- this = (... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
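The issue's repro for PR #3682 above; with the fix, the deeply nested `STRUCT`/`ARRAY` cast parses instead of raising `ParseError`:
```python
import sqlglot

sql = (
    "SELECT sources::STRUCT<list ARRAY<STRUCT<element STRUCT<property STRING, "
    "dataset STRING, record_id STRING, confidence FLOAT64>>>> FROM bar"
)
print(sqlglot.parse_one(sql, dialect="bigquery").sql(dialect="bigquery"))
```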
tobymao__sqlglot-3809@7d66371 | tobymao/sqlglot | Python | 3,809 | fix(duckdb): Fix STRUCT_PACK -> ROW due to is_struct_cast | Fixes #3808
The recent PR https://github.com/tobymao/sqlglot/pull/3751 enabled transpilation of BQ's `STRUCT<type>(values)` to DuckDB `CAST(ROW(<values>) AS STRUCT(<struct>))`, and the core DDB generation logic was based on the `is_struct_cast` condition. However, a struct can be cast to other types such as `JSON`...
DuckDB transforms struct_pack to json to row, which is wrong:
```python
import sqlglot
print(
sqlglot.parse_one(
"""select struct_pack(pig:='other 42')::json""",
dialect="duckdb",
).sql("duckdb")
)
# prints SELECT CAST(ROW('other 42') AS J... | [
{
"body": "DuckDB transforms struct_pack to json to row, which is wrong:\r\n\r\n```python\r\nimport sqlglot\r\n\r\nprint(\r\n sqlglot.parse_one(\r\n \"\"\"select struct_pack(pig:='other 42')::json\"\"\",\r\n dialect=\"duckdb\",\r\n ).sql(\"duckdb\")\r\n)\r\n# prints SELECT CAST(ROW('other 42... | 5c93acd7046cdd1ed1f872fa024c1bb85da282c8 | {
"head_commit": "7d6637144bb22131fea2839cb97ac40fbd63eeed",
"head_commit_message": "fix(duckdb): Fix STRUCT_PACK -> ROW due to is_struct_cast",
"patch_to_review": "diff --git a/sqlglot/dialects/duckdb.py b/sqlglot/dialects/duckdb.py\nindex ca747069af..aae2c18030 100644\n--- a/sqlglot/dialects/duckdb.py\n+++ b/sq... | [
{
"diff_hunk": "@@ -134,7 +134,14 @@ def _struct_sql(self: DuckDB.Generator, expression: exp.Struct) -> str:\n \n # BigQuery allows inline construction such as \"STRUCT<a STRING, b INTEGER>('str', 1)\" which is\n # canonicalized to \"ROW('str', 1) AS STRUCT(a TEXT, b INT)\" in DuckDB\n- is_struct_cas... | 96d48b9e2c58b507a275525811e79dca3de29f34 | diff --git a/sqlglot/dialects/duckdb.py b/sqlglot/dialects/duckdb.py
index ca747069af..9e772616ab 100644
--- a/sqlglot/dialects/duckdb.py
+++ b/sqlglot/dialects/duckdb.py
@@ -134,7 +134,12 @@ def _struct_sql(self: DuckDB.Generator, expression: exp.Struct) -> str:
# BigQuery allows inline construction such as "ST... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
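The issue's repro for PR #3809 above; only genuine struct casts should take the `CAST(ROW(...) AS STRUCT(...))` form:
```python
import sqlglot

print(
    sqlglot.parse_one("SELECT STRUCT_PACK(pig := 'other 42')::JSON", dialect="duckdb")
    .sql("duckdb")
)
# expected with the fix: SELECT CAST({'pig': 'other 42'} AS JSON)
```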
tobymao__sqlglot-3786@5414bc5 | tobymao/sqlglot | Python | 3,786 | feat: Move ANNOTATORS to Dialect for dialect-aware annotation | Fixes #3778
The issue arises from the following chain reaction:
1. Transpiled divisions from Spark to Presto/Trino will be cast to `DOUBLE` due to typed division semantics
2. Certain math functions in Presto/Trino such as `FLOOR()` will return the same type as their input (`DOUBLE` in this case), so that type wi... | 2024-07-19T15:34:12Z | Incorrect type cast in conversion from Spark to Presto in timestampad function
I have the following line in my SELECT statement (consider the second argument in the `timestampadd` Spark function)...
```
q = "select region_uuid,
timestampadd(minute, floor(extract(minute from delivery_created_time_local)/30)*3... | Hey @ddelzell, thanks for reporting this once again. The reason this cast is there is to match Spark's division semantics but the `DOUBLE` type bubbles up to `DATE_ADD` because `FLOOR()` (among other math functions) is type-dependent on Presto/Trino.
I have the fix ready but it required a non-trivial change in our t... | [
{
"body": "I have the following line in my SELECT statement, (consider the second argument in the `timestampadd` Spark function)...\r\n\r\n```\r\nq = \"select region_uuid, \r\n timestampadd(minute, floor(extract(minute from delivery_created_time_local)/30)*30, date_trunc(‘hour’, time_org)) as time \r\n fr... | 898f523a8db9f73b59055f1e38cf4acb07157f00 | {
"head_commit": "5414bc52b98c1c883bd276745ae1f55544e443c2",
"head_commit_message": "feat: Move ANNOTATORS to Dialect for dialect-aware annotation",
"patch_to_review": "diff --git a/sqlglot/dialects/dialect.py b/sqlglot/dialects/dialect.py\nindex 3ba72ea757..b018326e82 100644\n--- a/sqlglot/dialects/dialect.py\n+... | [
{
"diff_hunk": "@@ -256,6 +256,19 @@ class Presto(Dialect):\n # https://github.com/prestodb/presto/issues/2863\n NORMALIZATION_STRATEGY = NormalizationStrategy.CASE_INSENSITIVE\n \n+ # The result of certain math functions in Presto/Trino is of type\n+ # equal to the input type e.g: FLOOR(5.5/2) ->... | fa4a9a22c378ea797c1455143841eb5e08462f88 | diff --git a/sqlglot/dialects/dialect.py b/sqlglot/dialects/dialect.py
index 3ba72ea757..a6409f862d 100644
--- a/sqlglot/dialects/dialect.py
+++ b/sqlglot/dialects/dialect.py
@@ -8,7 +8,7 @@
from sqlglot import exp
from sqlglot.errors import ParseError
from sqlglot.generator import Generator
-from sqlglot.helper imp... | {
"difficulty": "high",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
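A sketch of what PR #3786 above enables (assuming the post-PR signature, where a dialect can be passed to `annotate_types`): the same tree can be annotated under different dialect semantics.
```python
import sqlglot
from sqlglot.optimizer.annotate_types import annotate_types

# In Presto/Trino, math functions like FLOOR return the type of their input,
# so the annotated type can differ from the default dialect's result.
ast = sqlglot.parse_one("SELECT FLOOR(5 / 2)")
print(annotate_types(ast, dialect="presto").selects[0].type)
```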
tobymao__sqlglot-3470@a964317 | tobymao/sqlglot | Python | 3,470 | Fix: preserve quotes for projections produced by the eliminate_qualify rule | 2024-05-14T12:20:21Z | Sqlglot produces Syntax Error on rewriting QUALIFY
```python
print(
sqlglot.parse_one(
"SELECT tr.`User_-_iD`, tr.time_stamp, tr.__is_deleted, tr.__timestamp FROM tmp_9154874609124146377 AS tr WHERE __timestamp > CAST('2024-05-14 10:09:06.470000+00:00' AS TIMESTAMP) QUALIFY ROW_NUMBER() OVER (PARTITION B... | My guess is that we're probably missing a `quoted=True` or omitting the arg altogether in the `eliminate_qualify` rule when copying an expression. This shouldn't be too hard, we'll take a look in a bit, but well-tested PRs are also welcome. | [
{
"body": "```python\r\nprint(\r\n sqlglot.parse_one(\r\n \"SELECT tr.`User_-_iD`, tr.time_stamp, tr.__is_deleted, tr.__timestamp FROM tmp_9154874609124146377 AS tr WHERE __timestamp > CAST('2024-05-14 10:09:06.470000+00:00' AS TIMESTAMP) QUALIFY ROW_NUMBER() OVER (PARTITION BY `User_-_iD` ORDER BY __... | 065281e28be75597f3f97cee22995423ed483660 | {
"head_commit": "a964317af685d227098026a62ecdbf29fb445617",
"head_commit_message": "fixes #https://github.com/tobymao/sqlglot/issues/3467",
"patch_to_review": "diff --git a/sqlglot/transforms.py b/sqlglot/transforms.py\nindex ec2a0df01a..10cc4f90a3 100644\n--- a/sqlglot/transforms.py\n+++ b/sqlglot/transforms.py... | [
{
"diff_hunk": "@@ -105,7 +105,9 @@ def eliminate_qualify(expression: exp.Expression) -> exp.Expression:\n select.replace(exp.alias_(select, alias))\n taken.add(alias)\n \n- outer_selects = exp.select(*[select.alias_or_name for select in expression.selects])\n+ oute... | 4be343d95bc40f8715f548b050acf24869e95dfb | diff --git a/sqlglot/transforms.py b/sqlglot/transforms.py
index ec2a0df01a..c214768d05 100644
--- a/sqlglot/transforms.py
+++ b/sqlglot/transforms.py
@@ -105,7 +105,15 @@ def eliminate_qualify(expression: exp.Expression) -> exp.Expression:
select.replace(exp.alias_(select, alias))
tak... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
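A sketch of the quoting fix in PR #3470 above: when QUALIFY is rewritten away for a dialect without it (Spark here), the backtick-quoted name must stay quoted in the generated subquery.
```python
import sqlglot

sql = (
    "SELECT `User_-_iD`, ts FROM tbl "
    "QUALIFY ROW_NUMBER() OVER (PARTITION BY `User_-_iD` ORDER BY ts) = 1"
)
print(sqlglot.transpile(sql, read="spark", write="spark")[0])
```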
tobymao__sqlglot-3449@d625ce2 | tobymao/sqlglot | Python | 3,449 | fix(snowflake): COPY Subquery postfix | Fixes #3434
This PR addresses the following issues:
- COPY would only allow files in the `FROM` clause; extend it to allow subqueries as well
- When parsing staged files such as `@foo` we'd also consume the following parentheses coming from (optional) staged file options such as `SELECT * FROM @foo (FILE_FORMAT ... | 2024-05-10T08:42:52Z | Parsing issue for Snowflake COPY INTO queries with subqueries
Hi folks,
looks like we're facing an issue when parsing Snowflake `COPY INTO` queries with `SELECT` subqueries:
```
query = "COPY INTO test (c1) FROM (SELECT $1:c1::string FROM @mystage)"
parse_one(query, read="snowflake")
```
... which yields the... | [
{
"body": "Hi folks, \r\n\r\nlooks like we're facing an issue when parsing Snowflake `COPY INTO` queries with `SELECT` subqueries:\r\n```\r\nquery = \"COPY INTO test (c1) FROM (SELECT $1:c1::string FROM @mystage)\"\r\nparse_one(query, read=\"snowflake\")\r\n```\r\n\r\n... which yields the following error:\r\n``... | 856f70e80111a9eb482186232c6661f03927320d | {
"head_commit": "d625ce2cf31355344fbdaf1349e107381c23ef56",
"head_commit_message": "fix(snowflake): COPY Subquery postfix",
"patch_to_review": "diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py\nindex 538cb9cb4a..9a28488570 100644\n--- a/sqlglot/dialects/snowflake.py\n+++ b/sqlglot/diale... | [
{
"diff_hunk": "@@ -682,14 +682,23 @@ def _parse_location_property(self) -> exp.LocationProperty:\n return self.expression(exp.LocationProperty, this=self._parse_location_path())\n \n def _parse_file_location(self) -> t.Optional[exp.Expression]:\n- return self._parse_table_parts()... | 740a686eb9d25cd7ae28f552a8a18d1f132e4d81 | diff --git a/sqlglot/dialects/hive.py b/sqlglot/dialects/hive.py
index d86691e4f2..22b7029189 100644
--- a/sqlglot/dialects/hive.py
+++ b/sqlglot/dialects/hive.py
@@ -422,6 +422,15 @@ def _parse_partition_and_order(
super()._parse_order(skip_order_token=self._match(TokenType.SORT_BY)),
)
... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} | |
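The issue's repro for PR #3449 above; `COPY INTO ... FROM (SELECT ...)` over a stage now parses into a structured expression:
```python
import sqlglot

ast = sqlglot.parse_one(
    "COPY INTO test (c1) FROM (SELECT $1:c1::string FROM @mystage)",
    read="snowflake",
)
print(ast.sql(dialect="snowflake"))
```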
tobymao__sqlglot-3398@8c14d8a | tobymao/sqlglot | Python | 3,398 | fix(snowflake): COPY postfix | Fixes #3388
Design notes
---------------
- Snowflake's `FILE_PARAM` option is now parsed & generated on its own to accommodate the optional `formatTypeOptions` that might come after the `TYPE` opts:
```
FILE_FORMAT = (
FORMAT_NAME = '[<namespace>.]<file_format_name>' |
TYPE = { CSV | ... | XML } [ f... | 2024-05-02T12:20:38Z | Version 23.12.2 fails to parse some COPY clauses
Looks as if in prior versions it didn't fully parse and fell back to the default Command but with 23.12.2 it's trying to parse and failing. Maybe we can have it fall back to Command as we work through various options COPY has?
**Fully reproducible code snippet**
``... | Sorry for the trouble @dangoldin, this was an oversight on our end. Should have deployed a minor version. We'll fix this soon and look into deploying a patch.
All good and appreciate the responsiveness. I just fell back to 23.12.1 in my code and would attempt a fix here but haven't been keeping up with the changes here... | [
{
"body": "Looks as if in prior versions it didn't fully parse and fell back to the default Command but with 23.12.2 it's trying to parse and failing. Maybe we can have it fall back to Command as we work through various options COPY has?\r\n\r\n**Fully reproducible code snippet**\r\n\r\n```python\r\nfrom sqlglo... | d1b4f1f256cd772bec366d6bf13d9589e1fdfc4b | {
"head_commit": "8c14d8aa517bdad4d31ae9f73199453ecfaf3bdb",
"head_commit_message": "Add missing type hints",
"patch_to_review": "diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py\nindex c87f416d90..ce2fa7bb53 100644\n--- a/sqlglot/dialects/snowflake.py\n+++ b/sqlglot/dialects/snowflake.p... | [
{
"diff_hunk": "@@ -3820,7 +3820,7 @@ def copy_sql(self, expression: exp.Copy) -> str:\n this = f\" INTO {this}\" if self.COPY_HAS_INTO_KEYWORD else f\" {this}\"\n \n credentials = self.sql(expression, \"credentials\")\n- credentials = f\" {credentials}\" if credentials else \"\"\n+ ... | 75dae594c64e4781dc5b0b17971f015aaf55aac6 | diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py
index c87f416d90..b79386a708 100644
--- a/sqlglot/dialects/snowflake.py
+++ b/sqlglot/dialects/snowflake.py
@@ -440,7 +440,7 @@ class Parser(parser.Parser):
PROPERTY_PARSERS = {
**parser.Parser.PROPERTY_PARSERS,
- ... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
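A sketch (hypothetical options, assuming a build that includes PR #3398 above) of the `FILE_FORMAT` parsing the PR reworks:
```python
import sqlglot

sql = """
COPY INTO @my_stage FROM my_table
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '|')
"""
print(sqlglot.parse_one(sql, read="snowflake").sql(dialect="snowflake"))
```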
tobymao__sqlglot-3380@b2b673d | tobymao/sqlglot | Python | 3,380 | fix(optimizer): Remove XOR from connector simplifications | Fixes #3372
The `simplify` rule aims to simplify `AND` & `OR` connectors by removing duplicate exprs such as `A OR A -> A` but fails to consider boolean `XOR` (supported by MySQL) which has different semantics i.e for `A = True`, `A XOR A = False` | 2024-04-30T11:40:41Z | XOR transformed into an OR depending on the alphabetical order of operands
Hello,
I stumbled upon an apparent bug still present in the 23.12.1 release (I was using 20.9.0 when I first saw the bug). When using the `optimize` function on a MySQL constraint, result depends on the alphabetical order of the operands (wit... | This look like a bug, thanks for reporting. We'll take a look soon. | [
{
"body": "Hello,\r\n\r\nI stumbled upon an apparent bug still present in the 23.12.1 release (I was using 20.9.0 when I first saw the bug). When using the `optimize` function on a MySQL constraint, result depends on the alphabetical order of the operands (with one of the results being wrong):\r\n\r\n```python\... | 3e8de7124b735a6ab52971a3e51702c4e7b74be5 | {
"head_commit": "b2b673d3124a475219c0b59dfcaeeed72293619d",
"head_commit_message": "fix(optimizer): Remove XOR from connector simplifications",
"patch_to_review": "diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py\nindex 862b3f7a01..d62b463ee3 100644\n--- a/sqlglot/expressions.py\n+++ b/sqlglot/expres... | [
{
"diff_hunk": "@@ -427,6 +427,12 @@ def test_simplify(self):\n \"\"\".strip(),\n )\n \n+ xor_query = parse_one(\"A XOR D XOR B XOR E XOR F XOR G XOR E XOR A\", read=\"mysql\")\n+ simplified_xor = optimizer.simplify.simplify(xor_query)\n+ self.assertEqual(\n+ simplified_x... | e8ec486c952125a8c002cbde1b12f918cd577390 | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index 862b3f7a01..0675c510ab 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -6519,7 +6519,7 @@ def condition(
def and_(
*expressions: t.Optional[ExpOrStr], dialect: DialectType = None, copy: bool = True, **opts
-) -> Condition:
+)... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
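The regression test from the diff above, runnable standalone; duplicate operands must survive under XOR since `A XOR A = FALSE`, not `A`:
```python
import sqlglot
from sqlglot.optimizer.simplify import simplify

ast = sqlglot.parse_one("A XOR D XOR B XOR E XOR F XOR G XOR E XOR A", read="mysql")
print(simplify(ast).sql(dialect="mysql"))
```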
tobymao__sqlglot-3367@94bf079 | tobymao/sqlglot | Python | 3,367 | feat(mysql): Transpile TimestampTrunc | Fixes #3366
MySQL cannot generate `exp.TimestampTrunc` using an existing function; however, one can simulate it with a combination of date add & diff ([source](https://stackoverflow.com/a/32955740))
Docs for `DATE_TRUNC`
-----------
- [Postgres](https://www.postgresql.org/docs/current/functions-datetime.html#FU... | 2024-04-29T11:12:42Z | Timestamp trunc method issue from postgres to MySQL
I was trying to convert an SQL query from Postgres to MySQL. Check the following code snippet
```
import sqlglot
sqlglot.transpile("SELECT date_trunc('hour', timestamp '2001-02-16 20:38:40') FROM dual", read="postgres", write="mysql")
```
This returns
`["... | I have a similar issue
```
import sqlglot
print(sqlglot.transpile("SELECT mt.mode_was, AVG(EXTRACT(epoch FROM (mt.cross_time::TIMESTAMP - mt.second_cross_time::TIMESTAMP))) AS average_time FROM main_table mt GROUP BY mt.mode_was ORDER BY average_time DESC LIMIT 1", read="postgres", write="mysql")[0])
```
thi... | [
{
"body": "I was trying to convert an SQL query from Postgres to MySQL. Check the following code snippet\r\n\r\n```\r\nimport sqlglot\r\nsqlglot.transpile(\"SELECT date_trunc('hour', timestamp '2001-02-16 20:38:40') FROM dual\", read=\"postgres\", write=\"mysql\") \r\n```\r\n\r\nThis returns\r\n\r\n`[\"SELECT T... | e82a30b6563547daea0bb087e1b6b5bf3b0532d3 | {
"head_commit": "94bf0798823a0945e3708757dd2df54dc078afc1",
"head_commit_message": "feat(mysql): Transpile TimestampTrunc",
"patch_to_review": "diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py\nindex 03576d29e5..fd92243559 100644\n--- a/sqlglot/dialects/mysql.py\n+++ b/sqlglot/dialects/mysql.py... | [
{
"diff_hunk": "@@ -867,3 +867,29 @@ def chr_sql(self, expression: exp.Chr) -> str:\n charset = expression.args.get(\"charset\")\n using = f\" USING {self.sql(charset)}\" if charset else \"\"\n return f\"CHAR({this}{using})\"\n+\n+ def timestamptrunc_sql(self, expressi... | b234dad33f7c88cae959d4a6d6a2a8bf28983801 | diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py
index 03576d29e5..7b4d0e496a 100644
--- a/sqlglot/dialects/mysql.py
+++ b/sqlglot/dialects/mysql.py
@@ -867,3 +867,16 @@ def chr_sql(self, expression: exp.Chr) -> str:
charset = expression.args.get("charset")
using = f" USING {... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
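The original report's query for PR #3367 above; with the PR, the MySQL output simulates `DATE_TRUNC` instead of emitting it verbatim (the exact rewrite varies by version):
```python
import sqlglot

print(sqlglot.transpile(
    "SELECT DATE_TRUNC('hour', TIMESTAMP '2001-02-16 20:38:40')",
    read="postgres",
    write="mysql",
)[0])
```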
tobymao__sqlglot-3224@83a4e05 | tobymao/sqlglot | Python | 3,224 | feat(optimizer): Support for UNION BY NAME | Fixes #3222
Introduce support for `UNION [ALL] BY NAME` in `pushdown_projections` rule, which matches referenced columns by name instead of by position:
```
>>> optimize(sqlglot.parse_one("select a from (select 1 a, 2 c union all by name select 3 c, 4 a)")).sql()
'WITH "_q_0" AS (SELECT 1 AS "a" UNION ALL BY N... | 2024-03-26T13:41:00Z | Incorrect optimized SQL.
```
import sqlglot
from sqlglot.optimizer import optimize
optimize(sqlglot.parse_one("select a from (select 1 a, 2 c union all by name select 3 c, 4 a)", dialect="duckdb")).sql(dialect="duckdb")
```
```
WITH "_q_0" AS (SELECT 1 AS "a" UNION ALL BY NAME SELECT 3 AS "c") SELECT "_q_0"."a" A... | [
{
"body": "```\r\nimport sqlglot\r\nfrom sqlglot.optimizer import optimize\r\noptimize(sqlglot.parse_one(\"select a from (select 1 a, 2 c union all by name select 3 c, 4 a)\", dialect=\"duckdb\")).sql(dialect=\"duckdb\")\r\n```\r\n```\r\nWITH \"_q_0\" AS (SELECT 1 AS \"a\" UNION ALL BY NAME SELECT 3 AS \"c\") S... | b50dc5ecc7d29bce43229d050da8c4e37951853c | {
"head_commit": "83a4e05e96b77c0e01193b5771e8ea86cd10913f",
"head_commit_message": "feat(optimizer): Support for UNION BY NAME",
"patch_to_review": "diff --git a/sqlglot/optimizer/pushdown_projections.py b/sqlglot/optimizer/pushdown_projections.py\nindex 53490bfce3..dac7314d30 100644\n--- a/sqlglot/optimizer/pus... | [
{
"diff_hunk": "@@ -54,11 +54,17 @@ def pushdown_projections(expression, schema=None, remove_unused_selections=True)\n if any(select.is_star for select in right.expression.selects):\n referenced_columns[right] = parent_selections\n elif not any(select.is_star for select i... | 1ba646b6408195bd089d11c16c1114523890d72f | diff --git a/sqlglot/optimizer/pushdown_projections.py b/sqlglot/optimizer/pushdown_projections.py
index 53490bfce3..d97fd36c8e 100644
--- a/sqlglot/optimizer/pushdown_projections.py
+++ b/sqlglot/optimizer/pushdown_projections.py
@@ -54,11 +54,15 @@ def pushdown_projections(expression, schema=None, remove_unused_selec... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
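The issue's repro for PR #3224 above; with name-based matching, pruning keeps `a` from both branches of the `UNION ALL BY NAME`:
```python
import sqlglot
from sqlglot.optimizer import optimize

sql = "SELECT a FROM (SELECT 1 a, 2 c UNION ALL BY NAME SELECT 3 c, 4 a)"
ast = sqlglot.parse_one(sql, read="duckdb")
print(optimize(ast, dialect="duckdb").sql(dialect="duckdb"))
```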
tobymao__sqlglot-3150@a85f7e2 | tobymao/sqlglot | Python | 3,150 | feat(mysql): Support for multi arg GROUP_CONCAT | Fixes #3142
Change MySQL parsing for `exp.GroupConcat` to reduce/normalize the list of expressions into an `exp.Concat` node, thus making it easily transpilable to other dialects which only accept a single expression.
Docs
------------
- https://dev.mysql.com/doc/refman/8.3/en/aggregate-functions.html#funct... | 2024-03-15T14:12:38Z | feat: support for mysql group_concat with multi columns
**Is your feature request related to a problem? Please describe.**
```python
import sqlglot
sql = "select a, group_concat(b, ' ', c SEPARATOR ',') from table group by a;"
sqlglot.parse_one(sql, read="mysql")
# or
sqlglot.transpile(sql, read="mysql", writ... | [
{
"body": "**Is your feature request related to a problem? Please describe.**\r\n\r\n```python \r\nimport sqlglot\r\nsql = \"select a, group_concat(b, ' ', c SEPARATOR ',') from table group by a;\"\r\nsqlglot.parse_one(sql, read=\"mysql\")\r\n# or \r\nsqlglot.transpile(sql, read=\"mysql\", write=\"postgres\", p... | 61e21edd0129a26fb69e94177c35ed08903cf73c | {
"head_commit": "a85f7e204c65198e57ee6f0755bf010769253b1c",
"head_commit_message": "Refactor #1",
"patch_to_review": "diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py\nindex 750d4a36f3..c81baae583 100644\n--- a/sqlglot/dialects/mysql.py\n+++ b/sqlglot/dialects/mysql.py\n@@ -319,11 +319,7 @@ cla... | [
{
"diff_hunk": "@@ -617,6 +613,32 @@ def _parse_chr(self) -> t.Optional[exp.Expression]:\n \n return self.expression(exp.Chr, **kwargs)\n \n+ def _parse_group_concat(self) -> t.Optional[exp.Expression]:\n+ def concat_exprs(node, exprs):\n+ if node and isinstance(node... | 03f5600d3e524be88a4d56df2337722311264d1e | diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py
index 750d4a36f3..b3c45b1de2 100644
--- a/sqlglot/dialects/mysql.py
+++ b/sqlglot/dialects/mysql.py
@@ -319,11 +319,7 @@ class Parser(parser.Parser):
FUNCTION_PARSERS = {
**parser.Parser.FUNCTION_PARSERS,
"CHAR": lambd... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
} | |
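The issue's query for PR #3150 above; the multi-argument form is normalized into a single concatenation so that Postgres can express it (output shown is approximate):
```python
import sqlglot

sql = "SELECT a, GROUP_CONCAT(b, ' ', c SEPARATOR ',') FROM t GROUP BY a"
print(sqlglot.transpile(sql, read="mysql", write="postgres")[0])
# roughly: SELECT a, STRING_AGG(CONCAT(b, ' ', c), ',') FROM t GROUP BY a
```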
tobymao__sqlglot-3145@9691676 | tobymao/sqlglot | Python | 3,145 | Feat(tsql): transpile LIMIT with OFFSET properly | Fixes #3144
Reference:
- https://www.microsoftpressstore.com/articles/article.aspx?p=2314819
- https://learn.microsoft.com/en-us/sql/t-sql/queries/select-order-by-clause-transact-sql?view=sql-server-ver16 | 2024-03-14T14:27:31Z | Concerns about Translating SQLite Queries to TSQL
1. **Offset Issue:**
When attempting to convert an SQLite query to TSQL using the provided code snippet:
```python
import sqlglot
sql_a = "SELECT name, (SELECT COUNT(*) FROM orders WHERE orders.customer_id = customers.id) AS order_count FROM cus... | Hey, thanks for the report. I'll take a look, this seems like a bug.
> I had a question about the thoroughness of the translation process from SQLite to TSQL. Does the transpiler cover all possible cases of SQLite queries that are incompatible with TSQL syntax? Are there any known clauses or scenarios where the tran... | [
{
"body": "1. **Offset Issue:**\r\n \r\n When attempting to convert an SQLite query to TSQL using the provided code snippet:\r\n\r\n ```python\r\n import sqlglot\r\n\r\n sql_a = \"SELECT name, (SELECT COUNT(*) FROM orders WHERE orders.customer_id = customers.id) AS order_count FROM customers LIMIT 5, ... | d6bac3e54c6445c52daa04015b1b2e4a6933e682 | {
"head_commit": "9691676cd689d6691d864fd87db82444f621344d",
"head_commit_message": "Feat(tsql): transpile LIMIT with OFFSET properly",
"patch_to_review": "diff --git a/sqlglot/dialects/tsql.py b/sqlglot/dialects/tsql.py\nindex 45855772cb..c4e946012d 100644\n--- a/sqlglot/dialects/tsql.py\n+++ b/sqlglot/dialects/... | [
{
"diff_hunk": "@@ -328,6 +328,21 @@ def _json_extract_sql(\n return self.func(\"ISNULL\", json_query, json_value)\n \n \n+def _replace_limit_with_offset_fetch(expression: exp.Expression) -> exp.Expression:",
"line": null,
"original_line": 331,
"original_start_line": null,
"path": "sqlglot/d... | 14c89176e548848253b7682b4e37367bf03a9efb | diff --git a/sqlglot/dialects/tsql.py b/sqlglot/dialects/tsql.py
index 45855772cb..755360cc16 100644
--- a/sqlglot/dialects/tsql.py
+++ b/sqlglot/dialects/tsql.py
@@ -810,6 +810,22 @@ class Generator(generator.Generator):
exp.VolatileProperty: exp.Properties.Location.UNSUPPORTED,
}
+ def ... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
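The issue's offset case for PR #3145 above; T-SQL has no LIMIT, so the comma form must become `OFFSET ... ROWS FETCH ...` (approximate output shown):
```python
import sqlglot

print(sqlglot.transpile(
    "SELECT name FROM customers LIMIT 5, 2", read="sqlite", write="tsql"
)[0])
# roughly: ... ORDER BY (SELECT NULL) OFFSET 5 ROWS FETCH FIRST 2 ROWS ONLY
```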
tobymao__sqlglot-3096@7d81930 | tobymao/sqlglot | Python | 3,096 | Refactor!(optimizer): only create scopes for the DDL queries | We used to create Scopes for both Query and DDL expressions, but the logic that handled the latter was brittle, incomplete and resulted in another expression type that users of the Scope module had to take into account, thus making the interface more complex to use.
This makes it so that we'll only optimize the _que... | 2024-03-07T15:07:31Z | Lineage - not downstreaming to subquery expression
I'm trying to generate column-level lineage from a given INSERT statement.
I'm exploring the column RUN_DATE and expect to get the expression for the last available downstream node.
I expect the same expression at the last step of the walk for sql_nok as for sql_ok.
Name: sqlglot... | Hey @sunrutcon, we'll need more time to figure out how we want to deal with lineage on non-queries, because it will require a bit more work than initially expected. Until then, I'm marking this one as not planned. You can work around this issue by intercepting `DDL` expressions with queries (`expression`) and then comp... | [
{
"body": "I'm trying to generate column level lineage from given INSERT statement.\r\n\r\nI'm exploring column RUN_DATE, and expect to get expression for last available downstream.\r\n\r\nI expect to get same expression of last step in walk for sql_nok as in sql_ok.\r\n\r\n\r\nName: sqlglot\r\nVersion: 22.2.1... | 4fb74ff61effd9e5fa8593cdf1c9229d5106ab7e | {
"head_commit": "7d81930e33df69f18d9b08cecfda56a4359e74b3",
"head_commit_message": "Refactor!(optimizer): don't create scopes for DDLs\n\nWe used to create Scopes for both Query and DDL expressions, but the\nlogic that handled the latter was brittle, incomplete and resulted in\nanother expression type that users o... | [
{
"diff_hunk": "@@ -89,4 +94,8 @@ def qualify(\n if validate_qualify_columns:\n validate_qualify_columns_func(expression)\n \n+ if isinstance(original, exp.DDL) and isinstance(original.expression, exp.Query):",
"line": null,
"original_line": 97,
"original_start_line": null,
"path"... | b17750822c32c0f382bb237f5828cd58c9574b45 | diff --git a/setup.py b/setup.py
index ba18f646b2..74ca975679 100644
--- a/setup.py
+++ b/setup.py
@@ -32,6 +32,7 @@ def sqlglotrs_version():
"duckdb>=0.6",
"mypy",
"pandas",
+ "pandas-stubs",
"pyspark",
"python-dateutil",
"pdoc",
... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
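A sketch of the post-refactor behaviour described above: qualification targets the query embedded in a DDL (a CTAS here, with a hypothetical schema) rather than a special DDL scope.
```python
import sqlglot
from sqlglot.optimizer.qualify import qualify

ast = sqlglot.parse_one("CREATE TABLE tgt AS SELECT a FROM src")
print(qualify(ast, schema={"src": {"a": "int"}}).sql())
```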
tobymao__sqlglot-3137@e537fca | tobymao/sqlglot | Python | 3,137 | fix(duckdb): Slice + Array bug | Fixes #3136
Caught 3 consecutive bugs:
- When matching `R_BRACKET` or `R_BRACE` inside `parse_bracket`, the short-circuit should be flipped, otherwise a single `parse_bracket` call can match 2 closing tokens, e.g. in `{'x': arr[i]}`, the parsing of `arr[i]` will also match the following `}`
- When transformi... | 2024-03-13T19:34:37Z | valid duckdb query fails to parse
**Fully reproducible code snippet**
This query cannot be parsed by sqlglot, but it is a valid duckdb query:
```
In [18]: import sqlglot as sg, sqlglot.expressions as sge
In [19]: import duckdb
In [20]: sql = """
...: SELECT
...: LIST_APPLY(
...: RANGE(... | A smaller example to reproduce this issue `{x: arr[i]}`.
Ah, thanks! | [
{
"body": "**Fully reproducible code snippet**\r\n\r\nThis query cannot be parsed by sqlglot, but it is a valid duckdb query:\r\n\r\n```\r\nIn [18]: import sqlglot as sg, sqlglot.expressions as sge\r\n\r\nIn [19]: import duckdb\r\n\r\nIn [20]: sql = \"\"\"\r\n ...: SELECT\r\n ...: LIST_APPLY(\r\n ...... | a9db8ff6ac528da8c3a7a66f0b80a3f0d1a0ed7e | {
"head_commit": "e537fca54d9eff10e10a82eb4798281c1ab15690",
"head_commit_message": "Refactoring duckdb's bracket_sql",
"patch_to_review": "diff --git a/sqlglot/dialects/duckdb.py b/sqlglot/dialects/duckdb.py\nindex dab0775068..9837025ace 100644\n--- a/sqlglot/dialects/duckdb.py\n+++ b/sqlglot/dialects/duckdb.py\... | [
{
"diff_hunk": "@@ -711,6 +723,10 @@ def test_array_index(self):\n \"WARNING:sqlglot:Applying array index offset (1)\",\n \"WARNING:sqlglot:Applying array index offset (-1)\",\n \"WARNING:sqlglot:Applying array index offset (1)\",\n+ ... | cadab8c70a841c8d9db975b852afc61f0d074aed | diff --git a/sqlglot/dialects/duckdb.py b/sqlglot/dialects/duckdb.py
index dab0775068..9837025ace 100644
--- a/sqlglot/dialects/duckdb.py
+++ b/sqlglot/dialects/duckdb.py
@@ -571,3 +571,9 @@ def generateseries_sql(self, expression: exp.GenerateSeries) -> str:
return rename_func("RANGE")(self, expressio... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
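The smaller repro from the discussion above; a bracketed index inside a struct literal no longer consumes the closing brace:
```python
import sqlglot

ast = sqlglot.parse_one("SELECT {'x': arr[i]} AS s", read="duckdb")
print(ast.sql(dialect="duckdb"))
```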
tobymao__sqlglot-3100@7279401 | tobymao/sqlglot | Python | 3,100 | Feat(doris): add support for to_number(x) | fix [#3093](https://github.com/tobymao/sqlglot/issues/3093)
**Before you file an issue**
- Make sure you specify the "read" dialect eg. `parse_one(sql, read="spark")`
- Make sure you specify the "write" dialect eg. `ast.sql(dialect="duckdb")`
- Check if the issue still exists on main
**Fully reproducible code snipp... | This is a bit more nuanced than just simply converting `TO_NUMBER` into a cast at parse time, because there are more arguments involved and you need to preserve them. It's not high priority for us so it's out of scope right now, but feel free to make a PR if you need it. I'd suggest looking at how `TO_CHAR` is handled ... | [
{
"body": "**Before you file an issue**\r\n- Make sure you specify the \"read\" dialect eg. `parse_one(sql, read=\"spark\")`\r\n- Make sure you specify the \"write\" dialect eg. `ast.sql(dialect=\"duckdb\")`\r\n- Check if the issue still exists on main\r\n\r\n**Fully reproducible code snippet**\r\nPlease includ... | 8a34fb433bc33551febe96665e16668de73e5bd6 | {
"head_commit": "7279401ab912b2ce7b68d21da29ab52d20c32d1a",
"head_commit_message": "fix make style error",
"patch_to_review": "diff --git a/sqlglot/dialects/oracle.py b/sqlglot/dialects/oracle.py\nindex bccdad0f4f..dc358e5e9a 100644\n--- a/sqlglot/dialects/oracle.py\n+++ b/sqlglot/dialects/oracle.py\n@@ -97,6 +9... | [
{
"diff_hunk": "@@ -97,6 +97,7 @@ class Parser(parser.Parser):\n \"TO_CHAR\": _build_timetostr_or_tochar,\n \"TO_TIMESTAMP\": build_formatted_time(exp.StrToTime, \"oracle\"),\n \"TO_DATE\": build_formatted_time(exp.StrToDate, \"oracle\"),\n+ \"TO_NUMBER\": lambda a... | 29c17d59cf4ba02f4fe8fd6714f0edb26d79a49b | diff --git a/sqlglot/dialects/bigquery.py b/sqlglot/dialects/bigquery.py
index 79ea318698..c66517bed6 100644
--- a/sqlglot/dialects/bigquery.py
+++ b/sqlglot/dialects/bigquery.py
@@ -566,6 +566,7 @@ class Generator(generator.Generator):
IGNORE_NULLS_IN_FUNC = True
JSON_PATH_SINGLE_QUOTE_ESCAPE = True
... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
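A sketch of the parse PR #3100 above adds: `TO_NUMBER` becomes a structured node instead of an opaque function, so targets like Doris can generate something sensible (e.g. a decimal cast; exact output depends on the version):
```python
import sqlglot

print(sqlglot.transpile(
    "SELECT TO_NUMBER('1234.56') FROM dual", read="oracle", write="doris"
)[0])
```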
tobymao__sqlglot-3021@5dcced3 | tobymao/sqlglot | Python | 3,021 | Fix!(bigquery): preserve quoted table paths | Fixes #3014 | 2024-02-23T21:17:02Z | Unexpected unnest
**Before you file an issue**
- Make sure you specify the "read" dialect eg. parse_one(sql, read="spark")
- Check if the issue still exists on main
**Fully reproducible code snippet**
Please include a fully reproducible code snippet or the input sql, dialect, and expected output.
```
>>> impor... | these are equivalent so it looks right to me
~Your example contains backticks which is actually invalid BQ:~
```
-- Not found: Dataset ...:Coordinates was not found in location ...
WITH Coordinates AS (SELECT [1,2] AS position)
SELECT results FROM Coordinates, `Coordinates.position` AS results;
```
The resu... | [
{
"body": "**Before you file an issue**\r\n- Make sure you specify the \"read\" dialect eg. parse_one(sql, read=\"spark\")\r\n- Check if the issue still exists on main \r\n\r\n**Fully reproducible code snippet**\r\nPlease include a fully reproducible code snippet or the input sql, dialect, and expected output.\... | 17e34e79d22e3c8211f1bf42047d4ed3557628b6 | {
"head_commit": "5dcced370f95e11456b837bc2a166d147e3c39db",
"head_commit_message": "Improve coverage",
"patch_to_review": "diff --git a/sqlglot/dialects/bigquery.py b/sqlglot/dialects/bigquery.py\nindex b7db71d76c..0e22a219fd 100644\n--- a/sqlglot/dialects/bigquery.py\n+++ b/sqlglot/dialects/bigquery.py\n@@ -275... | [
{
"diff_hunk": "@@ -778,6 +780,11 @@ class Generator(generator.Generator):\n \"within\",\n }\n \n+ def table_sql(self, expression: exp.Table, sep: str = \" AS \") -> str:\n+ if expression.meta.get(\"quoted_table\"):",
"line": 792,
"original_line": 784,
"original... | 5ce2899ffd9329d33e17b97fe3d99185a904081e | diff --git a/sqlglot/dialects/bigquery.py b/sqlglot/dialects/bigquery.py
index b7db71d76c..fed488d98c 100644
--- a/sqlglot/dialects/bigquery.py
+++ b/sqlglot/dialects/bigquery.py
@@ -275,15 +275,16 @@ def normalize_identifier(self, expression: E) -> E:
# by default. The following check uses a heuristic to ... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
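A sketch of the behaviour PR #3021 above preserves: a single backtick-quoted, dot-separated BigQuery path round-trips verbatim instead of being re-split per part.
```python
import sqlglot

ast = sqlglot.parse_one("SELECT * FROM `my-project.my_dataset.my_table`", read="bigquery")
print(ast.sql(dialect="bigquery"))
```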
tobymao__sqlglot-2800@4db93f1 | tobymao/sqlglot | Python | 2,800 | Fix(snowflake): apply range parser after colon, if any | Fixes #2798 | 2024-01-09T16:42:25Z | ParseError when using LIKE/ILIKE on an element in an object in Snowflake
I'm getting `ParseError: Invalid expression / Unexpected token` when using `LIKE` or `ILIKE` on an element within an object in Snowflake.
Example:
```
import sqlglot
sqlglot.parse(""" select parse_json('{"x": "hello"}'):x like 'hello' """, r... | [
{
"body": "I'm getting `ParseError: Invalid expression / Unexpected token` when using `LIKE` or `ILIKE` on an element within an object in Snowflake.\r\n\r\nExample:\r\n```\r\nimport sqlglot\r\nsqlglot.parse(\"\"\" select parse_json('{\"x\": \"hello\"}'):x like 'hello' \"\"\", read=\"snowflake\")\r\nsqlglot.pars... | 18e07d3353c1e11cc5b3ba2025e4440f48c2be02 | {
"head_commit": "4db93f12c91c49cc37a86a5a4a452757d8318de8",
"head_commit_message": "Fix(snowflake): apply range parser after colon, if any",
"patch_to_review": "diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py\nindex ad14e6ee74..ddd36826e3 100644\n--- a/sqlglot/dialects/snowflake.py\n++... | [
{
"diff_hunk": "@@ -328,6 +328,10 @@ def _parse_colon_get_path(\n if not self._match(TokenType.COLON):\n break\n \n+ if self._match_set(self.RANGE_PARSERS):\n+ expression = self.RANGE_PARSERS[self._prev.token_type](self, this)\n+ this = expression if isinstance(expression, e... | b112715e0ed938a69a2b28b1661673b4cae804a4 | diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py
index ad14e6ee74..454df94c9e 100644
--- a/sqlglot/dialects/snowflake.py
+++ b/sqlglot/dialects/snowflake.py
@@ -328,6 +328,9 @@ def _parse_colon_get_path(
if not self._match(TokenType.COLON):
break
+ if self._match_set... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
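The issue's repro for PR #2800 above; `LIKE` after a `:` path now parses (the generated SQL may normalize the path access):
```python
import sqlglot

ast = sqlglot.parse_one(
    """SELECT PARSE_JSON('{"x": "hello"}'):x LIKE 'hello'""", read="snowflake"
)
print(ast.sql(dialect="snowflake"))
```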
tobymao__sqlglot-2476@0da6e41 | tobymao/sqlglot | Python | 2,476 | Feat(postgres): add support for the PARTITION OF property in CREATE | Fixes #2469
Reference: https://www.postgresql.org/docs/current/sql-createtable.html | 2023-10-28T02:01:40Z | feat(postgres): create partition tables
**Is your feature request related to a problem? Please describe.**
Be able to parse statements like: `CREATE TABLE cust_part3 PARTITION OF customers FOR VALUES WITH (modulus 3, remainder 2)`
```python
>>> sqlglot.parse_one('CREATE TABLE cust_part3 PARTITION OF customers FOR ... | I can take a look into this soon | [
{
"body": "**Is your feature request related to a problem? Please describe.**\r\nBe able to parse statements like: `CREATE TABLE cust_part3 PARTITION OF customers FOR VALUES WITH (modulus 3, remainder 2)`\r\n\r\n```python\r\n>>> sqlglot.parse_one('CREATE TABLE cust_part3 PARTITION OF customers FOR VALUES WITH (... | a1252d8ba7d2394bbb14ccd42d835da8cd4eb740 | {
"head_commit": "0da6e41beac320830d42ff3995a99f66caf32e14",
"head_commit_message": "Revert arg type",
"patch_to_review": "diff --git a/sqlglot/dialects/postgres.py b/sqlglot/dialects/postgres.py\nindex 30e8b0a8cb..14fdcb26f6 100644\n--- a/sqlglot/dialects/postgres.py\n+++ b/sqlglot/dialects/postgres.py\n@@ -134,... | [
{
"diff_hunk": "@@ -1743,6 +1744,55 @@ def _parse_partition_by(self) -> t.List[exp.Expression]:\n return self._parse_csv(self._parse_conjunction)\n return []\n \n+ def _parse_partition_bound_spec(self) -> exp.PartitionBoundSpec:\n+ def _parse_partition_bound_expr() -> t.Optional[ex... | 5cd1482b4c35e8cd2d57e9d1d4d3079b29b41c68 | diff --git a/sqlglot/dialects/postgres.py b/sqlglot/dialects/postgres.py
index 30e8b0a8cb..14fdcb26f6 100644
--- a/sqlglot/dialects/postgres.py
+++ b/sqlglot/dialects/postgres.py
@@ -134,7 +134,9 @@ def _auto_increment_to_serial(expression: exp.Expression) -> exp.Expression:
def _serial_to_generated(expression: ex... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
} |
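A minimal sketch for the PARTITION OF record above, assuming a sqlglot release with this feature; exact output formatting may differ by version:

```python
import sqlglot

# This DDL previously raised "ParseError: Expected table name but got None".
ddl = "CREATE TABLE cust_part3 PARTITION OF customers FOR VALUES WITH (MODULUS 3, REMAINDER 2)"
print(sqlglot.transpile(ddl, read="postgres")[0])  # expected to round-trip
```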
tobymao__sqlglot-2358@2bec4f8 | tobymao/sqlglot | Python | 2,358 | Fix(hive): don't generate BYTE when transpiling Oracle's VARCHAR(5 BYTE) | Fixes #2356
This shouldn't be breaking, I remember adding support for the `expression` argument only to handle Oracle's `BYTE`, `CHAR` type specifiers, see https://github.com/tobymao/sqlglot/commit/3ee16c5e8 | 2023-10-02T15:27:23Z | Handle BYTE statement in Oracle DDL to Spark
Using sqlglot to transpile some Oracle DDL to Spark that I have from a customer. The transpiled code changed the `VARCHAR2(5 BYTE)` to `VARCHAR(5 BYTE)`, but the DDL still fails in Spark because of the `(5 BYTE)`. Once the word BYTE was removed, the DDL executed fine. |
{
"body": "using sqlglot to transpile some Oracle DDL to Spark that I have from a customer. The modified code changed the `VARCHAR2(5 BYTE)` to `VACHAR(5 BYTE)` but the DDL still fails in Spark with the `(5 BYTE)`. Once removing the word BYTE the DDL executed fine.\r\n",
"number": 2356,
"title": "Han... | 5fb71743d9274b7e0e825a761be3672c6299e453 | {
"head_commit": "2bec4f83fb60240117320a1ebbc5b13f2386d759",
"head_commit_message": "Fix(hive): don't generate BYTE when transpiling Oracle's VARCHAR(5 BYTE)",
"patch_to_review": "diff --git a/sqlglot/dialects/hive.py b/sqlglot/dialects/hive.py\nindex 3f925a708f..828ea32386 100644\n--- a/sqlglot/dialects/hive.py\... | [
{
"diff_hunk": "@@ -562,13 +562,18 @@ def datatype_sql(self, expression: exp.DataType) -> str:\n expression = exp.DataType.build(\"text\")\n elif expression.this in exp.DataType.TEMPORAL_TYPES:\n expression = exp.DataType.build(expression.this)\n- elif expr... | fc67eb77bd0f33054c7d0907c5e76a1fdf78bc9b | diff --git a/sqlglot/dialects/oracle.py b/sqlglot/dialects/oracle.py
index 61ba8b861f..6a007ab47e 100644
--- a/sqlglot/dialects/oracle.py
+++ b/sqlglot/dialects/oracle.py
@@ -153,6 +153,7 @@ class Generator(generator.Generator):
JOIN_HINTS = False
TABLE_HINTS = False
COLUMN_JOIN_MARKS_SUPPORT... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
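A minimal sketch for the Oracle BYTE record above, assuming a release that includes this fix; the expected output in the comment is an assumption:

```python
import sqlglot

ddl = "CREATE TABLE t (c VARCHAR2(5 BYTE))"
# The BYTE length specifier is valid in Oracle but not in Spark SQL,
# so it should be dropped on the way out while keeping the size.
print(sqlglot.transpile(ddl, read="oracle", write="spark")[0])
# expected shape: CREATE TABLE t (c VARCHAR(5))
```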
tobymao__sqlglot-2242@0b9f5f9 | tobymao/sqlglot | Python | 2,242 | Feat: eliminate semi/anti joins transformation | Fixes #2240 | 2023-09-16T23:20:13Z | transform semi and anti join to exists and not exists for backends that don't have semi/anti join syntax
**Is your feature request related to a problem? Please describe.**
Not really related to a problem, except that I can't take a semi join query from a backend that supports them to one that doesn't.
**Describe ... | [
{
"body": "**Is your feature request related to a problem? Please describe.**\r\n\r\nNot really related to a problem, except that I can't take a semi join query from a backend that supports them to one that doesn't.\r\n\r\n**Describe the solution you'd like**\r\n\r\nI'd like sqlglot to take this query:\r\n\r\n`... | a2222b8e450e5a6d3ac1c1d349bc0218dd05b351 | {
"head_commit": "0b9f5f9461c934152caa77e0125d7e5779c50833",
"head_commit_message": "Feat: eliminate semi/anti joins transformation",
"patch_to_review": "diff --git a/sqlglot/dialects/bigquery.py b/sqlglot/dialects/bigquery.py\nindex f3cb53a4af..ab5d7b333b 100644\n--- a/sqlglot/dialects/bigquery.py\n+++ b/sqlglot... | [
{
"diff_hunk": "@@ -118,7 +118,7 @@ class Parser(parser.Parser):\n TokenType.ARRAY,\n }\n \n- TABLE_ALIAS_TOKENS = {*parser.Parser.TABLE_ALIAS_TOKENS} - {\n+ TABLE_ALIAS_TOKENS = parser.Parser.TABLE_ALIAS_TOKENS.copy() - {",
"line": null,
"original_line": 121,
"orig... | d15b226dcdb5387a706bee61bbcd8ba2664d3c7a | diff --git a/sqlglot/dialects/bigquery.py b/sqlglot/dialects/bigquery.py
index f3cb53a4af..ab5d7b333b 100644
--- a/sqlglot/dialects/bigquery.py
+++ b/sqlglot/dialects/bigquery.py
@@ -462,6 +462,7 @@ class Generator(generator.Generator):
_unqualify_unnest,
transforms.eliminate_d... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
} | |
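A minimal sketch of the semi/anti join elimination described above, assuming a release where the presto dialect applies this transform; the EXISTS rewrite shown in the comment is the expected shape, not verified output:

```python
import sqlglot

sql = "SELECT * FROM t1 LEFT SEMI JOIN t2 ON t1.x = t2.x"
# Dialects without SEMI/ANTI join syntax should receive an EXISTS rewrite,
# e.g. SELECT * FROM t1 WHERE EXISTS(SELECT 1 FROM t2 WHERE t1.x = t2.x).
print(sqlglot.transpile(sql, read="spark", write="presto")[0])
```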
stanfordnlp__dspy-8124@cd235a0 | stanfordnlp/dspy | Python | 8,124 | Add unit tests for ParallelExecutor class in dspy.utils.parallelizer | This PR adds additional test cases for the ParallelExecutor:
- Ensures worker threads maintain independence.
- Validates parallel execution speed.
- Tests error handling with max_errors.
All tests pass locally.
Fixes: #8122 | 2025-04-26T11:32:43Z | Add unit test for dspy.Parallel
This class is used to run DSPy modules in parallel in a thread-safe way. The code is located in `dspy/utils/parallelizer.py`, a sample usage is like below:
```
import dspy
dspy.settings.configure(lm=dspy.LM("openai/gpt-4o-mini"))
cot = dspy.ChainOfThought("question->answer")
parallel... | [
{
"body": "This class is used to run DSPy modules in parallel in a thread-safe way. The code is located in `dspy/utils/parallelizer.py`, a sample usage is like below:\n\n```\nimport dspy\n\ndspy.settings.configure(lm=dspy.LM(\"openai/gpt-4o-mini\"))\n\ncot = dspy.ChainOfThought(\"question->answer\")\n\nparallel... | 44ffb3e0e025e0f05fdcaa31bec13ec5ac791f3c | {
"head_commit": "cd235a06cc93efa6dd888d9e5bd8779a328ecca4",
"head_commit_message": "Add unit tests for ParallelExecutor class in dspy.utils.parallelizer",
"patch_to_review": "diff --git a/tests/utils/test_parallelizer.py b/tests/utils/test_parallelizer.py\nnew file mode 100644\nindex 0000000000..db0f236205\n--- ... | [
{
"diff_hunk": "@@ -0,0 +1,40 @@\n+import time\n+import pytest\n+from dspy.utils.parallelizer import ParallelExecutor\n+\n+def test_worker_threads_independence():\n+ def task(item):\n+ # Each thread maintains its own state by appending to a thread-local list\n+ return item * 2\n+\n+ data = [... | 8c17454a4b2255ed14e79f29c388cedfb415f3eb | diff --git a/tests/utils/test_parallelizer.py b/tests/utils/test_parallelizer.py
new file mode 100644
index 0000000000..f1ec3c8011
--- /dev/null
+++ b/tests/utils/test_parallelizer.py
@@ -0,0 +1,59 @@
+import time
+import pytest
+from dspy.utils.parallelizer import ParallelExecutor
+
+
+def test_worker_threads_independ... | {
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Test Suite / CI Enhancements"
} | |
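A minimal usage sketch for the class under test above. The `num_threads` argument and the `execute(fn, data)` method are assumptions inferred from the record's test diff, not a documented API:

```python
from dspy.utils.parallelizer import ParallelExecutor

# Assumed signature: ParallelExecutor(num_threads=...) with execute(fn, data)
# returning one result per input item, in input order.
executor = ParallelExecutor(num_threads=4)
results = executor.execute(lambda x: x * 2, [0, 1, 2, 3])
print(results)  # expected: [0, 2, 4, 6]
```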
stanfordnlp__dspy-8003@2a1d0f2 | stanfordnlp/dspy | Python | 8,003 | Change the output interface of evaluate | In this PR, we change the output interface of `Evaluate.__call__`.
Instead of returning either score, (score, outputs), (score, scores, outputs) based on arguments, it will always return EvaluationResult containing the following fields:
- score: A float percentage score (e.g., 67.30) representing overall performanc... | 2025-03-24T09:55:13Z | [FR] DSPy : Log examples evaluation scores
### Willingness to contribute
Yes. I can contribute this feature independently.
### Proposal Summary
The current implementation only logs the overall evaluation score, while individual scores for each example are not captured or recorded.
### Motivation
> #### What is the... | @TomeHirata I thought the autologging records the evaluation result table that includes scores for individual samples
https://github.com/mlflow/mlflow/blob/a68ee413624f5067ec33fa05ba1785fab613fa24/mlflow/dspy/callback.py#L271
@Nasreddine Could you share an example code where the row-level result is not available? The... | [
{
"body": "### Willingness to contribute\n\nYes. I can contribute this feature independently.\n\n### Proposal Summary\n\nThe current implementation only logs the overall evaluation score, while individual scores for each example are not captured or recorded.\n\n### Motivation\n\n> #### What is the use case for ... | 42c50c88f76347ca51d600c480155bf319526b3b | {
"head_commit": "2a1d0f2a25197881de379dfc6b3dc42f51ba838e",
"head_commit_message": "introduce EvaluationResult class",
"patch_to_review": "diff --git a/docs/docs/tutorials/agents/index.ipynb b/docs/docs/tutorials/agents/index.ipynb\nindex 8b5521fb66..ef7cc56c45 100644\n--- a/docs/docs/tutorials/agents/index.ipyn... | [
{
"diff_hunk": "@@ -41,6 +41,18 @@ def HTML(x: str) -> str: # noqa: N802\n logger = logging.getLogger(__name__)\n \n \n+class EvaluationResult(dspy.Prediction):\n+ \"\"\"\n+ A class that represents the result of an evaluation.\n+ It is a subclass of `dspy.Prediction` that contains the following fields... | 0c1e0a15894f3d27fd4537209e6aa93a74a36b38 | diff --git a/docs/docs/tutorials/agents/index.ipynb b/docs/docs/tutorials/agents/index.ipynb
index 8b5521fb66..ef7cc56c45 100644
--- a/docs/docs/tutorials/agents/index.ipynb
+++ b/docs/docs/tutorials/agents/index.ipynb
@@ -500,23 +500,20 @@
" metric=top5_recall,\n",
" num_threads=16,\n",
" ... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
} |
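A minimal sketch of the new `EvaluationResult` interface described above, assuming a dspy release that includes this PR; the `results` field name and per-example triple layout are assumptions based on the (truncated) PR description:

```python
import dspy
from dspy.evaluate import Evaluate

def exact_match(example, prediction, trace=None):
    return example.answer == prediction.answer

class Echo(dspy.Module):
    # Trivial program that never calls an LM, so the sketch runs standalone.
    def forward(self, question):
        return dspy.Prediction(answer=question)

devset = [dspy.Example(question="hi", answer="hi").with_inputs("question")]
result = Evaluate(devset=devset, metric=exact_match)(Echo())
print(result.score)    # aggregate percentage score, e.g. 100.0
print(result.results)  # assumed per-example (example, prediction, score) entries
```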
stanfordnlp__dspy-7861@e4f2c00 | stanfordnlp/dspy | Python | 7,861 | Fix save predict model with LM | This PR enables the serialization of dspy.LM to fix the issue of model saving.
Resolves #7852 | 2025-02-27T04:13:16Z | [Bug] save() is broken when predict has lm instance
### What happened?
save() is broken when predict has lm assigned. See code below for reproducing.
To fix it, we can serialize the model name, then at loading time, reconstruct the lm instance by calling `dspy.LM(saved_model_name)`
### Steps to reproduce
```
>>> im... | [
{
"body": "### What happened?\n\nsave() is broken when predict has lm assigned. See code below for reproducing.\n\nTo fix it, we can serialize the model name, then at loading time, reconstruct the lm instance by calling `dspy.LM(saved_model_name)`\n\n### Steps to reproduce\n\n```\n>>> import dspy\n>>> predict =... | c45517284f3b2f5f95c9e7b5fa8627d09c920082 | {
"head_commit": "e4f2c008e12f40b6c184caef19f9cbc52c29dac0",
"head_commit_message": "fix test_lm_after_dump_and_load_state\n\nSigned-off-by: TomuHirata <tomu.hirata@gmail.com>",
"patch_to_review": "diff --git a/dspy/clients/lm.py b/dspy/clients/lm.py\nindex 6f5468b84e..617f532230 100644\n--- a/dspy/clients/lm.py\... | [
{
"diff_hunk": "@@ -246,6 +246,10 @@ def copy(self, **kwargs):\n new_instance.kwargs[key] = value\n \n return new_instance\n+ \n+ def dump_state(self):\n+ # TODO: callbacks cannot be saved. We should consider how to save callbacks.\n+ return { k: v for k, v in self.__... | fd29bb0f4ec462f402174dc945849fdf2ebbbe9c | diff --git a/dspy/clients/lm.py b/dspy/clients/lm.py
index 6f5468b84e..47e69f95f2 100644
--- a/dspy/clients/lm.py
+++ b/dspy/clients/lm.py
@@ -246,6 +246,10 @@ def copy(self, **kwargs):
new_instance.kwargs[key] = value
return new_instance
+
+ def dump_state(self):
+ state_keys ... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
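A minimal sketch of the save/load path this record fixes, assuming `dspy.Predict.save`/`load` with a JSON path and the `lm` attribute assignment shown in the issue report; whether `loaded.lm` is fully reconstructed is an expectation, not verified:

```python
import dspy

predict = dspy.Predict("question -> answer")
predict.lm = dspy.LM("openai/gpt-4o-mini")  # attribute assignment, as in the report

predict.save("predict.json")  # previously raised TypeError during serialization
loaded = dspy.Predict("question -> answer")
loaded.load("predict.json")
print(loaded.lm)  # the LM is expected to be rebuilt from the saved state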
tobymao__sqlglot-2151@c1aa2be | tobymao/sqlglot | Python | 2,151 | Fix(hive): parse <number> <date_part> as an interval instead of an alias | Fixes #2123
Tested in https://demo.gethue.com/hue/editor/?type=hiv (enter `demo` for both the username and the password):
<img width="1165" alt="Screenshot 2023-09-04 at 7 15 59 PM" src="https://github.com/tobymao/sqlglot/assets/46752250/b1095ff1-2882-44d1-9b0b-141884d09742">
The date parts were chosen based o... | 2023-09-04T16:16:55Z | (hive) hive parsing error, date addition and subtraction are parsed into aliases
```
sqlglot.parse_one("""cast('1998-01-06' as date) + 30 days""", "hive")
TRY_CAST('1998-01-06' AS DATE) + 30 AS days
```
The syntax tree is as follows:
Parse days into an alias
[(ALIAS this:
(ADD this:
(TRYCAST this:
(L... | this is the correct behavior.
```
>>> spark.sql("select cast('1998-01-06' as date) + 30 year ").collect()
[Row(year=datetime.date(1998, 2, 5))]
```
In this case I change it to year, and you can see Spark returns the column name year as an alias.
@tobymao In hive, this does not seem to be in the form of an alias,... | [
{
"body": "`sqlglot.parse_one(\"\"\"cast('1998-01-06' as date) + 30 days\"\"\", \"hive\")\r\nTRY_CAST('1998-01-06' AS DATE) + 30 AS days\r\n\r\nThe syntax tree is as follows:\r\nParse days into an alias\r\n\r\n[(ALIAS this: \r\n (ADD this: \r\n (TRYCAST this: \r\n (LITERAL this: 1998-01-06, is_string: ... | 7b589ae2ce4071f004b0cd3eedab5fca4deb79bb | {
"head_commit": "c1aa2be516e9db9c2c030c6678ddb7b02f518b34",
"head_commit_message": "Fix(hive): parse <number> <date_part> as an interval instead of an alias",
"patch_to_review": "diff --git a/sqlglot/dialects/hive.py b/sqlglot/dialects/hive.py\nindex 6bb6cfbba3..cbe8b9e869 100644\n--- a/sqlglot/dialects/hive.py\... | [
{
"diff_hunk": "@@ -47,9 +47,31 @@\n \"HOUR\": \" / 3600\",\n }\n \n+INTERVAL_VARS = {\n+ \"SECOND\",\n+ \"SECONDS\",\n+ \"MINUTE\",\n+ \"MINUTES\",\n+ \"DAY\",\n+ \"DAYS\",\n+ \"MONTH\",\n+ \"MONTHS\",\n+ \"YEAR\",\n+ \"YEARS\",\n+}\n+\n+\n DIFF_MONTH_SWITCH = (\"YEAR\", \"QUA... | 74615d08b13c508c5b5a59f356ecb3d1a1389161 | diff --git a/sqlglot/dialects/hive.py b/sqlglot/dialects/hive.py
index 6bb6cfbba3..61d87bb850 100644
--- a/sqlglot/dialects/hive.py
+++ b/sqlglot/dialects/hive.py
@@ -47,9 +47,31 @@
"HOUR": " / 3600",
}
+INTERVAL_VARS = {
+ "SECOND",
+ "SECONDS",
+ "MINUTE",
+ "MINUTES",
+ "DAY",
+ "DAYS",
+ ... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
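A minimal sketch for the Hive interval record above, assuming a release with this fix; the rendered SQL is an expectation, not verified output:

```python
import sqlglot
from sqlglot import exp

expr = sqlglot.parse_one("SELECT CAST('1998-01-06' AS DATE) + 30 days", read="hive")
# Expect an exp.Interval node instead of an exp.Alias named "days".
print(repr(expr.find(exp.Interval)))
print(expr.sql(dialect="hive"))  # expected to render an INTERVAL expression
```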
tobymao__sqlglot-2035@430c4cf | tobymao/sqlglot | Python | 2,035 | Feat(presto,oracle): add support for INTERVAL span types | Fixes #2027 | 2023-08-11T12:14:31Z | parsing trino `interval day to second` and `interval year to month` into a type doesn't work
```
In [9]: import sqlglot as sg
In [10]: sg.__version__
Out[10]: '17.9.1'
In [11]: sg.parse_one("interval day to second", into=sg.exp.DataType, read="trino")
...
ParseError: Failed to parse 'interval day to second' i... | [
{
"body": "```\r\nIn [9]: import sqlglot as sg\r\n\r\nIn [10]: sg.__version__\r\nOut[10]: '17.9.1'\r\n\r\nIn [11]: sg.parse_one(\"interval day to second\", into=sg.exp.DataType, read=\"trino\")\r\n...\r\nParseError: Failed to parse 'interval day to second' into <class 'sqlglot.expressions.DataType'>\r\n\r\nIn [... | 0659dcb883b26cb5379e9b577b86aa5868a7079a | {
"head_commit": "430c4cfff165a5d08288faf8a09bdbc4a0d9f58a",
"head_commit_message": "Feat(presto,oracle): add support for INTERVAL span types",
"patch_to_review": "diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py\nindex edd72ea6bb..ee973a294e 100644\n--- a/sqlglot/expressions.py\n+++ b/sqlglot/express... | [
{
"diff_hunk": "@@ -3184,10 +3184,17 @@ def _parse_types(\n elif self._match_text_seq(\"WITHOUT\", \"TIME\", \"ZONE\"):\n maybe_func = False\n elif type_token == TokenType.INTERVAL:\n- unit = self._parse_var()\n+ span: t.Optional[t.List[exp.Expression]] ... | 182db40ee7954b135ccab82ca3fc9fd12478b9f2 | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index edd72ea6bb..ee973a294e 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -3859,6 +3859,18 @@ def __init__(self, **args):
super().__init__(**args)
+# https://www.oracletutorial.com/oracle-basics/oracle-interval/
+# https://... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
} | |
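A minimal sketch for the INTERVAL span-type record above, reusing the reproduction from the issue; it assumes a release that includes this feature:

```python
import sqlglot as sg

# Previously: ParseError: Failed to parse 'interval day to second' into DataType.
dt = sg.parse_one("interval day to second", into=sg.exp.DataType, read="trino")
print(repr(dt))  # expected: a DataType carrying the DAY TO SECOND span
```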
tobymao__sqlglot-2077@93dd710 | tobymao/sqlglot | Python | 2,077 | Fix!(optimizer): remove redundant parens during subquery elimination | Fixes #2075 | 2023-08-16T18:53:11Z | Optimizer is incorrect when superfluous parens wrap subqueries
```
from sqlglot import parse_one
from sqlglot.optimizer import optimize
from sqlglot.dialects import Snowflake
sql = """
SELECT
("SUBQUERY_0"."KEY") AS "SUBQUERY_1_COL_0"
FROM
(
SELECT
*
FROM
((( -- <-------------... | [
{
"body": "```\r\nfrom sqlglot import parse_one\r\nfrom sqlglot.optimizer import optimize\r\nfrom sqlglot.dialects import Snowflake\r\n\r\nsql = \"\"\"\r\nSELECT \r\n (\"SUBQUERY_0\".\"KEY\") AS \"SUBQUERY_1_COL_0\"\r\nFROM \r\n (\r\n SELECT \r\n * \r\n FROM \r\n ((( -- <-------------- ***note... | c5dc9acdfeb715de1c219eb228fe2dda1b9af497 | {
"head_commit": "93dd71054acfda836cd32d8c40fa7d202526dde2",
"head_commit_message": "Fix!(optimizer): remove redundant parens during subquery elimination",
"patch_to_review": "diff --git a/sqlglot/optimizer/eliminate_subqueries.py b/sqlglot/optimizer/eliminate_subqueries.py\nindex af42f25aca..72bb200852 100644\n-... | [
{
"diff_hunk": "@@ -142,13 +142,21 @@ def _eliminate_derived_table(scope, existing_ctes, taken):\n if scope.parent.pivots or isinstance(scope.parent.expression, exp.Lateral):\n return None\n \n- parent = scope.expression.parent\n+ # Get rid of redundant exp.Subquery expressions, i.e. those tha... | 404fd9a06624c46d2f3dd51aa7419ad6bfa6889b | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index 28174dd903..9672b8e0a2 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -3263,6 +3263,23 @@ def unnest(self):
expression = expression.this
return expression
+ def unwrap(self) -> Subquery:
+ expressio... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
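A minimal sketch for the redundant-parentheses record above, assuming a release with this fix; no schema is supplied, so the star is left unexpanded:

```python
import sqlglot
from sqlglot.optimizer import optimize

sql = "SELECT * FROM (((SELECT * FROM t)))"
# The redundant nested parentheses should collapse into a single derived
# table instead of yielding an invalid grouped expression.
print(optimize(sqlglot.parse_one(sql)).sql())
```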
stanfordnlp__dspy-1609@86307ff | stanfordnlp/dspy | Python | 1,609 | Fix TypedPredictor formatting with list output values | Fixes #1567 | 2024-10-09T22:48:50Z | DSPy 2.5 + TypedPredictor with List[str] outputs throws during optimization, but inference works
**Script**
```
from typing import List, Literal, Dict, Any, Optional
import dspy
from datasets import load_dataset
from pydantic import BaseModel, Field
llm = dspy.LM(model="databricks/databricks-meta-llama-3-1-... | Thanks @dbczumar ! This is actually an interesting problem.
The way to debug this is to do `dspy.inspect_history(n=5)` at the end to view the last five prompts.
When you do that, you see:
```
System message:
Your input fields are:
1. `text` (str): Text to tokenize
Your output fields are:
1. `tokens` (li... | [
{
"body": "**Script**\r\n\r\n```\r\nfrom typing import List, Literal, Dict, Any, Optional\r\n\r\nimport dspy\r\nfrom datasets import load_dataset\r\nfrom pydantic import BaseModel, Field\r\n\r\nllm = dspy.LM(model=\"databricks/databricks-meta-llama-3-1-70b-instruct\")\r\ndspy.settings.configure(lm=llm)\r\n\r\n#... | 16ceaba5f7126ca86e5b50b669975c926b9f8f55 | {
"head_commit": "86307ff9c639b01e856faf2d2d5e632cad11ce3e",
"head_commit_message": "name\n\nSigned-off-by: dbczumar <corey.zumar@databricks.com>",
"patch_to_review": "diff --git a/dspy/adapters/chat_adapter.py b/dspy/adapters/chat_adapter.py\nindex 5a20dcff32..e2172a80e3 100644\n--- a/dspy/adapters/chat_adapter.... | [
{
"diff_hunk": "@@ -452,6 +453,22 @@ class TestSignature(dspy.Signature):\n assert output == [0, 1, 2]\n \n \n+def test_list_inputs_and_outputs():\n+ lm = DummyLM([{\"output\": [\"0\", \"1\", \"2\"]}])",
"line": null,
"original_line": 457,
"original_start_line": null,
"path": "tests/funct... | 8664e0d0908b0f83854daca1932ab1ae16db6182 | diff --git a/dspy/adapters/chat_adapter.py b/dspy/adapters/chat_adapter.py
index 5a20dcff32..e2172a80e3 100644
--- a/dspy/adapters/chat_adapter.py
+++ b/dspy/adapters/chat_adapter.py
@@ -2,16 +2,34 @@
import json
import re
import textwrap
-from typing import get_args, get_origin
+from typing import Any, Dict, KeysVi... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-2096@1794bf3 | tobymao/sqlglot | Python | 2,096 | Feat!: add support for casting to user defined types | Fixes #2091
References:
- https://docs.oracle.com/cd/E11882_01/appdev.112/e25519/create_type.htm
- https://www.postgresql.org/docs/current/sql-createtype.html (Redshift doesn't support it)
- https://github.com/prestodb/presto/issues/16324 (Trino doesn't support it)
- https://docs.teradata.com/r/76g1CuvvQlYBjb2WP... | 2023-08-18T16:23:19Z | add support for table cast to parse the tables correctly.
Hello,
Parser is unable to parse the query when there is a table cast function used in the query.
**Code for your reference -**
```
import sqlglot.expressions as exp
query ="""
SELECT count(1)
INTO V_Temp
FROM TABLE(CAST(somelist as data_list))
... | Hey, appreciate you taking the time to file these issues, but you also need to provide documentation for non-trivial things, such as this `TABLE` function. From the issue creation template:
> Official Documentation Please include links to official SQL documentation related to your issue.
What is `data_list` in yo... | [
{
"body": "Hello,\r\n\r\nParser is unable to parse the query when there is a table cast function used in the query.\r\n\r\n**Code for your reference -**\r\n\r\n```\r\nimport sqlglot.expressions as exp\r\n\r\nquery =\"\"\"\r\nSELECT count(1)\r\nINTO V_Temp\r\nFROM TABLE(CAST(somelist as data_list))\r\nWHERE col ... | a20794ab986b3b6401b016132cf2c5e36d50f4a9 | {
"head_commit": "1794bf3ae7203daaa3199be139d931bb4c1dbee1",
"head_commit_message": "Remove redundant test",
"patch_to_review": "diff --git a/README.md b/README.md\nindex 7bb895d5a8..6fa89d50f5 100644\n--- a/README.md\n+++ b/README.md\n@@ -178,7 +178,7 @@ for table in parse_one(\"SELECT * FROM x JOIN y JOIN z\").... | [
{
"diff_hunk": "@@ -488,7 +490,7 @@ def datatype_sql(self, expression: exp.DataType) -> str:\n expression = exp.DataType.build(\"text\")\n elif expression.this in exp.DataType.TEMPORAL_TYPES:\n expression = exp.DataType.build(expression.this)\n- elif expres... | 2235db2574d20057f171cb71e61de05e4f869950 | diff --git a/README.md b/README.md
index 7bb895d5a8..6fa89d50f5 100644
--- a/README.md
+++ b/README.md
@@ -178,7 +178,7 @@ for table in parse_one("SELECT * FROM x JOIN y JOIN z").find_all(exp.Table):
### Parser Errors
-When the parser detects an error in the syntax, it raises a ParserError:
+When the parser detect... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
} |
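A minimal sketch for the user-defined-type record above, assuming a release with this change; the user-defined DataType kind printed in the comment is an expectation:

```python
import sqlglot
from sqlglot import exp

# Previously: ParseError "Expected TYPE after CAST" for non-builtin types.
expr = sqlglot.parse_one("SELECT CAST(somelist AS data_list) FROM dual", read="oracle")
print(repr(expr.find(exp.DataType)))  # expected: a user-defined DataType
```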
tobymao__sqlglot-1988@ff571c4 | tobymao/sqlglot | Python | 1,988 | Fix(optimizer): wrap scalar subquery replacement in a MAX call | Fixes #1987
Tested this in SQL fiddle, seems to be good. | 2023-08-01T16:31:29Z | Nested Query Bug - Column '_u_0._col_0' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
<h3>Code snippet:</h3>
```python
import sqlglot
from sqlglot.optimizer import optimize, RULES as optimize_rules
DCT_RULES = {x.__name__ : x for x in optimize_ru... | Thanks for the report, we'll take a look | [
{
"body": "<h3>Code snippet:</h3>\r\n\r\n```python\r\nimport sqlglot\r\nfrom sqlglot.optimizer import optimize, RULES as optimize_rules\r\nDCT_RULES = {x.__name__ : x for x in optimize_rules}\r\nDCT_RULES.pop('quote_identifiers')\r\noptimize_rules = tuple(list(DCT_RULES.values()))\r\n\r\nif __name__ == '__main_... | a81ce337909b6443741f56656d3d789e080fcb88 | {
"head_commit": "ff571c45643d69f24da96583ffd1c3b612cf14e6",
"head_commit_message": "Fix(optimizer): wrap scalar subquery replacement in a MAX call",
"patch_to_review": "diff --git a/sqlglot/optimizer/unnest_subqueries.py b/sqlglot/optimizer/unnest_subqueries.py\nindex 09e3f2aa4e..9d17b25ab8 100644\n--- a/sqlglot... | [
{
"diff_hunk": "@@ -46,20 +46,20 @@ def unnest(select, parent_select, next_alias_name):\n if not predicate or parent_select is not predicate.parent_select:\n return\n \n- # this subquery returns a scalar and can just be converted to a cross join\n+ # This subquery returns a scalar and can just... | 18e0d9af13aa4ee8297e87407311eb30f960f1a4 | diff --git a/sqlglot/optimizer/unnest_subqueries.py b/sqlglot/optimizer/unnest_subqueries.py
index 09e3f2aa4e..816f5fb326 100644
--- a/sqlglot/optimizer/unnest_subqueries.py
+++ b/sqlglot/optimizer/unnest_subqueries.py
@@ -46,20 +46,24 @@ def unnest(select, parent_select, next_alias_name):
if not predicate or pare... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
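A minimal sketch of the scalar-subquery fix above, assuming a release where `unnest_subqueries` wraps the projected column; the hypothetical schema below exists only to drive the optimizer:

```python
import sqlglot
from sqlglot.optimizer import optimize

sql = "SELECT (SELECT y.b FROM y WHERE y.a = x.a) FROM x"
schema = {"x": {"a": "int"}, "y": {"a": "int", "b": "int"}}
# The unnested scalar column is expected to be wrapped in MAX(...) so the
# rewritten grouped join remains valid SQL.
print(optimize(sqlglot.parse_one(sql), schema=schema).sql())
```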
tobymao__sqlglot-1961@5bd3ed9 | tobymao/sqlglot | Python | 1,961 | Feat(mysql): improve support for DDL index column constraints | Fixes #1959 -- still a WIP. An alternative I considered for _index_options_ was to create a separate class for each option. That way we'd avoid the if checks in the generator, but we'd have a handful more expression types to deal with, so wasn't sure and went ahead with this approach instead because it felt simple. Wil... | 2023-07-27T00:16:15Z | getting error for foreign key constrain for mysql
code = from sqlglot import parse_one,dialects
sql = """
CREATE TABLE IF NOT EXISTS `industry_info` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`industry_id` bigint(20) NOT NULL,
`industry_column_1` varchar(1000) ,
`industry_column_2` varchar(1000) ,
`i... | Will take a look soon | [
{
"body": "code = from sqlglot import parse_one,dialects\r\nsql = \"\"\"\r\nCREATE TABLE IF NOT EXISTS `industry_info` (\r\n `id` bigint(20) NOT NULL AUTO_INCREMENT,\r\n `industry_id` bigint(20) NOT NULL,\r\n `industry_column_1` varchar(1000) ,\r\n `industry_column_2` varchar(1000) ,\r\n `industry_column_... | 9582242837cec90cdab761d6a8c6865fbf93d592 | {
"head_commit": "5bd3ed9d6a2b92490e831dfd5463d2c9315777d9",
"head_commit_message": "Cleanup",
"patch_to_review": "diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py\nindex 8f60df24df..ce1a48601a 100644\n--- a/sqlglot/dialects/clickhouse.py\n+++ b/sqlglot/dialects/clickhouse.py\n@@ -380,... | [
{
"diff_hunk": "@@ -327,6 +343,57 @@ class Parser(parser.Parser):\n \n LOG_DEFAULTS_TO_LN = True\n \n+ def _parse_index_constraint(\n+ self, kind: t.Optional[str] = None\n+ ) -> exp.IndexColumnConstraint:\n+ if kind:\n+ self._match_texts({\"INDEX\", \"K... | 133371a6943a4be0b302e27e75fbcc1d21e65584 | diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py
index 8f60df24df..ce1a48601a 100644
--- a/sqlglot/dialects/clickhouse.py
+++ b/sqlglot/dialects/clickhouse.py
@@ -380,7 +380,7 @@ def after_limit_modifiers(self, expression: exp.Expression) -> t.List[str]:
]
def paramet... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-1900@415036c | tobymao/sqlglot | Python | 1,900 | Fix!(optimizer): preserve parenthesized joins, tables | Fixes #1898 | 2023-07-08T00:46:48Z | Optimizer transforms nested joins incorrectly
Hey @GeorgeSittas, thanks a lot for the fix on the nested joins. That will work on my part, as i only need to be able to parse and use the scope class. However i did a check on the optimizer as well and the SQL seems to be transformed into SQL that is functionally different... | Hey @10bas10, happy to know your issue was resolved.
> However i did a check on the optimizer as well and the SQL seems to be transformed into SQL that is functionally different from the original.
So after discussing this, we agreed on flattening the `JOIN` sequence in the optimizer because preserving the parenth... | [
{
"body": "Hey @GeorgeSittas, thanks a lot for the fix on the nested joins. That will work on my part, as i only need to be able to parse and use the scope class. However i did a check on the optimizer as well and the SQL seems to be transformed into SQL that is functionally different from the original.\r\n\r\n... | 3b215adc413772bc1af46a67e2410b38ad8872a2 | {
"head_commit": "415036c6132358fec696456271b44025b976d623",
"head_commit_message": "Remove selects, named_selects from Scope",
"patch_to_review": "diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py\nindex 242e66ce65..1fb8765486 100644\n--- a/sqlglot/expressions.py\n+++ b/sqlglot/expressions.py\n@@ -878... | [
{
"diff_hunk": "@@ -97,8 +97,11 @@ def merge_derived_tables(expression, leave_tables_isolated=False):\n for subquery in outer_scope.derived_tables:\n from_or_join = subquery.find_ancestor(exp.From, exp.Join)\n alias = subquery.alias_or_name\n- inner_scope = outer_scope... | 0c0d9e54ffb246179d4f42250c90fd15e944a6ab | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index 242e66ce65..1fb8765486 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -878,11 +878,11 @@ def alias_column_names(self) -> t.List[str]:
return [c.name for c in table_alias.args.get("columns") or []]
@property
- def ... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-1894@305833a | tobymao/sqlglot | Python | 1,894 | Fix!(parser, optimizer): improve parsing, optimizing of nested tables, joins | Fixes #1879 | 2023-07-05T16:59:41Z | Using optimizer with nested joins
When using a nested join, where tables b and c need to be joined before the join to table a, the query does not parse without parentheses and results in a parsing error. This has been filed as a separate bug. However, when using parentheses at this nested join, the query is actual... | maybe need to expand this out into (select * from b join c) as b
Shouldnt the scope component also be able to handle such statements and identify tables a, b and c in its most outer scope?
The problem is that `c` is used as a source (e.g. in `c.test2`), but it's not in scope because we parse it as a subquery.
FYI @10ba... | [
{
"body": "When using a nested join, where table b and c need to be joined before making the join to table a is not working without using parenthesis, as it will result in a parsing error. This has been filed as separate bug. However when using parenthesis at this nested join, the query is actually wrongly pars... | be48a3da890f9ce40308af1c4b8750657c1701db | {
"head_commit": "305833af719371d3d713631e75ca339da670f065",
"head_commit_message": "Remove retreat, manually attach alias to subquery if it's already parsed",
"patch_to_review": "diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py\nindex fdf02c8184..d0f8ff886e 100644\n--- a/sqlglot/expressions.py\n+++ b... | [
{
"diff_hunk": "@@ -1969,10 +1969,35 @@ def _parse_select(\n \n self._match_r_paren()\n \n- # early return so that subquery unions aren't parsed again\n- # SELECT * FROM (SELECT 1) UNION ALL SELECT 1\n- # Union ALL should be a property of the top select node, not the... | 758581a5f38419fbc54c6696274f032510b2bdc3 | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index fdf02c8184..52538c648d 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -274,12 +274,16 @@ def append(self, arg_key: str, value: t.Any) -> None:
def set(self, arg_key: str, value: t.Any) -> None:
"""
- Sets `arg... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-1738@5020f36 | tobymao/sqlglot | Python | 1,738 | Fix!(presto): ensure ||, CONCAT args are strings | Fixes #1735 | 2023-06-07T00:18:05Z | Can not convert '||' from redshift to presto/trino when the arguments are not string typed
```
sql = """
select 'a' || 1 || 1.1
"""
converted_sql = sqlglot.transpile(sql, read="redshift", write="trino")[0]
print(converted_sql)
```
```
SELECT 'a' || 1 || 1.1
```
Trino would throw error `Unexpected paramete... | ~I don't think this can be done without type annotations, because you could also have `'a' || x`. So for the transpilation to be correct you need to know the type of `x`.~
E.g. the following is valid Redshift
```
WITH t AS (SELECT 1 AS x) SELECT 'a' || x FROM t;
```
I guess we'll just take a look and see what'... | [
{
"body": "```\r\nsql = \"\"\"\r\nselect 'a' || 1 || 1.1\r\n\"\"\"\r\nconverted_sql = sqlglot.transpile(sql, read=\"redshift\", write=\"trino\")[0]\r\nprint(converted_sql)\r\n```\r\n```\r\nSELECT 'a' || 1 || 1.1\r\n\r\n```\r\n\r\nTrino would throw error `Unexpected parameters (varchar(1), integer) for function ... | 2dd8cba03fea94b811ec6bf2c6ce0a60bc48744f | {
"head_commit": "5020f36cd64b5a9f19a314f87b08abba956bf7c8",
"head_commit_message": "Simplify",
"patch_to_review": "diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py\nindex 1509a093b4..5daa68e61a 100644\n--- a/sqlglot/dialects/clickhouse.py\n+++ b/sqlglot/dialects/clickhouse.py\n@@ -23,... | [
{
"diff_hunk": "@@ -2074,6 +2074,11 @@ def intdiv_sql(self, expression: exp.IntDiv) -> str:\n def dpipe_sql(self, expression: exp.DPipe) -> str:\n return self.binary(expression, \"||\")\n \n+ def safedpipe_sql(self, expression: exp.SafeDPipe) -> str:\n+ if self.STRICT_STRING_CONCAT:\n+ ... | 05fe4745573b98ba3afe2c53359e7efeb501c823 | diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py
index 1509a093b4..5daa68e61a 100644
--- a/sqlglot/dialects/clickhouse.py
+++ b/sqlglot/dialects/clickhouse.py
@@ -23,6 +23,7 @@ def _lower_func(sql: str) -> str:
class ClickHouse(Dialect):
NORMALIZE_FUNCTIONS: bool | str = False
NUL... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
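A minimal sketch for the `||` concatenation record above, assuming a release with this fix; the cast-injected output in the comment is the expected shape, not verified:

```python
import sqlglot

# Trino rejects || on non-varchar operands, so casts should be injected,
# e.g. SELECT CONCAT('a', CAST(1 AS VARCHAR), CAST(1.1 AS VARCHAR)).
print(sqlglot.transpile("SELECT 'a' || 1 || 1.1", read="redshift", write="trino")[0])
```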
tobymao__sqlglot-1708@256a379 | tobymao/sqlglot | Python | 1,708 | Feat(mysql): add support for the UNIQUE KEY constraint | Fixes #1707 | 2023-05-31T14:40:55Z | Issues parsing MySQL with UNIQUE constraints
After creating a table in MySQL and getting the CREATE statement string with `SHOW CREATE TABLE`, sqlglot is unable to parse the returned string.
I expect sqlglot to correctly parse valid MySQL strings.
1) Create a MySQL table:
```
CREATE TABLE foo (
id CHAR(36) D... | [
{
"body": "After creating a table in mysql, and getting the create sql string with `SHOW CREATE TABLE`, sqlglot is unable to parse the returned string.\r\n\r\nI expect that sqlglot to correctly parse valid MySQL strings.\r\n\r\n1) Create a MySQL table:\r\n```\r\nCREATE TABLE foo (\r\n id CHAR(36) DEFAULT (UUID... | 611c234b0bd0e100079011780f3f70eaf95aad01 | {
"head_commit": "256a3790651e4c99d70928aaeee0c9f1ec73fc5e",
"head_commit_message": "Feat(mysql): add support for the UNIQUE KEY constraint",
"patch_to_review": "diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py\nindex db1fa5005a..b74e369a23 100644\n--- a/sqlglot/expressions.py\n+++ b/sqlglot/expressio... | [
{
"diff_hunk": "@@ -3371,9 +3371,11 @@ def _parse_unnamed_constraint(\n return self.CONSTRAINT_PARSERS[constraint](self)\n \n def _parse_unique(self) -> exp.Expression:\n+ this = self._match_text_seq(\"KEY\") and self._parse_id_var()\n if not self._match(TokenType.L_PAREN, advance=Fal... | 86fa81ccca5a37a888aa78406039abaa8cd03a25 | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index db1fa5005a..441cef5c89 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -1285,7 +1285,7 @@ class TitleColumnConstraint(ColumnConstraintKind):
class UniqueColumnConstraint(ColumnConstraintKind):
- arg_types: t.Dict[str, t.Any] ... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
tobymao__sqlglot-1729@8be14c1 | tobymao/sqlglot | Python | 1,729 | Fix: conditionally quote identifiers that start with a digit | Fixes #1727 | 2023-06-06T12:04:39Z | Trino aliases cannot start with an integer if unquoted
This is a valid Spark query, but it's not in Trino, I get an error "identifiers must not start with a digit; surround the identifier with double quotes"
```
>>> import sqlglot
>>> sqlglot.__version__
'15.0.0'
>>> sqlglot.transpile("SELECT 1 AS 1x", read='spa... | [
{
"body": "This is a valid Spark query, but it's not in Trino, I get an error \"identifiers must not start with a digit; surround the identifier with double quotes\"\r\n\r\n```\r\n>>> import sqlglot\r\n>>> sqlglot.__version__\r\n'15.0.0'\r\n>>> sqlglot.transpile(\"SELECT 1 AS 1x\", read='spark', write='trino')[... | e4b164ff762fe3491ba86e8a9b5759793acf825a | {
"head_commit": "8be14c109d2dc2f7604014e5984f84bb99a5207d",
"head_commit_message": "Fixups",
"patch_to_review": "diff --git a/sqlglot/dialects/hive.py b/sqlglot/dialects/hive.py\nindex fbd626a2c0..8b434396ed 100644\n--- a/sqlglot/dialects/hive.py\n+++ b/sqlglot/dialects/hive.py\n@@ -283,6 +283,7 @@ class Generat... | [
{
"diff_hunk": "@@ -283,6 +283,7 @@ class Generator(generator.Generator):\n JOIN_HINTS = False\n TABLE_HINTS = False\n INDEX_ON = \"ON TABLE\"\n+ IDENTIFIER_CAN_START_WITH_DIGIT = True",
"line": null,
"original_line": 286,
"original_start_line": null,
"path": "sqlg... | 35f8e12e294c0d116beedd8bba5f2f9738b6bac1 | diff --git a/sqlglot/dialects/dialect.py b/sqlglot/dialects/dialect.py
index 890a3c3cb5..448cf34738 100644
--- a/sqlglot/dialects/dialect.py
+++ b/sqlglot/dialects/dialect.py
@@ -104,6 +104,10 @@ def get_start_end(token_type: TokenType) -> t.Tuple[t.Optional[str], t.Optional[
klass.byte_start, klass.byte_end =... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
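A minimal sketch for the digit-leading-identifier record above, assuming a release with this fix; the quoted alias in the comment is the expected shape:

```python
import sqlglot

# Trino requires quoting for identifiers that start with a digit, so the
# alias is expected to come out quoted, e.g. SELECT 1 AS "1x".
print(sqlglot.transpile("SELECT 1 AS 1x", read="spark", write="trino")[0])
```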
tobymao__sqlglot-1560@a955af9 | tobymao/sqlglot | Python | 1,560 | Feat(optimizer): expand join constructs into SELECT * from subqueries | Fixes #1554
<details>
<summary>Click this to see the execution of all added test cases in both trino and postgres.</summary>
Trino
-----
```
trino> WITH tbl AS (SELECT 1) SELECT * FROM (tbl AS tbl) AS _q_0; -- Only valid in Trino, postgres doesn't allow this
_col0
-------
1
(1 row)
trino> WIT... | 2023-05-05T19:18:42Z | build_scope and optimize fail on query
Hi,
dialect = trino
It is not possible to build the scope or optimize this sql statement,
```
sql = """
WITH
v_select_44724 AS (SELECT field_4705 Name, id id, field_4707 Active, field_4729 num, field_4730 num1, field_4731 num2 FROM askai_34.public.database_table_271),
... | Will take a look soon, thanks for reporting. | [
{
"body": "Hi,\r\n\r\ndialect = trino\r\n\r\nIt is not possible to build the scope or optimize this sql statement,\r\n\r\n```\r\nsql = \"\"\"\r\nWITH\r\nv_select_44724 AS (SELECT field_4705 Name, id id, field_4707 Active, field_4729 num, field_4730 num1, field_4731 num2 FROM askai_34.public.database_table_271),... | ac60698d7343880cbe0fdc5935afb2c1ce0873a8 | {
"head_commit": "a955af9d696125f668bac97ff168db5a2d3ab47f",
"head_commit_message": "Remove unnecessary copying",
"patch_to_review": "diff --git a/sqlglot/optimizer/expand_join_constructs.py b/sqlglot/optimizer/expand_join_constructs.py\nnew file mode 100644\nindex 0000000000..649a794847\n--- /dev/null\n+++ b/sql... | [
{
"diff_hunk": "@@ -0,0 +1,33 @@\n+import typing as t\n+\n+from sqlglot import exp\n+from sqlglot.optimizer.scope import traverse_scope\n+\n+\n+def expand_join_constructs(expression: exp.Expression) -> exp.Expression:\n+ \"\"\"\n+ Replace \"join constructs\" (*) by equivalent SELECT * subqueries.\n+\n+ ... | a911e8eea008941388c205c97a26568d95a599eb | diff --git a/sqlglot/optimizer/qualify_tables.py b/sqlglot/optimizer/qualify_tables.py
index a719ebedd7..1b451a68de 100644
--- a/sqlglot/optimizer/qualify_tables.py
+++ b/sqlglot/optimizer/qualify_tables.py
@@ -7,21 +7,29 @@
def qualify_tables(expression, db=None, catalog=None, schema=None):
"""
- Rewrite sq... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-1379@2d4618a | tobymao/sqlglot | Python | 1,379 | Fix: allow '[]' when parsing a nested type | Fixes #1377
cc: @cpcloud | 2023-04-05T13:23:06Z | Casting to array of struct with duckdb fails to parse
```
In [17]: import sqlglot as sg
In [18]: sg.__version__
Out[18]: '11.4.5'
In [19]: sg.parse_one("select cast([struct_pack(a := 1)] as struct(a bigint)[])", read="duckdb")
...
ParseError: Expected TYPE after CAST. Line 1, Col: 38.
select cast([struct_p... | thanks, I'll take a look soon | [
{
"body": "```\r\nIn [17]: import sqlglot as sg\r\n\r\nIn [18]: sg.__version__\r\nOut[18]: '11.4.5'\r\n\r\nIn [19]: sg.parse_one(\"select cast([struct_pack(a := 1)] as struct(a bigint)[])\", read=\"duckdb\")\r\n...\r\nParseError: Expected TYPE after CAST. Line 1, Col: 38.\r\n select cast([struct_pack(a := 1)] ... | 58fdbf05f5e7627936ebf2036410c60396d6da72 | {
"head_commit": "2d4618aa5ec4948a00ace6f0aee5888645991312",
"head_commit_message": "Add a test for array of arrays",
"patch_to_review": "diff --git a/sqlglot/parser.py b/sqlglot/parser.py\nindex 20c63dcaf2..6df4145eca 100644\n--- a/sqlglot/parser.py\n+++ b/sqlglot/parser.py\n@@ -2636,7 +2636,7 @@ def _parse_type... | [
{
"diff_hunk": "@@ -432,6 +432,18 @@ def test_cast(self):\n \"snowflake\": \"CAST(COL AS ARRAY)\",\n },\n )\n+ self.validate_all(\n+ \"CAST([STRUCT_PACK(a := 1)] AS STRUCT(a BIGINT)[])\",\n+ write={\n+ \"duckdb\": \"CAST(LIST_VALUE(... | 7297c43412097f50d7b502d71637b198d67dafbc | diff --git a/sqlglot/parser.py b/sqlglot/parser.py
index 20c63dcaf2..6df4145eca 100644
--- a/sqlglot/parser.py
+++ b/sqlglot/parser.py
@@ -2636,7 +2636,7 @@ def _parse_types(self, check_func: bool = False) -> t.Optional[exp.Expression]:
self._match_r_paren()
maybe_func = True
- if not... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
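A minimal sketch for the DuckDB nested-type record above, reusing the issue's reproduction and assuming a release with this fix; exact round-trip formatting may vary:

```python
import sqlglot

# Previously: ParseError "Expected TYPE after CAST" at the trailing [].
sql = "SELECT CAST([STRUCT_PACK(a := 1)] AS STRUCT(a BIGINT)[])"
print(sqlglot.parse_one(sql, read="duckdb").sql(dialect="duckdb"))
```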
tobymao__sqlglot-1350@5940b00 | tobymao/sqlglot | Python | 1,350 | fix MySQL json_extract_scalar | Fix https://github.com/tobymao/sqlglot/issues/1349 | 2023-03-27T13:41:13Z | JSON_EXTRACT_SCALAR is not a valid function in MySQL
```
import sqlglot
s = '''select * from requests where get_json_object(stream_data, '$.data.results') is not null'''
sqlglot.transpile(s, read='hive', write='mysql')
```
the result is `["SELECT * FROM requests WHERE NOT JSON_EXTRACT_SCALAR(stream_data, '$.data... | What is the equivalent way of expressing this in MySQL (+bonus for docs)? JSON handling is non-standard, so it's not sufficiently covered yet across many dialects. | [
{
"body": "```\r\nimport sqlglot\r\ns = '''select * from requests where get_json_object(stream_data, '$.data.results') is not null'''\r\nsqlglot.transpile(s, read='hive', write='mysql')\r\n```\r\n\r\nthe result is `[\"SELECT * FROM requests WHERE NOT JSON_EXTRACT_SCALAR(stream_data, '$.data.results') IS NULL\"]... | 8d7a3177f005084844e7a9adf15dd9db35918dc7 | {
"head_commit": "5940b0017a4c284f76f41144886fb787ce6ea82a",
"head_commit_message": "fix MySQL json_extract_scalar",
"patch_to_review": "diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py\nindex ab26ebc0e9..b9071dc3cd 100644\n--- a/sqlglot/dialects/mysql.py\n+++ b/sqlglot/dialects/mysql.py\n@@ -40... | [
{
"diff_hunk": "@@ -404,6 +404,8 @@ class Generator(generator.Generator):\n exp.DayOfYear: rename_func(\"DAYOFYEAR\"),\n exp.WeekOfYear: rename_func(\"WEEKOFYEAR\"),\n exp.GroupConcat: lambda self, e: f\"\"\"GROUP_CONCAT({self.sql(e, \"this\")} SEPARATOR {self.sql(e, \"separa... | 5455acdd73dc6e6b310d4b2287b5dbfa73805379 | diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py
index ab26ebc0e9..8549e153fc 100644
--- a/sqlglot/dialects/mysql.py
+++ b/sqlglot/dialects/mysql.py
@@ -404,6 +404,7 @@ class Generator(generator.Generator):
exp.DayOfYear: rename_func("DAYOFYEAR"),
exp.WeekOfYear: rename_func(... | {
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
} |
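A minimal sketch for the MySQL JSON record above, assuming a release with this fix; the exact MySQL function chosen (e.g. JSON_EXTRACT) is an expectation, not verified:

```python
import sqlglot

s = "SELECT * FROM requests WHERE get_json_object(stream_data, '$.data.results') IS NOT NULL"
# Expect a function MySQL actually has (e.g. JSON_EXTRACT) instead of the
# nonexistent JSON_EXTRACT_SCALAR.
print(sqlglot.transpile(s, read="hive", write="mysql")[0])
```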
tobymao__sqlglot-1292@3d2bcf1 | tobymao/sqlglot | Python | 1,292 | Move SET parsing/generating logic from MySQL to base | - Moved a subset of the logic we use for parsing / generating SET statements to the base classes, so that we can parse statements like `SET variable = value` with finer granularity.
- When the base `_parse_set` method fails, the parser will fall back to parsing the input as a command.
cc: @joocer @tobymao @barakalon... | 2023-03-13T13:49:05Z | SET commands not decomposed
Parsing a `SET` command such as
~~~sql
SET @q = 4;
~~~
results in this:
_(note, the equals expression is just presented as a string literal)_
~~~
(COMMAND this: SET, expression:
(LITERAL this: @q = 4, is_string: True))
~~~
I had expected the statement to be decomposed i... | Many commands in SQL haven't been high priority and so are parsed in the catchall "command" node.
Basic parsing of SET should be easy with just doing parse_conjunction(), but I'm not sure how SET is used in all dialects.
Thanks @tobymao,
Looking at another non-SELECT example
~~~sql
SHOW FULL COLUMNS FROM $astron... | [
{
"body": "Parsing a `SET` command such as \r\n\r\n~~~sql\r\nSET @q = 4;\r\n~~~\r\n\r\nresults in this:\r\n_(note, the equals expression is just presented as a string literal)_\r\n\r\n~~~\r\n(COMMAND this: SET, expression: \r\n (LITERAL this: @q = 4, is_string: True))\r\n~~~\r\n\r\nI had expected the statement... | c05ceb21896f0b7b1282158d7b14f992213d25f7 | {
"head_commit": "3d2bcf1cbe2630254986c991cc05f799d7c2570c",
"head_commit_message": "Move SET parsing logic from MySQL to base parser",
"patch_to_review": "diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py\nindex a8312358c5..1e2cfa3287 100644\n--- a/sqlglot/dialects/mysql.py\n+++ b/sqlglot/dialec... | [
{
"diff_hunk": "@@ -3864,7 +3911,18 @@ def _parse_merge(self) -> exp.Expression:\n )\n \n def _parse_set(self) -> exp.Expression:\n- return self.expression(exp.Set, expressions=self._parse_csv(self._parse_set_item))\n+ index = self._index\n+ try:\n+ return self.expres... | 60ed38f2a83495e0f3fc65d4fe5f809c8a72f5ec | diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py
index a8312358c5..1e2cfa3287 100644
--- a/sqlglot/dialects/mysql.py
+++ b/sqlglot/dialects/mysql.py
@@ -177,7 +177,7 @@ class Tokenizer(tokens.Tokenizer):
"@@": TokenType.SESSION_PARAMETER,
}
- COMMANDS = tokens.Tokenizer.... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
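A minimal sketch for the SET-parsing record above, assuming a release that includes this change; the node type named in the comment is the expected result:

```python
import sqlglot

stmt = sqlglot.parse_one("SET @q = 4", read="mysql")
# Expect a structured exp.Set node (with SetItem children) instead of a
# catch-all exp.Command wrapping the raw string.
print(type(stmt).__name__)
print(repr(stmt))
```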
tobymao__sqlglot-1304@c169f47 | tobymao/sqlglot | Python | 1,304 | Drop ORDER BY clause when transpiling ARRAY_AGG to hive | Fixes #1303 | 2023-03-16T20:21:04Z | ARRAY_AGG() conversion between Trino and Spark
ARRAY_AGG() in Trino supports `array_agg(x ORDER BY y DESC)` which currently is translated to Spark as `COLLECT_LIST(x ORDER BY y DESC)` but Spark doesn't support that syntax at all -- just `COLLECT_LIST(x)`. Could we fix this mapping to syntactically parse (and drop the... | I'll take a look soon | [
{
"body": "ARRAY_AGG() in Trino supports `array_agg(x ORDER BY y DESC)` which currently is translated to Spark as `COLLECT_LIST(x ORDER BY y DESC)` but Spark doesn't support that syntax at all -- just `COLLECT_LIST(x)`. Could we fix this mapping to syntactically parse (and drop the ordering) or is there a way... | a2a6065a7b6eadb452b77f80d5389a5e7f6cac10 | {
"head_commit": "c169f4707b0c088f1edafa9993a851ebd9bc7ccb",
"head_commit_message": "Drop ORDER BY clause when transpiling ARRAY_AGG to hive",
"patch_to_review": "diff --git a/sqlglot/dialects/hive.py b/sqlglot/dialects/hive.py\nindex a01daa82e8..14038ff8c7 100644\n--- a/sqlglot/dialects/hive.py\n+++ b/sqlglot/di... | [
{
"diff_hunk": "@@ -335,13 +338,17 @@ class Generator(generator.Generator):\n exp.TableFormatProperty: exp.Properties.Location.POST_SCHEMA,\n }\n \n- def with_properties(self, properties):\n+ def arrayagg_sql(self, expression: exp.ArrayAgg) -> str:\n+ arg = expressio... | 89c77eb9376463f1556e3b0f94db4cf6f7171917 | diff --git a/sqlglot/dialects/hive.py b/sqlglot/dialects/hive.py
index a01daa82e8..0110eee568 100644
--- a/sqlglot/dialects/hive.py
+++ b/sqlglot/dialects/hive.py
@@ -1,5 +1,7 @@
from __future__ import annotations
+import typing as t
+
from sqlglot import exp, generator, parser, tokens, transforms
from sqlglot.dia... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
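A minimal sketch for the ARRAY_AGG record above, assuming a release with this fix; the dropped ORDER BY in the comment is the expected behavior, possibly with a warning logged:

```python
import sqlglot

sql = "SELECT ARRAY_AGG(x ORDER BY y DESC) FROM t"
# Spark's COLLECT_LIST takes no ORDER BY, so the clause is expected to be
# dropped: SELECT COLLECT_LIST(x) FROM t.
print(sqlglot.transpile(sql, read="trino", write="spark")[0])
```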
tobymao__sqlglot-1181@687f071 | tobymao/sqlglot | Python | 1,181 | UDTF scope refactor | fixes https://github.com/tobymao/sqlglot/issues/1168
This adds `lateral_sources`. I'm not crazy about yet another concept in scope. | 2023-02-15T17:02:06Z | Optimize Error in qualify_columns in presto unnest
```python
optimize(
sqlglot.parse_one(
"""
select table.a, t.b
from table_t
cross join unnest(split(b, ',')) AS view_t(b)
""",
read="presto",
),
{},
).sql()
```
Looking at the example above, the same named column `b` in unnes... | [
{
"body": "```python\r\noptimize(\r\n sqlglot.parse_one(\r\n \"\"\"\r\nselect table.a, t.b\r\nfrom table_t\r\ncross join unnest(split(b, ',')) AS view_t(b)\r\n\"\"\",\r\n read=\"presto\",\r\n ),\r\n {},\r\n).sql()\r\n```\r\n\r\nLooking at the example above, the same named column `b` in un... | 35dd02a00ee6deea78cf9025dd70249598def066 | {
"head_commit": "687f0712f873b32a52b80a4fcbff55c61a514016",
"head_commit_message": "extend laterals",
"patch_to_review": "diff --git a/sqlglot/optimizer/eliminate_subqueries.py b/sqlglot/optimizer/eliminate_subqueries.py\nindex c6bea5a849..51b888d11c 100644\n--- a/sqlglot/optimizer/eliminate_subqueries.py\n+++ b... | [
{
"diff_hunk": "@@ -47,22 +53,28 @@ def __init__(\n outer_column_list=None,\n parent=None,\n scope_type=ScopeType.ROOT,\n+ lateral_sources=None,\n ):\n self.expression = expression\n self.sources = sources or {}\n+ self.lateral_sources = {**lateral_sourc... | 04c404058e10c538d786421d238ea3187906462a | diff --git a/sqlglot/optimizer/eliminate_subqueries.py b/sqlglot/optimizer/eliminate_subqueries.py
index c6bea5a849..6f9db82ca2 100644
--- a/sqlglot/optimizer/eliminate_subqueries.py
+++ b/sqlglot/optimizer/eliminate_subqueries.py
@@ -81,9 +81,7 @@ def eliminate_subqueries(expression):
new_ctes.append(cte_scop... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} | |
tobymao__sqlglot-1137@70ff569 | tobymao/sqlglot | Python | 1,137 | Fix parsing for snowflake AUTOINCREMENT/IDENTITY constraints | Fixes #1129 | 2023-02-09T18:48:28Z | snowflake: cannot create or alter tables with AUTOINCREMENT or IDENTITY keyword
I can use the "IDENTITY" and "AUTOINCREMENT" keywords in snowflake:
```sql
create or replace table colors as
select name
from (values ('blue'),('red'),('green')) colors (name);
create or replace table identity_column_example ... | I can take a look at this and #1132 tomorrow if nobody beats me to it.
Tip: if you want to use `transpile` with `read` and `write` pointing to the same dialect, you can omit the `write` argument because of `identity` being true by default for this method. Even shorter: you can just do `transpile(..., "snowflake")`. | [
{
"body": "I can use the \"IDENTITY\" and \"AUTOINCREMENT\" keywords in snowflake:\r\n\r\n```sql\r\ncreate or replace table colors as\r\n select name\r\n from (values ('blue'),('red'),('green')) colors (name);\r\ncreate or replace table identity_column_example like colors;\r\nalter table identity_column_e... | 327874eb7ff361136a91a26bc337c1d501d17164 | {
"head_commit": "70ff569781d2d3a494bf7e434add0ab6298eaf61",
"head_commit_message": "Fix parsing for snowflake AUTOINCREMENT/IDENTITY constraints",
"patch_to_review": "diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py\nindex 8ca1d362ea..7b74fb5eda 100644\n--- a/sqlglot/dialects/snowflake.... | [
{
"diff_hunk": "@@ -2738,6 +2738,9 @@ def _parse_column_def(self, this: t.Optional[exp.Expression]) -> t.Optional[exp.\n \n return self.expression(exp.ColumnDef, this=this, kind=kind, constraints=constraints)\n \n+ def _parse_autoincrement(self) -> exp.Expression:",
"line": null,
"original_li... | dd4c1c69d76a76c806d3888123721d662f68592f | diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py
index 8ca1d362ea..679502b4d1 100644
--- a/sqlglot/dialects/snowflake.py
+++ b/sqlglot/dialects/snowflake.py
@@ -295,3 +295,12 @@ def describe_sql(self, expression: exp.Describe) -> str:
kind = f" {kind_value}" if kind_value else ""
... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-1123@23610af | tobymao/sqlglot | Python | 1,123 | Fix: allow identifier escapes in the tokenizer | fixes #1122 | 2023-02-08T20:41:38Z | BUG: Escaped quotes at the beginning and end of quoted snowflake identifier cause ParseError
This query works in snowflake:
```sql
SELECT """C Market # Segment""" FROM "Funky Customer With Nulls"
```
where the column name is `"C Market # Segment"` with quotes included. You escape quotes inside snowflake SQL ide... | +1 | [
{
"body": "This query works in snowflake:\r\n\r\n```sql\r\nSELECT \"\"\"C Market # Segment\"\"\" FROM \"Funky Customer With Nulls\"\r\n```\r\n\r\nwhere the column name is `\"C Market # Segment\"` with quotes included. You escape quotes inside snowflake SQL identifiers by using two quotes.\r\n\r\nTranspiling the... | 295428f8133655505d8b463a1c8d8865fd22bcc7 | {
"head_commit": "23610af6509e40cf8826ff750525857057dbeee8",
"head_commit_message": "Formatting",
"patch_to_review": "diff --git a/sqlglot/dialects/bigquery.py b/sqlglot/dialects/bigquery.py\nindex 0ae3e04997..7236c48285 100644\n--- a/sqlglot/dialects/bigquery.py\n+++ b/sqlglot/dialects/bigquery.py\n@@ -116,7 +11... | [
{
"diff_hunk": "@@ -1046,12 +1051,25 @@ def _scan_formatted_string(self, string_start: str) -> bool:\n return True\n \n def _scan_identifier(self, identifier_end: str) -> None:\n- while self._peek != identifier_end:\n+ text = \"\"\n+ identifier_end_is_escape = identifier_end in ... | cb0cbe45e535ad006ce05b4ea372c235f077bc94 | diff --git a/sqlglot/dialects/bigquery.py b/sqlglot/dialects/bigquery.py
index 0ae3e04997..7236c48285 100644
--- a/sqlglot/dialects/bigquery.py
+++ b/sqlglot/dialects/bigquery.py
@@ -116,7 +116,7 @@ class Tokenizer(tokens.Tokenizer):
]
COMMENTS = ["--", "#", ("/*", "*/")]
IDENTIFIERS = ["`"]
... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-1083@190e22c | tobymao/sqlglot | Python | 1,083 | Fix: handling Dialect, t.Type[Dialect] now works in get_or_raise | Fixes #1082 | 2023-02-03T19:51:00Z | Unable to use Dialect instances as a read/write arguments when transpiling
Although typing indicates that functions like `transpile` accept instances of `Dialect` as `read` and `write` arguments, it doesn't seem to work. I'd be happy to submit a merge request to fix this, but it seems like this issue goes deeper, with ... | I can confirm that this is an issue. I'll post a fix soon, unless you want to play around with the project and submit a PR.
Thanks for the report! It's interesting that we haven't tested that properly yet.
Thank you, if it's a quick fix I would of course be happier if you do it, since I see there are a lot of places... | [
{
"body": "Although typing indicates that functions like `transpile` accept instances of `Dialect` as `read` and `write` arguments, it doesn't seem to work. I'd be happy to submit a merge request to fix this, but it seems like this issue goes deeper, with `Dialect.get_or_raise` actually being the root problem, ... | 93871a44691c9ee8f272180a68d57019724d1fc7 | {
"head_commit": "190e22c1d0338f1d99b8aaf273f0f2e83073900c",
"head_commit_message": "More fixups",
"patch_to_review": "diff --git a/sqlglot/__init__.py b/sqlglot/__init__.py\nindex 6861590a43..5738fd0c51 100644\n--- a/sqlglot/__init__.py\n+++ b/sqlglot/__init__.py\n@@ -36,6 +36,8 @@\n if t.TYPE_CHECKING:\n T ... | [
{
"diff_hunk": "@@ -36,6 +36,8 @@\n if t.TYPE_CHECKING:\n T = t.TypeVar(\"T\", bound=Expression)\n \n+ DialectType = t.Union[str, Dialect, t.Type[Dialect]]",
"line": null,
"original_line": 39,
"original_start_line": null,
"path": "sqlglot/__init__.py",
"start_line": null,
"text": ... | a64fa7999bb34054476838772927353ce1eb9cfa | diff --git a/sqlglot/__init__.py b/sqlglot/__init__.py
index 6861590a43..6aefb27c18 100644
--- a/sqlglot/__init__.py
+++ b/sqlglot/__init__.py
@@ -34,8 +34,11 @@
from sqlglot.tokens import Tokenizer, TokenType
if t.TYPE_CHECKING:
+ from sqlglot.dialects.dialect import DialectType
+
T = t.TypeVar("T", bound=... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
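A minimal sketch for the Dialect-argument record above, assuming a release with this fix; both forms shown are expected to be accepted:

```python
import sqlglot
from sqlglot.dialects import Snowflake

# Both the Dialect subclass and an instance should now work wherever a
# dialect name string is accepted.
print(sqlglot.transpile("SELECT 1", read=Snowflake, write=Snowflake())[0])
```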
tobymao__sqlglot-958@bda9a66 | tobymao/sqlglot | Python | 958 | Fix: accept some keywords as ID_VARS for BigQuery | This is a WIP because I haven't solved the issue related to the `time` identifier.
Fixes #954 | 2023-01-17T16:18:56Z | Using reserved words as identifiers
In BigQuery, various reserved words and builtin function names can be used as unquoted identifiers.
E.g. `WITH view AS (SELECT 1 AS x) SELECT * FROM view` is valid in BQ but raises a ParseError "Expected CTE to have alias". Renaming the CTE to something else stops the error.
A... | I'll take a look. I think you just need to add the tokens of interest to `ID_VAR_TOKENS` for this to be resolved. | [
{
"body": "In Bigquery, various reserved words and builtin function names can be used as unquoted identifiers.\r\n\r\nE.g. `WITH view AS (SELECT 1 AS x) SELECT * FROM view` is valid in BQ but raises a ParseError \"Expected CTE to have alias\". Renaming the CTE to something else stops the error.\r\n\r\nAnother ... | b634df7c07b2d86ad59677a2c7f013c3f5dc2641 | {
"head_commit": "bda9a666b44866a4dc0e1fb6b4de217c3457ff83",
"head_commit_message": "Fix: accept 'view' and 'values' as valid id vars (bigquery)",
"patch_to_review": "diff --git a/sqlglot/dialects/bigquery.py b/sqlglot/dialects/bigquery.py\nindex f0089e18f5..9a2554df25 100644\n--- a/sqlglot/dialects/bigquery.py\n... | [
{
"diff_hunk": "@@ -165,6 +165,14 @@ class Parser(parser.Parser):\n TokenType.TABLE,\n }\n \n+ ID_VAR_TOKENS = {\n+ *parser.Parser.ID_VAR_TOKENS, # type: ignore\n+ TokenType.VALUES,\n+ TokenType.VIEW,",
"line": null,
"original_line": 171,
... | ed100d6c169905ab6046044f4bd6a2f039241dbd | diff --git a/sqlglot/dialects/bigquery.py b/sqlglot/dialects/bigquery.py
index f0089e18f5..9ddfbea4fc 100644
--- a/sqlglot/dialects/bigquery.py
+++ b/sqlglot/dialects/bigquery.py
@@ -165,6 +165,11 @@ class Parser(parser.Parser):
TokenType.TABLE,
}
+ ID_VAR_TOKENS = {
+ *parser.... | {
"difficulty": "medium",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
} |
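The issue's own repro, runnable against a sqlglot version with this fix — `view` used as an unquoted CTE alias under the BigQuery dialect:

```python
import sqlglot

# Previously raised ParseError: "Expected CTE to have alias".
expr = sqlglot.parse_one(
    "WITH view AS (SELECT 1 AS x) SELECT * FROM view", read="bigquery"
)
print(expr.sql(dialect="bigquery"))
```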
tobymao__sqlglot-932@0ab58ea | tobymao/sqlglot | Python | 932 | Fix window parsing for ROWS BETWEEN without alias | fixes #931 | 2023-01-13T16:36:34Z | Window frame fails to parse
```
In [27]: import sqlglot as sg
In [28]: sg.__version__
Out[28]: '10.4.3'
In [29]: sg.parse_one("SELECT avg(x) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM t", read="duckdb")
...
ParseError: Expecting ). Line 1, Col: 26.
SELECT avg(x) OVER (ROWS BETWEEN... | [
{
"body": "```\r\nIn [27]: import sqlglot as sg\r\n\r\nIn [28]: sg.__version__\r\nOut[28]: '10.4.3'\r\n\r\nIn [29]: sg.parse_one(\"SELECT avg(x) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM t\", read=\"duckdb\")\r\n...\r\nParseError: Expecting ). Line 1, Col: 26.\r\n SELECT avg(x) OVER ... | db19481580e1243458b702d9df016a374ce4bb5f | {
"head_commit": "0ab58ea8c4d3c3ab29609fd790b5b62742d6d65a",
"head_commit_message": "Fix window parsing for ROWS BETWEEN without alias",
"patch_to_review": "diff --git a/sqlglot/parser.py b/sqlglot/parser.py\nindex 4a01ed8a28..32d3173328 100644\n--- a/sqlglot/parser.py\n+++ b/sqlglot/parser.py\n@@ -594,6 +594,8 @... | [
{
"diff_hunk": "@@ -426,6 +426,7 @@ SELECT SUM(x) OVER (PARTITION BY a RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT\n SELECT SUM(x) OVER (PARTITION BY a RANGE BETWEEN 1 AND 3)\n SELECT SUM(x) OVER (PARTITION BY a RANGE BETWEEN 1 FOLLOWING AND 3)\n SELECT SUM(x) OVER (PARTITION BY a RANGE BETWEEN 1 FOLLOWING AN... | 5fd136eb65a74f97d08d6e9c0b2cc7b10eda2965 | diff --git a/sqlglot/generator.py b/sqlglot/generator.py
index c690ec0958..694c0abb0e 100644
--- a/sqlglot/generator.py
+++ b/sqlglot/generator.py
@@ -1031,7 +1031,9 @@ def window_sql(self, expression: exp.Window) -> str:
if not partition and not order and not spec and alias:
return f"{this} {alia... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
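The failing query from the issue, as a sketch assuming a sqlglot release with the fix — a window frame with `ROWS BETWEEN` but no partition, order, or alias:

```python
import sqlglot

sql = (
    "SELECT AVG(x) OVER "
    "(ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM t"
)
# Previously raised "Expecting )" at the window spec.
print(sqlglot.parse_one(sql, read="duckdb").sql(dialect="duckdb"))
```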
tobymao__sqlglot-744@5af62d0 | tobymao/sqlglot | Python | 744 | Fix sqlite primary key order constraint | Fixes #742 | 2022-11-21T20:32:48Z | [sqlite dialect] Parser doesn't recognize optional order parameter for PKEY column-constraint
sqlite's `CREATE TABLE` statement accepts an optional order parameter for `PRIMARY KEY` column-constraints. `sqlglot` does not yet handle this parameter:
```python
create_table_sql = "CREATE TABLE foo (id INTEGER PRIMARY K... | [
{
"body": "sqlite's `CREATE TABLE` statement accepts an optional order parameter for `PRIMARY KEY` column-constraints. `sqlglot` does not yet handle this parameter:\r\n\r\n```python\r\ncreate_table_sql = \"CREATE TABLE foo (id INTEGER PRIMARY KEY ASC);\"\r\nsqlglot.parse_one(create_table_sql, read=\"sqlite\")\r... | 4041737c39bd1e67a2dc18414b1900a730a340f2 | {
"head_commit": "5af62d07eb176540818b0be9896300c350e79875",
"head_commit_message": "Fix sqlite primary key order constraint",
"patch_to_review": "diff --git a/sqlglot/generator.py b/sqlglot/generator.py\nindex 71f0265677..1488f40c13 100644\n--- a/sqlglot/generator.py\n+++ b/sqlglot/generator.py\n@@ -387,8 +387,9... | [
{
"diff_hunk": "@@ -2050,7 +2050,10 @@ def _parse_column_constraint(self):\n elif self._match(TokenType.SCHEMA_COMMENT):\n kind = self.expression(exp.CommentColumnConstraint, this=self._parse_string())\n elif self._match(TokenType.PRIMARY_KEY):\n- kind = exp.PrimaryKeyColu... | 01fee9467b283ae9c59bc2dee14e0244f7e2a619 | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index 6b9fd4bcbd..fe1e7c488b 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -767,7 +767,7 @@ class NotNullColumnConstraint(ColumnConstraintKind):
class PrimaryKeyColumnConstraint(ColumnConstraintKind):
- pass
+ arg_types = {"de... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
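A short repro based on the issue text, assuming a sqlglot version with the fix — sqlite's optional `ASC`/`DESC` on a `PRIMARY KEY` column constraint now parses and is preserved:

```python
import sqlglot

sql = "CREATE TABLE foo (id INTEGER PRIMARY KEY ASC)"
print(sqlglot.parse_one(sql, read="sqlite").sql(dialect="sqlite"))
```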
tobymao__sqlglot-708@dbd5fbc | tobymao/sqlglot | Python | 708 | Add support for aggregates without the GROUP BY clause | Fixes #703 | 2022-11-15T23:00:46Z | execute(): Exception when aggregating without GROUP BY
sqlglot 10.0.2, Python 3.8.4
`>>> execute('select sum(x) from t', tables={'t': [{'x': 1}, {'x': 2}]})`
Result:
`sqlglot.errors.ExecuteError: Step 'Scan: t (140100782263984)' failed: '_col_0'`
If I add column 'y' above, with the same value for every row,... | this isn't supported yet but we can add it soon
Note: I'm planning to tackle this issue soon, just spending some time to understand the executor's logic since I haven't had the chance to play around with it. | [
{
"body": "sqlglot 10.0.2, Python 3.8.4\r\n\r\n`>>> execute('select sum(x) from t', tables={'t': [{'x': 1}, {'x': 2}]})`\r\n\r\nResult:\r\n\r\n`sqlglot.errors.ExecuteError: Step 'Scan: t (140100782263984)' failed: '_col_0'`\r\n\r\nIf I add column 'y' above, with the same value for every row, and group by that c... | 543eca314546e0bd42f97c354807b4e398ab36ec | {
"head_commit": "dbd5fbccae423e89bb3eff4aebc7cc200264ae54",
"head_commit_message": "Add support for aggregates without the GROUP BY clause",
"patch_to_review": "diff --git a/sqlglot/executor/__init__.py b/sqlglot/executor/__init__.py\nindex 1954e54699..22f42ee585 100644\n--- a/sqlglot/executor/__init__.py\n+++ b... | [
{
"diff_hunk": "@@ -31,18 +38,18 @@ def eval_tuple(self, codes):\n return tuple(self.eval(code) for code in codes)\n \n @property\n- def table(self):\n+ def table(self) -> Table:\n if self._table is None:\n- self._table = list(self.tables.values())[0]\n+ self._tab... | eebee1afd08fa417f34234e580a94425473d6f6b | diff --git a/sqlglot/executor/__init__.py b/sqlglot/executor/__init__.py
index 1954e54699..22f42ee585 100644
--- a/sqlglot/executor/__init__.py
+++ b/sqlglot/executor/__init__.py
@@ -51,17 +51,3 @@ def execute(sql, schema=None, read=None, tables=None):
result = PythonExecutor(tables=tables).execute(plan)
logg... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
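The issue's own executor call, runnable once this fix landed — an aggregate with no `GROUP BY` clause:

```python
from sqlglot.executor import execute

# Previously failed with ExecuteError: "... failed: '_col_0'".
result = execute("SELECT SUM(x) FROM t", tables={"t": [{"x": 1}, {"x": 2}]})
print(result.rows)  # expected: [(3,)]
```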
tobymao__sqlglot-884@68eb80b | tobymao/sqlglot | Python | 884 | Add support for clickhouse's parametric function syntax | Phew... this was more tricky than expected.
Fixes #882 | 2023-01-04T20:31:12Z | ClickHouse quantile/quantileIf functions fail to parse
ClickHouse has a somewhat unique way of spelling these:
```
9bf006fcf914 :) select quantile(0.5)(a) from (select 1 a union all select 2);
SELECT quantile(0.5)(a)
FROM
(
SELECT 1 AS a
UNION ALL
SELECT 2
)
Query id: fe37056a-efe0-45bf-9d34... | I'll take a look at this soon. Besides the `quantile` argument missing in the `Quantile` expression, I don't think SQLGlot can handle the syntax `foo(a)(b)` yet. | [
{
"body": "ClickHouse has a somewhat unique way of spelling these:\r\n\r\n```\r\n9bf006fcf914 :) select quantile(0.5)(a) from (select 1 a union all select 2);\r\n\r\nSELECT quantile(0.5)(a)\r\nFROM\r\n(\r\n SELECT 1 AS a\r\n UNION ALL\r\n SELECT 2\r\n)\r\n\r\nQuery id: fe37056a-efe0-45bf-9d34-95de2916e... | 0db3807d7d5b59fc49009b053807bcc46a8a8cd7 | {
"head_commit": "68eb80bccb69c78b83544fed2a781c286ba79484",
"head_commit_message": "Add support for clickhouse's parametric function syntax",
"patch_to_review": "diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py\nindex 7136340228..6e655c2118 100644\n--- a/sqlglot/dialects/clickhouse.py... | [
{
"diff_hunk": "@@ -2049,7 +2049,23 @@ def _parse_function(self, functions=None):\n args = self._parse_csv(self._parse_lambda)\n \n if function:\n- this = function(args)\n+ params = None\n+\n+ # Clickhouse supports function calls like foo(x, y... | 10d7b2dd24a4f90a290eebf3e4fe60706970e16c | diff --git a/sqlglot/dialects/clickhouse.py b/sqlglot/dialects/clickhouse.py
index 7136340228..6e655c2118 100644
--- a/sqlglot/dialects/clickhouse.py
+++ b/sqlglot/dialects/clickhouse.py
@@ -11,6 +11,13 @@ def _lower_func(sql):
return sql[:index].lower() + sql[index:]
+def _quantile_sql(self, expression: exp.Q... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
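A sketch of ClickHouse's parametric call syntax `func(params)(args)` that this PR adds, assuming a sqlglot release containing it:

```python
import sqlglot

sql = "SELECT quantile(0.5)(a) FROM (SELECT 1 AS a UNION ALL SELECT 2)"
print(sqlglot.parse_one(sql, read="clickhouse").sql(dialect="clickhouse"))
```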
tobymao__sqlglot-677@3f8dd6f | tobymao/sqlglot | Python | 677 | Add support for snowflake's flatten function | fixes #675 | 2022-11-08T02:02:42Z | Difficulty parsing snowflake query with `split()` nested in `flatten()`
Hello!
I've been unable to get sqlglot to parse the following query and I'm not sure if it's user error or a bug? Would love to get this resolved:
```python
import sqlglot
import sqlglot.expressions as exp
query = f"""
select
dag_r... | Hello, thanks for the report.
This error happens because `=>` is not currently parsed properly to support Snowflake's `flatten` [syntax](https://docs.snowflake.com/en/sql-reference/functions/flatten.html#syntax).
I'll take a look at it soon. | [
{
"body": "Hello!\r\n\r\nI've been unable to get sqlglot to parse the following query and I'm not sure if it's user error or a bug? Would love to get this resolved:\r\n\r\n```python\r\n\r\nimport sqlglot\r\nimport sqlglot.expressions as exp\r\n\r\nquery = f\"\"\"\r\nselect\r\n dag_report.acct_id,\r\n dag_repo... | 1533ed5960b6e395fb6da89021fd7426c41731a2 | {
"head_commit": "3f8dd6fd6d6bfb99d20a4a9b86daa8d4d4432e11",
"head_commit_message": "Add support for snowflake's flatten function",
"patch_to_review": "diff --git a/sqlglot/dataframe/sql/functions.py b/sqlglot/dataframe/sql/functions.py\nindex dbfb06f30b..9863d56ddc 100644\n--- a/sqlglot/dataframe/sql/functions.p... | [
{
"diff_hunk": "@@ -71,6 +71,18 @@ def _unix_to_time(self, expression):\n raise ValueError(\"Improper scale for timestamp\")\n \n \n+def _flatten_sql(self, expression):\n+ args = [\n+ f\"{'INPUT' if key == 'this' else key.upper()} => {self.sql(arg)}\"\n+ for key, arg in expression.args.item... | 1055ef902b844063267c3e5b2c608d87d9c99798 | diff --git a/sqlglot/dialects/snowflake.py b/sqlglot/dialects/snowflake.py
index 9149ef4b3d..0c8a4fa631 100644
--- a/sqlglot/dialects/snowflake.py
+++ b/sqlglot/dialects/snowflake.py
@@ -58,7 +58,7 @@ def _snowflake_to_timestamp(args):
return exp.UnixToTime.from_arg_list(args)
-def _unix_to_time(self, expressi... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
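A minimal sketch of the Snowflake `FLATTEN` keyword-argument syntax (`input => ...`) this PR supports; table and column names here are hypothetical, and it assumes a sqlglot version with the fix:

```python
import sqlglot

sql = (
    "SELECT f.value FROM tbl, "
    "LATERAL FLATTEN(input => SPLIT(tbl.csv, ',')) AS f"
)
print(sqlglot.parse_one(sql, read="snowflake").sql(dialect="snowflake"))
```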
tobymao__sqlglot-555@44bba4c | tobymao/sqlglot | Python | 555 | Allow functions after IN | fixes #554 | 2022-10-06T23:44:42Z | IN operation doesn't parse with the clickhouse dialect
clickhouse:
```
albatross :) select 'a' in mapKeys(map('a', 1, 'b', 2));
SELECT 'a' IN mapKeys(map('a', 1, 'b', 2))
Query id: 29c42995-45a2-491c-8b36-cf0e51b2b39b
┌─in('a', mapKeys(map('a', 1, 'b', 2)))─┐
│ 1 │
└──... | [
{
"body": "clickhouse:\r\n\r\n```\r\nalbatross :) select 'a' in mapKeys(map('a', 1, 'b', 2));\r\n\r\nSELECT 'a' IN mapKeys(map('a', 1, 'b', 2))\r\n\r\nQuery id: 29c42995-45a2-491c-8b36-cf0e51b2b39b\r\n\r\n┌─in('a', mapKeys(map('a', 1, 'b', 2)))─┐\r\n│ 1 │\r\n└────────────────... | d486eab830a13572f3c13d7b508279df0a3802cb | {
"head_commit": "44bba4c56543efeb4cb1d3582041b7c36345cfdd",
"head_commit_message": "Allow functions after IN",
"patch_to_review": "diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py\nindex 39ce0af0ca..99c06140b7 100644\n--- a/sqlglot/expressions.py\n+++ b/sqlglot/expressions.py\n@@ -2074,7 +2074,7 @@ c... | [
{
"diff_hunk": "@@ -2074,7 +2074,7 @@ class Distinct(Expression):\n \n \n class In(Predicate):\n- arg_types = {\"this\": True, \"expressions\": False, \"query\": False, \"unnest\": False}\n+ arg_types = {\"this\": True, \"expressions\": False, \"query\": False, \"unnest\": False, \"function\": False}",
... | 66f6178c5a5da716fcc6bfd9ce6843d3a2ae34b3 | diff --git a/sqlglot/expressions.py b/sqlglot/expressions.py
index 39ce0af0ca..53951bffa6 100644
--- a/sqlglot/expressions.py
+++ b/sqlglot/expressions.py
@@ -2074,7 +2074,7 @@ class Distinct(Expression):
class In(Predicate):
- arg_types = {"this": True, "expressions": False, "query": False, "unnest": False}
+ ... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
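The reported ClickHouse query — a function call on the right-hand side of `IN` — as a repro sketch assuming a sqlglot release with this fix:

```python
import sqlglot

sql = "SELECT 'a' IN mapKeys(map('a', 1, 'b', 2))"
print(sqlglot.parse_one(sql, read="clickhouse").sql(dialect="clickhouse"))
```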
tobymao__sqlglot-553@1d0c75c | tobymao/sqlglot | Python | 553 | Export union/intersect/except builders | fixes #540 | 2022-10-06T21:21:42Z | API support for union/intersect/except
Is there a way to construct a union from an existing sqlglot expression?
I tried the following, and hunted around for methods in the codebase but I didn't see anything:
```
In [9]: import sqlglot as sg
In [10]: t = sg.table('t')
In [11]: t.union(t)
------------------... | A quick workaround:
```python
import sqlglot as sg
from sqlglot import expressions as exp
t = sg.table('t')
sel = sg.select('*').from_(t)
union = exp.Union(this=sel, expression=sel.copy())
```
But a `union` builder method would be sweet, yeah.
Ah, great. Thank you. My current workaround was ... not as n... | [
{
"body": "Is there a way to construct a union from an existing sqlglot expression?\r\n\r\nI tried the following, and hunted around for methods in the codebase but I didn't see anything:\r\n\r\n```\r\nIn [9]: import sqlglot as sg\r\n\r\nIn [10]: t = sg.table('t')\r\n\r\nIn [11]: t.union(t)\r\n------------------... | ce00df12633606ba0db62fdf60d309eabf678fb3 | {
"head_commit": "1d0c75cbddec4bbe581ab32a7fe9a3e95e51268c",
"head_commit_message": "Export union/intersect/except builders",
"patch_to_review": "diff --git a/sqlglot/__init__.py b/sqlglot/__init__.py\nindex 8291ec2b8b..fc5358e0a8 100644\n--- a/sqlglot/__init__.py\n+++ b/sqlglot/__init__.py\n@@ -8,7 +8,9 @@\n ... | [
{
"diff_hunk": "@@ -428,6 +428,69 @@ def assert_is(self, type_):\n assert isinstance(self, type_)\n return self\n \n+ def union(self, expr, distinct=True, dialect=None, **opts):",
"line": null,
"original_line": 431,
"original_start_line": null,
"path": "sqlglot/expressions.py"... | b62f920b26088e9bb19770abac4d2b0057950319 | diff --git a/sqlglot/__init__.py b/sqlglot/__init__.py
index 8291ec2b8b..fc5358e0a8 100644
--- a/sqlglot/__init__.py
+++ b/sqlglot/__init__.py
@@ -8,7 +8,9 @@
and_,
column,
condition,
+ except_,
from_,
+ intersect,
maybe_parse,
not_,
or_,
@@ -16,6 +18,7 @@
subquery,
)
fro... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
} |
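A sketch contrasting the thread's workaround with the builder this PR exports, assuming a sqlglot version that includes it:

```python
import sqlglot
from sqlglot import expressions as exp

sel = sqlglot.select("x").from_("t")

# Workaround from the thread: construct the node directly.
manual = exp.Union(this=sel, expression=sel.copy())

# Builder method added by the PR.
built = sel.union(sel.copy(), distinct=True)
print(built.sql())
```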
triton-inference-server__server-3276@064d14d | triton-inference-server/server | Python | 3,276 | http on windows | Related to triton-inference-server/server#3130
PR for third_party: https://github.com/triton-inference-server/third_party/pull/9 | 2021-08-25T14:21:11Z | Support for HTTP endpoint on windows
**Describe the solution you'd like**
Is it possible for the Windows version of Triton to have an HTTP server, like the Linux version does? Currently, only the GRPC endpoint is supported.
**Additional context**
Triton uses libevhtp in its HTTP server implementation, and vcpkg seems to lack li...
{
"body": "**Describe the solution you'd like**\r\nIs it possible for Windows version of Triton to have HTTP server, like the Linux version does? Currently, only GRPC endpoint is supported. \r\n\r\n**Additional context**\r\nTriton uses libevhtp in HTTP server implementation, and vcpkg seems to lack libevhtp for... | 54d31b5c528227590648e2c2e1a4d8f6ceb23225 | {
"head_commit": "064d14d5922318d77b0fda9980cc2389b3f252ce",
"head_commit_message": "dockerfile polishing",
"patch_to_review": "diff --git a/Dockerfile.win10.min b/Dockerfile.win10.min\nindex 50285abd53..8f13b3f79e 100644\n--- a/Dockerfile.win10.min\n+++ b/Dockerfile.win10.min\n@@ -58,11 +58,13 @@ ARG VS_INSTALL_... | [
{
"diff_hunk": "@@ -702,6 +707,11 @@ StartEndpoints(\n const std::shared_ptr<nvidia::inferenceserver::SharedMemoryManager>&\n shm_manager)\n {\n+#ifdef _WIN32\n+ WSADATA wsaData;\n+ WSAStartup(MAKEWORD(2,2), &wsaData);",
"line": null,
"original_line": 712,
"original_start_line": null,
... | 13e367d27d21967fa682f2ab5ffba12a2c9ee1d3 | diff --git a/Dockerfile.win10.min b/Dockerfile.win10.min
index ef85aaed56..0ecfa57e93 100644
--- a/Dockerfile.win10.min
+++ b/Dockerfile.win10.min
@@ -58,11 +58,15 @@ ARG VS_INSTALL_PATH_WP="C:\BuildTools"
RUN powershell.exe Start-Process -FilePath vs_buildtools.exe -ArgumentList "--wait","--quiet","--norestart","--no... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "New Feature Additions"
} |
triton-inference-server__server-2796@7345236 | triton-inference-server/server | Python | 2,796 | Fix memory leak caused by repeated calling Aws::InitAPI | Fixes issues outlined in #2794 | 2021-04-30T16:15:46Z | memory leak in s3 filesystem
**Description**
A clear and concise description of what the bug is.
There is a memory leak in s3 filesystem.
Here is the information given by valgrind
```
==627== 344 bytes in 1 blocks are definitely lost in loss record 1,637 of 2,513
==627== at 0x483B7F3: malloc (in /usr/lib/x86_6... | Thank you for brining this to our attention @yushcs. I have created a [PR](https://github.com/triton-inference-server/server/pull/2796) with the recommended changes here. | [
{
"body": "**Description**\r\nA clear and concise description of what the bug is.\r\nThere is a memory leak in s3 filesystem.\r\nHere is the information given by valgrind\r\n```\r\n==627== 344 bytes in 1 blocks are definitely lost in loss record 1,637 of 2,513\r\n==627== at 0x483B7F3: malloc (in /usr/lib/x86... | cfaaf027070f2c97ceda125d1e7d75c855533ef5 | {
"head_commit": "73452367479897a19aebd944594c5d7b1084a317",
"head_commit_message": "Fix memory leak caused by repeated calling Aws::InitAPI",
"patch_to_review": "diff --git a/src/core/filesystem.cc b/src/core/filesystem.cc\nindex 59a74786e3..9863f4f4bf 100644\n--- a/src/core/filesystem.cc\n+++ b/src/core/filesys... | [
{
"diff_hunk": "@@ -1753,7 +1753,8 @@ GetFileSystem(const std::string& path, FileSystem** file_system)\n \"-DTRITON_ENABLE_S3=ON.\");\n #else\n Aws::SDKOptions options;\n- Aws::InitAPI(options);\n+ std::once_flag onceFlag;",
"line": null,
"original_line": 1756,
"original_start_line... | 64bf01e393b2fdc3a96b5e14245aee8bae081530 | diff --git a/src/core/filesystem.cc b/src/core/filesystem.cc
index 59a74786e3..b77bdc6ac1 100644
--- a/src/core/filesystem.cc
+++ b/src/core/filesystem.cc
@@ -1753,7 +1753,8 @@ GetFileSystem(const std::string& path, FileSystem** file_system)
"-DTRITON_ENABLE_S3=ON.");
#else
Aws::SDKOptions options;
- ... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
tobymao__sqlglot-432@e0ca8a3 | tobymao/sqlglot | Python | 432 | Add trim function | Implemented the trim function based (mostly) on [this](https://www.w3resource.com/sql/character-functions/trim.php).
- As I understand, Hive supports the syntax `[L|R]TRIM(target)`. If that's true, this implementation doesn't cover Hive.
- Do we generally want to provide links in comments (e.g. for sources), like I ... | 2022-09-14T13:12:27Z | [Postgres 14.4] Parser does not recognize 'both' keyword in trim() function
Example (valid sql):
```
select trim(both from ' xxx ');
```
Result:
```
ParseError: Expecting ). Line 1, Col: 18.
select trim(both from ' xxx ');
Expected table name. Line 1, Col: 25.
select trim(both from ' xxx ');
Required ... | [
{
"body": "Example (valid sql):\r\n```\r\nselect trim(both from ' xxx ');\r\n```\r\nResult:\r\n```\r\nParseError: Expecting ). Line 1, Col: 18.\r\n select trim(both from ' xxx ');\r\n\r\nExpected table name. Line 1, Col: 25.\r\n select trim(both from ' xxx ');\r\n\r\nRequired keyword: 'this' missing for <clas... | 44397aba53440cc9b955bad51a4aafb9f20b023d | {
"head_commit": "e0ca8a35beff571ad2b08e1128bdd5d5b991035e",
"head_commit_message": "Add generic trim sql generation",
"patch_to_review": "diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py\nindex 93800a6202..a2d39a020f 100644\n--- a/sqlglot/dialects/mysql.py\n+++ b/sqlglot/dialects/mysql.py\n@@ -... | [
{
"diff_hunk": "@@ -1937,6 +1942,34 @@ def _parse_substring(self):\n \n return this\n \n+ def _parse_trim(self):\n+ # https://www.w3resource.com/sql/character-functions/trim.php\n+ # https://docs.oracle.com/javadb/10.8.3.0/ref/rreftrimfunc.html\n+\n+ position = None\n+ col... | 284646fda227432fb07d90e631bf16d792b1ac63 | diff --git a/sqlglot/dialects/mysql.py b/sqlglot/dialects/mysql.py
index 93800a6202..a2d39a020f 100644
--- a/sqlglot/dialects/mysql.py
+++ b/sqlglot/dialects/mysql.py
@@ -49,6 +49,21 @@ def _str_to_date_sql(self, expression):
return f"STR_TO_DATE({self.sql(expression.this)}, {date_format})"
+def _trim_sql(self... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
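The issue's `trim(both from ...)` form, transpiled as a sketch — assuming a sqlglot release with this PR, which adds a plain-function spelling for dialects like MySQL:

```python
import sqlglot

sql = "SELECT TRIM(BOTH FROM ' xxx ')"
print(sqlglot.transpile(sql, read="postgres", write="mysql"))
```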
tobymao__sqlglot-437@4acbe39 | tobymao/sqlglot | Python | 437 | Support lateral with subqueries | Fixes #424 | 2022-09-15T17:41:15Z | [Postgres 14.4] Parser does not support subquery after lateral keyword
Example (valid sql):
```
select art from table1 inner join lateral (select art from table2) table2 on table2.art=table2.art
```
Result:
```
ParseError: Expected table name. Line 1, Col: 35.
select art from table1 inner join lateral (select ... | [
{
"body": "Example (valid sql):\r\n```\r\nselect art from table1 inner join lateral (select art from table2) table2 on table2.art=table2.art\r\n```\r\nResult:\r\n```\r\nParseError: Expected table name. Line 1, Col: 35.\r\n select art from table1 inner join lateral (select art from table2) table2 on table2.art=... | 6a91ee30b5444899e985d6416b02c51d2fe66cb5 | {
"head_commit": "4acbe398ef647f95fc5b1539aca8313f2cb4e00f",
"head_commit_message": "Support lateral with subqueries",
"patch_to_review": "diff --git a/sqlglot/generator.py b/sqlglot/generator.py\nindex b569e12354..bb4d67647b 100644\n--- a/sqlglot/generator.py\n+++ b/sqlglot/generator.py\n@@ -630,6 +630,8 @@ def ... | [
{
"diff_hunk": "@@ -1105,7 +1109,12 @@ def _parse_join(self):\n if not self._match(TokenType.JOIN):\n return None\n \n- kwargs = {\"this\": self._parse_table()}\n+ this = (\n+ self._parse_lateral()\n+ if self._curr and self._curr.token_type == TokenType.LA... | 0114c30555b26bd8ab0097ff447891d021fcec73 | diff --git a/sqlglot/generator.py b/sqlglot/generator.py
index b569e12354..bb4d67647b 100644
--- a/sqlglot/generator.py
+++ b/sqlglot/generator.py
@@ -630,6 +630,8 @@ def lambda_sql(self, expression):
def lateral_sql(self, expression):
this = self.sql(expression, "this")
+ if isinstance(expressio... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} | |
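The reported query shape — a parenthesized subquery after `LATERAL` in a join — as a repro sketch assuming a sqlglot version with the fix (the `ON` condition here is adjusted to compare across the two tables):

```python
import sqlglot

sql = (
    "SELECT art FROM table1 "
    "INNER JOIN LATERAL (SELECT art FROM table2) table2 "
    "ON table2.art = table1.art"
)
print(sqlglot.parse_one(sql, read="postgres").sql(dialect="postgres"))
```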
triton-inference-server__server-2736@bfd04db | triton-inference-server/server | Python | 2,736 | Support data compression in HTTP | 2021-04-16T16:42:29Z | Support input/output compression
**Is your feature request related to a problem? Please describe.**
Some algorithms may require large amounts of input data, which could saturate network bandwidth for clients. (Generative or other algorithms may correspondingly return large amounts of output data.) It could be very use... | GRPC has some compression options. Would you just like those exposed? HTTP would need a custom solution on both client and server side. Do you have any specific solution(s), compressions, etc. in mind?
Yes, exposing the gRPC compression options would be a great starting point. (We're not using the HTTP client right no... | [
{
"body": "**Is your feature request related to a problem? Please describe.**\r\nSome algorithms may require large amounts of input data, which could saturate network bandwidth for clients. (Generative or other algorithms may correspondingly return large amounts of output data.) It could be very useful in these... | c3e014114e7701b28fc0e67e5fe329ab34b8d794 | {
"head_commit": "bfd04db0ebec957aa33d87c3a48700c9a628355c",
"head_commit_message": "Fix up",
"patch_to_review": "diff --git a/Dockerfile.QA b/Dockerfile.QA\nindex 4668122db1..27b7108967 100644\n--- a/Dockerfile.QA\n+++ b/Dockerfile.QA\n@@ -119,7 +119,10 @@ RUN mkdir -p qa/common && \\\n cp /tmp/tritonbuild/t... | [
{
"diff_hunk": "@@ -2471,6 +2497,11 @@ HTTPAPIServer::InferRequestClass::InferResponseComplete(\n HTTPAPIServer::InferRequestClass* infer_request =\n reinterpret_cast<HTTPAPIServer::InferRequestClass*>(userp);\n \n+ // Always specify supported compression as Accept-Encoding\n+ evhtp_headers_add_header... | 9271fd8e440529e5e23eb3a53e4ce8f994f767c8 | diff --git a/Dockerfile.QA b/Dockerfile.QA
index 4668122db1..27b7108967 100644
--- a/Dockerfile.QA
+++ b/Dockerfile.QA
@@ -119,7 +119,10 @@ RUN mkdir -p qa/common && \
cp /tmp/tritonbuild/tritonserver/build/test-util/install/bin/triton_repo_agent_test qa/L0_triton_repo_agent/. && \
cp /tmp/tritonbuild/tritons... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
} | |
tobymao__sqlglot-74@e938ec1 | tobymao/sqlglot | Python | 74 | lambda fix for single unbracketed arguments | PR to fix #73
I know this works for the specific use cases I'm aware of where the desired (or at least allowed) syntax is always `x ->`.
If there is a dialect where `x ->` is incorrect and only `(x) ->` is allowed then this generic application of the fix may not work. | 2022-02-16T13:29:12Z | Arguments to lambda functions wrapped in brackets even with only one argument
```python
sql = """
transform(array('a','b','c'), X -> upper(X))
"""
import sqlglot
sqlglot.transpile(sql)
```
```
> ["TRANSFORM(ARRAY('a', 'b', 'c'), (X) -> UPPER(X))"]
```
I discovered this as it was breaking some pyspark quer... | [
{
"body": "```python\r\nsql = \"\"\"\r\ntransform(array('a','b','c'), X -> upper(X))\r\n\"\"\"\r\n\r\nimport sqlglot\r\nsqlglot.transpile(sql)\r\n```\r\n```\r\n> [\"TRANSFORM(ARRAY('a', 'b', 'c'), (X) -> UPPER(X))\"]\r\n```\r\n\r\nI discovered this as it was breaking some pyspark queries. Minimal pyspark exampl... | 1d2d2be824b26a7859d88a858921103b150ef624 | {
"head_commit": "e938ec19efaa54d34ec5979c984a71f2180ac4dd",
"head_commit_message": "reformatting to pass checks",
"patch_to_review": "diff --git a/sqlglot/generator.py b/sqlglot/generator.py\nindex f0ae91f6f4..74d84cba20 100644\n--- a/sqlglot/generator.py\n+++ b/sqlglot/generator.py\n@@ -423,8 +423,11 @@ def joi... | [
{
"diff_hunk": "@@ -280,3 +280,13 @@ def test_error_level(self, logger):\n \n with self.assertRaises(ParseError):\n transpile(\"x + 1 (\")\n+\n+ def test_lambda(self):",
"line": null,
"original_line": 284,
"original_start_line": null,
"path": "tests/test_transpile.py",
... | bf3f572b3615cf01132236955a66795d08ae5989 | diff --git a/sqlglot/generator.py b/sqlglot/generator.py
index f0ae91f6f4..e72482391f 100644
--- a/sqlglot/generator.py
+++ b/sqlglot/generator.py
@@ -423,9 +423,9 @@ def join_sql(self, expression):
return f"{expression_sql}{op_sql} {this_sql}{on_sql}"
def lambda_sql(self, expression):
- return s... | {
"difficulty": "medium",
"estimated_review_effort": 2,
"problem_domain": "Code Refactoring / Architectural Improvement"
} | |
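The issue's own snippet, showing the single-argument lambda output before and after the fix:

```python
import sqlglot

# Before: ["TRANSFORM(ARRAY('a', 'b', 'c'), (X) -> UPPER(X))"]
# After:  ["TRANSFORM(ARRAY('a', 'b', 'c'), X -> UPPER(X))"]
print(sqlglot.transpile("transform(array('a','b','c'), X -> upper(X))"))
```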
triton-inference-server__server-1109@a35fb61 | triton-inference-server/server | Python | 1,109 | Add queue policy settings in dynamic batch scheduler. Add priority level for policy queues | 2020-02-14T19:08:46Z | Server Queue
I have read the documentation and did not find any place talking about `Server Queue Size`. As far as I understood from the `TRTIS Architecture`, incoming inference requests are queued by `Model Schedulers` and when `Execution Context` is available, the request is passed for inference. I would like to know... | The scheduling queue doesn't have a fixed size, it grows to accommodate all pending requests. The latency for these requests will increase dramatically if the server is overloaded. Typically, something "upstream" of TRTIS should notice this increase in latency and scale / load-balance.
If TRTIS implemented a maximum... | [
{
"body": "I have read the documentation and did not find any place talking about `Server Queue Size`. As far as I understood from the `TRTIS Architecture`, incoming inference requests are queued by `Model Schedulers` and when `Execution Context` is available, the request is passed for inference. I would like t... | 0a5d71a7269d0d6be2f25326e93bd5b22157d8d9 | {
"head_commit": "a35fb6144d29750204ea1e435fbdc8c888175f04",
"head_commit_message": "Fix FIXME",
"patch_to_review": "diff --git a/src/backends/onnx/loader.cc b/src/backends/onnx/loader.cc\nindex 5847bdcdf9..3908913a3d 100644\n--- a/src/backends/onnx/loader.cc\n+++ b/src/backends/onnx/loader.cc\n@@ -28,10 +28,10 @... | [
{
"diff_hunk": "@@ -97,4 +99,282 @@ CompareWithPendingShape(\n return true;\n }\n \n+Status\n+PriorityQueue::PolicyQueue::Enqueue(Scheduler::Payload&& payload)\n+{\n+ if ((max_queue_size_ != 0) && (Size() >= max_queue_size_)) {\n+ return Status(RequestStatusCode::UNAVAILABLE, \"Exceeds maximum queue size\... | b0fb26a0f480950f214ffa1ae1847ca8c5930235 | diff --git a/src/backends/onnx/loader.cc b/src/backends/onnx/loader.cc
index 5847bdcdf9..3908913a3d 100644
--- a/src/backends/onnx/loader.cc
+++ b/src/backends/onnx/loader.cc
@@ -28,10 +28,10 @@
#include <future>
#include <thread>
-#include "src/core/logging.h"
#include "src/backends/onnx/onnx_utils.h"
#include "... | {
"difficulty": "high",
"estimated_review_effort": 4,
"problem_domain": "New Feature Additions"
} | |
xorbitsai__inference-3625@58611dc | xorbitsai/inference | Python | 3,625 | BUG: fix TTS error bug :No such file or directory | Fix: TTS error bug, No such file or directory
Fixes #3579 . | 2025-06-13T01:23:22Z | FileNotFoundError: [Errno 2] No such file or directory: '/tmp/file
### System Info / 系統信息
Windows11
Conda python 3.10.16
### Running Xinference with Docker? / 是否使用 Docker 运行 Xinference?
- [ ] docker / docker
- [x] pip install / 通过 pip install 安装
- [ ] installation from source / 从源码安装
### Version info / 版本信息
latest... | Can you paste the error stack?
In xinference\core\media_interface line 667, /tmp/uuid.mp3 is not a file or directory. # @qinxuye
Oh, I see, this path should be processed with `tempfile` module. Are you interested in contributing?
This is not solved, I will reopen it.
I mean, it's welcome for you to send a PR to solve... | [
{
"body": "### System Info / 系統信息\n\nWindows11\nConda python 3.10.16\n\n### Running Xinference with Docker? / 是否使用 Docker 运行 Xinfernece?\n\n- [ ] docker / docker\n- [x] pip install / 通过 pip install 安装\n- [ ] installation from source / 从源码安装\n\n### Version info / 版本信息\n\nlatest\n\n### The command used to start X... | da2040e54c18c80ae88c64608de0081fa6df54c4 | {
"head_commit": "58611dc391e8a76963481f74ee9235e95044cfd9",
"head_commit_message": "修复TTS报错问题:No such file or directory",
"patch_to_review": "diff --git a/xinference/core/media_interface.py b/xinference/core/media_interface.py\nindex c539912065..a96eb8f7f4 100644\n--- a/xinference/core/media_interface.py\n+++ b/... | [
{
"diff_hunk": "@@ -784,9 +784,10 @@ def tts_generate(\n )\n \n # Write to a temp .mp3 file and return its path\n- audio_path = f\"/tmp/{uuid.uuid4()}.mp3\"\n- with open(audio_path, \"wb\") as f:\n- f.write(response)\n+ # Get the current TE... | 82efe7160100cd7bf779fd8199da47c6d717cc60 | diff --git a/xinference/core/media_interface.py b/xinference/core/media_interface.py
index c539912065..bc9b2b453d 100644
--- a/xinference/core/media_interface.py
+++ b/xinference/core/media_interface.py
@@ -16,6 +16,7 @@
import io
import logging
import os
+import tempfile
import threading
import time
import uuid
... | {
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "Bug Fixes"
} |
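A sketch of the cross-platform temp path the merged patch moves to (it imports `tempfile` instead of hard-coding a `/tmp` prefix); the placeholder bytes stand in for the real TTS response:

```python
import os
import tempfile
import uuid

# Works on Windows too, unlike the hard-coded f"/tmp/{uuid}.mp3" path.
audio_path = os.path.join(tempfile.gettempdir(), f"{uuid.uuid4()}.mp3")
with open(audio_path, "wb") as f:
    f.write(b"")  # placeholder; the real code writes the TTS response bytes
```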
xorbitsai__inference-3442@8e324b8 | xorbitsai/inference | Python | 3,442 | FEAT: llama.cpp backend support multimodal | - llama.cpp backend support multimodal projectors
- gemma3 gguf support multimodal
- makes the inference error clear
Fixes: https://github.com/xorbitsai/inference/issues/3416 | 2025-05-13T14:57:44Z | Error when recognizing images while running gemma3 with llama
### System Info / 系統信息
Official docker image v1.5.1
### Running Xinference with Docker? / 是否使用 Docker 运行 Xinference?
- [ ] docker / docker
- [ ] pip install / 通过 pip install 安装
- [ ] installation from source / 从源码安装
### Version info / 版本信息
Official docker image, version 1.5.1
### The command used to start Xinference / ... | @codingl2k1 Does llama.cpp support gemma-3 image input yet?
> [@codingl2k1](https://github.com/codingl2k1) Does llama.cpp support gemma-3 image input yet?
The llama server does not support multimodal yet (wip): https://github.com/ggml-org/llama.cpp/tree/master/tools/server
There is libmtmd, which provides multimodal functionality, but it is not a complete server (the multimodal support in the server above is still in development)
OK (facepalm), thanks for the explanation | [
{
"body": "### System Info / 系統信息\n\n官方docker镜像v1.5.1\n\n### Running Xinference with Docker? / 是否使用 Docker 运行 Xinfernece?\n\n- [ ] docker / docker\n- [ ] pip install / 通过 pip install 安装\n- [ ] installation from source / 从源码安装\n\n### Version info / 版本信息\n\n官方docker镜像,版本1.5.1\n\n### The command used to start Xinf... | 1adc5d3e5cffb2752cd3e05ca782c4cfe3c0ce57 | {
"head_commit": "8e324b8c4c871313b6a8cff82133616527c8b8f4",
"head_commit_message": "feat: [UI] add the multimodal_projector parameter",
"patch_to_review": "diff --git a/.github/workflows/python.yaml b/.github/workflows/python.yaml\nindex 60b181f074..9bdf77d6fd 100644\n--- a/.github/workflows/python.yaml\n+++ b/.... | [
{
"diff_hunk": "@@ -1014,6 +1015,7 @@ async def launch_model(\n \"replica\",\n \"n_gpu\",\n \"request_limits\",\n+ \"multimodal_projector\",",
"line": null,
"original_line": 1018,
"original_start_line": null,
"path": "xinference/api/restful_api.py",... | d7d6022d3ce2e1841bc0b131c2efa6ced6d959c5 | diff --git a/.github/workflows/python.yaml b/.github/workflows/python.yaml
index 60b181f074..9bdf77d6fd 100644
--- a/.github/workflows/python.yaml
+++ b/.github/workflows/python.yaml
@@ -125,7 +125,7 @@ jobs:
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
fi
pip install -e ".[dev]"
- pi... | {
"difficulty": "high",
"estimated_review_effort": 4,
"problem_domain": "Bug Fixes"
} |
sympy__sympy-28148@3098ae7 | sympy/sympy | Python | 28,148 | Fix limit evaluation for 2**x and E**x at -oo | Fix incorrect limit for exponential functions with numeric base
#### References to other Issues or PRs
Fixes #28130
#### Brief description of what is fixed or changed
This PR fixes a bug in the evaluation of limits for exponential expressions like `2**x` as `x → -oo`, which previously returned `oo` instead ... | 2025-06-14T14:02:48Z | limit of exponential with non-E base gives oo instead of zero
Taking a limit of an exponential function with non-E base gives oo instead of zero:
```python
In [5]: limit(E**x, x, -oo)
Out[5]: 0
In [6]: limit(2**x, x, -oo)
Out[6]: ∞
``` | I’ve been looking into this issue and I think I’ve found a possible explanation.
The problem seems to stem from how the `Limit.doit()` method handles the `z0` (the limit point). When `z0` is infinite , the method updates `z0` locally in this block:
```python
if z0.is_infinite:
cdir = sign(z0)
cdir = cdir / ab... | [
{
"body": "Taking a limit of an exponential function with non-E base gives oo instead of zero:\n```python\nIn [5]: limit(E**x, x, -oo)\nOut[5]: 0\n\nIn [6]: limit(2**x, x, -oo)\nOut[6]: ∞\n```",
"number": 28130,
"title": "limit of exponential with non-E base gives oo instead of zero"
}
] | e17302a7e58f564550f0517061bf9ad376e5957f | {
"head_commit": "3098ae7b90a659f8e7cd90c7b75aab05f16472f9",
"head_commit_message": "Added entries to .mailmap",
"patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 42cf0d8f59df..b038691709cf 100644\n--- a/.mailmap\n+++ b/.mailmap\n@@ -781,6 +781,8 @@ Jason Siefken <siefkenj@gmail.com>\n Jason Tokayer <ja... | [
{
"diff_hunk": "@@ -781,6 +781,8 @@ Jason Siefken <siefkenj@gmail.com>\n Jason Tokayer <jason.tokayer@gmail.com>\n Jason Tokayer <jason.tokayer@gmail.com> <jason.tokayer@capitalone.com>\n Jatin Bhardwaj <bhardwajjatin093@gmail.com> <148186488+Jatinbhardwaj-093@users.noreply.github.com>\n+Jatin Gaur <jatin22101@... | 0dab531b8485461c6a8e7336ccf94f588924e428 | diff --git a/.mailmap b/.mailmap
index 42cf0d8f59df..19d6de67ba1f 100644
--- a/.mailmap
+++ b/.mailmap
@@ -781,6 +781,8 @@ Jason Siefken <siefkenj@gmail.com>
Jason Tokayer <jason.tokayer@gmail.com>
Jason Tokayer <jason.tokayer@gmail.com> <jason.tokayer@capitalone.com>
Jatin Bhardwaj <bhardwajjatin093@gmail.com> <148... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
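The issue's repro, runnable against a SymPy version with this fix:

```python
from sympy import E, limit, oo, symbols

x = symbols('x')
print(limit(E**x, x, -oo))  # 0
print(limit(2**x, x, -oo))  # 0 after the fix (previously returned oo)
```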
sympy__sympy-28081@1b4ace2 | sympy/sympy | Python | 28,081 | codegen: Fix order parameter handling in reshape function | #### Brief description of what is fixed or changed
Fixed a bug in the reshape function in sympy/codegen/fnodes.py where the order parameter was incorrectly conditioned on the pad parameter instead of itself. This caused the order parameter to be ignored when pad was None, even if order was provided.
The issue was... | 2025-05-22T14:38:21Z | Possible error in codegen fnodes.py
Hi,
not sure if it is an error but in codegen/fnodes.py, line 505 it says:
`([_printable(order)] if pad else [])`
instead of what makes more sense and is consistent with the rest
`([_printable(order)] if order else [])` | Yes, that looks like a bug. Those nodes probably lack complete test coverage. Are you using fortran? A pull request adding test cases would be most welcome.
I'm not using Fortran, I just stumbled upon it while formatting the code.
Hi @bjodah @PhilP1988 @oscarbenjamin ,
I've investigated this issue and can say that the... | [
{
"body": "Hi,\nnot sure if it is an error but in codegen/fnodes.py, line 505 it says:\n`([_printable(order)] if pad else [])`\ninstead of what makes more sense and is consistent with the rest\n`([_printable(order)] if order else [])`",
"number": 28029,
"title": "Possible error in codegen fnodes.py"
}... | 1558c006a584870b74d910c2371f4fa39545e739 | {
"head_commit": "1b4ace2bf9289f32336916d650df9cbcf7a4901f",
"head_commit_message": "refactor: Remove unused import of Dict from ast.py",
"patch_to_review": "diff --git a/sympy/codegen/ast.py b/sympy/codegen/ast.py\nindex dd774ca87c5c..329b77eb4c6b 100644\n--- a/sympy/codegen/ast.py\n+++ b/sympy/codegen/ast.py\n@... | [
{
"diff_hunk": "@@ -772,6 +772,23 @@ def _print_ArrayConstructor(self, ac):\n fmtstr = \"[%s]\" if self._settings[\"standard\"] >= 2003 else '(/%s/)'\n return fmtstr % ', '.join((self._print(arg) for arg in ac.elements))\n \n+ def _print_FunctionCall(self, expr):",
"line": null,
"orig... | ad70c1e9b7033f4e52a0dacf885d2d9f647564cc | diff --git a/sympy/codegen/ast.py b/sympy/codegen/ast.py
index dd774ca87c5c..329b77eb4c6b 100644
--- a/sympy/codegen/ast.py
+++ b/sympy/codegen/ast.py
@@ -1892,6 +1892,41 @@ class FunctionCall(Token, Expr):
_construct_function_args = staticmethod(lambda args: Tuple(*args))
+class KeywordFunctionCall(FunctionCa... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
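A self-contained sketch (a hypothetical helper, not SymPy's actual code) of the bug class described above — the `order` argument being gated on `pad` instead of on itself:

```python
def reshape_args(source, shape, pad=None, order=None):
    # Buggy form from fnodes.py:  ([order] if pad else [])
    # Corrected form:             ([order] if order else [])
    return [source, shape] + ([pad] if pad else []) + ([order] if order else [])

# With pad omitted, order is no longer dropped:
print(reshape_args("a", (2, 2), order="f"))  # ['a', (2, 2), 'f']
```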
xorbitsai__inference-1379@43a7263 | xorbitsai/inference | Python | 1,379 | feat: add phi-3-mini series | Resolve #1371
Add phi-3-mini-128k-instruct and phi-3-mini-4k-instruct. | 2024-04-25T12:57:05Z | FEAT: support phi-3 model
### Is your feature request related to a problem? Please describe
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
### Describe the solution you'd like
A clear and concise description of what you want to happen.
### Describe alternatives you'... | [
{
"body": "### Is your feature request related to a problem? Please describe\r\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\r\n\r\n### Describe the solution you'd like\r\nA clear and concise description of what you want to happen.\r\n\r\n### Describe alternatives... | 46627a6f0a9509d988fdd49dc9246a3fab1dd79f | {
"head_commit": "43a72635efb59104de5e386add00154edb8b3f45",
"head_commit_message": "add docs of phi-3-mini",
"patch_to_review": "diff --git a/doc/source/models/builtin/llm/phi-3-mini-128k-instruct.rst b/doc/source/models/builtin/llm/phi-3-mini-128k-instruct.rst\nnew file mode 100644\nindex 0000000000..a6b75097d1... | [
{
"diff_hunk": "@@ -461,6 +461,66 @@\n }\n ]\n },\n+ {\n+ \"version\": 1,\n+ \"context_length\": 128000,\n+ \"model_name\": \"phi-3-mini-128k-instruct\",\n+ \"model_lang\": [\n+ \"en\"\n+ ],\n+ \"model_ability\": [\n+ \"generate\"",
"line": null,
"original_line":... | 6c8b3462fa4ae818804351736a52fe0b77a53535 | diff --git a/doc/source/models/builtin/llm/index.rst b/doc/source/models/builtin/llm/index.rst
index 168aa45fb8..f371139ad5 100644
--- a/doc/source/models/builtin/llm/index.rst
+++ b/doc/source/models/builtin/llm/index.rst
@@ -311,6 +311,16 @@ The following is a list of built-in LLM in Xinference:
- 2048
- ... | {
"difficulty": "low",
"estimated_review_effort": 2,
"problem_domain": "New Feature Additions"
} | |
sympy__sympy-27850@3cc2d18 | sympy/sympy | Python | 27,850 | Update old type annotation syntax to new TypeHints | <!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
Fixes #27845
#### Brief description of what is fixed or changed
This PR modernizes type annotations in the SymPy codebase by:
- Convert "# type... | 2025-03-30T02:20:20Z | Remove old type annotation syntax
When the first type annotations were added to the sympy codebase it was not possible to use the current Python type annotation syntax, so there is a mixture of things like type comments:
```
$ git grep '# type: int'
sympy/codegen/fnodes.py: _decimals = None # type: int
sympy/matric... | To use the new syntax it is necessary to add `from __future__ import annotations` at the top of the files for compatibility with Python 3.8.
@oscarbenjamin I wouldn't mind working on this. What did you have in mind for this:
> I don't think that the assignment to None is needed either.
Setting to an initial value o... | [
{
"body": "When the first type annotations were added to the sympy codebase it was not possible to use the current Python type annotation syntax so there are a mixture of things like type comments:\n```\n$ git grep '# type: int'\nsympy/codegen/fnodes.py: _decimals = None # type: int\nsympy/matrices/common.p... | 9ea536ddd5c1f9383908debfc4c2718c83ed5344 | {
"head_commit": "3cc2d18d3fb53c02af690e1bfdd6fc49fc67d737",
"head_commit_message": "Removed tTuple from core/evalf.py\n\nSigned-off-by: Nicholas Laustrup <124007393+nicklaustrup@users.noreply.github.com>",
"patch_to_review": "diff --git a/.mailmap b/.mailmap\nindex 5ba2c8bf0c07..d41eb5d62045 100644\n--- a/.mailm... | [
{
"diff_hunk": "@@ -1193,7 +1194,7 @@ def evalf_integral(expr: 'Integral', prec: int, options: OPT_DICT) -> TMP_RES:\n return result\n \n \n-def check_convergence(numer: 'Expr', denom: 'Expr', n: 'Symbol') -> tTuple[int, Any, Any]:\n+def check_convergence(numer: Expr, denom: Expr, n: 'Symbol') -> tuple[int,... | 26d8e2bedd0f5f791f7dc2ebdff224fd3317cec6 | diff --git a/.mailmap b/.mailmap
index 5ba2c8bf0c07..d41eb5d62045 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1085,6 +1085,7 @@ Nguyen Truong Duy <truongduy134@yahoo.com>
Nichita Utiu <nikita.utiu+github@gmail.com> <nichitautiu@nichitautiu-desktop.(none)>
Nicholas Bollweg <nick.bollweg@gmail.com> <nbollweg@continuum.io>... | {
"difficulty": "medium",
"estimated_review_effort": 4,
"problem_domain": "Code Refactoring / Architectural Improvement"
} |
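A small sketch of the migration the issue describes, using the `from __future__ import annotations` import mentioned in the hints; the `_decimals` attribute name is borrowed from the issue's grep output:

```python
from __future__ import annotations


class Example:
    # Old style, as found in the codebase:
    #     _decimals = None  # type: int
    # New style; the __future__ import keeps it valid on Python 3.8:
    _decimals: int | None = None
```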
sympy__sympy-27741@cf7fec3 | sympy/sympy | Python | 27,741 | Fix IntegerPredicate Handling for Pow and Mul Expressions | <!-- Your title above should be a short description of what
was changed. Do not include the issue number in the title. -->
#### References to other Issues or PRs
<!-- If this pull request fixes an issue, write "Fixes #NNNN" in that exact
format, e.g. "Fixes #1234" (see
https://tinyurl.com/auto-closing for more... | 2025-03-12T07:57:52Z | BUG: Incorrect Behavior in `ask` for Integer Division
#### **Problem Description**
When using `ask(Q.integer(x/y), Q.integer(x) & Q.integer(y))`, SymPy incorrectly returns `True`, implying that the division of two integers is always an integer.
#### **Minimal Reproducible Example**
```python
from sympy import symbols,... | Note that the old assumptions currently return ```None``` here:
```
x, y = symbols('x y', integer=True)
print((x/y).is_integer) # Got:None
```
Please note this is an issue with the ```Mul``` handler for ```IntegerPredicate```.
@TiloRC
I have linked the PR that fixes this issue. Please do review it. Thanks.
> Please ... | [
{
"body": "#### **Problem Description**\nWhen using `ask(Q.integer(x/y), Q.integer(x) & Q.integer(y))`, SymPy incorrectly returns `True`, implying that the division of two integers is always an integer.\n\n#### **Minimal Reproducible Example**\n```python\nfrom sympy import symbols, ask, Q\n\nx, y = symbols('x y... | 8e48eb8dfb4dcaccc42c93e98dfd1ebcf5d8eb4a | {
"head_commit": "cf7fec37fe8058fc9b2ab3d618ecee3bd834adf3",
"head_commit_message": "Remove redundant ask query",
"patch_to_review": "diff --git a/sympy/assumptions/handlers/common.py b/sympy/assumptions/handlers/common.py\nindex b89ffe8402e7..f6e9f6f321be 100644\n--- a/sympy/assumptions/handlers/common.py\n+++ b... | [
{
"diff_hunk": "@@ -1814,7 +1827,7 @@ def test_odd_query():\n assert ask(Q.odd(3**k), Q.even(k)) is None\n \n assert ask(Q.odd(k**m), Q.even(k) & Q.integer(m) & ~Q.negative(m)) is None\n- assert ask(Q.odd(n**m), Q.odd(n) & Q.integer(m) & ~Q.negative(m)) is True\n+ assert ask(Q.odd(n**m), Q.odd(n) ... | 1cf088b420e0488e1e0a3d7a72cad2d293802309 | diff --git a/sympy/assumptions/handlers/common.py b/sympy/assumptions/handlers/common.py
index b89ffe8402e7..f6e9f6f321be 100644
--- a/sympy/assumptions/handlers/common.py
+++ b/sympy/assumptions/handlers/common.py
@@ -5,7 +5,7 @@
from sympy.assumptions import Q, ask, AppliedPredicate
from sympy.core import Basic, ... | {
"difficulty": "medium",
"estimated_review_effort": 3,
"problem_domain": "Bug Fixes"
} |
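The issue's minimal example, runnable against a SymPy version with this fix:

```python
from sympy import Q, ask, symbols

x, y = symbols('x y')
# After the fix this returns None (unknown) rather than True,
# since the quotient of two integers need not be an integer.
print(ask(Q.integer(x/y), Q.integer(x) & Q.integer(y)))
```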