Releases: pinecone-io/pinecone-python-client
Release v8.1.2
Release v8.1.1
Bug fixes
- Fix crash when delete() receives an empty response body — The asyncio delete() and delete_namespace() methods could crash with an AttributeError when the server returned an empty response body. These methods now return None gracefully instead of crashing. (#623, fixes #564)
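The failure mode here is a response parser that assumes a JSON body is always present. A minimal sketch of the defensive pattern, assuming a raw bytes body (this is an illustration of the idea, not the SDK's actual parsing code):

```python
import json

def parse_response_body(raw):
    """Return the decoded JSON body, or None when the server sent no body.

    Illustrates the fix: an empty body maps to None instead of raising
    an error further downstream.
    """
    if not raw:  # b"" or None: nothing to decode
        return None
    return json.loads(raw)

print(parse_response_body(b""))           # None, no AttributeError downstream
print(parse_response_body(b'{"ok": 1}'))  # {'ok': 1}
```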
Security & dependency updates
- Bump orjson minimum to 3.11.6 (CVE-2025-67221) (#625)
- Bump aiohttp to 3.13.5 in lockfile (CVE-2026-22815) (#630)
- Bump pygments to 2.20.0 in lockfile (ReDoS fix) (#628)
- Add explicit GITHUB_TOKEN permissions to workflow files (#629)
- Bump minimatch to 3.1.5 in bump-version action (#618)
- Bump picomatch to 2.3.2 in bump-version action (#627)
Full Changelog: v8.1.0...v8.1.1
v8.1.0
This release adds support for creating and configuring index read_capacity for BYOC indexes:
```python
import pinecone
from pinecone import ByocSpec

pc = pinecone.Pinecone(api_key="YOUR_API_KEY")

# Create a BYOC index with OnDemand read capacity
pc.create_index(
    name="my-byoc-index",
    dimension=1536,
    spec=ByocSpec(
        environment="my-byoc-env",
        read_capacity={"mode": "OnDemand"},
    )
)

# Create a BYOC index with Dedicated read capacity
pc.create_index(
    name="my-byoc-index",
    dimension=1536,
    spec=ByocSpec(
        environment="my-byoc-env",
        read_capacity={
            "mode": "Dedicated",
            "dedicated": {
                "node_type": "b1",
                "scaling": "Manual",
                "manual": {"replicas": 2},
            },
        },
    )
)
```

The following user-facing types have been added or updated to support this:
- `ByocSpec` — now accepts optional `read_capacity` and `schema` fields
- `ReadCapacityDict` — union alias for the two read capacity modes below
- `ReadCapacityOnDemandDict` — `{"mode": "OnDemand"}`
- `ReadCapacityDedicatedDict` — `{"mode": "Dedicated", "dedicated": ReadCapacityDedicatedConfigDict}`
- `ReadCapacityDedicatedConfigDict` — `{"node_type": str, "scaling": str, "manual": ScalingConfigManualDict}`
- `ScalingConfigManualDict` — `{"shards": int, "replicas": int}`
- `MetadataSchemaFieldConfig` — `{"filterable": bool}`, used with the `schema` field on `ByocSpec`
All of the above are exported from the top-level pinecone module.
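As a quick illustration of these shapes, the Dedicated payload from the example above can be assembled piece by piece as plain dictionaries (the comments name the matching exported aliases):

```python
# ScalingConfigManualDict: shard/replica counts for manual scaling
manual = {"replicas": 2}

# ReadCapacityDedicatedConfigDict: node type plus scaling configuration
dedicated = {"node_type": "b1", "scaling": "Manual", "manual": manual}

# ReadCapacityDedicatedDict: the full read_capacity value
read_capacity = {"mode": "Dedicated", "dedicated": dedicated}

# ReadCapacityOnDemandDict is simply:
on_demand = {"mode": "OnDemand"}
```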
Support for scan_factor and max_candidates has been added to Index.query() and Index.query_namespaces():
```python
# scan_factor widens the IVF scan to trade latency for higher recall
# max_candidates controls how many candidates are reranked with exact distances
results = index.query(
    vector=[...],
    top_k=10,
    scan_factor=2.0,
    max_candidates=500,
)
```

Both parameters are optional and only take effect on dedicated read node (DRN) dense indexes. scan_factor adjusts how much of the IVF index is scanned when gathering vector candidates, and max_candidates caps the number of candidates that undergo exact-distance reranking to improve recall.
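One way to tune these knobs is to sweep scan_factor values and measure recall against a set of known-relevant ids. A hypothetical helper (recall_sweep is not part of the SDK; query_fn stands in for a bound index.query and is assumed to accept the scan_factor and max_candidates keyword arguments described above):

```python
def recall_sweep(query_fn, vector, relevant_ids, scan_factors, top_k=10):
    """Return {scan_factor: recall_fraction} for each setting tried.

    query_fn stands in for index.query; it must accept the
    scan_factor / max_candidates keyword arguments described above.
    """
    relevant = set(relevant_ids)
    recalls = {}
    for sf in scan_factors:
        res = query_fn(vector=vector, top_k=top_k,
                       scan_factor=sf, max_candidates=500)
        found = {m["id"] for m in res["matches"]}
        recalls[sf] = len(found & relevant) / len(relevant)
    return recalls
```

For a real index you would pass query_fn=index.query and a query vector whose true nearest neighbors you already know, then pick the smallest scan_factor that reaches your recall target.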
What's Changed
- Regenerate code from 2025-10, implement `schema`/`read_capacity` in `BYOCSpec` by @austin-denoble in #614
- Implement `scan_factor` and `max_candidates` for `query` by @austin-denoble in #617
Full Changelog: v8.0.1...v8.1.0
v8.0.1
Security
🔒 Fixed Protobuf Denial-of-Service Vulnerability (CVE-2025-4565)
Updated protobuf dependency to address a denial-of-service vulnerability when parsing deeply nested recursive structures in a Pure-Python backend.
Affected users: Only users of the grpc extras (pip install pinecone[grpc]) and PineconeGRPC client will be affected by the change. Users of the default REST client (Pinecone) are not affected.
Changes:
- Upgraded `protobuf` from `5.x` to `6.33.0+`
- Upgraded `googleapis-common-protos` from `1.66.0` to `1.72.0+` for compatibility
- Regenerated gRPC code with protobuf v33.0
Impact:
- Breaking Change: Minimum protobuf version is now `6.33.0` (was `5.29.5`)
- Users with pinned protobuf versions `<6.33.0` will need to upgrade
- No API or functionality changes for end users
- All existing code continues to work with the new protobuf version
References:
Release v8.0.0
Upgrading from 7.x to 8.x
The v8 release of the Pinecone Python SDK has been published as pinecone to PyPI.
With a few exceptions noted below, nearly all changes are additive and non-breaking. The major version bump primarily reflects the step up to API version 2025-10 and the addition of a new dependency on orjson for fast JSON parsing.
Breaking Changes
- Changed handling of the namespace parameter in GRPC methods: when namespace=None, the parameter is omitted from requests, allowing the API to handle namespace defaults appropriately. This change affects upsert_from_dataframe methods in GRPC clients. The API is moving toward "__default__" as the default namespace value, and this change ensures the SDK doesn't override API defaults.
Note: The official SDK package was renamed last year from pinecone-client to pinecone beginning in version 5.1.0. Please remove pinecone-client from your project dependencies and add pinecone instead to get the latest updates if upgrading from earlier versions.
What's new in 8.x
Dedicated Read Capacity for Serverless Indexes
You can now configure dedicated read nodes for your serverless indexes, giving you more control over query performance and capacity planning. By default, serverless indexes use OnDemand read capacity, which automatically scales based on demand. With dedicated read capacity, you can allocate specific read nodes with manual scaling control.
Create an index with dedicated read capacity:
```python
from pinecone import (
    Pinecone,
    ServerlessSpec,
    CloudProvider,
    AwsRegion,
    Metric
)

pc = Pinecone()
pc.create_index(
    name='my-index',
    dimension=1536,
    metric=Metric.COSINE,
    spec=ServerlessSpec(
        cloud=CloudProvider.AWS,
        region=AwsRegion.US_EAST_1,
        read_capacity={
            "mode": "Dedicated",
            "dedicated": {
                "node_type": "t1",
                "scaling": "Manual",
                "manual": {
                    "shards": 2,
                    "replicas": 2
                }
            }
        }
    )
)
```

Configure read capacity on an existing index:
You can switch between OnDemand and Dedicated modes, or adjust the number of shards and replicas for dedicated read capacity:
```python
from pinecone import Pinecone

pc = Pinecone()

# Switch to OnDemand read capacity
pc.configure_index(
    name='my-index',
    read_capacity={"mode": "OnDemand"}
)

# Switch to Dedicated read capacity with manual scaling
pc.configure_index(
    name='my-index',
    read_capacity={
        "mode": "Dedicated",
        "dedicated": {
            "node_type": "t1",
            "scaling": "Manual",
            "manual": {
                "shards": 3,
                "replicas": 2
            }
        }
    }
)

# Scale up by increasing shards and replicas
pc.configure_index(
    name='my-index',
    read_capacity={
        "mode": "Dedicated",
        "dedicated": {
            "node_type": "t1",
            "scaling": "Manual",
            "manual": {
                "shards": 4,
                "replicas": 3
            }
        }
    }
)
```

When you change read capacity configuration, the index will transition to the new configuration. You can use describe_index to check the status of the transition.
See PR #528 for details.
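Since the transition is asynchronous, a polling loop over describe_index is a natural way to wait for it to finish. A sketch under the assumption that the returned description exposes a status dict with a "ready" flag; wait_for_index_ready is a hypothetical helper, not an SDK method:

```python
import time

def wait_for_index_ready(pc, name, timeout=600, interval=5.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll pc.describe_index until the index reports ready, or time out.

    clock and sleep are injected only so the loop is easy to test.
    """
    deadline = clock() + timeout
    while True:
        desc = pc.describe_index(name=name)
        if desc.status["ready"]:
            return desc
        if clock() >= deadline:
            raise TimeoutError(f"index {name!r} not ready after {timeout}s")
        sleep(interval)
```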
Fetch and Update Vectors by Metadata
Fetch vectors by metadata filter
You can now fetch vectors using metadata filters instead of vector IDs. This is especially useful when you need to retrieve vectors based on their metadata properties.
```python
from pinecone import Pinecone

pc = Pinecone()
index = pc.Index(host="your-index-host")

# Fetch vectors matching a complex filter
response = index.fetch_by_metadata(
    filter={'genre': {'$in': ['comedy', 'drama']}, 'year': {'$eq': 2019}},
    namespace='my_namespace',
    limit=50
)
print(f"Found {len(response.vectors)} vectors")

# Iterate through fetched vectors
for vec_id, vector in response.vectors.items():
    print(f"ID: {vec_id}, Metadata: {vector.metadata}")
```

Pagination support:
When fetching large numbers of vectors, you can use pagination tokens to retrieve results in batches:
```python
# First page
response = index.fetch_by_metadata(
    filter={'status': 'active'},
    limit=100
)

# Continue with next page if available
if response.pagination and response.pagination.next:
    next_response = index.fetch_by_metadata(
        filter={'status': 'active'},
        pagination_token=response.pagination.next,
        limit=100
    )
```

Update vectors by metadata filter
The update method used to require a vector id to be passed, but now you have the option to pass a metadata filter instead. This is useful for bulk metadata updates across many vectors.
There is also a dry_run option that allows you to preview the number of vectors that would be changed by the update before performing the operation.
```python
from pinecone import Pinecone

pc = Pinecone()
index = pc.Index(host="your-index-host")

# Preview how many vectors would be updated (dry run)
response = index.update(
    set_metadata={'status': 'active'},
    filter={'genre': {'$eq': 'drama'}},
    dry_run=True
)
print(f"Would update {response.matched_records} vectors")

# Apply the update by repeating the command without dry_run
response = index.update(
    set_metadata={'status': 'active'},
    filter={'genre': {'$eq': 'drama'}}
)
```

FilterBuilder for fluent filter construction
A new FilterBuilder utility class provides a type-safe, fluent interface for constructing metadata filters. While perhaps a bit verbose, it can help prevent common errors like misspelled operator names and provides better IDE support.
When you chain .build() onto the FilterBuilder it will emit a python dictionary representing the filter. Methods that take metadata filters as arguments will continue to accept dictionaries as before.
```python
from pinecone import Pinecone, FilterBuilder

pc = Pinecone()
index = pc.Index(host="your-index-host")

# Simple equality filter
filter1 = FilterBuilder().eq("genre", "drama").build()
# Returns: {"genre": "drama"}

# Multiple conditions with AND using & operator
filter2 = (FilterBuilder().eq("genre", "drama") &
           FilterBuilder().gt("year", 2020)).build()
# Returns: {"$and": [{"genre": "drama"}, {"year": {"$gt": 2020}}]}

# Multiple conditions with OR using | operator
filter3 = (FilterBuilder().eq("genre", "comedy") |
           FilterBuilder().eq("genre", "drama")).build()
# Returns: {"$or": [{"genre": "comedy"}, {"genre": "drama"}]}

# Complex nested conditions
filter4 = ((FilterBuilder().eq("genre", "drama") &
            FilterBuilder().gte("year", 2020)) |
           (FilterBuilder().eq("genre", "comedy") &
            FilterBuilder().lt("year", 2000))).build()

# Use with fetch_by_metadata
response = index.fetch_by_metadata(filter=filter2, limit=50)

# Use with update
index.update(
    set_metadata={'status': 'archived'},
    filter=filter3
)
```

The FilterBuilder supports all Pinecone filter operators: eq, ne, gt, gte, lt, lte, in_, nin, and exists. Compound expressions are combined with & for AND and | for OR.
See PR #529 for fetch_by_metadata, PR #544 for update() with filter, and PR #531 for FilterBuilder.
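To illustrate the fluent-builder pattern itself, here is a toy reimplementation supporting just eq, gt, and the & / | combinators (this is an explanatory sketch, not the SDK's actual FilterBuilder code):

```python
class ToyFilterBuilder:
    """Minimal illustration of the fluent filter-builder pattern."""

    def __init__(self, expr=None):
        self._expr = expr

    def eq(self, field, value):
        # Equality is expressed as a bare {field: value} pair
        return ToyFilterBuilder({field: value})

    def gt(self, field, value):
        return ToyFilterBuilder({field: {"$gt": value}})

    def __and__(self, other):
        # a & b combines two sub-expressions under $and
        return ToyFilterBuilder({"$and": [self._expr, other._expr]})

    def __or__(self, other):
        return ToyFilterBuilder({"$or": [self._expr, other._expr]})

    def build(self):
        # Emit the plain dict that filter-accepting methods expect
        return self._expr

f = (ToyFilterBuilder().eq("genre", "drama") &
     ToyFilterBuilder().gt("year", 2020)).build()
# → {"$and": [{"genre": "drama"}, {"year": {"$gt": 2020}}]}
```

Deferring the dict construction to build() is what lets the & and | operators compose sub-expressions before anything is serialized.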
Other New Features
Create namespaces programmatically
You can now create namespaces in serverless indexes directly from the SDK:
```python
from pinecone import Pinecone

pc = Pinecone()
index = pc.Index(host="your-index-host")

# Create a namespace with just a name
namespace = index.create_namespace(name="my-namespace")
print(f"Created namespace: {namespace.name}, Vector count: {namespace.vector_count}")

# Create a namespace with schema configuration
namespace = index.create_namespace(
    name="my-namespace",
    schema={
        "fields": {
            "genre": {"filterable": True},
            "year": {"filterable": True}
        }
    }
)
```

Note: This operation is not supported for pod-based indexes.
See PR #532 for details.
Match terms in search operations
For sparse indexes with integrated embedding configured to use the pinecone-sparse-english-v0 model, you can now specify which terms must be present in search results:
```python
from pinecone import Pinecone, SearchQuery

pc = Pinecone()
index = pc.Index(host="your-index-host")

response = index.search(
    namespace="my-namespace",
    query=SearchQuery(
        inputs={"text": "Apple corporation"},
        top_k=10,
        match_terms={
            "strategy": "all",
            "terms": ["apple", "corporation"]
        }
    )
)
```

The match_terms parameter ensures that all specified terms must be present in the text of each search hit. Terms are normalized and tokenized before matching, and order does not matter.
See PR #530 for details.
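The "all terms present" semantics can be pictured with a toy check: lowercase-normalize, tokenize into words, and test set containment. This is a simplification for illustration; the service's actual normalization and tokenization are model-specific:

```python
import re

def matches_all_terms(hit_text, terms):
    """Toy version of strategy='all': every term must appear somewhere
    in the text after lowercasing and simple word tokenization.
    Order is ignored, matching the behavior described above."""
    tokens = set(re.findall(r"[a-z0-9]+", hit_text.lower()))
    return all(t.lower() in tokens for t in terms)

print(matches_all_terms("Apple Corporation earnings", ["apple", "corporation"]))  # True
print(matches_all_terms("Apple pie recipe", ["apple", "corporation"]))            # False
```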
Admin API enhancements
**Update API keys, proj...
v7.3.0
This minor release includes the ability to interact with the Admin API and adds support for working with index namespaces via gRPC. Previously, namespace support was available only through REST.
Admin API
This release introduces an Admin class that provides support for performing CRUD operations on projects and API keys using REST.
Projects
```python
from pinecone import Admin

# Use service account credentials
admin = Admin(client_id='foo', client_secret='bar')

# Example: Create a project
project = admin.project.create(
    name="example-project",
    max_pods=5
)
print(f"Project {project.id} was created")

# Example: Rename a project
project = admin.project.get(name='example-project')
admin.project.update(
    project_id=project.id,
    name='my-awesome-project'
)

# Example: Enable CMEK on all projects
project_list = admin.projects.list()
for proj in project_list.data:
    admin.projects.update(
        project_id=proj.id,
        force_encryption_with_cmek=True
    )

# Example: Set pod quota to 0 for all projects
project_list = admin.projects.list()
for proj in project_list.data:
    admin.projects.update(project_id=proj.id, max_pods=0)

# Delete the project
admin.project.delete(project_id=project.id)
```

API Keys
```python
from pinecone import Admin

# Use service account credentials
admin = Admin(client_id='foo', client_secret='bar')

project = admin.project.get(name='my-project')

# Create an API key
api_key_response = admin.api_keys.create(
    project_id=project.id,
    name="ci-key",
    roles=["ProjectEditor"]
)
key = api_key_response.value  # 'pcsk_....'

# Look up info on a key by id
key_info = admin.api_keys.get(
    api_key_id=api_key_response.key.id
)

# Delete a key
admin.api_keys.delete(
    api_key_id=api_key_response.key.id
)
```

Working with namespaces with gRPC
The gRPC Index class now exposes methods for calling describe_namespace, delete_namespace, list_namespaces, and list_namespaces_paginated.
```python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key='YOUR_API_KEY')
index = pc.Index(host='your-index-host')

# list namespaces
results = index.list_namespaces_paginated(limit=10)
next_results = index.list_namespaces_paginated(limit=10, pagination_token=results.pagination.next)

# describe namespace
namespace = index.describe_namespace(results.namespaces[0].name)

# delete namespaces (NOTE: this deletes all data within the namespace)
index.delete_namespace(results.namespaces[0].name)
```

What's Changed
- Implement Admin API by @jhamon in #512
- Add support for list, describe, and delete namespaces in grpc by @rohanshah18 in #517
Full Changelog: v7.2.0...v7.3.0
Release v7.2.0
This minor release includes new methods for working with index namespaces via REST, and the ability to configure an index with the embed configuration, which was not previously exposed.
Working with namespaces
The Index and IndexAsyncio classes now expose methods for calling describe_namespace, delete_namespace, list_namespaces, and list_namespaces_paginated. There is also a NamespaceResource which can be used to perform these operations. Namespaces themselves are still created implicitly when upserting data to a specific namespace.
```python
from pinecone import Pinecone

pc = Pinecone(api_key='YOUR_API_KEY')
index = pc.Index(host='your-index-host')

# list namespaces
results = index.list_namespaces_paginated(limit=10)
next_results = index.list_namespaces_paginated(limit=10, pagination_token=results.pagination.next)

# describe namespace
namespace = index.describe_namespace(results.namespaces[0].name)

# delete namespaces (NOTE: this deletes all data within the namespace)
index.delete_namespace(results.namespaces[0].name)
```

Configuring integrated embedding for an index
Previously, the configure_index methods did not support providing an embed argument when configuring an existing index. These methods now support embed in the shape of ConfigureIndexEmbed. You can convert an existing index to an integrated index by specifying the embedding model and field_map. The index vector type and dimension must match the model vector type and dimension, and the index similarity metric must be supported by the model. You can use list_models and get_model on the Inference class to get specific details about models.
You can later change the embedding configuration to update the field map, read parameters, or write parameters. Once set, the model cannot be changed.
```python
from pinecone import Pinecone

pc = Pinecone(api_key='YOUR_API_KEY')

# convert an existing index to use the integrated embedding model multilingual-e5-large
pc.configure_index(
    name="my-existing-index",
    embed={"model": "multilingual-e5-large", "field_map": {"text": "chunk_text"}},
)
```

What's Changed
- Add describe, delete, and list namespaces (REST) by @rohanshah18 in #507
- Fix release workflow by @rohanshah18 in #516
- Add `embed` to Index `configure` calls by @austin-denoble in #515
Full Changelog: v7.1.0...v7.2.0
Release v7.1.0
This release fixes an issue where GRPC methods using async_req=True ignored user-provided timeout values, defaulting instead to a hardcoded 5-second timeout imposed by PineconeGrpcFuture. To verify this fix, we added a new test file, test_timeouts.py, which uses a mock GRPC server to simulate client timeout behavior under delayed response conditions.
Release v7.0.2
This small bugfix release includes the following fixes:
- Windows users should now be able to install without seeing the `readline` error reported in #502. See #503 for details on the root cause and fix.
- We have added a new multi-platform installation testing workflow to catch future issues like the above Windows problem.
- While running these new tests we discovered that a dependency was not being included correctly for the Assistant functionality: `pinecone-plugin-assistant`. The assistant plugin had been inadvertently added as a dev dependency rather than a regular dependency, which meant our integration tests for that functionality could pass while the published artifact did not include it. We have corrected this problem, so assistant functions should now work without installing anything additional.
Release v7.0.1
This small bugfix release fixes:
- Broken autocompletion / intellisense for inference functions. See #498 for details.
- Restores missing type information for Exception classes that was inadvertently removed when setting up the package-level .pyi file