Merged
Commits
68 commits
b0fa59c
RDF, cleanup relations and remove unnecessary bindings, add distribut…
harshach Apr 1, 2026
5d9a6a9
Merge branch 'main' into rdf_v2
harshach Apr 1, 2026
b0aef60
Update generated TypeScript types
github-actions[bot] Apr 1, 2026
7d46a65
Merge branch 'main' into rdf_v2
harshach Apr 1, 2026
156f039
Address comments from copilot
harshach Apr 1, 2026
4a1e083
Merge remote-tracking branch 'origin/rdf_v2' into rdf_v2
harshach Apr 1, 2026
e0f35aa
Update generated TypeScript types
github-actions[bot] Apr 1, 2026
1f22b97
Merge remote-tracking branch 'origin/main' into rdf_v2
aniketkatkar97 Apr 2, 2026
e259ec0
fix test issues
harshach Apr 2, 2026
fabade4
Merge remote-tracking branch 'origin/rdf_v2' into rdf_v2
harshach Apr 2, 2026
ec70141
Merge branch 'main' into rdf_v2
aniketkatkar97 Apr 6, 2026
2841a12
Fix minor UI bugs
aniketkatkar97 Apr 6, 2026
88672fc
Add the missing filters
aniketkatkar97 Apr 6, 2026
8448869
Fix RDF export API error
aniketkatkar97 Apr 6, 2026
31b761d
Add export functionality
aniketkatkar97 Apr 7, 2026
e29eea2
Fix ui-checkstyle
aniketkatkar97 Apr 7, 2026
cfc7769
Merge branch 'main' into rdf_v2
aniketkatkar97 Apr 7, 2026
bed18d9
Fix java checkstyle
aniketkatkar97 Apr 7, 2026
706a3fc
Fix unit tests
aniketkatkar97 Apr 7, 2026
5273179
Fix and increase the coverage for KnowledgeGraph.spec.ts
aniketkatkar97 Apr 7, 2026
1062f0c
Merge branch 'main' into rdf_v2
aniketkatkar97 Apr 7, 2026
1452a2f
Merge branch 'main' into rdf_v2
aniketkatkar97 Apr 7, 2026
fb1efe5
Merge branch 'main' into rdf_v2
aniketkatkar97 Apr 7, 2026
6c2a1e1
Merge remote-tracking branch 'origin/main' into rdf_v2
harshach Apr 7, 2026
7dd2925
Fix tests
harshach Apr 7, 2026
30b8d0b
Merge branch 'main' into rdf_v2
harshach Apr 7, 2026
49cca4c
Remove rdf as default in playwright and local docker
harshach Apr 8, 2026
8a5de75
Merge remote-tracking branch 'origin/rdf_v2' into rdf_v2
harshach Apr 8, 2026
0f91604
Merge remote-tracking branch 'origin/main' into rdf_v2
aniketkatkar97 Apr 9, 2026
3664b0e
fix ui-checkstyle
aniketkatkar97 Apr 10, 2026
0e4df9c
Merge branch 'main' into rdf_v2
aniketkatkar97 Apr 10, 2026
33abb30
Address comments
harshach Apr 11, 2026
1f32af1
Merge remote-tracking branch 'origin/rdf_v2' into rdf_v2
harshach Apr 11, 2026
dcd2b4c
Merge remote-tracking branch 'origin/main' into rdf_v2
harshach Apr 11, 2026
8ff7aae
Potential fix for pull request finding 'CodeQL / Artifact poisoning'
harshach Apr 11, 2026
82c6a23
Address copilot comments
harshach Apr 11, 2026
b73e7f5
Address copilot comments
harshach Apr 11, 2026
8814747
FIx tests
harshach Apr 12, 2026
5b89ec2
Merge branch 'main' into rdf_v2
harshach Apr 12, 2026
cfacda1
FIx docker
harshach Apr 12, 2026
d84a8d7
Merge remote-tracking branch 'origin/rdf_v2' into rdf_v2
harshach Apr 12, 2026
e3f0f5d
Update openmetadata-service/src/main/java/org/openmetadata/service/ap…
harshach Apr 12, 2026
3037499
Address copilot review comments: license headers, JSON escaping, type…
Copilot Apr 12, 2026
835debc
Show error toast for unsupported export format in KnowledgeGraph
Copilot Apr 12, 2026
54c177f
Fix docker
harshach Apr 12, 2026
4d4b807
Merge remote-tracking branch 'origin/rdf_v2' into rdf_v2
harshach Apr 12, 2026
7a34b67
Fix docker for playwright
harshach Apr 12, 2026
22cb1bc
Fix docker for playwright
harshach Apr 12, 2026
ae5bf8f
Fix tests
harshach Apr 13, 2026
b3c792e
Fix tests
harshach Apr 13, 2026
986101c
Fix docker
harshach Apr 13, 2026
c69ac4f
Fix docker
harshach Apr 13, 2026
1a94234
Merge branch 'main' into rdf_v2
aniketkatkar97 Apr 13, 2026
2432c3a
Merge branch 'main' into rdf_v2
aniketkatkar97 Apr 13, 2026
4bddb71
Fix glossary and pagination spec flakiness
aniketkatkar97 Apr 13, 2026
6ab8c15
Merge branch 'main' into rdf_v2
aniketkatkar97 Apr 13, 2026
7375993
update the missing translations
aniketkatkar97 Apr 13, 2026
aee1f34
Fix docker
harshach Apr 14, 2026
c3c4255
Merge remote-tracking branch 'origin/rdf_v2' into rdf_v2
harshach Apr 14, 2026
a092978
Fix docker
harshach Apr 14, 2026
c855b49
Fix integration test
aniketkatkar97 Apr 14, 2026
e984879
Fix fuseki not starting
aniketkatkar97 Apr 14, 2026
38d8ccf
Merge branch 'main' into rdf_v2
aniketkatkar97 Apr 14, 2026
df8374c
Fixed the run local docker script
aniketkatkar97 Apr 14, 2026
fb4406b
worked on comments
aniketkatkar97 Apr 14, 2026
8c97479
Merge branch 'main' into rdf_v2
aniketkatkar97 Apr 14, 2026
f6123c4
Fix flakiness in knowledge graph tests
aniketkatkar97 Apr 14, 2026
2058a5f
Fix checkstyle
aniketkatkar97 Apr 14, 2026
79 changes: 79 additions & 0 deletions bootstrap/sql/migrations/native/1.13.0/mysql/schemaChanges.sql
@@ -84,3 +84,82 @@ SELECT ue.id, re.id, 'user', 'role', 10
FROM user_entity ue, role_entity re
WHERE ue.name = 'mcpapplicationbot'
AND re.name = 'ApplicationBotImpersonationRole';

-- RDF distributed indexing state tables
CREATE TABLE IF NOT EXISTS rdf_index_job (
id VARCHAR(36) NOT NULL,
status VARCHAR(32) NOT NULL,
jobConfiguration JSON NOT NULL,
totalRecords BIGINT NOT NULL DEFAULT 0,
processedRecords BIGINT NOT NULL DEFAULT 0,
successRecords BIGINT NOT NULL DEFAULT 0,
failedRecords BIGINT NOT NULL DEFAULT 0,
stats JSON,
createdBy VARCHAR(256) NOT NULL,
createdAt BIGINT NOT NULL,
startedAt BIGINT,
completedAt BIGINT,
updatedAt BIGINT NOT NULL,
errorMessage TEXT,
PRIMARY KEY (id),
INDEX idx_rdf_index_job_status (status),
INDEX idx_rdf_index_job_created (createdAt DESC)
);

CREATE TABLE IF NOT EXISTS rdf_index_partition (
id VARCHAR(36) NOT NULL,
jobId VARCHAR(36) NOT NULL,
entityType VARCHAR(128) NOT NULL,
partitionIndex INT NOT NULL,
rangeStart BIGINT NOT NULL,
rangeEnd BIGINT NOT NULL,
estimatedCount BIGINT NOT NULL,
workUnits BIGINT NOT NULL,
priority INT NOT NULL DEFAULT 50,
status VARCHAR(32) NOT NULL DEFAULT 'PENDING',
processingCursor BIGINT NOT NULL DEFAULT 0,
processedCount BIGINT NOT NULL DEFAULT 0,
successCount BIGINT NOT NULL DEFAULT 0,
failedCount BIGINT NOT NULL DEFAULT 0,
assignedServer VARCHAR(255),
claimedAt BIGINT,
startedAt BIGINT,
completedAt BIGINT,
lastUpdateAt BIGINT,
lastError TEXT,
retryCount INT NOT NULL DEFAULT 0,
claimableAt BIGINT NOT NULL DEFAULT 0,
PRIMARY KEY (id),
UNIQUE KEY uk_rdf_partition_job_entity_idx (jobId, entityType, partitionIndex),
INDEX idx_rdf_partition_job (jobId),
INDEX idx_rdf_partition_status_priority (status, priority DESC),
INDEX idx_rdf_partition_claimable (jobId, status, claimableAt),
INDEX idx_rdf_partition_assigned_server (jobId, assignedServer),
CONSTRAINT fk_rdf_partition_job FOREIGN KEY (jobId) REFERENCES rdf_index_job(id) ON DELETE CASCADE
);

CREATE TABLE IF NOT EXISTS rdf_reindex_lock (
lockKey VARCHAR(64) NOT NULL,
jobId VARCHAR(36) NOT NULL,
serverId VARCHAR(255) NOT NULL,
acquiredAt BIGINT NOT NULL,
lastHeartbeat BIGINT NOT NULL,
expiresAt BIGINT NOT NULL,
PRIMARY KEY (lockKey)
);

CREATE TABLE IF NOT EXISTS rdf_index_server_stats (
id VARCHAR(36) NOT NULL,
jobId VARCHAR(36) NOT NULL,
serverId VARCHAR(256) NOT NULL,
entityType VARCHAR(128) NOT NULL,
processedRecords BIGINT DEFAULT 0,
successRecords BIGINT DEFAULT 0,
failedRecords BIGINT DEFAULT 0,
partitionsCompleted INT DEFAULT 0,
partitionsFailed INT DEFAULT 0,
lastUpdatedAt BIGINT NOT NULL,
PRIMARY KEY (id),
UNIQUE INDEX idx_rdf_index_server_stats_job_server_entity (jobId, serverId, entityType),
INDEX idx_rdf_index_server_stats_job_id (jobId)
);
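The `rdf_index_partition` columns (`status`, `assignedServer`, `claimedAt`, `claimableAt`) suggest a claim-by-conditional-update pattern for distributing partitions across servers. The sketch below illustrates that pattern only; it is not code from this PR. SQLite stands in for MySQL, and the `'CLAIMED'` status value and `claim_partition` helper are hypothetical names, since the migration defines only the `'PENDING'` default.

```python
import sqlite3
import time
import uuid

# Minimal stand-in for rdf_index_partition (SQLite in place of MySQL).
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE rdf_index_partition (
        id TEXT PRIMARY KEY,
        jobId TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'PENDING',
        assignedServer TEXT,
        claimedAt INTEGER,
        claimableAt INTEGER NOT NULL DEFAULT 0)"""
)
conn.execute(
    "INSERT INTO rdf_index_partition (id, jobId) VALUES (?, ?)",
    (str(uuid.uuid4()), "job-1"),
)

def claim_partition(conn, job_id, server_id):
    """Claim one PENDING partition whose backoff window has passed.

    The UPDATE re-checks status in its WHERE clause, so two servers racing
    for the same row cannot both succeed: only one UPDATE reports
    rowcount == 1, and the loser simply retries with another row.
    """
    now = int(time.time() * 1000)
    row = conn.execute(
        "SELECT id FROM rdf_index_partition "
        "WHERE jobId = ? AND status = 'PENDING' AND claimableAt <= ? "
        "LIMIT 1",
        (job_id, now),
    ).fetchone()
    if row is None:
        return None  # nothing claimable right now
    cur = conn.execute(
        "UPDATE rdf_index_partition "
        "SET status = 'CLAIMED', assignedServer = ?, claimedAt = ? "
        "WHERE id = ? AND status = 'PENDING'",
        (server_id, now, row[0]),
    )
    return row[0] if cur.rowcount == 1 else None

first = claim_partition(conn, "job-1", "server-a")
second = claim_partition(conn, "job-1", "server-b")  # nothing left to claim
print(first is not None, second)
```

The `idx_rdf_partition_claimable (jobId, status, claimableAt)` index in the migration matches exactly the predicate such a SELECT would use.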
82 changes: 82 additions & 0 deletions bootstrap/sql/migrations/native/1.13.0/postgres/schemaChanges.sql
@@ -96,3 +96,85 @@ FROM user_entity ue, role_entity re
WHERE ue.name = 'mcpapplicationbot'
AND re.name = 'ApplicationBotImpersonationRole'
ON CONFLICT DO NOTHING;

-- RDF distributed indexing state tables
CREATE TABLE IF NOT EXISTS rdf_index_job (
id VARCHAR(36) NOT NULL,
status VARCHAR(32) NOT NULL,
jobConfiguration JSONB NOT NULL,
totalRecords BIGINT NOT NULL DEFAULT 0,
processedRecords BIGINT NOT NULL DEFAULT 0,
successRecords BIGINT NOT NULL DEFAULT 0,
failedRecords BIGINT NOT NULL DEFAULT 0,
stats JSONB,
createdBy VARCHAR(256) NOT NULL,
createdAt BIGINT NOT NULL,
startedAt BIGINT,
completedAt BIGINT,
updatedAt BIGINT NOT NULL,
errorMessage TEXT,
PRIMARY KEY (id)
);

CREATE INDEX IF NOT EXISTS idx_rdf_index_job_status ON rdf_index_job(status);
CREATE INDEX IF NOT EXISTS idx_rdf_index_job_created ON rdf_index_job(createdAt DESC);

CREATE TABLE IF NOT EXISTS rdf_index_partition (
id VARCHAR(36) NOT NULL,
jobId VARCHAR(36) NOT NULL,
entityType VARCHAR(128) NOT NULL,
partitionIndex INT NOT NULL,
rangeStart BIGINT NOT NULL,
rangeEnd BIGINT NOT NULL,
estimatedCount BIGINT NOT NULL,
workUnits BIGINT NOT NULL,
priority INT NOT NULL DEFAULT 50,
status VARCHAR(32) NOT NULL DEFAULT 'PENDING',
processingCursor BIGINT NOT NULL DEFAULT 0,
processedCount BIGINT NOT NULL DEFAULT 0,
successCount BIGINT NOT NULL DEFAULT 0,
failedCount BIGINT NOT NULL DEFAULT 0,
assignedServer VARCHAR(255),
claimedAt BIGINT,
startedAt BIGINT,
completedAt BIGINT,
lastUpdateAt BIGINT,
lastError TEXT,
retryCount INT NOT NULL DEFAULT 0,
claimableAt BIGINT NOT NULL DEFAULT 0,
PRIMARY KEY (id),
UNIQUE (jobId, entityType, partitionIndex),
CONSTRAINT fk_rdf_partition_job FOREIGN KEY (jobId) REFERENCES rdf_index_job(id) ON DELETE CASCADE
);

CREATE INDEX IF NOT EXISTS idx_rdf_partition_job ON rdf_index_partition(jobId);
CREATE INDEX IF NOT EXISTS idx_rdf_partition_status_priority ON rdf_index_partition(status, priority DESC);
CREATE INDEX IF NOT EXISTS idx_rdf_partition_claimable ON rdf_index_partition(jobId, status, claimableAt);
CREATE INDEX IF NOT EXISTS idx_rdf_partition_assigned_server ON rdf_index_partition(jobId, assignedServer);

CREATE TABLE IF NOT EXISTS rdf_reindex_lock (
lockKey VARCHAR(64) NOT NULL,
jobId VARCHAR(36) NOT NULL,
serverId VARCHAR(255) NOT NULL,
acquiredAt BIGINT NOT NULL,
lastHeartbeat BIGINT NOT NULL,
expiresAt BIGINT NOT NULL,
PRIMARY KEY (lockKey)
);

CREATE TABLE IF NOT EXISTS rdf_index_server_stats (
id VARCHAR(36) NOT NULL,
jobId VARCHAR(36) NOT NULL,
serverId VARCHAR(256) NOT NULL,
entityType VARCHAR(128) NOT NULL,
processedRecords BIGINT DEFAULT 0,
successRecords BIGINT DEFAULT 0,
failedRecords BIGINT DEFAULT 0,
partitionsCompleted INT DEFAULT 0,
partitionsFailed INT DEFAULT 0,
lastUpdatedAt BIGINT NOT NULL,
PRIMARY KEY (id),
UNIQUE (jobId, serverId, entityType)
);

CREATE INDEX IF NOT EXISTS idx_rdf_index_server_stats_job_id ON rdf_index_server_stats(jobId);
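On Postgres, the same claim pattern can avoid the read-then-update race entirely with `FOR UPDATE SKIP LOCKED`. The query below is a hypothetical sketch against the schema above, not part of this migration; the `:jobId`/`:serverId`/`:nowMillis` placeholders and the `'CLAIMED'` status value are assumptions.

```sql
-- Hypothetical claim query (not part of this migration): pick one pending
-- partition without blocking on rows other servers are concurrently claiming.
WITH next_partition AS (
  SELECT id FROM rdf_index_partition
  WHERE jobId = :jobId AND status = 'PENDING' AND claimableAt <= :nowMillis
  ORDER BY priority DESC
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
UPDATE rdf_index_partition p
SET status = 'CLAIMED', assignedServer = :serverId, claimedAt = :nowMillis
FROM next_partition n
WHERE p.id = n.id
RETURNING p.id;
```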
37 changes: 37 additions & 0 deletions docker/development/docker-compose-fuseki.yml
@@ -1,12 +1,44 @@
version: "3.9"

services:
execute-migrate-all:
environment:
RDF_ENABLED: ${RDF_ENABLED:-true}
RDF_STORAGE_TYPE: ${RDF_STORAGE_TYPE:-FUSEKI}
RDF_ENDPOINT: ${RDF_ENDPOINT:-http://fuseki:3030/openmetadata}
RDF_REMOTE_USERNAME: ${RDF_REMOTE_USERNAME:-admin}
RDF_REMOTE_PASSWORD: ${RDF_REMOTE_PASSWORD:-admin}
RDF_BASE_URI: ${RDF_BASE_URI:-https://open-metadata.org/}
RDF_JSONLD_ENABLED: ${RDF_JSONLD_ENABLED:-true}
RDF_SPARQL_ENABLED: ${RDF_SPARQL_ENABLED:-true}
RDF_DATASET: ${RDF_DATASET:-openmetadata}
depends_on:
fuseki:
condition: service_healthy

openmetadata-server:
environment:
RDF_ENABLED: ${RDF_ENABLED:-true}
RDF_STORAGE_TYPE: ${RDF_STORAGE_TYPE:-FUSEKI}
RDF_ENDPOINT: ${RDF_ENDPOINT:-http://fuseki:3030/openmetadata}
RDF_REMOTE_USERNAME: ${RDF_REMOTE_USERNAME:-admin}
RDF_REMOTE_PASSWORD: ${RDF_REMOTE_PASSWORD:-admin}
RDF_BASE_URI: ${RDF_BASE_URI:-https://open-metadata.org/}
RDF_JSONLD_ENABLED: ${RDF_JSONLD_ENABLED:-true}
RDF_SPARQL_ENABLED: ${RDF_SPARQL_ENABLED:-true}
RDF_DATASET: ${RDF_DATASET:-openmetadata}
depends_on:
fuseki:
condition: service_healthy

aniketkatkar97 marked this conversation as resolved.
fuseki:
image: stain/jena-fuseki:5.0.0
container_name: openmetadata-fuseki
hostname: fuseki
ports:
- "3030:3030"
networks:
- local_app_net
environment:
- ADMIN_PASSWORD=admin
- JVM_ARGS=-Xmx4g -Xms2g
@@ -19,6 +51,11 @@ services:
memory: 4G
reservations:
memory: 2G
healthcheck:
test: "curl -s -f http://localhost:3030/\\$/ping > /dev/null || exit 1"
interval: 15s
timeout: 10s
retries: 20
# Create the database directory before starting Fuseki
entrypoint: /bin/sh -c "mkdir -p /fuseki/databases/openmetadata && exec /docker-entrypoint.sh /jena-fuseki/fuseki-server --update --loc=/fuseki/databases/openmetadata /openmetadata"

42 changes: 38 additions & 4 deletions docker/run_local_docker.sh
@@ -57,6 +57,8 @@ cd ../
echo "Stopping any previous Local Docker Containers"
docker compose -f docker/development/docker-compose-postgres.yml down --remove-orphans
docker compose -f docker/development/docker-compose.yml down --remove-orphans
docker compose -f docker/development/docker-compose-postgres.yml -f docker/development/docker-compose-fuseki.yml down --remove-orphans
docker compose -f docker/development/docker-compose.yml -f docker/development/docker-compose-fuseki.yml down --remove-orphans

if [[ $skipMaven == "false" ]]; then
if [[ $mode == "no-ui" ]]; then
@@ -80,6 +82,14 @@ if [[ $debugOM == "true" ]]; then
export OPENMETADATA_DEBUG=true
fi

export RDF_ENABLED=true
export RDF_STORAGE_TYPE=FUSEKI
export RDF_ENDPOINT="${RDF_ENDPOINT:-http://fuseki:3030/openmetadata}"
export RDF_REMOTE_USERNAME="${RDF_REMOTE_USERNAME:-admin}"
export RDF_REMOTE_PASSWORD="${RDF_REMOTE_PASSWORD:-admin}"
export RDF_BASE_URI="${RDF_BASE_URI:-https://open-metadata.org/}"
export RDF_DATASET="${RDF_DATASET:-openmetadata}"

Copilot AI Apr 7, 2026

run_local_docker.sh now unconditionally exports RDF_ENABLED=true (and related RDF vars), which makes it hard to run the default stack without RDF and overrides any caller-provided RDF_ENABLED=false. Consider defaulting with parameter expansion (e.g., export RDF_ENABLED=${RDF_ENABLED:-true}) so users can opt out.
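The defaulting the review suggests can be sketched as follows. This is an illustration of the `${VAR:-default}` idiom, not the script's final form: the expansion fills in the default only when the variable is unset or empty, so a caller-provided value (including `false`) survives.

```shell
#!/bin/sh
# Simulate a caller that did not set RDF_ENABLED: the default kicks in.
unset RDF_ENABLED
export RDF_ENABLED="${RDF_ENABLED:-true}"

# Simulate a caller overriding the storage type: the override is kept.
RDF_STORAGE_TYPE=CUSTOM
export RDF_STORAGE_TYPE="${RDF_STORAGE_TYPE:-FUSEKI}"

echo "RDF_ENABLED=$RDF_ENABLED RDF_STORAGE_TYPE=$RDF_STORAGE_TYPE"
```

With this pattern, `RDF_ENABLED=false ./docker/run_local_docker.sh` would run the stack without RDF while the plain invocation keeps it enabled.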
if [[ $cleanDbVolumes == "true" ]]
then
if [[ -d "$PWD/docker/development/docker-volume/" ]]
@@ -116,13 +126,16 @@ else
exit 1
fi

RDF_COMPOSE_FILE="docker/development/docker-compose-fuseki.yml"
COMPOSE_ARGS=(-f "$COMPOSE_FILE" -f "$RDF_COMPOSE_FILE")

if [[ $includeIngestion == "true" ]]; then
echo "Building all services including ingestion (dependency: ${INGESTION_DEPENDENCY:-all})"
docker compose -f $COMPOSE_FILE build --build-arg INGESTION_DEPENDENCY="${INGESTION_DEPENDENCY:-all}" && docker compose -f $COMPOSE_FILE up -d
docker compose "${COMPOSE_ARGS[@]}" build --build-arg INGESTION_DEPENDENCY="${INGESTION_DEPENDENCY:-all}" && docker compose "${COMPOSE_ARGS[@]}" up -d
else
echo "Building services without ingestion"
docker compose -f $COMPOSE_FILE build $SEARCH_SERVICE $DB_SERVICE execute-migrate-all openmetadata-server && \
docker compose -f $COMPOSE_FILE up -d $SEARCH_SERVICE $DB_SERVICE execute-migrate-all openmetadata-server
docker compose "${COMPOSE_ARGS[@]}" build $SEARCH_SERVICE $DB_SERVICE execute-migrate-all openmetadata-server && \
docker compose "${COMPOSE_ARGS[@]}" up -d fuseki $SEARCH_SERVICE $DB_SERVICE execute-migrate-all openmetadata-server
fi

RESULT=$?
@@ -136,6 +149,11 @@ until curl -s -f "http://localhost:9200/_cat/indices/openmetadata_team_search_in
sleep 5
done

until curl -s -f "http://localhost:3030/\$/ping" > /dev/null 2>&1; do
echo "Checking if Fuseki is reachable..."
sleep 5
done

if [[ $includeIngestion == "true" ]]; then
# Function to get OAuth access token for Airflow API
get_airflow_token() {
@@ -288,6 +306,22 @@ curl --location --request POST 'http://localhost:8585/api/v1/apps/trigger/Search
--header 'Authorization: Bearer eyJraWQiOiJHYjM4OWEtOWY3Ni1nZGpzLWE5MmotMDI0MmJrOTQzNTYiLCJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImlzQm90IjpmYWxzZSwiaXNzIjoib3Blbi1tZXRhZGF0YS5vcmciLCJpYXQiOjE2NjM5Mzg0NjIsImVtYWlsIjoiYWRtaW5Ab3Blbm1ldGFkYXRhLm9yZyJ9.tS8um_5DKu7HgzGBzS1VTA5uUjKWOCU0B_j08WXBiEC0mr0zNREkqVfwFDD-d24HlNEbrqioLsBuFRiwIWKc1m_ZlVQbG7P36RUxhuv2vbSp80FKyNM-Tj93FDzq91jsyNmsQhyNv_fNr3TXfzzSPjHt8Go0FMMP66weoKMgW2PbXlhVKwEuXUHyakLLzewm9UMeQaEiRzhiTMU3UkLXcKbYEJJvfNFcLwSl9W8JCO_l0Yj3ud-qt_nQYEZwqW6u5nfdQllN133iikV4fM5QZsMCnm8Rq1mvLR0y9bmJiD7fwM1tmJ791TUWqmKaTnP49U493VanKpUAfzIiOiIbhg'

sleep 60 # Sleep for 60 seconds to make sure the elasticsearch reindexing from UI finishes

echo "✔ running RDF reindexing"
curl --location --request POST 'http://localhost:8585/api/v1/apps/trigger/RdfIndexApp' \
--header 'Authorization: Bearer eyJraWQiOiJHYjM4OWEtOWY3Ni1nZGpzLWE5MmotMDI0MmJrOTQzNTYiLCJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImlzQm90IjpmYWxzZSwiaXNzIjoib3Blbi1tZXRhZGF0YS5vcmciLCJpYXQiOjE2NjM5Mzg0NjIsImVtYWlsIjoiYWRtaW5Ab3Blbm1ldGFkYXRhLm9yZyJ9.tS8um_5DKu7HgzGBzS1VTA5uUjKWOCU0B_j08WXBiEC0mr0zNREkqVfwFDD-d24HlNEbrqioLsBuFRiwIWKc1m_ZlVQbG7P36RUxhuv2vbSp80FKyNM-Tj93FDzq91jsyNmsQhyNv_fNr3TXfzzSPjHt8Go0FMMP66weoKMgW2PbXlhVKwEuXUHyakLLzewm9UMeQaEiRzhiTMU3UkLXcKbYEJJvfNFcLwSl9W8JCO_l0Yj3ud-qt_nQYEZwqW6u5nfdQllN133iikV4fM5QZsMCnm8Rq1mvLR0y9bmJiD7fwM1tmJ791TUWqmKaTnP49U493VanKpUAfzIiOiIbhg' \
--header 'Content-Type: application/json' \
--data-raw '{
"entities": ["all"],
"recreateIndex": true,
"batchSize": 100,
"useDistributedIndexing": true,
"partitionSize": 10000
}'
Copilot AI Apr 7, 2026

The RDF reindex trigger payload uses "entities": ["all"], but the updated RDF indexing app configs/schemas now default to an empty list to mean “all entities” and don’t list all as an allowed enum value. To align with the new config, consider omitting entities entirely or sending an empty array.
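If the app config indeed treats a missing or empty `entities` list as "index everything", the trigger payload might instead read as below. This is a sketch of the reviewer's suggestion; whether `entities` should be omitted or sent as `[]` depends on the final schema.

```json
{
  "recreateIndex": true,
  "batchSize": 100,
  "useDistributedIndexing": true,
  "partitionSize": 10000
}
```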

sleep 30
tput setaf 2
echo "✔ OpenMetadata is up and running"

echo "✔ RDF/Knowledge Graph support is enabled"
echo " - Fuseki UI: http://localhost:3030"
echo " - SPARQL endpoint: http://localhost:3030/openmetadata/sparql"