
Import methods

Alana supports multiple import methods depending on your data source:
Method | How | Best for
------ | --- | --------
CSV / Excel | File upload | Bulk catalog from spreadsheet
URL scraping | Bright Data web scraping | Scrape product data from a URL
Shopify / WooCommerce | Platform connector | Existing e-commerce store
MCP inbound | AI agent push via MCP API | Agent-driven catalog population
Dataset import | Bright Data dataset delivery | Large-scale data acquisition
After any import, the Bronze stage runs automatically — products are ingested with an idempotency key to prevent duplicates. Silver normalization and Gold scoring run on demand or via auto-trigger (configurable in Pipeline Settings).
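A minimal sketch of how such an idempotency key might be derived. The exact scheme is not documented here; hashing catalog ID plus SKU is an assumption for illustration only:

```shell
# Hypothetical sketch: derive a stable idempotency key from the
# catalog and SKU, so re-importing the same row maps to the same key
# and the Bronze stage can safely skip it. The key scheme below is an
# assumption, not Alana's documented behavior.
catalog_id="CATALOG_ID"
sku="PROD-001"
idempotency_key=$(printf '%s:%s' "$catalog_id" "$sku" | sha256sum | cut -d ' ' -f 1)
echo "$idempotency_key"
```

Because the key is deterministic, importing the same file twice produces the same keys, so the second run creates no duplicates.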

URL import (Bright Data)

Import products by providing a product page URL. Bright Data scrapes the page and extracts structured product data.
curl -X POST "https://app.alana.shopping/api/workspace/{workspaceId}/url-import/jobs" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com/products/my-product",
    "catalogId": "CATALOG_ID"
  }'
See the URL Import guide for details on job management and webhook notifications.

Supported file formats

The CSV/Excel import endpoint accepts:
  • CSV (.csv) — comma or semicolon separated
  • Excel (.xlsx) — first sheet is used

Supported columns

Column | Required | Description
------ | -------- | -----------
title | Yes | Product name
sku | Yes | Unique stock-keeping unit
price | Yes | Selling price (numeric)
currency | No | ISO 4217 code (defaults to workspace currency)
description | No | Product description
brand | No | Brand name (must exist in workspace)
categoryPath | No | Category hierarchy separated by >
primaryImageUrl | No | Main product image URL
gtin | No | Global Trade Item Number
originalPrice | No | Original price for discount display
availability | No | Stock status (e.g. “in stock”, “out of stock”)
Additional columns are stored as flexible attributes.
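For example, a minimal import file with the three required columns plus one extra column (the `color` column here is illustrative; as an unrecognized column it would be stored as a flexible attribute):

```shell
# Build a minimal import file: required columns (title, sku, price)
# plus an extra "color" column that lands in flexible attributes.
cat > products.csv <<'EOF'
title,sku,price,color
Trail Runner,SHOE-001,89.90,blue
Trail Runner,SHOE-002,89.90,red
EOF
head -n 1 products.csv
```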

Import via API

curl -X POST ".../catalog/products/import" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@products.csv" \
  -F "catalogId=CATALOG_ID"

Import response

The import returns a summary:
{
  "total": 500,
  "created": 487,
  "errors": 13,
  "errorDetails": [
    {"row": 45, "field": "price", "message": "Invalid number format"},
    {"row": 112, "field": "sku", "message": "Duplicate SKU: PROD-112"}
  ]
}
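As a quick sanity check, you can count the listed error entries in a saved response. This grep-based sketch assumes the response was written to response.json; a JSON-aware tool such as jq would be more robust:

```shell
# Sample response saved locally; in practice this comes from the API.
cat > response.json <<'EOF'
{
  "total": 500,
  "created": 487,
  "errors": 13,
  "errorDetails": [
    {"row": 45, "field": "price", "message": "Invalid number format"},
    {"row": 112, "field": "sku", "message": "Duplicate SKU: PROD-112"}
  ]
}
EOF
# Each entry in errorDetails carries a "row" field; count those lines.
error_rows=$(grep -c '"row"' response.json)
echo "$error_rows"
```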

Pipeline auto-processing after import

When products are imported, the Bronze → Silver → Gold pipeline processes them automatically (if configured) or on demand:
  1. Bronze — raw product stored with idempotency key; duplicate imports are safely skipped
  2. Silver — fields normalized, duplicates detected, image URLs validated
  3. Gold — optimization score (0–100) computed across 7 rubric stages; gaps list returned
Trigger Silver and Gold in bulk via Batch Actions, or configure auto-trigger in Pipeline Settings.
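One way to use the Gold scores downstream is to gate publishing on a minimum score. The tab-separated score export below is invented for this sketch, not a documented Alana format:

```shell
# Illustrative only: keep products whose Gold score clears a
# threshold (70 here) before publishing. The score file format
# is made up for the example.
printf 'SHOE-001\t92\nSHOE-002\t55\nSHOE-003\t78\n' > scores.tsv
awk -F'\t' '$2 >= 70 { print $1 }' scores.tsv > publishable.txt
cat publishable.txt
```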

Best practices

  • Test first: import a small file (10–20 rows) before your full catalog, and use the error details to fix formatting issues.
  • Keep category paths consistent: follow one hierarchy format (Level 1 > Level 2 > Level 3). Inconsistent paths create duplicate categories.
  • Include GTINs: products with GTINs score higher on optimization and are required for most shopping feeds (Google, Meta).
  • Normalize, then score: after importing, run Batch Silver to normalize fields, then Batch Gold to compute optimization scores. This gives you a quality baseline before publishing. See Data Enrichment.
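The test-first tip can be sketched as carving a sample out of your full export before running the real import (file names here are illustrative):

```shell
# Generate a stand-in catalog export, then take the header plus the
# first 20 data rows as a dry-run sample for a trial import.
awk 'BEGIN {
  print "title,sku,price"
  for (i = 1; i <= 100; i++) printf "Product %d,SKU-%d,9.99\n", i, i
}' > products.csv
head -n 21 products.csv > sample.csv
wc -l < sample.csv
```

Once the sample imports cleanly, repeat the upload with the full file.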
Last modified on March 18, 2026