From sandbox to pipeline: A builder's guide to production-grade visual AI

Bria AI

How five industries go from testing visual AI to running it in production.

Let's be honest about how this usually goes

Someone on the team says "we should try Bria." They open a few sandboxes, generate a few images, think "ok that's pretty cool," and then... nothing ships. The gap between "I tested it" and "we have a pipeline running" is where most teams get stuck.

This guide is about closing that gap.

We take five industries and walk through the full journey – from opening the sandbox for the first time to having something real in production. What to test, how to build, and what it looks like when it works.


eCommerce & Retail

You've got 2,000 SKUs, a seasonal campaign deadline in 3 weeks, and a creative team that's already stretched. Your studio can handle maybe 50 hero shots. Everything else goes live with a white background and a crossed-fingers prayer that it converts.

This is the pipeline that changes that math.

Step by step

1. Start with Fibo Gen – ideate before you commit to anything

Before you book a studio or brief a photographer, spin up Fibo and generate some scene concepts. Describe the product category, environment, season, and mood. Ask for 4 variations. You'll know within 10 minutes whether the direction works – and you can swap individual attributes (change "summer" to "autumn", "warm" to "cool") without regenerating the whole thing. That's the VGL structure at work.

API endpoint: POST /image/generate

💡 Try it first: Start in the sandbox. Type a prompt like "minimalist white kitchen, morning light, glass water bottle, spring". No code needed at this stage.

2. Refresh last season's shots instead of reshooting everything

Your catalog already has hundreds of approved product images. Don't throw them out – adapt them. Replace the background to match the new campaign theme. Shift the lighting from harsh midday to golden hour. Recolor the props from last season's palette to this one. Fibo Edit keeps the actual product pixel-perfect while changing everything around it.

API endpoint: POST /replace_background · POST /relight · POST /recolor

💡 Try it first: Upload any product photo from last season. Hit Replace Background with a new scene prompt. Compare before and after. Check that the product stays pixel-perfect while the context changes; that's the core behavior you're validating before you build.

3. Drop your products into those scenes, precisely

You've got a great lifestyle background. Now put the actual product in it. The Product Embedding API takes your product image and places it into the scene at exact coordinates you specify. It automatically matches the lighting, adds a natural contact shadow, and handles transparent or reflective materials. The product doesn't get regenerated or altered – it's embedded as-is.

API endpoint: POST /product/embed

💡 Try it first: Use the Product Placement sandbox. Upload a product cutout, pick a background, drag it into position. The sandbox will give you the exact API parameters – copy those directly into your integration.

4. One master image → every channel format, automatically

This is where the pipeline really pays off. Feed the master image into a resize loop: PDP hero (1200×1200), Instagram square (1080×1080), Instagram story (1080×1920), Google Shopping (800×800), email header (600×200), print catalog (300 DPI via 4x upscale). Six formats in under 60 seconds, no manual cropping.

API endpoint: POST /crop · POST /expand · POST /increase_resolution

💡 Try it first: Build a simple format manifest – a JSON list of target dimensions. Feed it one image. Run the resize calls in parallel. That's the entire automation.
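The manifest-plus-loop idea fits in a few lines of Python. This is a minimal sketch: the /crop endpoint name comes from this guide, but the payload fields, URLs, and job structure are illustrative assumptions, not Bria's documented schema – check docs.bria.ai before integrating.

```python
# Format manifest: one entry per target channel (a subset shown here).
FORMAT_MANIFEST = [
    {"name": "pdp_hero",        "width": 1200, "height": 1200},
    {"name": "instagram_feed",  "width": 1080, "height": 1080},
    {"name": "instagram_story", "width": 1080, "height": 1920},
    {"name": "google_shopping", "width": 800,  "height": 800},
    {"name": "email_header",    "width": 600,  "height": 200},
]

def build_resize_jobs(master_url: str, manifest: list) -> list:
    """Turn one master image plus a manifest into one request payload per format."""
    return [
        {
            "endpoint": "/crop",  # endpoint name from this guide; payload fields assumed
            "payload": {
                "image_url": master_url,
                "width": fmt["width"],
                "height": fmt["height"],
            },
            "output_name": fmt["name"],
        }
        for fmt in manifest
    ]

jobs = build_resize_jobs("https://cdn.example.com/sku-123/master.jpg", FORMAT_MANIFEST)
print(len(jobs))  # 5 -- one call per target format
```

Each job is independent, which is what makes the parallel fan-out trivial: submit them all at once and collect the outputs.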

What does this actually save?

Time per SKU: 4–6 hours → under 5 minutes
Seasonal campaign turnaround: 3 weeks → 48 hours
Cost per visual asset: down 70–90% vs. studio
Channel formats per image: 1 → 8–12 auto-generated

Travel & Hospitality

Hotel group with 80 properties. The summer campaign needs fresh hero images for all of them. The photography budget covers maybe 15. The other 65 either recycle last year's shots or get skipped entirely.

With Bria, a 2-person content team can cover all 80 in a week.


Step by step

1. Generate destination scenes for any property, any season

Describe the property vibe – boutique beachfront, mountain lodge, urban rooftop – and let Fibo generate the aspirational scene. The real advantage is that you can lock the composition and just vary the season. Same pool shot, four seasons, four campaigns. You're not reshooting; you're parameterizing.

API endpoint: POST /image/generate

💡 Try it first: Start with one property. Write a description of the visual you'd want for it. Generate 4 variations with different seasons or times of day. Pick the winner, and save the prompt parameters for the rest of the portfolio.

2. Take last year's property photos and make them feel new

You don't always need to generate from scratch. Take an existing exterior shot and run it through /reseason to shift it from summer to autumn. Use /relight to swap midday brightness for that golden-hour warmth. Suddenly a photo from 18 months ago works perfectly for the new campaign – and it's still authentically that property.

API endpoint: POST /reseason · POST /relight · POST /replace_background

💡 Try it first: Upload a property exterior photo. Try /reseason with a winter prompt. Compare how well architectural details are preserved vs. a competitor tool. The difference is usually obvious.

3. Add branded lifestyle products to room shots

Want that amenity pouch on the bedside table? That bottle of wine on the terrace? You don't need a prop stylist – just the product image and a coordinate. Embed it into the room scene, and Bria handles the lighting match and shadow generation. Run the same product into 10 different room backgrounds in one batch.

API endpoint: POST /product/embed

💡 Try it first: Take any room photography background. Embed a single product (minibar item, amenity kit, branded glassware). Check that the shadow looks natural. Adjust placement coordinates if needed. That's it.

4. Localize for every market and booking platform

OTA thumbnail specs, Meta story ratios, email headers, print brochures – they all want different crops. Use /expand to extend the canvas for wider formats without cropping out the subject, and /crop to hit the exact dimensions each platform needs. For a portfolio of 80 properties × 4 seasons × 8 formats, that's 2,560 assets from a single automated run.

API endpoint: POST /expand · POST /crop · POST /increase_resolution

💡 Try it first: Build your format list once per client or property group. Every future campaign run reuses the same manifest. New season = new prompt + same pipeline.
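The 2,560-asset figure falls straight out of a nested loop over the portfolio. A quick sketch of how the job matrix expands (property and format names below are placeholders, not real API values):

```python
from itertools import product

# Placeholder identifiers for illustration.
properties = [f"property_{i:02d}" for i in range(80)]
seasons = ["spring", "summer", "autumn", "winter"]
formats = ["ota_thumb", "meta_feed", "meta_story", "email_header",
           "print_brochure", "web_hero", "display_banner", "newsletter"]

# One job per (property, season, format) combination.
jobs = [
    {"property": p, "season": s, "format": f}
    for p, s, f in product(properties, seasons, formats)
]
print(len(jobs))  # 2560 -- the full run from one manifest
```

Each job then maps to the /expand, /crop, or /increase_resolution call for its target format; the matrix itself never has to be written by hand.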

What does this actually save?

Seasonal refresh cycle: 8 weeks of production → 5 days
Properties covered: 15 (budget-limited) → 80 (all of them)
Localized market variants: 1 hero → 15+ per property, automatically
Campaign time-to-market: 6 weeks → under 1 week

CPG & Consumer Brands

You're launching 40 SKU variants. Each one needs a white-background packshot, a lifestyle image, a social crop, a retailer-compliant version for Amazon, one for Walmart, one for Target, and an email banner. Times 6 regional markets. The math on manual production is brutal.

The good news: this entire output can come from one API pipeline, run once per SKU.


Step by step

1. Generate on-brand campaign scenes before the product even ships

You don't need the physical product on set to start campaign production. Describe what the lifestyle scene should look and feel like – and if you've trained a Tailored Generation model on your brand's approved imagery, Fibo will match your visual language automatically. Consistent color palette, lighting aesthetic, composition style, all locked in without manual art direction on every single output.

API endpoint: POST /image/generate (with TailoredGen for brand conditioning)

💡 Try it first: Start without Tailored Generation. Generate lifestyle scenes for your product category. Then compare the output after fine-tuning on 20–30 brand-approved images. The difference in brand consistency is what tells you whether model training is worth adding to your pipeline.

2. Adapt packshots for seasonal campaigns without reshooting

That approved hero packshot from the annual shoot is doing a lot of heavy lifting. Recolor the props and background to the seasonal palette (warm holiday reds, pastel spring tones). Replace the background with a seasonal context scene. Use gen_fill to add decorative elements around the product, without touching the product artwork itself.

API endpoint: POST /recolor · POST /replace_background · POST /gen_fill

💡 Try it first: Upload a packshot. Run /recolor with a "warm holiday" color direction. Then try /replace_background with a cozy winter scene. Both take about 10 seconds. If the outputs meet your quality bar, you're ready to parameterize this step into the pipeline.

3. Generate retailer packshots and lifestyle images from the same product asset

One product cutout, two completely different use cases. /product/packshot places it on a white or neutral background at exact retailer specs (Amazon needs 1000×1000 with white BG, Walmart wants 2000×2000, etc.). /product/embed takes the same cutout and drops it into a lifestyle background for social, DTC, and display. You're not maintaining two separate asset pipelines; they branch from the same source.

API endpoint: POST /product/cutout · POST /product/packshot · POST /product/embed

💡 Try it first: In the eCommerce Playground, upload a product image. Generate the packshot. Then use Lifestyle Shot by Text to see it in context. Show both outputs side by side: same product, two use cases, one source image.
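The branching can be sketched as one function that fans a single cutout into both payloads. The field names here are illustrative assumptions, not Bria's documented request schema; only the endpoint names come from this guide.

```python
def branch_from_cutout(cutout_url: str, retailer_spec: dict, scene_prompt: str) -> dict:
    """One product cutout feeds both the packshot and lifestyle branches."""
    return {
        # POST /product/packshot -- retailer-spec white/neutral background
        "packshot": {
            "image_url": cutout_url,
            "background": "white",
            "width": retailer_spec["width"],
            "height": retailer_spec["height"],
        },
        # POST /product/embed -- same cutout dropped into a lifestyle scene
        "lifestyle": {
            "image_url": cutout_url,
            "scene_prompt": scene_prompt,
        },
    }

payloads = branch_from_cutout(
    "https://cdn.example.com/sku-42/cutout.png",
    {"width": 1000, "height": 1000},               # Amazon main-image spec
    "sunlit kitchen counter, morning light",
)
```

The point of the sketch is the single source: both branches reference the same cutout URL, so updating the product asset updates every downstream output.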

4. Hit every retailer spec automatically

Build a format manifest with every retailer's requirements: Amazon main (1000×1000 white BG), Amazon A+ (970×300), Walmart (2000×2000), Target hero (16:9), Kroger banner (1200×628), Costco print spread (300 DPI). Feed one master image per SKU, get 6+ compliant outputs back. Set up once, reuse for every future launch.

API endpoint: POST /crop · POST /expand · POST /increase_resolution

💡 Try it first: Manually enter your top 3 retailer specs as a test. Run one SKU through them. If the outputs are compliant, the manifest approach scales to all 40 SKUs immediately.
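One way to keep the manifest honest is a small compliance check that runs on every output before delivery. The spec fields below are assumptions for illustration; the dimensions themselves are the retailer requirements named above.

```python
# Retailer spec manifest (subset): dimensions plus a white-background flag.
RETAILER_SPECS = {
    "amazon_main":  {"width": 1000, "height": 1000, "white_bg": True},
    "amazon_aplus": {"width": 970,  "height": 300,  "white_bg": False},
    "walmart":      {"width": 2000, "height": 2000, "white_bg": False},
}

def is_compliant(asset: dict, spec: dict) -> bool:
    """Check an output asset's dimensions (and background, where required) against a spec."""
    return (
        asset["width"] == spec["width"]
        and asset["height"] == spec["height"]
        and (not spec["white_bg"] or asset.get("white_bg", False))
    )

asset = {"width": 1000, "height": 1000, "white_bg": True}
print(is_compliant(asset, RETAILER_SPECS["amazon_main"]))  # True
```

A gate like this turns "100% automated spec compliance" from a claim into an assertion your pipeline enforces on every run.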

What does this actually save?

Product launch content cycle: 18 weeks → 2–3 weeks
Retailer spec compliance: 100% automated, no manual resizing
SKU coverage at launch: all 40 variants have full assets on Day 1
Seasonal refresh frequency: 2x/year (budget) → 6x/year (API cost only)

Marketing Agencies

You serve 25 clients. Each has brand guidelines, a campaign brief, channel specs, and a legal team that has opinions. Currently about 40% of production budget goes to stock licensing, and another chunk goes to freelancers doing manual Photoshop work that AI should be doing.

The agencies that figure this out first are going to eat everyone else's lunch on margins.


Step by step

1. Turn a written brief into visual concepts in under an hour

Feed the client's brief into /structured_prompt/generate. It converts free-text into a structured JSON (VGL) encoding the composition, lighting, color temperature, subject, environment, and style. Generate 4 scene concepts from it. Then, and this is the Bria magic, modify individual parameters to explore different directions without regenerating from scratch. "Urban" becomes "natural." "Evening" becomes "golden hour." The direction stays intact while the variation is surgical.

API endpoint: POST /structured_prompt/generate · POST /image/generate

💡 Try it first: Take any client brief sitting in your inbox. Paste it into the sandbox as a prompt. Check the structured JSON the VLM Bridge produces. Edit one or two attributes. Regenerate. That's the ideation workflow.

2. Breathe new life into the client's existing photo library

Most clients have a library of approved photos gathering dust. Before you suggest a new shoot, audit what they have and see what Bria can do with it. /relight shifts the mood across the whole batch. /restyle adapts the aesthetic to the new campaign direction – "clean minimalist" to "warm editorial." A creative team that spent 3 days on 50 Photoshop adaptations can now process the same batch in 2 hours.

API endpoint: POST /relight · POST /restyle · POST /replace_background

💡 Try it first: Pick 5 images from a client library. Run them through /relight and /restyle in the relevant sandboxes. Note how well the original composition and subject are preserved. If the quality holds, batch processing becomes a simple loop over the full library.

3. Place client products into campaign scenes at scale

For brand, retail, or CPG clients with physical products: take the approved product image, provide the campaign background, specify coordinates, done. Bria matches the lighting and generates a natural shadow, no compositing work in Photoshop. And because Bria's training data is 100% licensed from leading partners like Getty and Envato, you can hand these assets to clients with full commercial indemnification.

API endpoint: POST /product/embed · POST /product/lifestyle_shot_by_text

💡 Try it first: Take a client product image. In the sandbox, use Lifestyle Shot by Text with their campaign brief as the scene prompt. Review the 4 outputs for lighting match, shadow quality, and product fidelity. Those are the three things to validate before you integrate this step into production.

4. Deliver the complete asset matrix without a production team

Build a per-client format manifest once. Every future campaign run uses it automatically. Digital OOH (1920×1080), Meta feed (1080×1080), Meta story (1080×1920), LinkedIn banner (1200×628), display leaderboard (728×90), email header (600×200), YouTube thumbnail (1280×720), print full-page (300 DPI). All from one master. Time to full channel delivery: under 30 minutes of compute.

API endpoint: POST /crop · POST /expand · POST /increase_resolution

💡 Try it first: Store each client's format spec as a JSON manifest. New campaign = new prompt + same manifest. Set it up once; it runs forever.

What does this actually save?

Brief to visual concepts: 5 days → same day
Asset production per campaign: 40 hours → under 4 hours
Stock licensing spend: eliminated (Bria outputs are commercially licensed)
Revision rounds: 4–6 → 1–2 (structured prompts make edits exact)

Tech & SaaS

You're a small creative team producing visuals for 12 feature launches a quarter, a growth team running 30+ paid social variants per campaign, or a developer embedding visual AI directly into your product.

The common thread: manual production doesn't scale, and the gap between what your team can produce and what your pipeline needs keeps widening.

The same API pipeline closes that gap whether you're the team creating the visuals or the developer building the tool that creates them.


Step by step

1. Generate product marketing visuals for every launch cycle

You don't need a designer briefed and a stock license cleared for every feature release. Describe the scene (product category, visual style, environment, campaign mood) and generate 4 concept variants. From there, lock what works and vary only what needs to change: swap the color palette from one campaign to the next, update the environment for a new vertical. Each output is a parameter change, not a new brief.

API endpoint: POST /structured_prompt/generate · POST /image/generate

💡 Try it first: Write a brief for an upcoming feature launch (e.g., "B2B SaaS dashboard, clean UI, light mode, professional, neutral background"). Generate 4 variants. Then lock the composition and change one attribute: mood, color temperature, environment. That repeatability is what you're validating before you build the generation step into your pipeline.

2. Reuse and refresh existing assets for every new release or campaign

Your library of approved visuals is an asset, not a constraint. Instead of reshooting or re-licensing for every campaign, adapt what you already have. Shift the lighting from last quarter's hero image to match the new campaign tone. Restyle an existing product visual to fit a new vertical or market. Replace the background to update seasonal context. The product or UI element stays unchanged; only the context around it shifts.

API endpoint: POST /relight · POST /restyle · POST /replace_background

💡 Try it first: Take a hero image from your last launch. Run /restyle with your new campaign direction (e.g., "dark mode, high contrast"). Check that branded UI elements and product details are preserved. That fidelity check is the quality gate before you build this into a batch refresh loop.

3. Generate the full paid social format matrix from one master creative

Growth teams running A/B tests at volume hit the same bottleneck: creative production can't keep pace with the test cadence. One approved master image should generate every format you need – Meta feed, Meta story, LinkedIn, Google Display, YouTube thumbnail, email header – automatically. No manual resizing, no aspect ratio guesswork, no waiting on design.

API endpoint: POST /crop · POST /expand · POST /increase_resolution · POST /recolor

💡 Try it first: Take one approved creative. Build a format manifest listing your top 5 paid social specs with exact dimensions. Run the resize and crop calls against it. That five-minute sandbox test is the full automation – the same manifest runs on every future launch.
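Since every format is independent, the calls can run in parallel. In the sketch below, submit_resize is a stub standing in for the real /crop request so the concurrency pattern is clear without any network dependency; swap the stub body for your actual HTTP call once the sandbox test passes.

```python
from concurrent.futures import ThreadPoolExecutor

# Top paid-social specs (illustrative subset).
FORMATS = [
    ("meta_feed",     1080, 1080),
    ("meta_story",    1080, 1920),
    ("linkedin",      1200, 628),
    ("display",       728,  90),
    ("youtube_thumb", 1280, 720),
]

def submit_resize(master_url: str, name: str, w: int, h: int) -> dict:
    # Stub: in production this would POST to /crop and return the result URL.
    return {"format": name, "width": w, "height": h, "source": master_url}

def run_matrix(master_url: str) -> list:
    """Fan out one master creative into every target format concurrently."""
    with ThreadPoolExecutor(max_workers=5) as pool:
        futures = [pool.submit(submit_resize, master_url, n, w, h)
                   for n, w, h in FORMATS]
        return [f.result() for f in futures]

results = run_matrix("https://cdn.example.com/launch/master.jpg")
print(len(results))  # 5
```

Because the per-format calls don't depend on each other, wall-clock time stays roughly constant as the manifest grows; only total API volume scales.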

4. Embed Bria as the visual AI layer inside your own product

If you're building a product that generates or edits images for your end users, you need a visual AI layer that can handle commercial scale: consistent outputs, defensible provenance, and the flexibility to customize the behavior for your use case. Bria's API catalog covers generation, editing, background, product, and video, all accessible via standard REST endpoints. You can start with one endpoint, validate quality against your acceptance criteria, and expand from there.

API endpoint: Full API suite; see docs.bria.ai for the complete endpoint reference

💡 Try it first: Pick the one endpoint most critical to your core feature. Call it in the sandbox with representative inputs from your actual use case, not synthetic test data. Review the outputs against your acceptance criteria before writing any integration code. Once that endpoint passes, the path to the full catalog is the same pattern repeated.
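A hedged sketch of that first integration step: assemble the request without sending it, so you can inspect exactly what would go over the wire. The base URL, header name, and payload fields below are placeholders, not Bria's documented interface; verify all of them against docs.bria.ai before making a real call.

```python
import json

# Placeholder base URL -- the real host is documented at docs.bria.ai.
BRIA_API_BASE = "https://api.example.com/v1"

def build_generate_request(api_token: str, prompt: str, num_results: int = 4):
    """Assemble (but do not send) a request for POST /image/generate.

    Header and payload field names are illustrative assumptions.
    """
    url = f"{BRIA_API_BASE}/image/generate"
    headers = {"api_token": api_token, "Content-Type": "application/json"}
    payload = {"prompt": prompt, "num_results": num_results}
    return url, headers, json.dumps(payload)

url, headers, body = build_generate_request(
    "YOUR_TOKEN", "minimalist white kitchen, morning light, glass water bottle"
)
# Sending is then one line with any HTTP client -- e.g.
# requests.post(url, headers=headers, data=body) -- gated behind your own
# retry and acceptance logic once sandbox quality is validated.
```

Separating request construction from transport also makes the integration testable: you can assert on payloads in CI without spending API credits.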

What does this actually save?

Visual assets per feature launch: 3–5 (manual) → 20+ automated variants
Paid social format coverage: 5 formats per creative → full channel matrix, automated
Time from sandbox to first production call: 1–2 weeks → 1–3 days
Designer hours per campaign cycle: reduced 60–80% on repeatable visual production

From testing to production: what the path actually looks like

Every pipeline in this guide follows the same underlying pattern. You start in the sandbox – no code, no commitment – and validate one thing: does Bria's output quality meet your bar for this use case? If it does, the integration moves faster than most teams expect. If it doesn't, you've learned that before writing a single line of production code.

The step from sandbox to first API call is typically one afternoon. First call to a working single-endpoint pipeline is usually 3–5 days. From there, you stack steps.

The multi-stage pipelines in this guide (4 steps, branching outputs, format manifests) are still just individual API calls chained together. Each one started as a sandbox test.

One thing worth building into your architecture from the start: every output from Bria carries traceable provenance. The training data is 100% licensed from premium content partners. That means assets you generate are commercially safe to ship, no legal review loop, no stock clearance, no licensing exposure.

For pipelines running at scale, that's not a footnote. It's a structural advantage.

Start with the endpoint most relevant to your use case. Use your own images, not demos. See what the output quality looks like before you build anything. That's all Day 1 needs to be.

Visual production is becoming infrastructure

The volume of visual content the world needs is growing faster than any team can produce manually. That gap doesn't close with more designers or bigger stock budgets; it closes with infrastructure. Pipelines that generate, adapt, and deliver at scale. AI that behaves predictably, outputs that are reproducible, and assets that are legally defensible from the moment they're created.

That's the shift happening now, and it's accelerating as AI agents take on more of the pipeline themselves. The question isn't whether to automate visual production. It's whether the foundation you build on is one you can trust at scale.

Bria exists to be that foundation. Rights-clear models. Structured, auditable outputs. A platform built for production from day one, not bolted together after the fact.

Explore the full API catalog and start testing → console.bria.ai
