Portfolio

Real systems in production

These are not mockups. Each case study is a working system built on our 3-layer architecture, running in production environments.

CRE Daily News Brief

Automated industry intelligence for commercial real estate — from 20+ sources to one executive digest, every morning.

Problem

A commercial real estate firm needed daily industry intelligence but spent hours manually scanning news sites, filtering for relevance, and summarizing for their team.

Solution

An automated pipeline that fetches articles from 20+ RSS feeds, filters by keyword clusters (industrial investments, sale-leasebacks, institutional capital flows), scrapes full content, and synthesizes an executive digest using AI.
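As a rough sketch of the first two stages, assuming feedparser and using placeholder feed URLs and keyword clusters (the production fetch_rss and filter_articles scripts are more involved):

```python
# Sketch of the fetch-and-filter stage (feed URLs and keywords are placeholders).
import feedparser

FEEDS = [
    "https://example.com/cre-news.rss",
    "https://example.com/industrial.rss",
]

# Keyword clusters roughly matching the filter described above.
KEYWORD_CLUSTERS = {
    "industrial": ["industrial investment", "warehouse", "logistics"],
    "sale_leaseback": ["sale-leaseback", "sale leaseback"],
    "capital": ["institutional capital", "capital flows"],
}

def fetch_and_filter():
    """Return feed entries whose title or summary hits any keyword cluster."""
    matches = []
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
            hits = [name for name, kws in KEYWORD_CLUSTERS.items()
                    if any(kw in text for kw in kws)]
            if hits:
                matches.append({"title": entry.get("title"),
                                "link": entry.get("link"),
                                "clusters": hits})
    return matches
```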

Delivery

Daily digest delivered automatically via email, Slack, and Google Docs — before the team arrives in the morning.
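The Slack leg of delivery can be as small as an incoming-webhook POST; this sketch assumes a SLACK_WEBHOOK_URL environment variable and omits the email and Google Docs legs:

```python
# Sketch of the Slack delivery leg via an incoming webhook
# (SLACK_WEBHOOK_URL is a placeholder environment variable).
import os
import requests

def deliver_to_slack(digest: str) -> None:
    resp = requests.post(
        os.environ["SLACK_WEBHOOK_URL"],
        json={"text": digest},  # Slack renders the "text" field as mrkdwn
        timeout=10,
    )
    resp.raise_for_status()
```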

Stack

Python (fetch_rss, filter_articles, scrape_article, synthesize_news, deliver_news), Claude Haiku for synthesis, Modal for scheduling.
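Scheduling on Modal is a thin wrapper around the pipeline; a minimal sketch assuming the current modal SDK, with an illustrative app name and cron time:

```python
# Sketch of the Modal cron schedule (app name and run time are illustrative).
import modal

app = modal.App("cre-daily-brief")

@app.function(schedule=modal.Cron("0 6 * * *"))  # 06:00 UTC, every day
def run_pipeline():
    # fetch_rss -> filter_articles -> scrape_article -> synthesize_news -> deliver_news
    ...
```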

20+ news sources · 5 scripts in the pipeline · ~$3 monthly cost · daily delivery

Jump Cut Video Editor

Neural VAD-powered video editing that turns raw talking-head footage into polished content — automatically.

Problem

A content creator spent 4-6 hours per video manually removing silences and dead air from talking-head recordings. Standard FFmpeg-based tools, which cut on simple volume thresholds, produced poor results whenever background noise was present.

Solution

A neural voice activity detection (VAD) pipeline built on the Silero model. It detects speech far more accurately than threshold-based approaches and handles background noise, breathing, and quiet speech naturally.
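A minimal sketch of the detection step, assuming the Silero VAD model loaded via torch.hub; the file name and silence threshold are illustrative:

```python
# Sketch of speech detection with Silero VAD
# (input file and min_silence_duration_ms are illustrative).
import torch

model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, _, _ = utils

wav = read_audio("raw_take.wav", sampling_rate=16000)
speech = get_speech_timestamps(
    wav, model,
    sampling_rate=16000,
    min_silence_duration_ms=300,  # gaps shorter than this are kept
)

# Each entry is {"start": sample, "end": sample}; convert to seconds
# to build the keep-list that drives the jump cut.
segments = [(s["start"] / 16000, s["end"] / 16000) for s in speech]
```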

Features

"Cut cut" restart detection, audio enhancement (EQ, compression, loudness normalization), LUT-based color grading, and 3D pan transitions between segments.

Stack

Python (jump_cut_vad_singlepass, insert_3d_transition), Silero VAD, FFmpeg with hardware acceleration.
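For a flavor of the enhancement and grading pass described under Features, here is a hedged FFmpeg sketch; the filter values and the grade.cube LUT are placeholders, not the production settings:

```python
# Sketch of an enhancement + grading pass with FFmpeg
# (filter values and grade.cube are placeholders).
import subprocess

def enhance(src: str, dst: str) -> None:
    subprocess.run([
        "ffmpeg", "-y",
        "-hwaccel", "auto",  # let FFmpeg pick a hardware decoder if available
        "-i", src,
        # Audio: high-pass EQ, gentle compression, broadcast loudness target.
        "-af", "highpass=f=80,acompressor=threshold=0.089:ratio=3,"
               "loudnorm=I=-16:TP=-1.5:LRA=11",
        # Video: LUT-based color grade from a .cube file.
        "-vf", "lut3d=file=grade.cube",
        dst,
    ], check=True)
```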

85% render time reduction (20-25 min before → 2-4 min after) · Silero VAD detection model

Lead Enrichment Pipeline

From a search query to 100+ fully enriched, scored leads in your spreadsheet — hands-free.

Problem

An agency needed to build targeted lead lists for outreach campaigns, but manual research took 5-10 minutes per lead. At 100+ leads per campaign, that pace was unsustainable.

Solution

A multi-source enrichment pipeline that pulls data from Google Places, scrapes company websites, and searches DuckDuckGo — extracting 36 structured fields per lead including owner info, social media, emails, and team contacts.
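A sketch of the Google Places leg using the Text Search endpoint; the query, the selected fields, and the environment-variable key are assumptions:

```python
# Sketch of the Google Places leg via the Text Search endpoint
# (query and extracted fields are illustrative; key comes from the environment).
import os
import requests

def places_search(query: str) -> list[dict]:
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/textsearch/json",
        params={"query": query, "key": os.environ["GOOGLE_PLACES_API_KEY"]},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {
            "name": r.get("name"),
            "address": r.get("formatted_address"),
            "place_id": r.get("place_id"),  # used later for detail lookups
        }
        for r in resp.json().get("results", [])
    ]
```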

Classification

Claude scores each lead against custom criteria, automatically sorting them into tiers so the sales team focuses on the highest-value prospects first.
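A minimal sketch of the scoring step, assuming the anthropic Python SDK and an illustrative tiering prompt (the real scoring criteria are customer-specific):

```python
# Sketch of lead tiering with Claude (prompt and tiers are illustrative;
# assumes ANTHROPIC_API_KEY is set in the environment).
import anthropic

client = anthropic.Anthropic()

def classify_lead(lead: dict) -> str:
    msg = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": (
                "Score this lead against our criteria and answer with "
                "exactly one tier: A, B, or C.\n\n"
                f"Lead data: {lead}"
            ),
        }],
    )
    return msg.content[0].text.strip()
```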

Stack

Python (google_places_search, scrape_google_maps, enrich_emails, extract_website_contacts, classify_leads_llm), Google Sheets for output.
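The spreadsheet output can be a short gspread call; the spreadsheet name and service-account auth here are assumptions:

```python
# Sketch of the Google Sheets output step using gspread
# (spreadsheet name is a placeholder; auth via a service-account file).
import gspread

def write_leads(rows: list[list[str]]) -> None:
    gc = gspread.service_account()        # reads the default local credentials file
    ws = gc.open("Lead Pipeline").sheet1  # placeholder spreadsheet name
    ws.append_rows(rows, value_input_option="USER_ENTERED")
```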

36 data fields per lead · 3 data sources · 5-10 min per lead manually → <10 sec automated

Have a similar problem?

Tell us about your workflow bottleneck and we will design an automation system to eliminate it.