I'm Ellison Yeung. I work across creativity, strategy, and technology — from campaigns and content to tools, systems, and smarter ways of working. Most of the time I'm trying to turn an interesting idea into something useful.
I grew up in the UK, went to the University of Leeds, and spent a few years working across digital learning, content production, and SEO at a consumer electronics company. That's where I started getting into how automation and AI could actually make marketing work better — not in theory, but in practice.
In 2025 I moved to Hong Kong and joined Publicis Groupe as a management trainee. It's a rotational programme, so I'm getting hands-on time across media planning, client servicing, and campaign execution — learning how the different parts of the business actually connect. On the side, I've been building tools that help people work faster and skip the repetitive stuff.
Management trainee at Publicis Groupe in Hong Kong — rotating across departments, learning how media, creative, and strategy come together in practice. Building AI tools on the side.
Tracking what competitors are doing — which events they're attending, how they're positioning themselves — is useful strategic intel. But doing it manually means hours of Googling, reading event pages, and copying data into spreadsheets.
There was no repeatable way to gather, structure, and store competitor event data across multiple airlines. The information existed, but collecting it was slow and inconsistent.
A multi-agent automation workflow using n8n, Perplexity AI, OpenAI, and web scraping. It queries competitor events, scrapes relevant pages, uses AI to parse and structure the data into clean JSON, and outputs everything to Google Sheets — event names, dates, locations, participation type, and descriptions. The whole system is modular, so swapping in a different industry or query type is straightforward.
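The structuring step above can be sketched in a few lines. This is a minimal illustration, not the actual workflow code: the field names (`event_name`, `start_date`, and so on) and the date format are assumptions standing in for whatever schema the real Google Sheet uses.

```python
from datetime import datetime

# Illustrative sheet schema — the real column names may differ.
SCHEMA = ["event_name", "start_date", "location", "participation_type", "description"]

def normalize_event(raw: dict) -> dict:
    """Coerce a loosely structured, AI-parsed event record onto the
    fixed schema, filling missing fields with empty strings."""
    row = {key: str(raw.get(key, "")).strip() for key in SCHEMA}
    # Normalize dates like "4 June 2025" to ISO format where possible;
    # leave the original value in place when the format doesn't match.
    try:
        row["start_date"] = datetime.strptime(row["start_date"], "%d %B %Y").date().isoformat()
    except ValueError:
        pass
    return row
```

Keeping the schema in one list is what makes the system easy to repoint at a different industry: swap the query and the column names, and the rest of the pipeline stays the same.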
What used to be a manual research task is now a single trigger. The workflow handles multiple brands in parallel, and the output is structured enough to act on immediately. I've also mapped out how the same architecture could adapt to ad intelligence, trend monitoring, and internal knowledge automation.
Understanding how competitors run ads — who they're targeting, what formats they use, how long campaigns run — is genuinely useful for media planning. But pulling that information manually from Meta Ad Library is tedious, and the data comes back messy.
I wanted a way to automatically scrape ad metadata for competitor airlines — creative formats, demographic breakdowns, spend ranges, campaign durations — and turn it into structured, actionable insight. Not just raw data, but something you could actually brief from.
An n8n workflow that connects to the Meta Ad Library API, scrapes ad creatives and targeting data, filters for ads with demographic breakdowns available, then pipes everything through OpenAI to summarize trends, identify audience patterns, and output clean JSON to Google Sheets. I also built a chat agent layer so you can ask questions about the data in real time. I hit some interesting edge cases — not all advertisers expose the same metadata, so I had to build filtering logic to handle inconsistent fields across brands.
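The filtering logic for those inconsistent fields boils down to a defensive check. A minimal sketch, assuming a `demographic_distribution` field — the real Meta Ad Library response shape may differ:

```python
def has_demographics(ad: dict) -> bool:
    """Return True only when an ad exposes a usable demographic breakdown.
    Some advertisers return None, some an empty list, some real rows —
    this normalizes all three cases to a single yes/no decision."""
    breakdown = ad.get("demographic_distribution")
    return isinstance(breakdown, list) and len(breakdown) > 0

def filter_ads(ads: list[dict]) -> list[dict]:
    """Keep only the ads worth passing to the summarization step."""
    return [ad for ad in ads if has_demographics(ad)]
```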
Works for Singapore Airlines with full demographic and creative data. Expanding to other airlines and running into real platform limitations — which is honestly the most interesting part. The architecture is designed to extend to TikTok, Google, and LinkedIn ad libraries too.
Knowing what competitors post on LinkedIn — and how people respond — is useful for understanding positioning, messaging trends, and audience sentiment in cargo. But monitoring multiple airline pages manually doesn't scale.
I needed a way to automatically collect LinkedIn posts from competitor airlines, capture engagement data and public comments, then generate structured summaries and trend analysis — all without logging in and scrolling feeds every week.
A pipeline using Apify for LinkedIn scraping, n8n for orchestration, and OpenAI for summarization and comment intelligence. Posts get normalized, deduplicated, and branched — if comments exist, they get analyzed; if not, it flags it and moves on. OpenAI generates post summaries, observed trends, and comment breakdowns in structured JSON, which feeds into Google Sheets. One of the more interesting challenges was handling LLM output consistency — different models return slightly different formats, so I built a validation layer that coerces types, normalizes bullets, and falls back gracefully when parsing fails.
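The validation layer described above can be sketched like this. The keys (`summary`, `trends`, `parse_ok`) are illustrative rather than the actual contract, but the shape of the logic — coerce types, strip bullet prefixes, fall back to raw text when parsing fails — is the point:

```python
import json
import re

def parse_llm_summary(text: str) -> dict:
    """Parse LLM output into a predictable dict. Different models return
    slightly different formats, so never trust the shape of the response."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        # Graceful fallback: keep the raw text rather than dropping the post.
        return {"summary": text.strip(), "trends": [], "parse_ok": False}
    # Coerce trends to a list of strings; some models return one string.
    trends = data.get("trends", [])
    if isinstance(trends, str):
        trends = trends.splitlines()
    # Normalize bullet prefixes ("-", "*", "•") to bare text.
    clean = [re.sub(r"^[\s\-\*\u2022]+", "", str(t)).strip() for t in trends]
    return {
        "summary": str(data.get("summary", "")).strip(),
        "trends": [t for t in clean if t],
        "parse_ok": True,
    }
```

The fallback branch matters more than it looks: a cheap model that fails to emit valid JSON one time in fifty will silently poison a spreadsheet unless every record is either validated or flagged.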
A repeatable system that turns scattered LinkedIn activity into organized competitive intel. The output is structured enough to spot patterns — what topics competitors lean into, how their engagement shifts, what their audience actually cares about. Also learned a lot about the real-world tradeoffs between model cost, speed, and output reliability.
After building several standalone automation workflows, I started thinking about how they connect. Each tool I'd built — event tracking, ad scraping, LinkedIn monitoring — was solving one piece of a bigger problem. I wanted to map out what a full system looks like when you treat AI agents like a small team, each with a clear job.
A five-agent framework where each agent has a specific role: one gathers competitor intelligence, one repurposes content, one handles client onboarding, one builds insight dashboards, and one researches leads. They're designed to be built in order — start with the one that delivers value fastest, prove it works, then layer in the rest. Each agent is an n8n workflow with AI at the core, but the emphasis is on keeping a human in the loop for anything client-facing or strategic.
Most AI agent content online is either too abstract or too hyped. I wanted to write something genuinely useful — a practical build order, realistic pricing, what to automate vs. what to keep human, common mistakes, and how the agents actually talk to each other. The goal was a framework someone could follow to build a real service, not a demo.
The first three agents (intelligence, content, onboarding) map directly to tools I've already built or am building. The framework itself is the strategic layer — it's how I think about turning individual automations into something that compounds. It's also how I'd explain this work to someone who's never touched n8n.
A company needed a way to make sense of the competitive data their scraping workflows were producing. Raw Google Sheets full of LinkedIn posts and competitor event data aren't useful if nobody looks at them. They needed something visual, filterable, and smart enough to surface what actually matters.
A two-tab dashboard using Lovable, powered by Google Sheets. One tab analyzes competitor LinkedIn content — engagement trends, post type breakdowns, top content leaderboards, competitor comparison tables. The other tracks competitor event participation — timelines, geo distribution, participation types. Both tabs have filters for date, competitor, content type, and full-text search. I also designed the data schemas, normalization rules, and deduplication logic from scratch.
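The deduplication rule behind that data layer is simple but load-bearing: re-scrapes must not create duplicate rows. A minimal sketch, assuming posts are keyed on a normalized competitor-plus-URL pair (the real key fields may differ):

```python
def dedupe_posts(rows: list[dict]) -> list[dict]:
    """Drop rows whose normalized (competitor, post_url) key has already
    been seen, so repeated scrapes are idempotent."""
    seen: set[tuple[str, str]] = set()
    unique = []
    for row in rows:
        key = (
            row.get("competitor", "").strip().lower(),
            row.get("post_url", "").rstrip("/").lower(),  # trailing slashes vary
        )
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique
```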
The AI insights layer. Instead of generic summaries, every insight references the specific posts or events that support it — and those references are clickable, opening a detail drawer with the full data. If the filters don't support an insight, it says so rather than making something up. I spent a lot of time getting the referencing and traceability right, because insight without source is just opinion.
The client uses it as their primary competitive intelligence tool. It turns data that was sitting in spreadsheets into something people actually open, explore, and make decisions from. I also wrote full documentation covering maintenance routines, troubleshooting, and future cross-tab features like event-post correlation and automated alerts.
I was working as a digital specialist at a consumer electronics company in the UK. They had products and channels but no real system for how content got made, optimized, or measured. SEO was an afterthought, video was sporadic, and there wasn't a consistent approach to how anything reached people online.
I designed and implemented an SEO strategy across their website and YouTube — keyword research, metadata optimization, content structuring, and internal linking. On the content side, I produced short-form videos and installation guides, directed the visual brand identity across digital channels, and worked with the marketing and dev teams to make sure everything was aligned. This was around 2023, when AI tools were just becoming usable, so I started experimenting with early automation to speed up content workflows and improve targeting.
Website traffic increased around 50%, video engagement went up significantly, and the content system I set up gave them a repeatable process instead of one-off posts. It was my first real experience of combining content, data, and early automation into something that actually moved numbers — and it shaped how I think about digital work now.
I don't think of myself as just a "media person" or just a "creative person." I'm interested in the whole picture — how an idea starts, how it gets shaped, how it reaches people, and whether it actually lands.
What I keep coming back to is this: the best work happens when you close the gap between thinking and doing. A strategy that never becomes anything real is just a slide deck. An execution without a clear reason behind it is just noise.
I try to stay in the space between — where the thinking is sharp enough to guide decisions, and close enough to execution that things actually get made.
I'm less interested in ideas that sound good and more interested in ideas that work. Can this become a tool? A workflow? A system someone actually uses?
I use AI every day — not because it's trendy, but because it genuinely makes my work better. I'm figuring out where it helps and where it doesn't.
Good strategy starts with paying attention. What do people actually care about? What are they ignoring? What's the gap between what brands say and what audiences hear?
I care about stories that move people somewhere — not just content that fills a feed. The best storytelling has a job to do.
I'd rather build something that works ten times than do the same thing manually ten times. I think about how to make processes repeatable and scalable.
Before I build anything, I run through the same questions: what triggers it, how does the data move, what actions need to happen, where do we need AI or APIs, and what integrations are involved. It keeps things modular and saves a lot of rework.
I'm always open to interesting conversations — whether it's about work, ideas, or something you're building. Feel free to reach out.