WHO WE ARE
DELVE Deeper is a global performance media agency where data, technology, and marketing intersect.
We help brands like UNICEF, Virgin Voyages, and Orange grow by using data, analytics, and automation to drive measurable results. Our teams combine media, data science, and technology in a fast-paced, international environment.
ROLE OVERVIEW
You are accountable for making AI work across the agency: in practice, at scale, producing results people can feel. The foundation of this role is a rigorous understanding of how the business actually operates: where time goes, how decisions get made, where information stalls, and which processes are ripe for change.
This role has two connected areas of ownership. The first is fast, iterative workflow automation: mapping processes analytically, identifying where the agency is losing time, and getting working solutions built and adopted quickly via n8n. The second is broader AI enablement: identifying where the right AI tool, applied to the right operational problem, can change how the agency stores knowledge, surfaces information, or supports decisions.
Both areas start from the same place: a clear-eyed read of how work actually flows. You bring business operations experience and process mining skill to that analysis: the ability to observe a workflow, decompose it analytically, identify where value is being lost, and design the right intervention. AI is the toolkit. Business acumen is what determines where to point it.
Speed and judgement are the twin engines of the role. On the automation side, the backlog moves fast and working solutions reach people fast. On the broader AI enablement side, you prototype and test before recommending: enough hands-on work to make confident calls about what actually belongs where. Across both, you are accountable for outcomes. Deployed tools that go unused are not counted as wins.
WHAT YOU ARE ACCOUNTABLE FOR
PROCESS ANALYSIS AND OPPORTUNITY IDENTIFICATION
You map how work actually flows across the agency: where time is spent, where handoffs break down, where decisions stall, and where the same effort repeats. You bring process mining discipline to this analysis: decomposing workflows into their component steps, quantifying the cost of each, and identifying precisely where AI intervention creates the most leverage. This analysis feeds both the automation backlog and the broader AI evaluation pipeline.
WORKFLOW AUTOMATION VIA N8N
From the process analysis, you identify and prioritise the workflows best suited for automation. You score opportunities by recoverable time and implementation effort, brief the implementation team to build in n8n, and sign off before anything reaches the people using it. The backlog is always live and prioritised. The team always has clear direction.
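As a sketch of the scoring described above (the opportunity names, time estimates, and leverage ratio are hypothetical, not agency data):

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    hours_recovered_per_week: float  # estimated recoverable time
    effort_days: float               # estimated implementation effort

def score(op: Opportunity) -> float:
    # Leverage ratio: weekly hours recovered per day of build effort.
    return op.hours_recovered_per_week / op.effort_days

backlog = [
    Opportunity("weekly reporting", 6.0, 3.0),
    Opportunity("client onboarding checklist", 2.0, 0.5),
    Opportunity("invoice reconciliation", 4.0, 4.0),
]

# Highest leverage first: this ordering is what the implementation team works from.
prioritised = sorted(backlog, key=score, reverse=True)
```

Any real scoring model would also weight adoption risk and data access, but the principle is the same: a live, quantified ordering, not a static wishlist.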
BROADER AI USE CASE IDENTIFICATION AND RECOMMENDATION
You are continuously scanning for AI opportunities that fall outside the automation backlog: the knowledge management problems, the information retrieval gaps, the workflow friction that structured tooling could solve without a custom build. You prototype and test candidate solutions hands-on before recommending them, and you own the recommendation. Examples: evaluating whether call transcripts belong in NotebookLM or a structured database; deciding how client information should be stored and surfaced to the working group; assessing whether a shared Claude Project or a purpose-built integration better serves a team's needs.
TOOL EVALUATION AND SELECTION
When a new need surfaces, whether from a team conversation, a champion observation, or your own analysis, you evaluate the right tool to address it. That means picking up the candidate tools, running them against real agency content and workflows, and forming a considered view before any recommendation is made. You build enough to know what you are recommending and why.
ADOPTION AND OUTCOME TRACKING
You track usage and outcomes across everything deployed: automations, knowledge tools, AI-assisted workflows. You measure what is working and what is getting traction, identify friction early, and direct iteration before patterns calcify. The measure of success is the agency operating differently: time recovered, knowledge accessible, decisions faster.
WHAT GOOD JUDGEMENT LOOKS LIKE
A significant part of this role is making good technology decisions quickly. The agency will surface problems. Your job is to evaluate the solution space, prototype where needed, and recommend the right approach. These examples illustrate the kind of thinking the role requires.
A process that looks simple but is not
A team reports spending several hours a week on a reporting workflow. Before recommending an automation, you map the full process: every step, every decision point, every handoff. You discover that two of the six steps are genuinely repeatable, two require contextual judgement, and two exist only because of a structural gap in how information is shared upstream. The automation brief covers the two repeatable steps. The structural gap becomes a separate recommendation. The judgement steps are left to the person doing them. That kind of decomposition, separating what can be systematised from what genuinely requires a human, is the core analytical skill this role demands.
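That decomposition can be expressed directly; the step names below are illustrative, not an actual agency workflow:

```python
# Each step of the reporting workflow, tagged with where the analysis placed it.
steps = {
    "pull last week's numbers":        "automate",
    "format into the report template": "automate",
    "decide which anomalies to flag":  "judgement",
    "write the client commentary":     "judgement",
    "chase missing upstream inputs":   "structural_gap",
    "re-request source access":        "structural_gap",
}

# Only the repeatable steps go into the n8n automation brief;
# structural gaps become a separate recommendation.
automation_brief = [step for step, tag in steps.items() if tag == "automate"]
structural_issues = [step for step, tag in steps.items() if tag == "structural_gap"]
```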
Call transcripts
The agency generates call transcripts regularly. The question is how to store them, search them, and put them to use. You prototype the leading options (NotebookLM as a knowledge base, a structured folder system with AI retrieval, a Claude Project with uploaded sources, direct database storage with tagging) and form a view based on how each performs against real agency content. You recommend the approach that is most useful for the people who need to access the information, and you own the implementation of that recommendation.
Client knowledge and working group access
Client-related information is scattered across emails, documents, and people's heads. The question is how to centralise it in a way the working group can actually use. You evaluate whether a shared workspace, a structured Notion setup, a Claude Project, or an AI-enhanced document repository best fits how the team works. You test the leading options against real client content before recommending, and you own the rollout.
Automation vs. AI-assisted workflow
A team is spending significant time on a repeatable task. You assess whether this is an n8n automation opportunity, a prompt playbook, a Claude Project workflow, or a combination. You make the call based on the nature of the task, the technical overhead of each approach, and the team's actual working patterns. Speed of the right solution matters more than elegance of the perfect one.
AI FLUENCY REQUIREMENT
This role requires someone with genuine, current fluency across the AI tool landscape โ someone who uses these tools daily, has strong opinions about where each one excels, and reaches for the right one instinctively when a new problem surfaces.
AI Productivity & Knowledge
Claude / Claude Projects
NotebookLM
Gemini / Gems
Google Workspace AI
Notion AI
ClickUp AI
Other platforms as needed
Automation & Integration
n8n – hands-on capable
Webhook & API integrations
Conditional logic & branching
Google Sheets as a data layer
Error handling & monitoring
Other platforms as needed
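Outside n8n itself, the webhook, branching, and error-handling patterns listed above look roughly like this (the event names and routes are invented for illustration):

```python
import json

def handle_webhook(raw_body: str) -> dict:
    """Parse an incoming webhook body, branch on event type, fail safely."""
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        # Error handling: malformed input goes to a dead-letter route
        # for monitoring, rather than being silently dropped.
        return {"route": "dead_letter", "reason": "invalid JSON"}

    event = payload.get("event")
    # Conditional logic & branching: route each event type to its own next step.
    if event == "report.ready":
        return {"route": "append_to_sheet", "rows": payload.get("rows", [])}
    if event == "call.transcribed":
        return {"route": "index_transcript", "id": payload.get("id")}
    # Unknown events are queued for human review instead of erroring out.
    return {"route": "review_queue", "event": event}
```

In n8n the same shape is built visually (a Webhook trigger, an IF node, an error workflow), but the underlying reasoning about routes and failure paths is identical.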
CANDIDATE PROFILE
n8n – workflow building, reviewing, and quality sign-off
Genuine working fluency across the current AI tool stack
Hands-on tool evaluation – prototypes against real content before recommending
Prompt engineering – designs and evaluates prompts for specific, production-grade use cases
Claude Projects and custom AI workspace configuration and deployment
NotebookLM and knowledge synthesis tools
Google Workspace AI (Docs, Meet, Gmail AI features)
Structured data layers: Google Sheets, Notion
API and webhook concepts in automation environments
Process mining – decomposes workflows into steps, identifies value loss, quantifies automation opportunity
Business operations experience – understands how agency functions actually work, not just how they are described
Workflow analysis and process mapping across complex, cross-functional operations
Leverage prioritisation – scores and sequences opportunities under time pressure
Technology selection judgement – matches tool to operational problem based on real testing
Brief writing – translates process analysis into tight, actionable implementation specs
Outcome accountability – tracks adoption and results, not just delivery
Change management – moves resistant teams from scepticism to habitual use
High-trust relationship building at every level of seniority
Clear written communication – briefs, process maps, recommendations, escalation documents
WHAT WE OFFER
Hybrid working model: three days in the office (Tuesday to Thursday)
A competitive salary with opportunities for growth
Private medical care at Medicover
Multisport card
Annual education budget of $250
Generous employee referral program
Catered office lunch every Tuesday
Snacks and occasional breakfasts available in the office