The Problem
Imagine managing a client with an ultra-high monthly budget for paid ads across multiple platforms, all targeting around 40 different landing pages.
Now, picture one of those landing pages suddenly loading very slowly—or worse, not loading at all.
Would you catch the issue in time to prevent wasted ad spend? Or would you only realize it retroactively, after pouring significant resources into a non-functional page? Worse yet, would you even notice at all?
In such a scenario, where would you point the blame for the ad’s underperformance? Would you scrutinize the optimization settings? Question the attribution model? Reassess the creatives or the overall funnel? Would it even cross your mind that a technical issue with the landing page might be the real culprit?
This exact challenge highlights the critical need to constantly monitor every landing page your ads are targeting across all platforms. But let’s be honest—how can you possibly keep tabs on dozens of pages manually? The effort sounds exhausting, time-consuming, and impractical at scale.
This was the precise issue we encountered with one of our clients.
The solution? AUTOMATION, of course!
STEP 1 – Getting the landing pages
Our first challenge was to compile and maintain a real-time list of all landing pages targeted by active ads across platforms.
For that, we used Funnel, an ETL (Extract, Transform, Load) tool that connects to data sources, filters and adjusts data, and saves it to destinations like Google Sheets or a data warehouse.
Here at Terrific Digital, we recommend using Funnel as your go-to ETL tool. We use it to create reports, automations and even AI bots—making it ideal for this project.
Every few hours, Funnel pulls data from our client’s marketing platforms and aggregates all ad destination URLs into a single custom dimension called “Original Ad Landing Page URL”.
While this works well in theory, a key issue arises—the data isn’t clean. The same landing page often appears differently across ads and platforms due to URL parameters and syntax variations.
To make this data usable, we needed to clean and standardize it.
STEP 2 – Clean the URLs
In Funnel, we created a custom dimension called “Clean Ad Landing Page URL”.
This dimension removes the “https://” prefix and strips all URL parameters from the “Original Ad Landing Page URL”.
This process leaves us with a distinct value for each unique landing page.
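The cleaning itself happens inside Funnel’s custom-dimension rules, but the logic is easy to reproduce elsewhere if you ever need it. Here is a minimal Python sketch of the same idea; the function name is ours, and the extra trailing-slash normalization is an assumption on our part rather than something Funnel requires:

```python
from urllib.parse import urlsplit

def clean_landing_page_url(original_url: str) -> str:
    """Mimic the 'Clean Ad Landing Page URL' dimension:
    drop the scheme, the query string, and the fragment."""
    parts = urlsplit(original_url.strip())
    # Keep only host + path, e.g. "example.com/pricing"
    return f"{parts.netloc}{parts.path}".rstrip("/")

# The same page, as it might appear across different ads and platforms:
variants = [
    "https://example.com/pricing?utm_source=facebook&utm_medium=paid",
    "https://example.com/pricing/?gclid=abc123",
    "https://example.com/pricing#plans",
]
print({clean_landing_page_url(u) for u in variants})
# -> {'example.com/pricing'}
```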
STEP 3 – Export the data to BigQuery
The final step in Funnel is exporting the data to Google BigQuery every few hours. While any data warehouse works, we prefer BigQuery.
We created a table called “active_lps” with the following columns:
- date: The day the ad ran.
- platform: The platform where the ad ran.
- landing_page: The cleaned landing page URL.
- cost: The ad spend for that day.
To keep the data manageable, we filtered it to include only the last 30 days.
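Funnel handles the export itself, but if you prefer to define the destination table on your own (or just want to see the schema spelled out), here is a sketch using the google-cloud-bigquery client. The project and dataset names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes GOOGLE_APPLICATION_CREDENTIALS is configured

schema = [
    bigquery.SchemaField("date", "DATE", description="The day the ad ran"),
    bigquery.SchemaField("platform", "STRING", description="The platform where the ad ran"),
    bigquery.SchemaField("landing_page", "STRING", description="The cleaned landing page URL"),
    bigquery.SchemaField("cost", "FLOAT", description="The ad spend for that day"),
]

# "your-project.your_dataset" is a placeholder for your own environment.
table = bigquery.Table("your-project.your_dataset.active_lps", schema=schema)
table = client.create_table(table, exists_ok=True)
print(f"Table ready: {table.full_table_id}")
```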
STEP 4 – Preparing the data for automation
The “active_lps” table still needs refinement. Our goals are:
- Only include rows from yesterday and today to ensure we’re viewing active landing pages and not those from ads that ended weeks ago.
- One row per unique “landing_page”.
- Aggregate all platform values into a single column called “platforms”.
- Filter only yesterday’s cost into a new column called “yesterday_cost”.
Using ChatGPT, we generated the SQL code to achieve this and saved it as a new view called “active_lps_filter”.
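We won’t claim this is the exact query ChatGPT gave us, but a view along the following lines covers the four requirements above. The project and dataset names are placeholders, and the SQL is BigQuery-flavored:

```python
from google.cloud import bigquery

client = bigquery.Client()

create_view_sql = """
CREATE OR REPLACE VIEW `your-project.your_dataset.active_lps_filter` AS
SELECT
  landing_page,
  -- Collapse every platform the page appeared on into one comma-separated value
  STRING_AGG(DISTINCT platform, ', ') AS platforms,
  -- Keep only yesterday's spend in a dedicated column
  SUM(IF(date = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY), cost, 0)) AS yesterday_cost
FROM `your-project.your_dataset.active_lps`
-- Only ads that ran yesterday or today count as active
WHERE date >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
GROUP BY landing_page
"""

client.query(create_view_sql).result()
```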
STEP 5 – Automating landing pages health check
With our “active_lps_filter” table containing all unique active landing pages, we can now automate their validation. For this, we use PipeDream, an automation tool we frequently rely on for advanced workflows.
Here’s the Python-based workflow we implemented:
- Every few hours, pull the “active_lps_filter” table from BigQuery.
- Iterate through each “landing_page” and check its validity using Google’s PageSpeed Insights API.
The PageSpeed Insights API returns detailed JSON data, including performance metrics, loading times, and error statuses such as 404 (not found) or 400 (bad request).
For example, to see how this JSON looks for Google’s homepage, use the following GET endpoint: https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://google.com
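Pieced together, the PipeDream workflow for this step looks roughly like the Python sketch below. The API key, project, and dataset names are placeholders, and the JSON field names are the ones we see in current PageSpeed Insights responses; double-check them against your own payload, since audit keys can change between Lighthouse versions:

```python
import requests
from google.cloud import bigquery

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
API_KEY = "YOUR_PAGESPEED_API_KEY"  # placeholder

client = bigquery.Client()
rows = client.query(
    "SELECT landing_page, platforms, yesterday_cost "
    "FROM `your-project.your_dataset.active_lps_filter`"
).result()

for row in rows:
    # PSI expects a full URL, so re-add the scheme we stripped earlier
    resp = requests.get(
        PSI_ENDPOINT,
        params={"url": f"https://{row.landing_page}", "key": API_KEY},
        timeout=120,
    )
    data = resp.json()
    lighthouse = data.get("lighthouseResult", {})
    audits = lighthouse.get("audits", {})

    performance_score = lighthouse.get("categories", {}).get("performance", {}).get("score")
    fcp = audits.get("first-contentful-paint", {}).get("numericValue")
    # "server-response-time" is the TTFB audit in recent Lighthouse versions
    ttfb = audits.get("server-response-time", {}).get("numericValue")
    print(row.landing_page, resp.status_code, performance_score, fcp, ttfb)
```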
STEP 6 – Create a log for each health check
After retrieving results from the PageSpeed Insights API, we log them for tracking and analysis.
We created a new BigQuery table called “lps_status_logs” with the following columns:
- timestamp: The exact time the log was saved.
- landing_page: The landing page that was checked.
- platforms: The value from the “platforms” column in the “active_lps_filter” table.
- yesterday_cost: The value from the “yesterday_cost” column in the “active_lps_filter” table.
- status: The status from the API result.
- performance_score: The page’s performance score from the API result.
- avg_load_time: The page’s average loading time from the API result.
- fcp: The page’s first contentful paint from the API result.
- ttfb: The page’s time to first byte from the API result.
This setup ensures all essential data is logged for monitoring purposes.
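In the PipeDream workflow, writing the log is just another step after the API call. A minimal sketch of that insert, again with a placeholder project and dataset, looks like this:

```python
from datetime import datetime, timezone
from google.cloud import bigquery

client = bigquery.Client()
LOG_TABLE = "your-project.your_dataset.lps_status_logs"  # placeholder

def log_health_check(landing_page, platforms, yesterday_cost,
                     status, performance_score, avg_load_time, fcp, ttfb):
    """Append one health-check result to the lps_status_logs table."""
    row = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "landing_page": landing_page,
        "platforms": platforms,
        "yesterday_cost": yesterday_cost,
        "status": status,
        "performance_score": performance_score,
        "avg_load_time": avg_load_time,
        "fcp": fcp,
        "ttfb": ttfb,
    }
    errors = client.insert_rows_json(LOG_TABLE, [row])
    if errors:
        raise RuntimeError(f"BigQuery insert failed: {errors}")
```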
STEP 7 – Cleaning the logs
With our “lps_status_logs” table complete, the next step was to create a view that includes only the most recent log for each landing page.
Using ChatGPT, we generated the SQL code to achieve this and saved it as a new view called “lps_status”.
This ensures that we always have the latest status and performance data for every active landing page.
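A common way to express “latest log per landing page” in BigQuery is a window function. The sketch below is not necessarily the exact SQL ChatGPT produced for us, and the project and dataset names are placeholders, but it shows the idea:

```python
from google.cloud import bigquery

lps_status_view_sql = """
CREATE OR REPLACE VIEW `your-project.your_dataset.lps_status` AS
SELECT * EXCEPT (row_num)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (PARTITION BY landing_page ORDER BY timestamp DESC) AS row_num
  FROM `your-project.your_dataset.lps_status_logs`
)
WHERE row_num = 1
"""

bigquery.Client().query(lps_status_view_sql).result()
```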
STEP 8 – Display the data in a dashboard
Using Google Looker Studio, we connected the “lps_status” table to display it as an interactive report.
We enhanced the table with:
- Conditional Formatting: Highlights any status other than “200” in red for quick issue detection.
- “Last Check (Hours)” Column: Calculates the hours elapsed since the most recent log.
This table serves multiple purposes:
- When the client asks which landing pages are currently targeted in their paid ads, we can quickly provide an accurate answer, including the specific platforms.
- It allows us to spot issues with any landing page at a glance.
- It helps us monitor yesterday’s budget for each page, giving insight into how much we’re likely spending daily on specific pages. For pages with issues and high daily budgets, this table helps us act fast—either pausing ads or escalating to the dev team to resolve the problem.
Next, we connected the “lps_status_logs” table to create a timeline graph.
Since we save a log for each health check, this graph allows us to:
- Visualize trends over time for all landing pages.
- Filter by a specific landing page to identify patterns or determine if an issue was isolated or persisted across multiple days.
This visualization helps us understand the health and performance of landing pages more comprehensively and address recurring issues proactively.
STEP 9 – Get notified about issues in real time
Although the dashboard we built is highly effective, it requires manual monitoring to detect issues.
To streamline this, we added error notifications to our automation workflow.
We enhanced our PipeDream automation by adding a step:
If the API returns an error for a page, or if the loading time exceeds a threshold we set, the data is sent via an HTTP request to a Make webhook.
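That PipeDream step boils down to a conditional and a POST request. Here is a sketch; the webhook URL is a placeholder and the load-time threshold is whatever number makes sense for your pages:

```python
import requests

MAKE_WEBHOOK_URL = "https://hook.eu1.make.com/your-webhook-id"  # placeholder
MAX_LOAD_TIME_MS = 5000  # pick a threshold that suits your landing pages

def notify_if_unhealthy(result: dict) -> None:
    """Forward a health-check result to Make if the page errored or is too slow."""
    has_error = result["status"] != 200  # assumes status is the numeric HTTP code
    too_slow = (result.get("avg_load_time") or 0) > MAX_LOAD_TIME_MS
    if has_error or too_slow:
        requests.post(MAKE_WEBHOOK_URL, json=result, timeout=30)

# Example payload, matching the columns we log in lps_status_logs
notify_if_unhealthy({
    "landing_page": "example.com/pricing",
    "platforms": "Google Ads, Meta",
    "yesterday_cost": 1240.5,
    "status": 404,
    "avg_load_time": None,
    "performance_score": None,
})
```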
We chose Make for this step because it’s less technical than PipeDream and is already integrated with our tools like Monday and Gmail.
In Make, we created a scenario to handle these HTTP requests. It does the following:
- Creates a task on Monday.
- Adds as many details as possible to the newly created task.
- Sends an email alert to the client’s campaign managers, ensuring immediate attention.
This email contains quick links to both the task on Monday and the Looker Studio dashboard.
This addition ensures issues are identified and addressed as soon as they occur, without relying on manual checks.
That’s it!
We hope this article inspires you to adapt and enhance this process to meet your specific needs.
If you found it helpful, please share it with your colleagues! 🙂