Spatial Scoring: Measuring merchant attractiveness and performance
Spatial scores combine diverse data sources into a single, unified measure, allowing businesses to holistically evaluate a merchant's potential in different locations. By consolidating variables such as footfall and proximity to public transport, data scientists can develop actionable strategies to optimize sales, reduce costs, and gain a competitive edge.
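For example, with purely illustrative 50/50 weights, a store with a normalized footfall value of 0.8 and a normalized transport proximity of 0.6 would receive a combined score of 0.5 × 0.8 + 0.5 × 0.6 = 0.7; later in this tutorial we'll compute scores like this with a dedicated scoring function.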
In this tutorial, we’ll be scoring potential merchants across Manhattan to determine the best locations for our product: canned iced coffee!
This tutorial has two main steps:
Data Collection & Preparation, collating all of the relevant variables into the format required for the next step.
Calculating merchant attractiveness for selling our product. In this step, we’ll be combining data on footfall and proximity to transport hubs into a meaningful score to rank which potential points of sale would be best placed to stock our product.
We'll be using four data sources:
Potential Points of Sale (POS) data. We will be using retail_stores from the CARTO Data Warehouse (demo data > demo tables).
An Area of Interest (AOI) layer. This is a polygon layer which we will use to filter USA-wide data to just the area we are analyzing. Subscribe to the layer via the Data Observatory tab of your CARTO Workspace. Note you can use any AOI that you like, but you will not be able to use the footfall sample data for other regions (see below).
Footfall data. Our data partner Unacast have kindly provided a sample of their data for this tutorial, which you can also find in the CARTO Data Warehouse as unacast_activity_sample_manhattan (demo data > demo tables). The assumption here is that the higher the footfall, the more potential sales of our iced coffee!
Proximity to public transport hubs. Let's imagine the marketing for our iced coffee cans directly targets professionals and commuters - where better to stock our products than close to stations? We'll be using OpenStreetMap points of interest (osm_pois_usa) as the source for this data, which you can also access via the CARTO Data Warehouse (demo data > demo tables).
The first step in any analysis is data collection and preparation - we need to calculate the footfall for each store location, as well as the proximity to a station.
To get started:
Log into the CARTO Workspace, then head to Workflows and Create a new workflow; use the CARTO Data Warehouse connection.
Drag the four data sources onto the canvas:
To do this for the Points of Sale, Footfall and Public transport hubs sources, go to Sources (on the left of the screen) > Connection > Demo data > demo_tables.
For the AOI counties layer, switch from Connection to Data Observatory, then select CARTO and find County - United States of America (2019).
The full workflow for this analysis is below; let's look at it section by section.
First, we'll filter our source data down to our area of interest: Manhattan.
Use a Simple Filter component with the condition do_label equal to New York to filter the county polygons down to Manhattan (New York County).
Next, use a Spatial Filter component to filter the retail_stores table to those which intersect the AOI we have just created. There should be 66 stores remaining.
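If you'd rather work in SQL, the equivalent of these two filtering steps looks roughly like the query below. This is a sketch assuming BigQuery-style syntax; the table paths are hypothetical placeholders for your own project and dataset.

```sql
-- Rough SQL equivalent of the Simple Filter + Spatial Filter steps.
-- Table paths are hypothetical; substitute your own project/dataset.
WITH aoi AS (
  SELECT geom
  FROM `yourproject.yourdataset.usa_counties`  -- Data Observatory counties subscription
  WHERE do_label = 'New York'                  -- New York County = Manhattan
)
SELECT s.*
FROM `yourproject.yourdataset.retail_stores` s
JOIN aoi
  ON ST_INTERSECTS(s.geom, aoi.geom)           -- keep only stores inside the AOI
```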
There are various methods for assigning grid data to points such as retail stores. You may have noticed that our sample footfall data has some missing values, so we will assign footfall based on the value of the closest Quadbin grid cell.
Use a Quadbin Center component to convert each grid cell to a central point geometry.
Now we have two geometries, we can run the Distance to nearest component. Use the output of the Spatial Filter (all retail stores in Manhattan) as the top input, and the Quadbin Center output as the bottom input.
The input geometry columns should both be "geom", and the ID columns should be "cartodb_id" and "quadbin" respectively.
Make sure to change the radius to 1000 meters; this is the maximum search distance for nearby features.
Next, use a Join component to access the footfall value from unacast_activity_sample_manhattan (this is the column called "staying"). Use a Left join and set the join columns to "nearest_id" and "quadbin" respectively.
Finally, use a Drop Columns component to omit the nearest_id, nearest_distance and quadbin_joined columns; as we're about to run the Distance to nearest process again, we don't want to end up with confusing duplicate column names. A rough SQL equivalent of this whole footfall-assignment step is sketched below.
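This sketch assumes the CARTO Analytics Toolbox for BigQuery is available (for the QUADBIN_CENTER function, here under the carto-un project) and uses hypothetical table paths; treat it as an illustration of the logic rather than the exact SQL the workflow generates.

```sql
-- Assign each store the footfall ("staying") of its nearest Quadbin cell
-- within 1000 m. Assumes the CARTO Analytics Toolbox (QUADBIN_CENTER).
WITH cells AS (
  SELECT quadbin,
         staying,                                      -- the footfall metric
         `carto-un`.carto.QUADBIN_CENTER(quadbin) AS cell_geom
  FROM `yourproject.yourdataset.unacast_activity_sample_manhattan`
)
SELECT s.*, c.staying
FROM `yourproject.yourdataset.manhattan_stores` s
CROSS JOIN cells c
WHERE ST_DWITHIN(s.geom, c.cell_geom, 1000)            -- 1000 m search radius
QUALIFY ROW_NUMBER() OVER (
          PARTITION BY s.cartodb_id
          ORDER BY ST_DISTANCE(s.geom, c.cell_geom)) = 1  -- keep the nearest cell
```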
We'll take a similar approach in this section to establish the distance to nearby stations. Let's turn our attention to osm_pois_usa: run a Simple Filter with the condition subgroup_name equal to Public transport station.
Now we can run another Distance to nearest component using these two inputs. Set the following parameters:
The geometry columns should both be "geom".
The ID columns should be "cartodb_id" and "osm_id" respectively.
Set the search distance this time to 2000 meters.
Now we need to do something a little different. For our spatial scoring, we want stores close to stations to score highly, so we need a variable where a short distance to a station is assigned a high value. This is really straightforward to do!
Connect the results of Distance to nearest to a Normalize component, using the column "nearest_distance". This will create a new column nearest_distance_norm, with normalized values from 0 to 1.
Next, use a Create Column component, calling the new column station_distance_norm_inv and using the expression 1 - nearest_distance_norm, which will invert the normalized distances so that the stores closest to a station receive the highest values.
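Under the hood, these two steps amount to a min-max normalization followed by an inversion. A rough SQL equivalent (using the column names above and a hypothetical table path) is:

```sql
-- Min-max normalize the station distance, then invert it so that
-- shorter distances yield higher values.
SELECT
  *,
  1 - SAFE_DIVIDE(
        nearest_distance - MIN(nearest_distance) OVER (),
        MAX(nearest_distance) OVER () - MIN(nearest_distance) OVER ()
      ) AS station_distance_norm_inv
FROM `yourproject.yourdataset.stores_with_station_distance`
```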
Commit the results of this using a Save as Table component.
The result of this is a table containing our retail_stores, each of which now has a value for footfall and for proximity to a station - so now we can run our scoring!
In this next section, we'll create our attractiveness scores! We'll be using the CREATE_SPATIAL_COMPOSITE_UNSUPERVISED function from the CARTO Analytics Toolbox to do this; you can read a full breakdown of this code in our documentation.
Sample code for this is below; you can run it either via a Call Procedure component in Workflows, or directly in your data warehouse console. Note you will need to replace "yourproject.yourdataset.potential_POS_inputs" with the path where you saved the previous table (if you can't find it, it is shown in the SQL preview window at the bottom of your workflow). You can also adjust the weights (ensuring they always add up to 1) and the number of buckets in the scoring parameters section.
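The sketch below illustrates what such a call might look like, assuming the BigQuery Analytics Toolbox under the carto-un project; the option names and 50/50 weights are illustrative, so check the documentation for the exact signature. The final SELECT re-attaches the store geometries, since these are not retained by the procedure.

```sql
-- Illustrative scoring call; exact options may differ - see the CARTO docs.
CALL `carto-un`.carto.CREATE_SPATIAL_COMPOSITE_UNSUPERVISED(
  -- Input query: the table saved at the end of the previous section
  'SELECT cartodb_id, staying, station_distance_norm_inv FROM yourproject.yourdataset.potential_POS_inputs',
  'cartodb_id',                                    -- unique index column
  'yourproject.yourdataset.potential_POS_scores',  -- output table
  '''{
    "scoring_method": "CUSTOM_WEIGHTS",
    "weights": {"staying": 0.5, "station_distance_norm_inv": 0.5},
    "nbuckets": 5
  }'''
);

-- Re-attach the geometry so the scores can be mapped.
SELECT s.geom, p.*
FROM `yourproject.yourdataset.retail_stores` s
JOIN `yourproject.yourdataset.potential_POS_scores` p
  USING (cartodb_id)
```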
Let's check out the results! First, you'll need to join the results of the scoring process back to the retail_stores table, as the geometry column is not retained in the process. You can use a Join component in Workflows, or adapt the SQL sketch above.
You can see in the map that the highest scoring locations can be found in extremely busy, accessible locations around Broadway and Times Square - perfect!
Want to take this one step further? Try calculating merchant performance, which assesses how well stores perform against the expected performance for that location - check out our documentation to get started!