Spatial Scoring: Measuring merchant attractiveness and performance

Spatial scores combine diverse data sources into a single, unified measure, allowing businesses to holistically evaluate a merchant's potential in different locations. By consolidating variables such as footfall, demographic profiles and spend, data scientists can develop actionable strategies to optimize sales, reduce costs, and gain a competitive edge.

A step-by-step guide to Spatial Scoring

In this tutorial, we’ll be scoring potential merchants across Manhattan to determine the best locations for our product: canned iced coffee!

This tutorial has two main steps:

  1. Data Collection & Preparation, in which we collate all of the relevant variables into the format required for the next step.

  2. Calculating merchant attractiveness for selling our product. In this step, we’ll be combining data on footfall and proximity to transport hubs into a meaningful score to rank which potential points of sale would be best placed to stock our product.

You will need...

  • An Area of Interest (AOI) layer. This is a polygon layer which we will use to filter USA-wide data to just the area we are analyzing. Subscribe to the County - United States of America (2019) layer via the Data Observatory tab of your CARTO Workspace. Note you can use any AOI that you like, but you will not be able to use the footfall sample data for other regions (see below).

  • Potential Points of Sale (POS) data. We will be using retail_stores from the CARTO Data Warehouse (demo data > demo tables).

  • Footfall data. Our data partner Unacast has kindly provided a sample of their Activity - United States of America (Quadgrid 17) data for this tutorial, which you can also find in the CARTO Data Warehouse as unacast_activity_sample_manhattan (demo data > demo tables). The assumption here is that the higher the footfall, the more potential sales of our iced coffee!

  • Proximity to public transport hubs. Let's imagine the marketing for our iced coffee cans directly targets professionals and commuters - where better to stock our products than close to stations? We'll be using OpenStreetMap as the source for this data, which again you can access via the CARTO Data Warehouse (demo data > demo tables).


Step 1: Data Collection & Preparation

The first step in any analysis is data collection and preparation - we need to calculate the footfall for each store location, as well as the proximity to a station.

To get started:

  1. Log into the CARTO Workspace, then head to Workflows and Create a new workflow; use the CARTO Data Warehouse connection.

  2. Drag the four data sources onto the canvas:

    1. To do this for the Points of Sale, Footfall and Public transport hubs, go to Sources (on the left of the screen) > Connection > Demo data > demo_tables.

    2. For the AOI counties layer, switch from Connection to Data Observatory then select CARTO and find County - United States of America (2019).

The full workflow for this analysis is shown below; let's walk through it section by section.

Section 1: Filter retail stores to the AOI

  1. Use a Simple Filter with the condition do_label equal to New York to filter the polygon data to Manhattan (New York County).

  2. Next, use a Spatial Filter to filter the retail_stores table to those which intersect the AOI we have just created. There should be 66 stores remaining.
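
If you'd like to prototype this section outside of Workflows, the two filters amount to something like the SQL sketch below. This is a minimal sketch: the counties table path is a placeholder for your own Data Observatory subscription, while the stores table follows the demo data.

WITH aoi AS (
  SELECT geom
  FROM `yourproject.yourdataset.usa_counties_2019` -- placeholder: your County subscription
  WHERE do_label = 'New York' -- Manhattan is New York County
)
SELECT s.*
FROM `carto-demo-data.demo_tables.retail_stores` s
JOIN aoi
  ON ST_INTERSECTS(s.geom, aoi.geom)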

Section 2: Calculating footfall

There are various methods for assigning Quadbin grid data to points such as retail stores. You may have noticed that our sample footfall data has some missing values, so we will assign footfall based on the value of the closest Quadbin grid cell.

  1. Use Quadbin Center to convert each grid cell to a central point geometry.

  2. Now that we have two point geometries, we can run the Distance to nearest component. Use the output of Section 1 (Spatial Filter; all retail stores in Manhattan) as the top input, and the Quadbin Center as the bottom input.

    1. The input geometry columns should both be "geom" and the ID columns should be "cartodb_id" and "quadbin" respectively.

    2. Make sure to change the radius to 1000 meters; this is the maximum search distance for nearby features.

  3. Finally, use a Join component to access the footfall value from unacast_activity_sample_manhattan (this is the column called "staying"). Use a Left join and set the join columns to "nearest_id" and "quadbin" respectively.
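
For reference, the sketch below reproduces the logic of these three components in SQL: convert each grid cell to its center point, find the nearest cell within 1,000 meters of each store, and join its "staying" value. It's a sketch under a couple of assumptions: stores_in_aoi is a placeholder for the Section 1 output saved as a table, and QUADBIN_CENTER (from the CARTO Analytics Toolbox) stands in for the Quadbin Center component.

WITH cells AS (
  SELECT quadbin, staying,
         `carto-un`.carto.QUADBIN_CENTER(quadbin) AS geom
  FROM `carto-demo-data.demo_tables.unacast_activity_sample_manhattan`
),
candidates AS (
  SELECT s.cartodb_id, s.geom,
         c.quadbin AS nearest_id, c.staying AS staying_joined,
         ST_DISTANCE(s.geom, c.geom) AS nearest_distance,
         ROW_NUMBER() OVER (PARTITION BY s.cartodb_id
                            ORDER BY ST_DISTANCE(s.geom, c.geom)) AS rn
  FROM `yourproject.yourdataset.stores_in_aoi` s -- placeholder: output of Section 1
  JOIN cells c
    ON ST_DWITHIN(s.geom, c.geom, 1000) -- 1000 m maximum search distance
)
SELECT * EXCEPT (rn)
FROM candidates
WHERE rn = 1 -- keep only the closest cell per store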

Section 3: Calculating distance to stations

We'll take a similar approach in this section to establish the distance to nearby stations.

  1. Use the Drop Columns component to omit the nearest_id, nearest_distance and quadbin_joined columns; as we're about to run the Distance to nearest process again, we don't want to end up with confusing duplicate column names.

  2. Let's turn our attention to osm_pois_usa. Run a Simple Filter with the condition subgroup_name equal to Public transport station.

  3. Now we can run another Distance to nearest using these two inputs. Set the following parameters:

    1. The geometry columns should both be "geom"

    2. The ID columns should be "cartodb_id" and "osm_id" respectively

    3. Set the search distance this time to 2000m

Now we need to do something a little different. For our spatial scoring, we want stores close to stations to score highly, so we need a variable where a short distance to a station is actually assigned a high value. This is really straightforward to do!

  1. Connect the results of Distance to nearest to a Normalize component, using the column "nearest_distance." This will create a new column nearest_distance_norm, with normalized values from 0 to 1.

  2. Next, use a Create Column component, calling the column station_distance_norm_inv and using the code 1 - nearest_distance_norm, which inverts the normalized values so that stores closer to stations score higher.

  3. Commit the results of this using Save as Table.
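
For reference, the Normalize and Create Column steps together amount to a min-max rescale followed by an inversion, something like the sketch below (the table path is a placeholder for the Distance to nearest output):

SELECT *,
  1 - SAFE_DIVIDE(
        nearest_distance - MIN(nearest_distance) OVER (),
        MAX(nearest_distance) OVER () - MIN(nearest_distance) OVER ()
      ) AS station_distance_norm_inv
FROM `yourproject.yourdataset.stores_with_station_distance` -- placeholder path

A store sitting right next to a station ends up with a value close to 1, while the store furthest from any station scores 0.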

The result is a table containing our retail stores, each of which now has a value for both footfall and proximity to a station - so now we can run our scoring!


Step 2: Calculating merchant attractiveness

In this next section, we’ll create our attractiveness scores! We’ll be using the CREATE_SPATIAL_SCORE function to do this; you can read a full breakdown of this code in our documentation here.

Sample code for this is below; you can run it either in a Call Procedure component in Workflows, or directly in your data warehouse console. Note that you will need to replace "yourproject.yourdataset.potential_POS_inputs" with the path where you saved the table from Step 1 (if you can't find it, check the SQL preview window at the bottom of your workflow). You can also adjust the weights (ensuring they always sum to 1) and the number of buckets in the scoring parameters section; see the example after the procedure call below.

CALL `carto-un`.carto.CREATE_SPATIAL_SCORE(
   -- Select the input table (created in step 1)
   'SELECT geom, cartodb_id, staying_joined, station_distance_norm_inv FROM `yourproject.yourdataset.potential_POS_inputs`',
   -- Merchant's unique identifier variable
   'cartodb_id',
   -- Output table name
   'yourproject.yourdataset.scoring_attractiveness',
   -- Scoring parameters
   '''{
     "weights":{"staying_joined":0.7, "station_distance_norm_inv":0.3 },
     "nbuckets":5
   }'''
);
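
For example, to weight footfall and station proximity equally and score stores into ten buckets instead of five, the scoring parameters would become:

   '''{
     "weights":{"staying_joined":0.5, "station_distance_norm_inv":0.5 },
     "nbuckets":10
   }'''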

Let's check out the results! First, you'll need to join the results of the scoring process back to the retail_stores table as the geometry column is not retained in the process. You can use a Join component in workflows or adapt the SQL below.

WITH
  scores AS (
  SELECT
    *
  FROM
    `yourproject.yourdataset.scoring_attractiveness`)
SELECT
  scores.*,
  input.geom
FROM
  scores
LEFT JOIN
  `carto-demo-data.demo_tables.retail_stores` input
ON
  scores.cartodb_id = input.cartodb_id

You can see in the map that the highest scoring locations can be found in extremely busy, accessible locations around Broadway and Times Square - perfect!


Want to take this one step further? Try calculating merchant performance, which assesses how well stores perform against the expected performance for that location - check out this tutorial to get started!
