Customer Review and Satisfaction Analysis

This section will explore the effect of delivery delays on customer satisfaction, identify product categories with mismatched delivery speed and review scores, and analyze the distribution of review scores across different product categories to uncover common issues in negative feedback.

Project Roadmap: Step-by-Step Navigation

  1. Problem Description - Olist E-Commerce Data Analysis: A detailed problem description for the Olist E-Commerce Data Analysis project, outlining the tasks assigned by the company, the structure of the analysis, the tools and skills used, acknowledgements, and references.
  2. Data Normalization: The data normalization process for the Olist E-Commerce dataset, including data ingestion, cleaning, star schema modeling, and the creation of an explicit date table to support time-based analysis.
  3. Business Questions: The business questions that guide the data analysis process for the Olist E-Commerce dataset, categorized into domains such as customer behavior, sales performance, delivery logistics, seller performance, payments, and customer experience to ensure a comprehensive analysis of the data.
  4. Customer Retention Analysis: Customer retention patterns in the Olist e-commerce dataset, including segmenting customers by purchasing behavior, understanding trends in customer retention, and exploring the factors that influence customer loyalty.
  5. Order and Sales Performance Analysis: The order and sales performance metrics for Olist.
  6. Delivery Performance and SLA Analysis: The delivery performance and logistics efficiency of Olist's e-commerce operations, analyzing on-time delivery rates, freight costs, and the decomposition of delivery timelines to identify bottlenecks and regional disparities.
  7. Order Fulfillment Efficiency Analysis: How efficiently sellers fulfill orders on the Olist platform, analyzing the time gap between order approval and handover to logistics partners, identifying the bottom 10% of sellers in terms of dispatch efficiency, and providing insights into potential areas for improvement in the fulfillment process.
  8. Payments and Risk Analysis: The payment distribution across different methods, the correlation between installment counts and order values, and regional payment preferences, to understand the risk and revenue implications for Olist's e-commerce platform.
  9. Customer Review and Satisfaction Analysis: The effect of delivery delays on customer satisfaction, product categories with mismatched delivery speed and review scores, and the distribution of review scores across product categories to uncover common issues in negative feedback.
  10. Time Series Analysis of Orders and Sales Performance: The time series analysis of orders and sales performance for Olist, including month-over-month growth rates, seasonality effects, and the distribution of order volume by day and hour to identify key trends and patterns in customer behavior.

Delivery Delay & Customer Satisfaction Analysis

23. What is the quantitative correlation between Delivery Delay (Actual Delivery Date - Estimated Delivery Date) and the Average Review Score? Specifically, at what ‘delay threshold’ (e.g., +1 day, +3 days) does the average customer rating strictly drop below 3 stars?

Solution:

Analytical Methodology for Delivery Delay and Sentiment Correlation

  1. Quantification of Logistical Latency (order_delay_metrics CTE): The initial phase of the analysis involves calculating the temporal variance between the order_estimated_delivery_date and the order_delivered_customer_date. By performing an inner join between the orders and order_reviews datasets, we isolate the delay_days—where positive integers signify late deliveries—and associate them directly with the corresponding customer review_score.

  2. Aggregation of Sentiment Baselines (aggregated_scores CTE): To evaluate the impact of fulfillment delays on customer perception, the data is aggregated at the delay_days level. This step computes the arithmetic mean of review scores and total transaction volumes for each specific interval, establishing the “On-Time” benchmark required to measure the rate of sentiment degradation.

  3. Identification of the ‘Satisfaction Cliff’ (Final Query): The final logic categorizes results into hierarchical satisfaction tiers via a CASE statement. We distinguish between ‘Tolerable’ delays (where the average score remains ≥ 3.0) and ‘Critical’ delays (where scores fall below the 3.0 threshold). Furthermore, a percentage variance from the “On-Time” benchmark is calculated to quantify the financial and reputational risk associated with extended latencies.

  4. Statistical Significance and Boundary Control: To maintain the integrity of the findings and prevent outliers from skewing the commercial insights, the output is restricted to intervals with a volume > 5. The temporal scope is further constrained to a range of -10 to +30 days to focus the business intelligence on the most statistically significant and actionable delivery scenarios.

SQL Code

/* delivery satisfaction threshold analysis
    objective: identify the "cliff" where delivery delays result in failing satisfaction scores (< 3 stars)
    logic: 
      1. calculate 'delay_days' as the delta between actual and estimated delivery.
      2. aggregate average review scores for every day of delay.
      3. identify the first day where the average rating strictly falls below 3.0.
    granularity: delay_days
*/

with order_delay_metrics as (
    -- step 1: calculate the delay for every delivered order with a review
    select 
        o.order_id,
        -- delay: positive = late, 0 = on time, negative = early
        datediff(day, o.order_estimated_delivery_date, o.order_delivered_customer_date) as delay_days,
        r.review_score
    from 
        orders o
    inner join 
        order_reviews r on o.order_id = r.order_id
    where 
        o.order_status = 'delivered'
        and o.order_delivered_customer_date is not null
        and o.order_estimated_delivery_date is not null
),

aggregated_scores as (
    -- step 2: find average score and volume per delay day
    select 
        delay_days,
        count(order_id) as volume,
        avg(review_score * 1.0) as avg_review_score
    from 
        order_delay_metrics
    group by 
        delay_days
)

-- final output: categorizing the impact of delays on sentiment
select 
    delay_days,
    volume as order_volume,
    cast(avg_review_score as decimal(10,2)) as avg_rating,
    
    -- identifying the threshold status
    case 
        when delay_days < 0 then 'Early (Bonus Rating)'
        when delay_days = 0 then 'On-Time (High Satisfaction)'
        when delay_days > 0 and avg_review_score >= 3.0 then 'Late (Tolerable)'
        when delay_days > 0 and avg_review_score < 3.0 then 'Critical Delay (Satisfaction Cliff)'
        else 'Standard'
    end as satisfaction_tier,
    
    -- calculating the percentage drop from the "On-Time" benchmark
    cast(100.0 * (avg_review_score - (select avg_review_score from aggregated_scores where delay_days = 0)) 
        / (select avg_review_score from aggregated_scores where delay_days = 0) as decimal(10,2)) as pct_drop_from_ontime
from 
    aggregated_scores
where 
    volume > 5 -- filter for statistical relevance
    and delay_days between -10 and 30 -- focusing on the most relevant window
order by 
    delay_days asc;

Results:

| delay_days | order_volume | avg_rating | satisfaction_tier | pct_drop_from_ontime |
|---|---|---|---|---|
| -10 | 4652 | 4.33 | Early (Bonus Rating) | 7.47 |
| -9 | 4632 | 4.29 | Early (Bonus Rating) | 6.44 |
| -8 | 4818 | 4.26 | Early (Bonus Rating) | 5.53 |
| -7 | 4841 | 4.26 | Early (Bonus Rating) | 5.55 |
| -6 | 3616 | 4.25 | Early (Bonus Rating) | 5.26 |
| -5 | 2221 | 4.19 | Early (Bonus Rating) | 3.96 |
| -4 | 1910 | 4.15 | Early (Bonus Rating) | 2.80 |
| -3 | 1726 | 4.12 | Early (Bonus Rating) | 2.16 |
| -2 | 1548 | 4.12 | Early (Bonus Rating) | 2.23 |
| -1 | 1457 | 4.15 | Early (Bonus Rating) | 2.97 |
| 0 | 1291 | 4.03 | On-Time (High Satisfaction) | 0.00 |
| 1 | 823 | 3.73 | Late (Tolerable) | -7.51 |
| 2 | 537 | 3.18 | Late (Tolerable) | -21.28 |
| 3 | 496 | 2.68 | Critical Delay (Satisfaction Cliff) | -33.47 |
| 4 | 437 | 2.49 | Critical Delay (Satisfaction Cliff) | -38.16 |
| 5 | 436 | 2.19 | Critical Delay (Satisfaction Cliff) | -45.58 |
| 6 | 408 | 1.81 | Critical Delay (Satisfaction Cliff) | -55.15 |
| 7 | 475 | 1.92 | Critical Delay (Satisfaction Cliff) | -52.50 |
| 8 | 332 | 1.72 | Critical Delay (Satisfaction Cliff) | -57.36 |
| 9 | 219 | 1.67 | Critical Delay (Satisfaction Cliff) | -58.68 |
| 10 | 208 | 1.59 | Critical Delay (Satisfaction Cliff) | -60.66 |
| 11 | 179 | 1.73 | Critical Delay (Satisfaction Cliff) | -57.20 |
| 12 | 155 | 1.66 | Critical Delay (Satisfaction Cliff) | -58.73 |
| 13 | 190 | 1.75 | Critical Delay (Satisfaction Cliff) | -56.68 |
| 14 | 169 | 1.59 | Critical Delay (Satisfaction Cliff) | -60.54 |
| 15 | 157 | 1.71 | Critical Delay (Satisfaction Cliff) | -57.68 |
| 16 | 95 | 1.68 | Critical Delay (Satisfaction Cliff) | -58.24 |
| 17 | 90 | 1.67 | Critical Delay (Satisfaction Cliff) | -58.68 |
| 18 | 73 | 1.73 | Critical Delay (Satisfaction Cliff) | -57.21 |
| 19 | 74 | 1.51 | Critical Delay (Satisfaction Cliff) | -62.47 |
| 20 | 87 | 1.66 | Critical Delay (Satisfaction Cliff) | -58.96 |
| 21 | 76 | 1.67 | Critical Delay (Satisfaction Cliff) | -58.57 |
| 22 | 64 | 1.72 | Critical Delay (Satisfaction Cliff) | -57.39 |
| 23 | 34 | 1.47 | Critical Delay (Satisfaction Cliff) | -63.54 |
| 24 | 53 | 1.53 | Critical Delay (Satisfaction Cliff) | -62.11 |
| 25 | 40 | 1.60 | Critical Delay (Satisfaction Cliff) | -60.33 |
| 26 | 34 | 1.35 | Critical Delay (Satisfaction Cliff) | -66.46 |
| 27 | 40 | 1.53 | Critical Delay (Satisfaction Cliff) | -62.19 |
| 28 | 46 | 1.61 | Critical Delay (Satisfaction Cliff) | -60.11 |
| 29 | 37 | 1.32 | Critical Delay (Satisfaction Cliff) | -67.17 |
| 30 | 14 | 1.29 | Critical Delay (Satisfaction Cliff) | -68.12 |

Insights:

  • Early Delivery Bonus: On-time deliveries average a solid 4.03 rating. Beating the estimated delivery date by 1 to 10 days gives a slight “bonus,” pushing the average score up to 4.33.
  • The Tolerable Delay: Customers show a small amount of patience for minor hiccups. Packages arriving 1 or 2 days late take a hit but still manage to maintain an average score above 3 stars (3.73 and 3.18, respectively).
  • The 3-Day Cliff: The exact threshold where satisfaction falls apart is 3 days late. At this point, the average rating strictly drops below 3 stars, crashing to 2.68 (a 33.5% drop compared to on-time deliveries).
  • The Point of No Return: Once a delivery hits 6 days late, the average score plummets below 2.0. From that day onward, ratings mostly flatline near the 1.5 mark. At this stage, the customer experience is completely ruined, regardless of whether the package takes 7 days or 20 days to finally arrive.
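The 3-day cliff can be cross-checked directly against the result set above. A minimal Python sketch (the delay/rating pairs are copied from the late-delivery rows of the table; the helper name find_cliff is mine):

```python
# (delay_days, avg_rating) pairs copied from the late-delivery rows of the result set
late_ratings = [(1, 3.73), (2, 3.18), (3, 2.68), (4, 2.49), (5, 2.19), (6, 1.81)]

def find_cliff(rows, threshold=3.0):
    """Return the first delay (in days) whose average rating falls strictly below the threshold."""
    for delay, avg_rating in rows:
        if avg_rating < threshold:
            return delay
    return None

print(find_cliff(late_ratings))  # → 3, the 'satisfaction cliff'
```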

Quality vs. Logistics Matrix Analysis

24. Can we identify product categories that consistently have Fast Delivery (Top 25% speed) but Low Review Scores (Bottom 25% rating)? Conversely, which categories have slow delivery but high scores?

Solution:

  1. The initial phase of the analysis involves the aggregation of delivery lead times and customer satisfaction scores at the category level. By joining the orders, order_items, products, and order_reviews datasets, we calculate the arithmetic mean of the avg_delivery_lead_time (measuring the duration from purchase to customer delivery) and the avg_review_score. A HAVING clause is utilized to restrict the results to categories with a total_orders > 30, ensuring the statistical integrity of the benchmarks.

  2. To establish relative performance tiers, we apply the percent_rank() window function to both metrics. The speed_rank is calculated in ascending order (where 0.0 marks the fastest category on the platform), and the rating_rank likewise (where 0.0 marks the lowest-scoring category). This normalization allows for a direct comparison across diverse product lines regardless of varying volume.

  3. The categorical data is then segmented into a four-quadrant matrix using a conditional CASE statement. We specifically isolate ‘Product/Catalog Issues’—defined as categories within the top 25% for speed (speed_rank <= 0.25) but the bottom 25% for satisfaction (rating_rank <= 0.25). Conversely, ‘Logistics Bottlenecks’ are identified as categories in the bottom 25% for speed (speed_rank >= 0.75) that maintain top 25% sentiment scores (rating_rank >= 0.75).

  4. The resulting classification provides actionable business intelligence for supply chain and catalog management. By identifying the ‘Systemic Failure’ quadrant (Slow and Hated) versus the ‘Gold Standard’ (Fast and Loved), stakeholders can prioritize interventions—distinguishing between the need for logistical optimization versus fundamental improvements in product quality or catalog accuracy.
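The percent_rank() normalization in step 2 amounts to (rank − 1) / (n − 1). A small Python approximation (the function name pct_rank and the sample lead times are mine, assuming no ties):

```python
def pct_rank(values):
    """Rough Python analogue of SQL percent_rank(): (rank - 1) / (n - 1), assuming no ties."""
    order = sorted(values)
    n = len(values)
    return {v: order.index(v) / (n - 1) for v in values}

# hypothetical average lead times (days) for five categories
ranks = pct_rank([7.5, 9.1, 10.6, 12.8, 20.8])
print(ranks[7.5], ranks[20.8])  # fastest -> 0.0, slowest -> 1.0
```

Any category with speed_rank <= 0.25 then sits in the fastest quartile, which is exactly how the matrix quadrants are cut.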

SQL Code

 /* quality vs. logistics matrix analysis
    objective: segment categories to identify if low scores are caused by product quality or delivery speed
    logic: 
      1. calculate average lead time (purchase to delivery) and average review score per category.
      2. use percent_rank() to identify categories in the top/bottom 25th percentiles.
      3. categorize into the 4 quadrants of the quality/logistics matrix.
    granularity: product_category_name
*/

with category_metrics as (
    -- step 1: aggregate logistics and satisfaction metrics at the category level
    select 
        p.product_category_name,
        count(o.order_id) as total_orders,
        -- lead time speed (days)
        avg(datediff(day, o.order_purchase_timestamp, o.order_delivered_customer_date) * 1.0) as avg_delivery_lead_time,
        -- customer satisfaction
        avg(r.review_score * 1.0) as avg_review_score
    from 
        orders o
    inner join 
        order_items oi on o.order_id = oi.order_id
    inner join 
        products p on oi.product_id = p.product_id
    inner join 
        order_reviews r on o.order_id = r.order_id
    where 
        o.order_status = 'delivered'
        and o.order_delivered_customer_date is not null
        and p.product_category_name is not null
    group by 
        p.product_category_name
    having 
        count(o.order_id) > 30 -- filter for statistical significance
),

category_percentiles as (
    -- step 2: assign percentile ranks (0.0 to 1.0)
    select 
        *,
        -- speed: 0.0 is fastest, 1.0 is slowest
        speed_rank = percent_rank() over (order by avg_delivery_lead_time asc),
        -- rating: 0.0 is lowest, 1.0 is highest
        rating_rank = percent_rank() over (order by avg_review_score asc)
    from 
        category_metrics
)

-- final output: matrix segmentation
select 
    product_category_name,
    total_orders,
    cast(avg_delivery_lead_time as decimal(10,2)) as avg_lead_time_days,
    cast(avg_review_score as decimal(10,2)) as avg_rating,
    
    -- step 3: segmenting into the quality/logistics quadrants
    case 
        -- fast delivery (top 25%) + low score (bottom 25%)
        when speed_rank <= 0.25 and rating_rank <= 0.25 then 'Product/Catalog Issue (Fast but Hated)'
        
        -- slow delivery (bottom 25%) + high score (top 25%)
        when speed_rank >= 0.75 and rating_rank >= 0.75 then 'Logistics Bottleneck (Slow but Loved)'
        
        -- fast delivery + high score
        when speed_rank <= 0.25 and rating_rank >= 0.75 then 'Gold Standard (Fast & Loved)'
        
        -- slow delivery + low score
        when speed_rank >= 0.75 and rating_rank <= 0.25 then 'Systemic Failure (Slow & Hated)'
        
        else 'Standard Performance'
    end as matrix_segment
from 
    category_percentiles
order by 
    avg_rating asc;

Results:

| product_category_name | total_orders | avg_lead_time_days | avg_rating | matrix_segment |
|---|---|---|---|---|
| fraldas_higiene | 37 | 10.65 | 3.38 | Standard Performance |
| moveis_escritorio | 1664 | 20.78 | 3.52 | Systemic Failure (Slow & Hated) |
| fashion_roupa_masculina | 124 | 12.74 | 3.76 | Standard Performance |
| telefonia_fixa | 253 | 12.60 | 3.76 | Standard Performance |
| audio | 359 | 13.27 | 3.83 | Systemic Failure (Slow & Hated) |
| artigos_de_festas | 42 | 9.14 | 3.83 | Product/Catalog Issue (Fast but Hated) |
| casa_conforto | 430 | 13.47 | 3.85 | Systemic Failure (Slow & Hated) |
| moveis_colchao_e_estofado | 37 | 14.41 | 3.89 | Systemic Failure (Slow & Hated) |
| cama_mesa_banho | 10985 | 12.71 | 3.92 | Standard Performance |
| moveis_sala | 494 | 13.80 | 3.94 | Systemic Failure (Slow & Hated) |
|  | 1533 | 12.71 | 3.94 | Standard Performance |
| moveis_decoracao | 8159 | 12.77 | 3.95 | Standard Performance |
| fashion_roupa_feminina | 46 | 11.85 | 3.96 | Standard Performance |
| casa_construcao | 593 | 13.18 | 3.96 | Systemic Failure (Slow & Hated) |
| construcao_ferramentas_seguranca | 182 | 11.83 | 3.97 | Standard Performance |
| informatica_acessorios | 7672 | 7.49 | 3.98 | Product/Catalog Issue (Fast but Hated) |
| telefonia | 4408 | 12.80 | 4.00 | Standard Performance |
| moveis_cozinha_area_de_servico_jantar_e_jardim | 272 | 11.83 | 4.03 | Standard Performance |
| fashion_underwear_e_moda_praia | 126 | 13.56 | 4.05 | Standard Performance |
| climatizacao | 284 | 12.13 | 4.05 | Standard Performance |
| eletronicos | 2711 | 12.79 | 4.07 | Standard Performance |
| artigos_de_natal | 143 | 14.48 | 4.07 | Standard Performance |
| relogios_presentes | 5825 | -2.33 | 4.07 | Standard Performance |
| market_place | 303 | 12.09 | 4.07 | Standard Performance |
| bebes | 2967 | 12.36 | 4.08 | Standard Performance |
| ferramentas_jardim | 4254 | 13.59 | 4.08 | Standard Performance |
| artes | 195 | 11.21 | 4.08 | Standard Performance |
| construcao_ferramentas_iluminacao | 293 | 9.51 | 4.09 | Standard Performance |
| agro_industria_e_comercio | 206 | 11.58 | 4.09 | Standard Performance |
| construcao_ferramentas_construcao | 913 | 10.57 | 4.09 | Standard Performance |
| sinalizacao_e_seguranca | 196 | 10.31 | 4.09 | Standard Performance |
| utilidades_domesticas | 6780 | 10.86 | 4.11 | Standard Performance |
| automotivo | 4117 | 1.63 | 4.12 | Standard Performance |
| tablets_impressao_imagem | 81 | 12.80 | 4.12 | Standard Performance |
| construcao_ferramentas_jardim | 234 | 11.84 | 4.12 | Standard Performance |
| bebidas | 365 | 10.35 | 4.13 | Standard Performance |
| industria_comercio_e_negocios | 263 | -154.02 | 4.13 | Standard Performance |
| consoles_games | 1080 | 13.51 | 4.14 | Standard Performance |
| eletrodomesticos_2 | 232 | 13.80 | 4.16 | Standard Performance |
| esporte_lazer | 8436 | 6.97 | 4.17 | Standard Performance |
| dvds_blu_ray | 61 | 12.85 | 4.18 | Standard Performance |
| beleza_saude | 9456 | 11.88 | 4.19 | Standard Performance |
| fashion_bolsas_e_acessorios | 1994 | 11.03 | 4.19 | Standard Performance |
| cool_stuff | 3698 | 12.29 | 4.19 | Standard Performance |
| brinquedos | 4007 | 0.76 | 4.21 | Standard Performance |
| musica | 38 | 11.55 | 4.21 | Standard Performance |
| perfumaria | 3344 | 11.71 | 4.22 | Standard Performance |
| pet_shop | 1917 | 11.13 | 4.22 | Standard Performance |
| pcs | 196 | 13.25 | 4.22 | Logistics Bottleneck (Slow but Loved) |
| instrumentos_musicais | 646 | 12.93 | 4.22 | Logistics Bottleneck (Slow but Loved) |
| eletroportateis | 657 | 10.72 | 4.23 | Standard Performance |
| eletrodomesticos | 789 | 11.18 | 4.23 | Standard Performance |
| papelaria | 2459 | 12.66 | 4.24 | Standard Performance |
| moveis_quarto | 103 | 13.17 | 4.25 | Logistics Bottleneck (Slow but Loved) |
| alimentos | 487 | 9.52 | 4.26 | Gold Standard (Fast & Loved) |
| fashion_calcados | 256 | 15.44 | 4.29 | Logistics Bottleneck (Slow but Loved) |
| cine_foto | 71 | 10.42 | 4.30 | Gold Standard (Fast & Loved) |
| malas_acessorios | 1073 | 10.60 | 4.35 | Standard Performance |
| alimentos_bebidas | 271 | 10.78 | 4.37 | Standard Performance |
| livros_tecnicos | 264 | 10.58 | 4.39 | Gold Standard (Fast & Loved) |
| flores | 31 | 10.81 | 4.42 | Standard Performance |
| portateis_casa_forno_e_cafe | 73 | 9.79 | 4.44 | Gold Standard (Fast & Loved) |
| construcao_ferramentas_ferramentas | 99 | 11.71 | 4.44 | Standard Performance |
| livros_importados | 57 | 7.96 | 4.51 | Gold Standard (Fast & Loved) |
| livros_interesse_geral | 533 | -69.70 | 4.51 | Gold Standard (Fast & Loved) |

Insights:

  • The “Fast but Hated” Problem: Categories like IT accessories (informatica_acessorios) and party supplies (artigos_de_festas) arrive faster than average (7 to 9 days) but still get bottom-tier reviews. Since delivery speed isn’t the issue, this strongly points to catalog problems—these items are likely of poor quality, arrive broken, or don’t match their online descriptions.
  • The “Slow but Loved” Forgiveness: Customers are surprisingly patient when buying specific, high-value, or specialized goods. Personal computers (pcs), musical instruments, and shoes take a long time to deliver (13 to 15 days), yet they maintain high ratings (above 4.2). People are clearly willing to wait as long as the product is exactly what they wanted.
  • The “Systemic Failure” Zone: Office furniture (moveis_escritorio) is the worst performer. It takes almost 21 days to deliver on average and holds a terrible 3.52 rating. Bulky, heavy items are clearly causing huge headaches for our logistics partners, and the slow delivery is destroying customer satisfaction.
  • The “Gold Standard” Champions: Books (imported, technical, and general) and food (alimentos) sit in the perfect quadrant. They are easy to package, ship fast, and have highly predictable quality. As a result, they enjoy the highest ratings on the platform (4.26 to 4.51).
  • Data Quality Warnings: While looking at the lead times, I noticed some categories like watches (relogios_presentes) and industry/business goods have negative lead times (e.g., -154 days or -69 days). This tells me there are data entry errors in the system where the delivery date was somehow recorded before the purchase date. This is something I’ll need to filter out in my data cleaning process.
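The negative lead times flagged in the last point could be screened out before aggregation. A minimal sketch of that cleaning step (the row contents are invented; the rule simply drops any order whose recorded delivery precedes its purchase):

```python
from datetime import datetime

# toy rows; order 'b' mimics the bad records where delivery precedes purchase
orders = [
    {"order_id": "a", "purchase": datetime(2018, 1, 5), "delivered": datetime(2018, 1, 15)},
    {"order_id": "b", "purchase": datetime(2018, 3, 1), "delivered": datetime(2017, 9, 27)},
]

# keep only rows whose delivery date is on or after the purchase date
clean = [o for o in orders if o["delivered"] >= o["purchase"]]
print([o["order_id"] for o in clean])  # → ['a']
```

In the SQL itself the equivalent guard would be one extra predicate in the WHERE clause comparing the two timestamps.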

Customer Review Deep-Dive Analysis

25. What is the distribution of review scores across different product categories? Specifically, which categories have the highest percentage of 1-star reviews, and can we identify any common issues or themes in the negative feedback for these categories?

Solution:

  1. The analysis commences by establishing a direct linkage between customer reviews and their corresponding product categories (category_reviews CTE). By performing inner joins across the order_reviews, order_items, and products datasets, and subsequently applying a left join to the English translation table, we accurately map each review to a standardized category name. A DISTINCT clause is strategically implemented to prevent the artificial inflation of review counts in instances where multiple identical items are purchased within a single transaction.

  2. Following the data integration, the methodology proceeds to aggregate the review score distributions at the category level (category_stats CTE). Utilizing conditional summation logic, the system computes the absolute frequency of each specific star rating—from one to five—alongside the total review volume for the category. A COALESCE function is applied to ensure that any products missing categorical metadata are grouped into an ‘unknown_category’ rather than systematically omitted from the dataset.

  3. In the final phase, the absolute frequencies are mathematically converted into proportional metrics to facilitate equitable comparative analysis across diverse product lines. The query calculates the exact percentage of one-star (pct_one_star) and five-star reviews. To ensure the analytical findings are statistically robust and not disproportionately skewed by low-volume anomalies, a strict threshold is enforced, restricting the output strictly to categories possessing a minimum of 50 total reviews. The results are ordered descendingly based on the one-star percentage, effectively isolating the product lines responsible for driving the highest proportion of negative customer sentiment on the platform.

SQL Code
with category_reviews as (
    -- step 1: link reviews to categories.
    -- we use distinct to prevent overcounting reviews if a customer bought 3 of the EXACT same item in one order.
    -- however, if an order spans multiple different categories, the review score applies to each of them.
    select distinct
        r.review_id,
        r.order_id,
        r.review_score,
        t.product_category_name_english as category_name
    from 
        order_reviews r
    inner join 
        order_items oi on r.order_id = oi.order_id
    inner join 
        products p on oi.product_id = p.product_id
    left join 
        product_category_name_translation t on p.product_category_name = t.product_category_name
    where 
        r.review_score is not null
),

category_stats as (
    -- step 2: aggregate the counts for each star rating per category
    select 
        coalesce(category_name, 'unknown_category') as category,
        count(review_id) as total_reviews,
        sum(case when review_score = 1 then 1 else 0 end) as one_star_reviews,
        sum(case when review_score = 2 then 1 else 0 end) as two_star_reviews,
        sum(case when review_score = 3 then 1 else 0 end) as three_star_reviews,
        sum(case when review_score = 4 then 1 else 0 end) as four_star_reviews,
        sum(case when review_score = 5 then 1 else 0 end) as five_star_reviews
    from 
        category_reviews
    group by 
        coalesce(category_name, 'unknown_category')
)

-- final output: calculate percentages and rank by the highest rate of 1-star reviews
select 
    category,
    total_reviews,
    one_star_reviews,
    cast((one_star_reviews * 100.0 / total_reviews) as decimal(5,2)) as pct_one_star,
    cast((five_star_reviews * 100.0 / total_reviews) as decimal(5,2)) as pct_five_star
from 
    category_stats
where 
    total_reviews >= 50 -- filtering out obscure categories with too few reviews to be statistically relevant
order by 
    pct_one_star desc;

Results:

| category | total_reviews | one_star_reviews | pct_one_star | pct_five_star |
|---|---|---|---|---|
| fashion_male_clothing | 111 | 26 | 23.42 | 52.25 |
| construction_tools_safety | 166 | 30 | 18.07 | 51.81 |
| office_furniture | 1268 | 227 | 17.90 | 39.59 |
| audio | 348 | 58 | 16.67 | 53.16 |
| unknown_category | 1470 | 242 | 16.46 | 53.74 |
| dvds_blu_ray | 58 | 9 | 15.52 | 60.34 |
| fixed_telephony | 215 | 33 | 15.35 | 51.16 |
| home_confort | 398 | 61 | 15.33 | 53.02 |
| air_conditioning | 249 | 34 | 13.65 | 57.03 |
| furniture_decor | 6457 | 831 | 12.87 | 55.85 |
| bed_bath_table | 9432 | 1201 | 12.73 | 53.28 |
| computers_accessories | 6702 | 851 | 12.70 | 55.60 |
| baby | 2869 | 360 | 12.55 | 57.09 |
| furniture_bedroom | 96 | 12 | 12.50 | 61.46 |
| art | 200 | 25 | 12.50 | 56.00 |
| construction_tools_construction | 744 | 89 | 11.96 | 57.80 |
| computers | 178 | 21 | 11.80 | 60.67 |
| kitchen_dining_laundry_garden_furniture | 247 | 29 | 11.74 | 54.25 |
| watches_gifts | 5586 | 655 | 11.73 | 57.30 |
| fashion_underwear_beach | 120 | 14 | 11.67 | 49.17 |
| telephony | 4173 | 486 | 11.65 | 52.96 |
| agro_industry_and_commerce | 182 | 21 | 11.54 | 50.55 |
| home_construction | 488 | 56 | 11.48 | 51.64 |
| consoles_games | 1053 | 120 | 11.40 | 56.60 |
| books_imported | 53 | 6 | 11.32 | 73.58 |
| auto | 3895 | 438 | 11.25 | 57.02 |
| market_place | 278 | 31 | 11.15 | 52.52 |
| christmas_supplies | 126 | 14 | 11.11 | 51.59 |
| drinks | 297 | 32 | 10.77 | 58.92 |
| electronics | 2532 | 271 | 10.70 | 56.24 |
| small_appliances | 628 | 67 | 10.67 | 61.46 |
| cine_photo | 66 | 7 | 10.61 | 69.70 |
| sports_leisure | 7721 | 810 | 10.49 | 60.58 |
| perfumery | 3165 | 332 | 10.49 | 63.92 |
| furniture_living_room | 420 | 44 | 10.48 | 54.29 |
| garden_tools | 3506 | 367 | 10.47 | 58.84 |
| home_appliances | 799 | 82 | 10.26 | 59.32 |
| costruction_tools_garden | 196 | 20 | 10.20 | 62.76 |
| health_beauty | 8826 | 899 | 10.19 | 61.51 |
| toys | 3861 | 393 | 10.18 | 61.98 |
| signaling_and_security | 138 | 14 | 10.14 | 56.52 |
| housewares | 5865 | 590 | 10.06 | 59.18 |
| musical_instruments | 623 | 60 | 9.63 | 60.67 |
| cool_stuff | 3609 | 340 | 9.42 | 59.46 |
| small_appliances_home_oven_and_coffee | 75 | 7 | 9.33 | 65.33 |
| food | 445 | 41 | 9.21 | 64.72 |
| fashion_bags_accessories | 1870 | 170 | 9.09 | 59.52 |
| home_appliances_2 | 234 | 21 | 8.97 | 55.56 |
| industry_commerce_and_business | 234 | 21 | 8.97 | 59.40 |
| pet_shop | 1703 | 152 | 8.93 | 63.12 |
| fashion_shoes | 239 | 21 | 8.79 | 62.76 |
| stationery | 2302 | 198 | 8.60 | 62.42 |
| costruction_tools_tools | 94 | 8 | 8.51 | 69.15 |
| construction_tools_lights | 242 | 19 | 7.85 | 54.55 |
| books_technical | 259 | 19 | 7.34 | 71.43 |
| books_general_interest | 508 | 36 | 7.09 | 73.23 |
| luggage_accessories | 1030 | 70 | 6.80 | 65.15 |
| tablets_printing_image | 77 | 4 | 5.19 | 53.25 |
| food_drink | 228 | 10 | 4.39 | 64.91 |

Insights:

  • Male Fashion is the Most Problematic: Men’s clothing (fashion_male_clothing) has the highest 1-star rate on the platform at 23.4%. In e-commerce, high negative feedback in clothing is typically driven by inconsistent sizing or the physical fabric not matching the online pictures.
  • Office Furniture is a Confirmed Failure: Backing up our delivery speed findings, office_furniture performs terribly here as well. It has a nearly 18% 1-star rate and the absolute lowest 5-star rate on the entire list (just 39.6%). Bulky, hard-to-assemble items paired with slow logistics are heavily damaging this category.
  • The “Unknown” Category Risk: Items tagged as unknown_category generate a surprisingly high 1-star rate (16.5% out of 1,470 reviews). If a product is poorly classified, it likely lacks a clear description, which leads to confused buyers and mismatched expectations.
  • Books and Food are the Safest Bets: Categories like food_drink, books_general_interest, and books_technical boast the lowest 1-star rates (all under 7.5%) and massive 5-star rates (up to 73%). These are standardized, easy-to-ship items that rarely surprise the buyer in a negative way.
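The reported percentages are easy to re-derive from the raw counts. A quick sanity check for the worst row, fashion_male_clothing (counts copied from the table; two-decimal rounding assumed to match the query's decimal(5,2) cast):

```python
# counts copied from the fashion_male_clothing row of the result table
one_star_reviews, total_reviews = 26, 111

pct_one_star = round(one_star_reviews * 100.0 / total_reviews, 2)
print(pct_one_star)  # → 23.42, matching the reported figure
```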

Review Volume vs. Score Polarization Analysis

26. Is there a correlation between the number of reviews a product receives and its average review score? Specifically, do products with a higher volume of reviews tend to have more polarized ratings (more 1-star and 5-star reviews) compared to products with fewer reviews?

SQL Code
with product_reviews as (
    -- step 1: link reviews to specific products.
    -- we use distinct to prevent overcounting reviews if a customer bought multiple of the EXACT same item in one order.
    select distinct
        r.review_id,
        r.order_id,
        oi.product_id,
        r.review_score
    from 
        order_reviews r
    inner join 
        order_items oi on r.order_id = oi.order_id
    where 
        r.review_score is not null
),

product_stats as (
    -- step 2: aggregate the review counts and score distributions per product
    select 
        product_id,
        count(review_id) as total_reviews,
        avg(review_score * 1.0) as avg_review_score,
        sum(case when review_score = 1 then 1 else 0 end) as count_1_star,
        sum(case when review_score = 5 then 1 else 0 end) as count_5_star
    from 
        product_reviews
    group by 
        product_id
),

volume_buckets as (
    -- step 3: segment products into review volume buckets
    select 
        product_id,
        total_reviews,
        avg_review_score,
        count_1_star,
        count_5_star,
        case 
            when total_reviews between 1 and 10 then '1-10 Reviews (Low)'
            when total_reviews between 11 and 50 then '11-50 Reviews (Medium)'
            when total_reviews between 51 and 100 then '51-100 Reviews (High)'
            else '100+ Reviews (Very High)'
        end as volume_bucket,
        case 
            when total_reviews between 1 and 10 then 1
            when total_reviews between 11 and 50 then 2
            when total_reviews between 51 and 100 then 3
            else 4
        end as bucket_order
    from 
        product_stats
)

-- final output: aggregate the metrics by bucket to analyze correlation and polarization
select 
    volume_bucket,
    count(product_id) as number_of_products,
    sum(total_reviews) as total_reviews_in_bucket,
    cast(avg(avg_review_score) as decimal(5,2)) as avg_score_in_bucket,
    cast((sum(count_1_star) * 100.0 / sum(total_reviews)) as decimal(5,2)) as pct_1_star,
    cast((sum(count_5_star) * 100.0 / sum(total_reviews)) as decimal(5,2)) as pct_5_star,
    -- measuring polarization by combining extreme scores
    cast(((sum(count_1_star) + sum(count_5_star)) * 100.0 / sum(total_reviews)) as decimal(5,2)) as pct_polarized 
from 
    volume_buckets
group by 
    volume_bucket, 
    bucket_order
order by 
    bucket_order;

Results:

| volume_bucket | number_of_products | total_reviews_in_bucket | avg_score_in_bucket | pct_1_star | pct_5_star | pct_polarized |
|---|---|---|---|---|---|---|
| 1-10 Reviews (Low) | 31331 | 61151 | 4.05 | 12.07 | 57.77 | 69.84 |
| 11-50 Reviews (Medium) | 1315 | 25711 | 4.09 | 11.07 | 57.19 | 68.25 |
| 51-100 Reviews (High) | 93 | 6449 | 4.12 | 10.51 | 57.99 | 68.51 |
| 100+ Reviews (Very High) | 50 | 8919 | 4.05 | 11.28 | 56.54 | 67.82 |

Insights:

  • No Correlation with Average Score: Surprisingly, the number of reviews a product gets does not change its overall rating. Whether a product has 5 reviews or 500, the average score stays incredibly stable between 4.05 and 4.12.
  • Polarization is a Constant Behavior: The assumption that highly reviewed products get more polarized (lots of 1s and 5s) is false. Across every single volume bucket, roughly 68% to 70% of all ratings are extreme (either 1-star or 5-star). This tells us that e-commerce customers generally only leave reviews when they either absolutely love or completely hate an item, regardless of the product’s popularity.
  • The “Long Tail” Catalog: The data highlights how massive and fragmented the Olist catalog is. Out of the entire dataset, over 31,000 products fall into the “Low Volume” bucket (1-10 reviews), while only 50 products reached the “Very High” bucket (100+ reviews). This proves that Olist’s revenue is driven by a “long tail” of thousands of niche items rather than a few massive blockbusters.
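The polarization metric used above is simply the combined share of 1-star and 5-star reviews. A minimal sketch (the function name and the sample counts are mine, for illustration only):

```python
def pct_polarized(count_1_star, count_5_star, total_reviews):
    """Share of reviews sitting at the extremes (1 or 5 stars), as a percentage."""
    return round((count_1_star + count_5_star) * 100.0 / total_reviews, 2)

# hypothetical product: 12 one-star and 58 five-star reviews out of 100 total
print(pct_polarized(12, 58, 100))  # → 70.0
```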

Seller Performance Analysis

27. Can we identify specific sellers or products that are responsible for a disproportionate number of 1-star reviews? Specifically, which sellers have the highest percentage of their orders resulting in 1-star reviews, and what common issues are cited in these reviews?

Solution:

  • To identify problem sellers, customer reviews should first be linked to the sellers who fulfilled those orders. This must be done in the first CTE by joining the order reviews table with the order items table on the order ID. A distinct select should be used here because a single order can contain several items from the same seller while carrying only one review, and the distinct keyword will ensure that one review is not counted once per item.

  • Next, the data should be grouped by seller ID to pull baseline metrics for each vendor. The total number of reviews per seller must be counted, and a simple case statement will be used to count exactly how many 1-star reviews each seller received. The overall average score should also be calculated, multiplying by 1.0 to force decimal division so the average is not truncated to a whole number.

  • Finally, the 1-star percentage should be calculated in the main query by dividing each seller's 1-star count by their total reviews. Sellers with fewer than 30 reviews must be filtered out so that one or two bad reviews cannot dominate the percentage, and the output should be sorted by that percentage (and then by review volume) so the worst offenders appear at the top. The common issues behind these reviews are then interpreted in the insights by connecting the worst sellers back to the delivery and product-quality findings from earlier sections.

SQL Code:
with seller_reviews as (
    -- step 1: link reviews to specific sellers.
    -- we use distinct to prevent overcounting reviews if a customer bought multiple items from the same seller in one order.
    select distinct
        oi.seller_id,
        r.review_id,
        r.order_id,
        r.review_score
    from 
        order_reviews r
    inner join 
        order_items oi on r.order_id = oi.order_id
    where 
        r.review_score is not null
),

seller_stats as (
    -- step 2: aggregate the total reviews and 1-star reviews per seller
    select 
        seller_id,
        count(review_id) as total_reviews,
        sum(case when review_score = 1 then 1 else 0 end) as count_1_star,
        avg(review_score * 1.0) as avg_review_score
    from 
        seller_reviews
    group by 
        seller_id
)

-- final output: calculate the percentage of 1-star reviews and identify the worst offenders
select 
    seller_id,
    total_reviews,
    count_1_star,
    cast((count_1_star * 100.0 / total_reviews) as decimal(5,2)) as pct_1_star,
    cast(avg_review_score as decimal(5,2)) as overall_avg_score
from 
    seller_stats
where 
    total_reviews >= 30 -- threshold: filtering out sellers with very few reviews to ensure statistical significance
order by 
    pct_1_star desc,
    total_reviews desc;

Results (columns: seller_id | total_reviews | count_1_star | pct_1_star | overall_avg_score):

1ca7077d890b907f89be8c954a02686a1146153.512.33
54965bbe3e4f07ae045b90b0b8541f52743040.543.00
2eb70248d66e0e3ef83659f71b2443781998040.202.69
dc8798cbf453b7e0f98745e396cc5616421535.713.07
2a1348e9addc1af5aaa619b1a3679d6b511835.293.12
a49928bcdf77c55c6d6e05e09a9b4ca5983434.693.01
ad781527c93d00d89a11eecd9dcad7c1441534.093.07
bbad7e518d7af88a0897397ffdca1979682333.823.16
972d0f9cf61b499a4812cf0bfa3ad3c4792632.913.03
c60b801f2d52c7f7f91de00870882a75411331.713.32
602044f2c16190c2c6e45eb35c2e21cb511631.373.14
8444e55c1f13cd5c179851e5ca5ebd00993131.313.26
835f0f7810c76831d6c7d24c7a646d4d441329.553.34
710e3548e02bc1d2831dfc4f1b5b14d41353928.893.36
6fd52c528dcb38be2eea044946b811f8661827.273.39
82e0a475a88cc9595229d8029273f045441227.273.52
88460e8ebdecbfecb5f96018339819302536826.883.43
b2479f944e1b90cf8a5de1bbfde284d61052826.673.41
cac4c8e7b1ca6252d8f20b2fc1a2e4af752026.673.53
0b35c634521043bf4b47e21547b99ab5501326.003.36
18a349e75d307f4b4cc646a691ed42161203125.833.44
070d165398b553f3b4b851c216b8a35831825.813.32
ede0c03645598cdfc63ca8237acbe73d441125.003.52
2089a6d640999f9b9141ac719b2af596441125.003.73
a7f13822ceb966b076af67121f87b063741824.323.34
f46490624488d3ff7ce78613913a7711832024.103.66
ef990a83bbea832f36ebe81376335aa8431023.263.63
3c7c4a49ec3c6550809089c6a2ca93701423323.243.64
5058e8c1e82653974541e83690655b4a621422.583.24
5f2684dab12e59f83bef73ae57724e45681522.063.81
ea67327e24487bdfac5fbfa37ea124df32721.883.53
b17b679f4f5ce2e03ce6968c62648246551221.823.71
66dc1b1632bd6a3e508510c5b349210737821.623.57
712e6ed8aa4aa1fa65dab41fed5737e4791721.523.39
45ba18c210d42734ec52c0c1c574e9ee38821.053.87
229c3efbfb0ea2058de4ccdfbc3d784a1202520.833.63
95f83f51203c626648c875dd41874c7f531120.753.62
1835b56ce799e6a4dc4eddc053f040664228720.623.64
f7ccf836d21b2fb1de37564105216cc1681420.593.81
e96498ed8daaa3e9c23f7a62da76591c34720.593.94
d3f39f05462b79a4562d35893a28f15934720.593.65
8e6d7754bc7e0f22c96d255ebda59eba881820.453.61
891071be6ba827b591264c90c2ae8a63931920.433.78
897060da8b9a21f655304d50fd9359133176420.193.62
71039d19d4303bf9054d69e9a923669935720.003.80
40d54b51e962dbe09cabbcfd33298dee30620.004.00
7c67e1448b00f6e969d365cea6b010ab98219619.963.49
6973a06f484aacf400ece213dbf3d946961919.793.59
6b3bd31ad8fcda4b2635ec9f3ff2ecdf36719.444.00
3c03b12bab54d8b37d79d914bfdb1aa036719.443.83
36a968b544695394e4e9d7572688598f1553019.353.66
e8b3a3a38279a82f0e5d006d5e5b7d2c881719.323.81
2a261b5b644fa05f4f2700eb93544f2c521019.233.83
3fd1e727ba94cfe122d165e176ce796737718.923.78
63b464dbf392c7b80d12d932fa7cafed37718.923.78
431af27f296bc6519d890aa5a05fdb111172218.803.79
850913d59ce317156b00f3705f1c3edb32618.753.44
fffd5413c0700ac820c7069d66d98c89591118.643.85
6039e27294dc75811c0d8a39069f52c0651218.463.71
0bf0150d5b9d60d9cd2906003332f08538718.423.79
e2aee0892199b1d92530e371abd825bf38718.423.79
05f51e13da97139648b8125c31e5f51b601118.334.07
522620dcb18a6b31cd7bdf73665113a91703118.243.89
c9aafcd0621b2207c10e32c649cada4d44818.184.02
2d50d6282f8aa2257819a77bfaa0efe044818.183.70
b839e41795b7f3ad94cc2014a52f679633618.184.09
5b179e9e8cc7ab6fd113a46ca584da8133618.183.67
d5ba419e26d246a0719437cf37d9b46d33618.183.91
04308b1ee57b6625f47df1d56f00eedf941718.093.88
165fc07beebdcb6190fba8a06db2a44950918.003.80
cee48807215b30a12ca2ca10ffb5f25039717.953.77
d20b021d3efdf267a402c402a48ea64b901617.783.70
75d34ebb1bd0bd7dde40dd507b8169c3621117.743.74
f3da5b2ff499efb8d4a6d371d175d7dd791417.723.82
48162d548f5b1b11b9d29d1e01f75a61681217.653.90
7040e82f899a04d1b434b795a43b46172103717.623.60
4992e76a42cb3aad7a7047e0d3d7e729571017.543.91
81f89e42267213cb94da7ddc301651da46817.393.67
834f8533b2ecb6598dd004ff3de7203a46817.393.85
966cb4760537b1404caedd472cc610a5751317.333.85
5343d0649eca2a983820bfe93fc4d17e701217.143.83
dd533b429f380718b70ad9922c294bae41717.073.78
db2956745b3a8e9f3785c99f34b5d25e47817.023.77
751bdc4d83a466c7206cd42e8f426b0353916.983.75
e5a3438891c0bfdb9394643f95273d8e2243816.963.91
d05ae8f7a5bd1d2a690a44cd079e4e27771316.883.86
44073f8b7e41514de3b7815dd0237f4f1432416.783.76
f84a00e60c73a49e7e851c9bdca3a5bb901516.673.83
bf84056e679dbe9c69929847a40e338f36616.673.75
6ec1a01e866584bb679eb9b09834591930516.674.13
4ebdc7e6cd6102a022dadc49156d4ea830516.673.83
5656537e588803a555b8eb41f07a944b1752916.573.87
855668e0971d4dfd7bef1b6a4133b41b3095116.503.78
609e1a9a6c2539919b8205cf7c4e6ff0851416.473.81
4a3ccda38b2129705f3fb522db62ca31671116.423.88
dc317f341ab0e22f39acbd9dbf9b4a1f55916.364.15
6f892e20a171e98efe17fdb971ff319b43716.283.93
59b22a78efb79a4797979612b885db361242016.133.98
ea566164622c6b439516ab18062c42cd50816.003.78
d1c281d3ae149232351cd8c8cc885f0d2193515.983.81
23613d49c3ac2bd302259e55c06c050c691115.943.83
822166ed1e47908f7cfb49946d03c726691115.944.00
cd06602b43d8800bd0afad514919d35c44715.913.68
a2e874074c877c5a05abae80ad6e488f44715.913.95
99a54764c341d5dc80b4a8fac4eba3fb44715.913.61
dc4a0fc896dc34b0d5bfec8438291c802393815.903.75
b4a476fbd28de64b1e347abf9089366a32515.633.84
dfa0c4c6229ab200a4a1336b4d7128ff771215.584.00
11bfa66332777660bd0640ee84d47006771215.583.88
318f287a62ab7ac10b703ac37435a23152815.383.87
ce27a3cc3c8cc1ea79d11e561e9bebb61312015.273.91
f457c46070d02cadd8a68551231220dd2043115.203.85
276677b5d08786d5dce7c2149dcce48b33515.153.91
d93919c944be9cff128f6c9cb899eacb33515.154.00
4c2b230173bb36f9b240f2b8ac11786e2583915.123.98
c3867b4666c7d76867627c2f7fb22e212453715.103.91
6860153b69cc696d5dcfe1cdaaafcf62731115.074.10
d2374cbcbb3ca4ab1086534108cc3ab75328015.043.75
8a32e327fe2c1b3511609d81aaf9f0421332015.043.84
25c5c91f63607446a97b143d2d535d311602415.003.80
b14db04aa7881970e83ffa9426897925801215.003.71
1ce3ae5a399804d1a87e706f8a813c3e40615.003.75
4c03b9dd4c11ee2cb35c96c49efc942047714.894.11
aae3bfea055532c57fb453ed3ec80b3047714.893.60
dd7ddc04e1b6c2c614352b383efe2d361211814.883.85
7ad32824caee82087b3e2e5f33b1bf321482214.863.84
213b25e6f54661939f11710a6fddb8711552314.843.79
a888faf2d1baececa6baf9c3d603ee1f54814.814.04
cca3071e3e9bb7d12640c9fbe230130671610614.803.85
7a241947449cc45dbfda4f9d0798d9d061914.753.87
1900267e848ceeba8fa32d80c1a5f5a84076014.743.84
6fc26fe110feebd80a433e1f012a84f934514.713.91
3ab971ce71839580d2ae5b4e40fe804434514.713.91
70a12e78e608ac31179aea7f8422044b3134614.703.80
cfb1a033743668a192316f3c6d1d26711502214.673.96
2138ccb85b11a4ec1e37afbd1c8eda1f3965814.653.94
2528513dd95219a6013d4d05176e391a1582314.563.95
6c177e38df6d3f34182b1f1d427231bf55814.554.04
cd68562d3f44870c08922d380acae5521241814.523.94
a3e9a2c700480d9bb01fba070ba80a0e1311914.503.98
c826c40d7b19f62a09e2d7c5e7295ee23525114.494.08
f45122a9ab94eb4f3f8953578bc0c5601872714.443.96
98dac6635aee4995d501a3972e047414901314.444.14
8648b1e89e9b349e32d3741b30ec737e971414.434.10
4a3ca9315b744ce9f8e9374361493884180125914.383.83
0be8ff43f22e456b4e0371b2245e4d011612314.293.95
4c18691b6037662be2df78a765d98ab563914.293.87
50c9975695009e5e6473912e83a6d1da49714.293.94
abe42c5d03695b4257b5c6cbf4e6784e49714.293.92
ca3bd7cd9f149df75950150d010fe4a2711014.083.96
beadbee30901a7f61d031b6b686095ad64914.063.91
c33847515fa6305ce6feb1e818569f131141614.043.85
5dceca129747e92ff8ef7a997dc4f8ca3214514.023.98
09f952a5f58d2285b0372551ae8f9b0150714.003.84
4e922959ae960d389249c378d1c939f54155813.983.95
440dd6ab244315c632130ecfb63827b1931313.983.85
e9d99831abad74458942f21e16f33f921291813.953.93
e9bc59e7b60fc3063eb2290deda4cced2163013.893.99
99002261c568a84cce14d43fcffb43ea36513.893.72
06a2c3af7b3aee5d69171b0e14f0ee873895413.883.99
8160255418d5aaa7dbdc9f4c64ebda443845313.803.94
1025f0e2d44d7041d6cf58b6550e0bfa91412513.683.99
99eaacc9e6046db1c82b163c5f84869f66913.643.88
4559697a8f7e637227c2eeaed843baff44613.644.07
0b90b6df587eb83608a64ea8b390cf071692313.614.01
45d33f715e24d15a6ccf5c17b3a23e3c59813.564.02
b94cc9f10ddc85e4ba73a6f7974e710137513.514.30
41b39e28db005d9731d9d485a83b4c382002713.504.06
8b28d096634035667e8263d57ba3368c1411913.483.97
002100f778ceb8431b7a1020ff7ab48f52713.463.90
7202e2ba20579a9bd1acb29e61fe71f652713.463.87
a673821011d0cec28146ea42f5ab767f1271713.393.99
b1fc4f64df5a0e8b6913ab38803c57a91051413.333.86
dd2bdf855a9172734fbc3744021ae9b91051413.333.85
903037660cf848a717166eb7a06d616e45613.333.89
2bf6a2c1e71bbd29a4ad64e6d3c3629f30413.334.17
f00e21b1e91a79653163b7fd8f293ff130413.334.10
c8b0e2b0a7095e5d8219575d5e7e1181831113.254.08
3b15288545f8928d3e65a8f949a282911061413.213.96
96804ea39d96eb908e7c3afdb671bb9e38513.164.05
e5a38146df062edaf55c38afa99e42dc1221613.113.97
951e8cef368f09bb3f3d03c00ca4702c61813.114.08
ff1fb4c404b2efe68b03350a8dc2412246613.044.00
70c27847eca8195c983ed7e798c56743771012.994.23
a416b6a846a11724393025641d4edd5e1622112.964.15
0c7533c71df861ec58ad7ff999ed0e8d62812.904.10
fcb5ace8bcc92f75707dc0f01a27d26962812.903.87
5d3bb11474a06bdc23fb9e89f1164ee031412.903.90
e06f09ec0a4aca210779cf1cfc63cf1931412.903.97
3c4e0452bff7a2a788179488d3d7703231412.903.84
1c68394e931a64f90ea236c5ea5903001481912.843.92
3df020b72d3d44b3af9d110fa3940b6555712.734.05
9b013e03b2ab786505a1d3b5c07567541732212.724.16
f80edd2c5aaa505cc4b0a3b219abf4b81181512.714.08
a17f621c590ea0fab3d5d883e1630ec663812.704.06
b92e3c8f9738272ff7c59e111e108d7c63812.704.17
7d13fca15225358621be4086e1eb09645617112.664.02
7ddcbb64b5bc1ef36ca8c151f6ec77df3324212.653.95
9c0e69c7bf2619675bbadf47b43f655a1592012.584.05
adbc26658d6c7b4b6219f9d934598091881112.504.13
d673a59aac7a70d8b01e6902bf090a11801012.503.93
056b4ada5bbc2c50cc7842547dda6b5164812.504.17
56642bcb79900e777d68e91915cb426748612.504.06
b56906f7fd1696e043f1bcce164c487b40512.504.00
bacb1f0ed56ad24198f5810d2b3fe9a540512.504.08
ed4acab38528488b65a9a9c603ff024a32412.503.72
20cb7c2fde3e5bf10f0bbe7394e1c6a932412.504.00
232a6014e7b10cba61c6c2b2ea6bb4b032412.504.16
6560211a19b47992c3666cc44a7e94c0184423012.473.94
c31eff8334d6b3047ed34bebd4d62c361211512.403.85
aaed1309374718fdd995ee4c58c9dfcd1051312.384.07
634964b17796e64304cadf1ad3050fb72673312.364.04
bd23da7354813347129d751591d1a6e2891112.364.02
9674754b5a0cb32b638cec001178f799891112.364.08
48436dade18ac8b2bce089ec2a0412021381712.324.01
76d5af76d0271110f9af36c92573f7651301612.314.12
a420f60ff1aa9acc80d0e42959f2b31365812.313.95
00fc707aaaad2d31347cf883cd2dfe101061312.264.05
8e6cc767478edae941d9bd9eb778d77a1061312.263.94
8b9d6eec4a7eb7d0f9d579ce0b38324d49612.244.00
001cca7ae9ae17fb1caed9dfb10948311972412.183.98
92eb0f42c21942b6552362b9b114707d3384112.133.99
ce248b21cb2adc36282ede306b7660e566812.124.20
93dc87703c046b603023e75222018b4533412.123.97
2c9e548be18521d1c43cde1c582c6de81241512.104.11
688756f717c462a206ad854c5027a64a831012.053.86
a6fe7de3d16f6149ffe280349a8535a050612.003.98
b2ba3715d723d245138f291a6fe425943344011.984.10
ebd2d60905fb58271facef5596b620d3841011.904.15
1da366cade6d8276e7d8beea7af5d4bf42511.903.93
daeb5653dd96c1b11860f7220979501242511.904.05
8f2ce03f928b567e3d56181ae20ae9521521811.844.11
8b321bb669392f5163d04c59e235e06693911111.824.07
2e90cb1677d35cfe24eef47d441b7c871271511.814.12
d91fb3b7d041e83b64a00a3edfb37e4f5606611.794.10
db4350fd57ae30082dec7acbaacc17f91361611.764.14
b1a81260566c1bac3114a6d124413f2751611.764.27
23d7c96d4a1160db1c726b248601b25a51611.763.80
01cf7e3d21494c41fb86034f2e714fa160711.674.15
7aa4334be125fcdd2ba64b3180029f142152511.634.10
b18dc380845b24038cfc48006478f09943511.634.14
31344c16881c08a8a72c6d2eb29918c143511.634.35
3d8fa2f5b647373c8620330c4e077a9f43511.634.09
e63e8bfa530fb16910dd6956e592bb8143511.634.26
cc419e0650a3c5ba77189a1882b7556a174020211.614.08
218d46b86c1881d022bce9c68a7d4b153884511.604.15
729f06993dac8e860d4f02d7088ca48a951111.584.17
34056b8b55c1775a22af2331670a799c78911.544.00
ce7d1888639e6fb06b2749cbfdac1ff778911.544.17
2e1c9f22be269ef4643f826c9e650a521651911.524.17
ef506c96320abeedfb894c34db06f4783223711.493.93
4e17c65a516f69d023a2ae78b84f28d661711.484.23
ec8879960bd2221d5c32f8e12f7da71135411.434.31
ea8482cd71df3c1969d7b9473ff13abc114113011.394.00
b32be1695eb7ec5f10f72d9610a1252744511.364.07
58f1a6197ed863543e0136bdedb3fce244511.364.18
f789d2c4f2c2eb38fc4373e7a4b3526444511.364.16
f214d28e8d8e3ef068748498ccc2f81353611.324.15
06579cb253ecd5a3a12a9e6eb6bf8f4762711.294.08
46dc3b2cc0980fb8ec44634e21d2718e5155811.264.19
1f50f920176fa81dab994f9023523100140415811.254.13
0dd184061fb0eaa7ca37932c68ab91c51601811.254.03
77530e9772f57a62c906e1c21538ab823213611.214.11
b499c00f28f4b7069ff6550af8c1348a1531711.114.12
f5a590cf36251cf1162ea35bef76fe841171311.114.23
06e5eefc71ec47ae763c5c6f8db7064f901011.113.92
30a2f535bb48308f991d0b9ad4a8c4bb72811.114.29
750303a20e9c56b2a6bc45cdce0b897d54611.114.24
26d8a1c7c75d513045798992ead43aa236411.114.31
e8f6dc8e6a1dcde89d20e3995c8d90b336411.113.92
f84fa566034f5e8e880a07ec624c56af911010.994.22
6d66611d7c44cc30ce351abc49a684211641810.983.95
1838dd9b8977065acf51d95e0053ea7a64710.944.27
0adac9fbd9a2b63cccaac4f8756c1ca81101210.914.05
062ce95fa2ad4dfaedfc79260130565f55610.914.04
7e3f87d16fb353f408d467e74fbd8014921010.874.13
432c37c9dfba871172ec162e20118b8c46510.874.11
391fc6631aebcf3004804e51b40bcf1e5345810.864.03
95e03ca3d4146e4011985981aeb959b91751910.864.01
813348c996469b40f2e028d5429d34951751910.864.06
4e7c18b98d84e05cbae3ff0ff03846c283910.844.18
4d6d651bd7684af3fffabd5f08d12e5a3704010.814.20
1336efc61c316ddf92c899eb817f7cae74810.814.00
9e6229250fedbe05838fef417b74e7fb74810.813.89
76d64c4aca3a7baf218bf93ef7fa768d65710.774.23
f27e33c6d29b5138fa9967bcd445b6d5931010.754.10
4869f7a5dfa277a7dca6462dcf3b52b2112412010.684.13
5d0363b33554b373851fc1622e4d5f3c941010.644.04
ceaec5548eefc6e23e6607c5435102e71992110.554.15
0176f73cc1195f367f7b32db1e5b3aa838410.534.08
669ae81880e08f269a64487cfb2871691621710.494.22
4830e40640734fc1c52cd21127c341d42112210.434.16
0691148aee60ca47977c187804f935ae48510.424.19
7178f9f4dd81dcef02f62acdf8151e012032110.344.04
0df3984f9dfb3d49ac6366acbd3bbb8587910.344.34
7ea5bfa6c340f58f8e71fc1f0412b0d6971010.314.12
f0b47fbbc6dee9aafe415a6e33051b3f68710.294.22
620c87c171fb2a6dd6e8bb4dec959fc67397610.284.25
17e34d8224d27a541263c4c64b11a56b2532610.284.19
0cbcee27c791afa0cdcb08587a2013a839410.263.85
75fbb52eda0cbc24f479d3b2fbfa8d3e39410.264.36
138dbe45fc62f1e244378131a680152649510.204.08
2a5b78b41cd05baeac8df54c6606b92c49510.204.33
7681ef142fd2c19048da7430856b558859610.174.19
3078096983cf766a32a06257648502d159610.174.20
6a8b085f816a1f75f92dbac6eb545f8f1281310.164.11
7e93a43ef30c4f03f38b393420bc753a3353410.154.21
79ebd9a61bac3eaf882805ed4ecfa12a1381410.144.04
5a8e7d5003a1f221f9e1d6e411de7c231381410.144.14
16090f2ca825584b5a147ab24aa30c864004010.004.06
1d8dbc4f32378d715c717c1c1fc57bae90910.004.02
e59aa562b9f8076dd550fcddf0e7349170710.004.21
048c2757535328e0d7dac690ad3c0aae50510.004.18
0d85bbda9889ce1f7e63778d24f346eb50510.004.20
ad420dd0c4f92f8af951ac24b86d0cf540410.004.13
1b938a7ec6ac5061a66a3766e0e75f9030310.004.30
3c487ae8f8d7542beff5788e2e0aea8330310.004.20
86ccac0b835037332a596a33b6949ee1181189.944.19
cbd996ad3c1b7dc71fd0e5f5df9087e2161169.944.15
1e8b33f18b4f7598d87f5cbee2282cc2121129.924.37
f262cbc1c910c83959f849465454ddd3192199.904.15
0c8380b62e38e8a1e6adbeba7eb9688c182189.894.05
b33e7c55446eabf8fe1a42d037ac7d6d152159.874.16
1da3aeb70d7989d1e6d9b0e887f97c23265269.814.15
7d76b645482be4a332374e8223836592174179.773.96
7dc8c42cc750eeafea6c85712ffee9bf4149.764.20
c847e075301870dd144a116762eaff9a4149.764.17
17a053fcb14bd219540cbde0df490be0124129.684.23
01fd077212124329bac32490e8ef80d93139.684.16
2745f798279e0ed033addcc1474776d73139.683.97
72c5da29406b4234927b81855e7b64f63139.684.13
66e0557ecc2b4dbea057e93f215f68d83139.683.94
78c99c6dff4eeae5be99bf635ed21e3f3139.684.42
2a7dc43cecabf23403078e2188437d1d5259.624.10
cb3dd9ce66268c7a3ca7241ac70ab58c5259.624.17
1127b7f2594683f2510f1c2c834a486b115119.574.10
8a432f4e5b471f8da497d7dc517666e29499.574.19
53243585a1d6dc2643021fd1853d8905356349.554.13
b76dba6c951ab00dc4edf0a1aa88037e157159.554.18
9add47bf45ce8e8c7db6b9cf670b1e094249.524.17
d66c305afaec317ebee552073a6744294249.524.31
53e4c6e0f4312d4d2107a8c9cddf45cd474459.494.09
e9779976487b77c6d4ac45f75ec7afe9654629.484.22
701938c450705b8ae65fc923b70f35c77479.464.01
e26901d5ab434ce92fd9b5c256820a4e202199.414.12
17ca9b9e9b9ef8fdb529001b49ebb50f117119.404.05
7142540dd4c91e2237acb7e911c4eba2320309.384.03
c9c7905cffc4ef9ff9f113554423e671128129.384.15
3db66a856d18a9cba7c9241fc5221c509699.384.18
4917cee8d902e13428c3ec4b1ca6f3153239.384.16
33576ec5412fb5905d876f12f33bfde63239.384.09
9baf5cb77970f539089d09a38bcec5c33239.384.31
5c6d4016c2a288f074fa0848a2b653d23239.384.06
8bd0f31cf0a614c658f6763bd02dea693239.384.28
b4ffb71f0cb1b1c3d63fad021ecf93e1150149.334.23
b39d7fe263ef469605dbb32608aee0af4349.303.88
e067ad2c1c0b48758eb1b5228bcf7a685459.264.15
fa1a9dec3a9940c072684a46728bf1fc5459.264.11
2a84855fd20af891be03bc5924d2b453163159.204.25
f3b80352b986ab4d1057a4b724be19d08789.204.17
aac29b1b99776be73c3049939652091d9899.184.10
de23c3b98a88888289c6f5cc1209054a164159.154.26
8f119a0aee85c0c8fc534629734e94fd7779.094.25
b335c59ab742f751a85db9c411a867393339.093.85
dfc5fb7259bb2b599ca565e6e9448f0f3339.094.12
f7ba60f8c3f99e7ee4042fdef03b70c4221209.054.21
670c26e0f1bf8d0576271d5cfaec6d2b7878.974.27
7e1fb0a3ebfb01ffb3a7dae98bf3238d157148.924.16
afe0067131b73e40875c9b6c10bd2e214548.894.20
1554a68530182680ad5c8b042c3ab563248228.874.32
6b243f80ed07b10f0e8aa0f21a205f3c7978.864.27
6a0cbc8af2e8abd1bdfb777943d174c63438.824.06
5160d23075764e18e07c1f4a87fad7433438.824.38
a1043bafd471dff536d0c462352beb48715638.814.22
259f7b5e6e482c230e5bfaa670b6bb8f138128.704.30
bfd27a966d91cfaafdb25d076585f0da115108.704.28
381c83fdca332ea6afd896da20bf6e4a4648.704.39
382229d1e840115ffe3dbf5ff460e4174648.704.22
3bb548e3cb7f70f28e3f11ee9dce0e594648.704.20
3d871de0142ce09b7081e2b9d1733cb11070938.694.15
3bdff180c7e1f6551a643b99c265a1205858.624.09
fa1c13f2614d7b5c4749cbc52fecda94581508.614.34
da8622b14eb17ae2831f4ac5b9dab84a13251148.604.18
ccc4bbb5f32a6ab2b7066a4130f114e3186168.604.28
dd55f1bb788714a40e7954c3be6df7453538.573.74
870d0118f7a9d85960f29ad89d5d989a4748.514.09
9b00ed88b7fdb95d6ff76e27c1b52d164748.514.47
fdb9095204a334cd8872252ffec6f2db4748.514.17
e62b2d6ac10570a035a30bafcf01d2635958.473.97
81a1104df0f08b59c68aa5b03cfe398e7168.454.27
955fee9216a65b617aa5c0531780ce6012791078.374.16
f4aba7c0bca51484c30ab7bdc34bcdd110898.333.94
dbb9b48c841a0e39e21f98e1a6b2ec3e10898.334.27
3be634553519fb6536a03e1358e9fdc78478.333.95
05d2173d43ea568aa0540eba70d2ca766058.334.18
d650b663c3b5f6fb392b6326366efa9a6058.334.20
bba74270a87732727b5a3b4fd9ac1c393638.333.97
530ec6109d11eaaf87999465c6afee019788.254.24
80e6699fe29150b372a0c8a1ebf7dcc89788.254.28
0432ead42b6c8a0bdf68154add917fdf8578.244.46
900ba814c251a692506d7834c12184418578.244.19
f7720c4fa8e3aba4546301ab80ea1f1b6158.204.20
6edacfd9f9074789dad6d62ba7950b9c208178.174.36
8581055ce74af1daba164fdbd55a40de392328.164.26
efcd8d2104f1a05d028af7bad20d974b4948.164.18
00ee68308b45bc5e2660cd833c3f81cc135118.154.30
9f505651f4a6abe901a56cdc21508025395328.104.34
d94a40fd42351c259927028d163af8429988.084.18
ececbfcff9804a2d6b40f589df8eef2b9988.084.12
004c9cd9d87a3c30c522c48c4fc07416161138.074.14
058fd0aa2bfdb2274e05e1ae971dabb66258.064.39
8d956fec2e4337affcb520f56fd8cbfd212178.024.20
1b4c3a6f53068f0b6944d2d005c9fc8911397.964.22
7008613ea464bad5cb9b83456e1e6a8f126107.944.26
59fb871bf6f4522a87ba567b42dafecf126107.944.24
5b8154610ebb21fb90eb587365e673df3837.894.18
520b493b57809f446cb0a233bb3e25c76457.814.30
640e21a7d01df7614a3b4923e990d40c11697.764.30
de722cd6dad950a92b7d4f82673f8833337267.724.29
c70c1b0d8ca86052f45a432a38b73958259207.724.36
8e8a7ce9f2f970dc00e2acf6f6e199f66557.693.98
2b3e4a2a3ea8e01938cabda2a3e5cc795247.694.37
0db783cfcd3b73998abc6e10e59a102f131107.634.21
128639473a139ac0f3e5f5ade55873a5527407.594.19
fe2032dab1a61af8794248c8196565c9290227.594.38
0241d4d5d36f10f80c644447315af0bd239187.534.33
54a1852d1b8f10312c55e906355666ee200157.504.33
1c129092bf23f28a5930387c980c0dfc200157.504.24
d12c926d74ceff0a90a21184466ce1618067.504.31
b1ac6ea7895bc3dd6f0f6f4abbdd28214037.504.35
20d83f3ef0e6925fd74bfd59170babf74037.504.40
85d9eb9ddc5d00ca9336a2219c97bb13509387.474.18
066a6914e1ebf3ea95a216c73a986b916757.464.09
8cbac7e12637ed9cffa18c78752074786757.464.12
9d5a9018aee56acb367ba9c3f05d1d6a6757.464.42
7a67c85e85bb2ce8582c35f2203ad7361155867.454.24
537eb890efff034a88679788b647c564161127.454.22
f8db351d8c4c4c22c6835c19a46f01b0665497.374.24
e333046ce6517bd8bb510291d44f01306857.354.34
94144541854e298c2d976cb893b813436857.354.21
fd386aa7bed2af3c7035c65506c9b4a36957.254.35
1b8356dabde1d35e17cef975c3f827305647.144.16
1d4587203296c8f4ad134dc286fa6db04237.144.29
b561927807645834b59ef0d16ba55a248567.064.34
7d456afc660226829370f3173d14520c5747.024.37
562fc2f2c2863ab7e79a9e4388a58a14258186.984.22
aafe36600ce604f205b86b5084d3d76712996.984.40
e0a366315b1b726b6c7832d664c0f5304336.984.30
3f995f07c49d0d55a99d5c54957f7d814336.984.21
bd0389da23d89b726abf911cccc545968766.904.17
406822777a0b9eb5c50e442dd4cd3ec54436.824.30
582d4f8675b945722eda7c0cb61ba4c75946.784.15
5cf13accae3222c70a9cac40818ae839148106.764.45
6d803cb79cc31c41c4c789a75933b3c76046.674.25
c6bda72e4dbf5c5866b13cb1810c6d036046.674.38
c8b3445d737de6befde0c88ede534a5e3026.674.43
e38db885400cd35c71dfd162f2c1dbcf3026.674.47
b2ac621f0d0322434d04a12b078b93693026.674.40
edb1ef5e36e0c8cd84eb3c9b003e486d166116.634.42
5f1dc28029d2c244352a68107ec2b5426146.564.43
05ff92fedb5be47920fea08e501238b96146.564.20
d98eec89afa3380e14463da2aabaea72168116.554.33
8c16d1f32a54d92897cc437244442e1b10776.544.36
612170e34b97004b3ba37eae81836b4c10776.544.43
850f4f8af5ea87287ac68de36e29107f184126.524.29
dee656f0f566ed1aa85bd137c943f08f9266.524.37
6b90f847357d8981edd79a1eb1bf0acb4636.524.37
25e6ffe976bd75618accfe16cefcbd0d10876.484.41
7b07b3c7487f0ea825fc6df75abd658b9366.454.35
9616352088dcf83a7c06637f4ebf1c809366.454.30
96493fab2fbb13a14d0c0e8772eef5c36246.454.06
91f848e9f4be368f4318775aac7333706246.454.29
715bbd5ba4e6b74cb0d2f29eb45058b06246.454.47
b6d44737c043328708f6749c2dbe50bd6246.454.47
7b0df942f46435babab05d49b744b2c43126.454.39
4e06067cc08b3f41d837768d392c3ee33126.454.13
e8b4225284fbb02d16f200513f1f395d3126.454.19
b74d1c09cb380e1d03a6f859c6f4224b3126.454.26
0ebd97a106433a45a4aebe57c17997783126.454.48
f67efa3f0b6761102a7f8c6b7b571f5d3126.454.48
dbc22125167c298ef99da25668e1011f406266.404.27
c3cfdc648177fdbbbb35635a37472c53283186.364.47
cce6ab8d1682639fe45ab70234f1665f6346.354.41
01fdefa7697d26ad920e9e0346d4bd1b12886.254.35
b372ee768ed69e46ca8cdbd267aa7a386446.254.44
4c498c7345e89aebad651544829beca04836.254.29
6481e96574816ead57975da2c0f6d80d3226.254.19
f62d7bb4a613ec758a03342d46b580b23226.254.38
3340ef1913fb70d28420f6ceb685c3393226.254.38
37515688008a7a40ac93e3b2e4ab203f226146.194.09
6c7d50c24b3ccd2fd83b44d8bb34e07311376.194.39
ef0ace09169ac090589d85746e3e036f13086.154.37
8ae520247981aa06bc94abddf5f46d346546.154.45
1caf283236cd69af44cbc09a0a1e7d323326.064.61
974cf2cb8f4b7add98709c30df02fe103326.064.06
cc5a78bbad32776dc4e3af205218368c3326.064.45
7722b1df1b0e383e000397b2c11e3e1913386.024.23
8bb48dc19fccaa8613b6229bf7f452a28356.024.42
ba143b05f0110f0dc71ad71b4466ce928455.954.42
4371b634e0efc0e22b09b52907d9d46910165.944.45
cb8bcce248bb1fb274ba762d8b9714566845.884.25
33ac3e28642ab8bda860a2f693000e783425.884.21
b90e891671cffd9557f33a97dc5236453425.884.32
37be5a7c751166fbc5f8ccba4119e043273165.864.30
42b729f859728f5079499127a9c2ef3710365.834.47
624f4ece8da4aafb77699233d480f8ef5235.774.25
33a6f4b1e7cdc205511e76ba1b6e018610565.714.41
e49c26c3edfa46d227d5121a6b6e4d373525.714.46
6338ea67c41078a46ad99cc0096549563525.714.60
11305281b50fff20ae8bb473f8e118763525.714.31
42bde9fef835393bb8a8849cb6b7f2453525.714.26
65febd49fd28ec955651299159b1f5273525.714.31
9646c3513289980f17226a2fc4720dbd7145.634.35
527801b552d0077ffd170872eb49683b8955.624.47
70eea00b476a314817cefde4aad4f89a10765.614.38
a3a38f4affed601eb87a97788c949667251145.584.39
d6b1ce66b035a475f00c017792ff97699055.564.28
827f8f69dfa529c561901c4f2e0f332f9055.564.28
d9a84e1403de8da0c3aa531d6d108ba65435.564.52
32b8764b4ef628b53608fc34011fcc133625.564.19
5cbbd5a299cab112b7bf23862255e43e3625.564.28
240b9776d844d37535668549a396af323625.564.28
325f3178fb58e2a9778334621eecdbf93625.564.44
4b9750c8ad28220fe6702d4ecb7c898f218125.504.36
0ea22c1cfbdc755f86b9b54b39c16043239135.444.40
718539d38d07dd351c76db862760e2e23725.414.43
14d7985702e72162bbf13b6beb74ab2b3725.414.32
ff063b022a9a0aab91bad2c9088760b79355.384.31
17f51e7198701186712e53a39c5646175635.364.34
1a3df491d1c4f1589fc2b934ada68bf216995.334.18
7f7b8245c336066a1f9933c359f11d773825.264.55
ba90964cff9b9e0e6f32b23b82465f7b3825.264.42
c864036feaab8c1659f65ea4faebe1da5835.174.57
1eade46fba20122dc4aefb379f8c636b5835.174.43
5a93f3ab0ef4c84ed5e1b5dbf23978bc3925.134.69
0bae85eb84b9fb3bd773911e89288d5413775.114.27
c003204e1ab016dfa150abc119207b24196105.104.45
5c243662ce92d84573bfaff24c3e37009855.104.14
d1b65fc7debc3361ea86b5f14c68d2e24025.004.55
83deb69e889cf80f82be1dc6d5f2d4864025.004.33
d57e18d5f73c7ccb7f7339b61166898d6134.924.43
fa40cc5b934574b62717c68f3d678b6d306154.904.46
87142160b41353c4e5fca2360caf6f92307154.894.41
080102cd0a76b09e0dcf55fcacc60e0512464.844.48
cab85505710c7cb9b720bceb52b01cee207104.834.28
eeb6de78f79159600292e314a77cbd188344.824.45
aba1721a889e04decc910aa13b768ef48344.824.45
7299e27ed73d2ad986de7f7c77d919fa338164.734.42
02f623a8eb246f3c5f7c2f96462654e64324.654.56
febab0275244b9a49a623f0bd613ca2f10954.594.38
289cdb325fb7e7f891c38608bf9e096211054.554.57
b8bc237ba3788b23da09c0f1f3a3288c4424.554.32
f680f85bee2d253556ac91be391d2c824524.444.33
c66dccfb3f109511246da627dd5a24984524.444.49
abcd2cb37d46c2c8fb1bf071c859fc5b4524.444.24
fc906263ca5083d09dce42fe0224780011554.354.36
d50d79cb34e38265a8649c383dcffd486934.354.25
9de4643a8dbde634fe55621059d922737234.174.53
0ef83d7d83ed97cd2a0049ac8be5f88a4824.174.17
4da0e408c99d2fdc2126dc9fce5180604824.174.46
9d4db00d65d7760644ac0c14edb5fd869744.124.52
25cf099de44674fde97473224f9d59ab4924.084.33
729b2d09b2a0bdab221076327f13d0504924.084.49
994f04b3718c2bab35c2adfa8afecd2a4924.084.24
08633c14ef2db992c11f840f04fad4cd7434.054.36
282f23a9769b2690c5dda22e316f99419944.044.33
66922902710d126a0e7d26b0e380510615064.004.47
516e7738bd8f735ac19a010ee5450d8d7633.954.57
1f9ab4708f3056ede07124aad39a25547733.904.08
d921b68bf747894be13a97ae52b0f3867733.904.56
e24fc9fcd865784fb25705606fe3dfe75223.854.17
1bb2bdb95f4841f1bba2c0d2cd83d3c98133.704.64
a5cba26a62b8b4d0145b68b841e62e7f5423.704.52
2dee2ce60de9709b1a24083217181a1f5423.704.43
8a130737016f838139d31878787a39c95623.574.45
02f5837340d7eb4f653d676c7256523a3013.334.83
6061155addc1e54b4cfb51c1c2a32ad83013.334.67
116ccb1a1604bc88e4d234a8c23f33de6123.284.64
aced59e9b31ef866a94f9e7f29d8d4186223.234.42
cd6efc47efaabf134f8bdb654e10b4f13113.234.48
57e632711dec9ec14ca7546769483e7e6323.174.40
d566c37fa119d5e66c4e9052e83ee4ea6423.134.69
834f3294fba9f932f56edc879193f9256623.034.55
643214e62b870443ccbe55ab29a4dccf6623.034.41
5b925e1d006e9476d738aa200751b73b6623.034.61
0ffa40d54288e4f3499b8780dd0f144f3313.034.48
d13e50eaa47b4cbe9eb81465865d8cfc6722.994.82
ac3508719a1d8f5b7614b798f70af13610132.974.60
12b9676b00f60f3b700e83af21824c0e13542.964.51
a3dd39f583bc80bd8c5901c95878921e3512.864.60
fde0cc9ea29c8ccfc0a2c22256a58c713512.864.49
f181738b150df1f37cb0bd72e705b1933612.784.47
113e3a788b935f48aad63e1c41dac1bd3812.634.32
55a5b51f93f2b70ea513f5a047b0262a3912.564.46
d23019c84ffae2d5ef2270367b8605fc3912.564.41
744dac408745240a2c2528fb1b6028f37922.534.57
3f3486b61f45078d4f31ee5e43d8c5bb4012.504.33
6df688df543f90e9b38f4319e75a9d884012.504.58
23c38debaffe4a25a30fdbd9b586a13f4112.444.32
43f8c9950d11ecd03a0304a49e010da64112.444.56
c013e57c075a06e5b5c48ee03c5257194212.384.24
0ed6ce5d87fd9c69eaacaeb778d672354212.384.40
dbd66278cbfe1aa1000f90a217ca46958522.354.51
33a17d60c64393351ebf1ef860f4e0f24312.334.63
bccf933e006e9b94a6184af782963e774412.274.52
c68fb906c8f4b4b946d8386bfa6e54674412.274.52
b410bdd36d5db7a65dcd42b7ead933b84412.274.66
2ff97219cb8622eaf3cd89b7d9c098248922.254.53
83e197e95a1bbabc8c75e883ed016c474712.134.81
3504c0cb71d7fa48d967e0e4c94d59d95311.894.42
d9bd94811c3338dceb4181f3dbc0c73e5411.854.80
cc63f0dd2acba93ffed4fe9f8e0321fa5511.824.09
376a891762bbdecbc02b4b6adec3fdda5711.754.67
e882b2a25a10b9c057cc49695f222c195911.694.59
709e16e2b25c7474d980076c6bfc48066011.674.52
6cd68b3ed6d59aaa9fece558ad360c0a14921.344.46
080199a181c46c657dc5aa235411be3b7811.284.62
725c32fa80c2faacc4fc88450d27314e8711.154.52
0509040ea3fe50071181bbc359eb77388711.154.48
acce39e832338debb07b02385cde59674400.004.48
5f3ae9136c875522250f8184f253413a4200.004.64
34d1ca11b242c0fee2c834ae8d7885663900.004.33
594f9aaa48e5bf431f011ddc5669b0d53700.004.59
31561f325664a8a7aba4c8d0c3a9b3db3600.004.64
3586b8580d9c917874e053a1bb37b5ff3600.004.42
bf3c6d2a28b2b5501e6c15448982dcc93500.004.60
bd4889b5e9133b35b66e42a8665cea5c3500.004.40
48efc9d94a9834137efd9ea76b065a383300.005.00
fad44952713764836814be105382aee53200.004.28
18e694e0e48ed6f7aa3f24aade5fd6973100.004.68
0bb738e4d789e63e2267697c42d35a2d3000.004.60

Insights:

  • The 50% Failure Rate: Seller 1ca7077d... is the most toxic vendor in this dataset. Out of 114 total orders, 61 resulted in a 1-star review. Failing over 53% of the time is catastrophic for platform trust and drags their average rating down to an abysmal 2.33.
  • High Volume, High Damage: Seller 2eb70248... is a major platform risk. They process a solid amount of volume (199 orders) but maintain a 40.2% failure rate, dumping 80 one-star reviews into the marketplace.
  • The Volume Trap: Seller 7c67e144... sits lower on the percentage list with a 19.9% failure rate. However, because they handled 982 orders, they single-handedly created 196 one-star reviews. Even a “moderate” failure rate becomes a massive issue at high volumes.
  • Perfect Sellers Exist: At the very bottom of the dataset, we can see a group of sellers (like 48efc9d9... and 18e694e0...) who processed 30+ orders with exactly zero 1-star reviews. This proves that maintaining a flawless customer experience is completely possible on Olist.
  • Root Causes of 1-Star Reviews: Connecting this data to our previous geographic and logistics analysis, the most common issues driving these negative reviews are highly predictable. Customers leave 1-star ratings when packages arrive more than 3 days past the estimated delivery window, when sellers take over a week just to dispatch the item, or when products (especially in categories like men’s clothing or office furniture) are poor quality and fail to match their online descriptions.
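The "toxic seller" logic described above — a seller is a platform risk either because its 1-star *rate* is high or because its absolute 1-star *count* is large even at a moderate rate — can be sketched in plain Python. The thresholds (25% rate, 50 reviews) are illustrative choices, not values from the analysis; the sample rows reuse the truncated seller IDs and counts quoted in the insights.

```python
# Sketch: flag high-risk sellers from per-seller review stats.
# Rows are (seller_id, total_orders, one_star_reviews); IDs are
# truncated as in the text, counts taken from the insights above.
rows = [
    ("1ca7077d...", 114, 61),   # ~53% failure rate
    ("2eb70248...", 199, 80),   # ~40% failure rate
    ("7c67e144...", 982, 196),  # ~20% rate, but huge absolute damage
    ("48efc9d9...", 33, 0),     # flawless seller
]

def risk_flags(rows, rate_threshold=0.25, count_threshold=50):
    """Flag a seller when its 1-star rate is high, OR when its
    absolute 1-star count is large despite a moderate rate."""
    flagged = []
    for seller_id, total, one_star in rows:
        rate = one_star / total
        if rate >= rate_threshold or one_star >= count_threshold:
            flagged.append((seller_id, round(rate * 100, 1), one_star))
    return flagged

for seller, pct, n in risk_flags(rows):
    print(f"{seller}: {pct}% 1-star rate, {n} one-star reviews")
```

Using two conditions instead of a single rate cutoff is what catches the "Volume Trap" seller: 7c67e144... would pass a pure percentage filter but still dumped 196 one-star reviews into the marketplace.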

Customer Review Impact

28. What is the effect of customer review answers on the average review score? Specifically, do products whose reviews receive responses (e.g., seller replies) tend to have higher average review scores than products whose reviews go unanswered? Additionally, what is the average response time to customer reviews, and how does it correlate with review scores?

Solution:

  • To analyze the effect of response times on review scores, the gap between when a review was created and when it was answered must first be calculated. This should be done in the first CTE by calculating the difference in hours between the creation date and the answer timestamp from the order reviews table. The data must also be filtered to only include records where a review score actually exists.

  • Next, those calculated response times should be segmented into logical buckets to help spot trends. A case statement must be used to categorize each response as fast, medium, slow, or very slow, based on the number of hours it took to reply. A specific category for reviews that never received a response must also be included to serve as a baseline for comparison.

  • Finally, the metrics must be aggregated based on these newly created response speed categories in the main query. The total number of reviews, the average response time, and the overall average review score should be calculated for each bucket. The distribution of extreme scores must also be captured by calculating the percentage of one-star and five-star reviews within each category. This final output will be used to determine if faster responses actually correlate with higher overall customer satisfaction and fewer negative ratings.

Show SQL Code
with review_response_times as (
    -- step 1: calculate the time it took for the customer to answer the review survey
    select 
        review_id,
        review_score,
        review_creation_date,
        review_answer_timestamp,
        -- calculating response time in hours to measure the gap
        datediff(hour, review_creation_date, review_answer_timestamp) as response_time_hours
    from 
        order_reviews
    where 
        review_score is not null
),

response_buckets as (
    -- step 2: segment the response times into logical buckets to spot trends
    select 
        review_id,
        review_score,
        response_time_hours,
        case 
            when review_answer_timestamp is null then 'No Response Recorded'
            when response_time_hours <= 24 then '1. Fast (Under 24 Hours)'
            when response_time_hours <= 72 then '2. Medium (1 - 3 Days)'
            when response_time_hours <= 168 then '3. Slow (3 - 7 Days)'
            else '4. Very Slow (Over 1 Week)'
        end as response_speed_category
    from 
        review_response_times
)

-- final output: aggregate the scores based on how quickly the survey was answered
select 
    response_speed_category,
    count(review_id) as total_reviews,
    cast(avg(response_time_hours * 1.0) as decimal(10,2)) as avg_response_time_hours,
    cast(avg(review_score * 1.0) as decimal(5,2)) as avg_review_score,
    -- getting the distribution of extreme scores for correlation analysis
    cast((sum(case when review_score = 1 then 1 else 0 end) * 100.0 / count(review_id)) as decimal(5,2)) as pct_1_star,
    cast((sum(case when review_score = 5 then 1 else 0 end) * 100.0 / count(review_id)) as decimal(5,2)) as pct_5_star
from 
    response_buckets
group by 
    response_speed_category
order by 
    response_speed_category;

Results:

response_speed_category | total_reviews | avg_response_time_hours | avg_review_score | pct_1_star | pct_5_star
1. Fast (Under 24 Hours) | 28072 | 18.90 | 3.94 | 15.76 | 56.19
2. Medium (1 - 3 Days) | 44618 | 46.06 | 4.16 | 9.53 | 59.16
3. Slow (3 - 7 Days) | 21683 | 100.95 | 4.13 | 9.68 | 57.33
4. Very Slow (Over 1 Week) | 4851 | 551.43 | 4.02 | 13.38 | 56.26

Insights:

  • The “Damage Control” Anomaly: The most surprising finding is that the fastest responses (under 24 hours) actually correlate with the lowest average score (3.94) and the highest 1-star rate (15.76%). This does not mean fast replies cause bad reviews. Instead, it shows that furious customers trigger immediate “damage control.” When a 1-star review drops, sellers rush to reply right away to apologize or fix the issue.
  • The “Happy Batch” Effect: Reviews that are answered within 1 to 3 days (Medium speed) boast the best average score (4.16) and the lowest 1-star rate (9.5%). It is highly likely that sellers do not feel an urgent need to reply to positive 5-star reviews immediately. Instead, they probably batch-process their “thank you” replies a few days later when they have free time.
  • Most Sellers are Active: The vast majority of reviews (over 72,000) are answered within 3 days. This shows that the platform’s seller base is generally very active and monitors their customer feedback closely.
  • The Cost of Neglect: When response times drag on for over a week (averaging 551 hours), the 1-star rate starts to climb back up to over 13%. This suggests that true neglect—where a seller is completely disengaged from their storefront—is linked to a generally poorer customer experience and lower product quality.
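The bucketing and aggregation from the SQL above can be mirrored in plain Python, which is handy for sanity-checking the category boundaries outside the database. This is a minimal sketch: the `reviews` rows stand in for the order_reviews table, and the timestamps and scores are made-up examples, not Olist data.

```python
from datetime import datetime

# (review_score, review_creation_date, review_answer_timestamp)
# Illustrative rows only -- one per response-speed bucket.
reviews = [
    (5, "2018-01-01 10:00:00", "2018-01-01 20:00:00"),  # 10h  -> Fast
    (1, "2018-01-02 08:00:00", "2018-01-04 08:00:00"),  # 48h  -> Medium
    (4, "2018-01-03 09:00:00", "2018-01-08 09:00:00"),  # 120h -> Slow
    (3, "2018-01-04 07:00:00", "2018-01-20 07:00:00"),  # 384h -> Very Slow
]

def bucket(hours):
    """Same thresholds as the CASE statement in the SQL query."""
    if hours is None:
        return "No Response Recorded"
    if hours <= 24:
        return "1. Fast (Under 24 Hours)"
    if hours <= 72:
        return "2. Medium (1 - 3 Days)"
    if hours <= 168:
        return "3. Slow (3 - 7 Days)"
    return "4. Very Slow (Over 1 Week)"

FMT = "%Y-%m-%d %H:%M:%S"
agg = {}
for score, created, answered in reviews:
    hours = (datetime.strptime(answered, FMT)
             - datetime.strptime(created, FMT)).total_seconds() / 3600
    agg.setdefault(bucket(hours), []).append(score)

for cat in sorted(agg):
    scores = agg[cat]
    print(f"{cat}: n={len(scores)}, avg_score={sum(scores)/len(scores):.2f}")
```

Note that the `None` branch comes first, just as in the SQL: a review with no answer timestamp would otherwise produce a null/undefined gap rather than falling into a time bucket.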

Conclusion

What We Did:

  • Mapped out the exact correlation between delivery delays and average review scores to find the customer tolerance threshold.
  • Categorized products into a matrix of delivery speed versus satisfaction (e.g., “Fast but Hated”, “Slow but Loved”).
  • Analyzed review polarization across different order volumes to understand how customers rate products.
  • Isolated the specific toxic sellers driving 1-star reviews and examined how response times correlate with ratings.

What We Obtained (Key Findings):

  • The 3-Day Late Cliff: Customers will forgive a 1-to-2 day delivery delay. However, once a package hits exactly 3 days late, the average rating permanently crashes below 3 stars. At 6 days late, the customer experience is completely unrecoverable (under 2.0 stars).
  • Quality Beats Speed: Fast shipping cannot save a bad product. Categories like IT accessories and male fashion arrive quickly but get terrible reviews due to likely quality or sizing issues. Conversely, customers happily wait 13+ days for specific high-value items like PCs or musical instruments without punishing the seller’s rating.
  • The Polarization Rule: Customers generally only leave a review if they absolutely love or completely hate an item. Roughly 68% of all ratings across the platform are either 1-star or 5-star, regardless of whether a product is a niche item or a massive bestseller.
  • Toxic Outliers Destroy Trust: A very small group of high-volume sellers is responsible for a massive chunk of negative feedback. The worst offender we found had an unacceptable 53% failure rate (1-star reviews).
  • The Damage Control Effect: Sellers reply to negative reviews immediately (under 24 hours) to put out fires and manage angry buyers. Positive 5-star reviews, on the other hand, are usually answered a few days later in batches.