
Rokslide Scope Drop Test Data

The Rokslide Drop Testing has prompted a lot of discussion from the community. However, a common criticism is that each test is a sample size of one and is not statistically significant. This is true, and the data set is not large enough to determine the failure rates of different scope models with any kind of statistical significance.

That said, at this point over 50 drop tests have been conducted and documented. The sample size has grown large enough that I was curious if there are brands that outperform others at a statistically significant level. I decided to do an analysis on how the brands compared to the general pool. The data set is a sample of the broad long range hunting scope market as a whole.

Nightforce and Tikka are known for doing well in the drop tests

Data Criteria

To gather data, I went through all of the drop test posts and created a dataset. This includes the brand of the scope, the model, and a percentage miss score. The score was determined by counting how many rounds did not touch the 1.5 inch target circle with their best edge, giving a miss percentage for each scope model through the drop test. Because a round counts as a hit if its best edge touches the circle, the effective ‘hit’ target is 1.808 inches across, given that the bullets are 0.308 inches in diameter.
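The scoring arithmetic above can be sketched in a few lines. This is Python rather than the R used in the article, and the shot counts in the example are made up for illustration:

```python
# Sketch of the scoring described above (illustrative values only).
TARGET_DIAMETER = 1.5    # inches, scored circle
BULLET_DIAMETER = 0.308  # inches

# A round "hits" if its best edge touches the circle, so the effective
# target is the circle diameter plus one bullet diameter.
effective_diameter = TARGET_DIAMETER + BULLET_DIAMETER  # 1.808 inches

def miss_percentage(shots_fired, shots_touching):
    """Miss score for one scope: percentage of rounds whose best edge
    never touched the 1.5-inch circle."""
    return 100.0 * (shots_fired - shots_touching) / shots_fired

print(effective_diameter)      # 1.808
print(miss_percentage(10, 8))  # 20.0
```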

This hit target is slightly larger than the typical 10 round group that the test rifle fires. However, a 10 round group understates the rifle's true dispersion; a larger 30-50 round group would be slightly bigger and better encapsulate the true cone of the rifle, so the larger target is reasonable. I did not count hits scored on the zero confirmation shots, and I only looked at the drop test portion of the scope eval. I also did not count results for scopes that had the rings torqued below 20 inch lbs and that shot significantly differently once the rings were retorqued to a higher specification.

The dataset is shown below:

Methodology

If you’re not a math guy, don’t worry too much about the methodology. I have put my rationale below for those interested. Basically, the test used assumes that the variable we are studying is normally distributed, and it works well with samples of different sizes and different variances.

All computations were done using R. For a test I decided to use Welch’s two sample t-test. A Z-test is not appropriate here because the sample sizes of the brand groups are too small. A Student’s t-test could be used, but it assumes equal variance between the samples. In practical terms, that means assuming the scope-to-scope variability is the same for every manufacturer, which I find unlikely. Welch’s t-test accounts for a difference in variance between samples and is robust to different sample sizes, which we have here. Welch’s t-test does assume the variable is normally distributed, so I tested the normality of the dataset with a Shapiro-Wilk normality test, which is below.
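For readers who prefer code, the brand-versus-rest comparison can be sketched with scipy's implementation of Welch's t-test. This is Python rather than the article's R, and the miss scores below are invented for illustration, not the article's data:

```python
# Illustrative Welch's two-sample t-test on hypothetical miss scores.
from scipy import stats

brand_miss = [0.0, 10.0, 0.0, 5.0]               # one brand's scopes
rest_miss = [20.0, 35.0, 0.0, 50.0, 15.0, 40.0]  # everything else tested

# equal_var=False selects Welch's t-test, which does not assume the two
# groups share a variance and tolerates unequal sample sizes.
t_stat, p_value = stats.ttest_ind(brand_miss, rest_miss, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

The equivalent call in R would be `t.test(brand, rest)`, which uses the Welch correction by default.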

The p-value of the Shapiro-Wilk test is 0.002451. A small p-value here is evidence against normality: since 0.002451 is well below 0.05, we reject the null hypothesis that the variable is normally distributed, meaning the miss scores depart somewhat from a normal distribution. Welch’s t-test is reasonably robust to moderate departures from normality, but this is a caveat to keep in mind when reading the results.
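A Shapiro-Wilk check like this one can be run with scipy as a quick sketch. Again this is Python rather than R, and the miss scores are hypothetical:

```python
# Illustrative Shapiro-Wilk normality check (hypothetical data).
from scipy import stats

miss_scores = [0.0, 0.0, 5.0, 10.0, 10.0, 20.0, 30.0, 45.0, 60.0, 100.0]

w_stat, p_value = stats.shapiro(miss_scores)

# A *small* p-value (e.g. below 0.05) rejects the null hypothesis that
# the data are normally distributed; a large one is consistent with it.
if p_value < 0.05:
    print(f"p = {p_value:.4f}: evidence against normality")
else:
    print(f"p = {p_value:.4f}: consistent with normality")
```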

Testing

Next, I run a series of tests comparing each brand of scope to the broader market to see whether that brand performs better or worse on the drop test than the market as a whole. Due to sample size I can only analyze some of the brands; brands like Revic have only one scope tested, which is not enough for an analysis.

I am going to set the significance level (alpha) at 0.10, meaning I will call results significant if there is over 90 percent confidence in them. This threshold varies by application, with 0.05 being fairly common. I’m going with 0.10 because the choice is fairly arbitrary, and I publish the exact confidence level for each test below anyway.
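The per-brand testing procedure described above can be sketched as a loop: skip brands with too few scopes, run Welch's test against everyone else, and compare the p-value to alpha. The brand names and scores here are a toy dataset, not the article's, and this is Python rather than R:

```python
# Sketch of the per-brand testing loop on a made-up dataset.
from scipy import stats

data = {
    "BrandA": [0.0, 5.0, 10.0, 0.0],
    "BrandB": [40.0, 60.0, 30.0],
    "BrandC": [20.0],  # only one scope tested: too few to analyze
}
ALPHA = 0.10  # significance level used in the article

for brand, scores in data.items():
    rest = [s for b, v in data.items() if b != brand for s in v]
    if len(scores) < 2:
        print(f"{brand}: skipped (only {len(scores)} scope tested)")
        continue
    t, p = stats.ttest_ind(scores, rest, equal_var=False)
    verdict = "significant" if p < ALPHA else "not significant"
    print(f"{brand}: t = {t:.2f}, p = {p:.4f} ({verdict})")
```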

Nightforce

First up is Nightforce. The null hypothesis is that there is no difference in durability on the drop test between Nightforce scopes and the rest of the scopes on the market. The alternative hypothesis is that Nightforce scopes exhibit a statistically significant difference in durability from the market average.

The results show that Nightforce scopes perform better on the drop test than the rest of the scopes tested, and the difference is statistically significant at 91 percent confidence.

Trijicon

Next up is Trijicon.

Trijicon scopes also perform better on the drop test than the broader market average, a statistically significant result at 99.9 percent confidence.

SWFA

SWFA scopes also perform better on the drop test than the broader market average, a statistically significant result at 99.8 percent confidence.

Leupold

Leupold scopes do not perform differently on the drop test from the broader market at a statistically significant level.

Vortex

Vortex does not perform differently from the broader market at a statistically significant level. Note that the average miss rate for Vortex scopes is much higher than the overall average, but with a sample of only two scopes tested, we do not have the evidence to say they are significantly different from the broader market.

Zeiss

Zeiss does not perform differently from the broader market at a statistically significant level. However, note the p-value of 0.1078. This means that Zeiss is very close to being statistically significantly worse than the average performance on the drop tests. However, given the dataset as it currently stands this is not enough to reject the null hypothesis that there is no difference.

Maven

Maven scopes are not statistically significantly different, from a drop test standpoint, from the broader market of tested scopes.

Concluding Thoughts

In conclusion, the top 5 performing brands were Revic, Minox, Trijicon, SWFA, and Nightforce in that order. Of the top 5, only Trijicon, SWFA, and Nightforce have enough data to perform an analysis and detect a statistically significant improvement over the drop test performances of other scopes tested.

The bottom 5 performing brands were Vortex, Bushnell, Vector, Primary Arms, and Maven. None of these optics exhibited statistically significantly worse performances than the overall field of scopes tested.
