
Our 9-Dimension Security Assessment Methodology

Comprehensive, transparent, evidence-based security intelligence for 8,131 SaaS applications. Learn exactly how we calculate security grades and confidence scores using our weighted 9-dimension framework.

Framework Overview

We assess SaaS application security across 9 dimensions using a weighted scoring model. Each dimension receives a score (0-100) and a confidence level (0.0-1.0); the dimension scores are then combined into an overall security grade (A+ through F) using lenient, percentile-based thresholds.

Our methodology prioritizes transparency over marketing. We show exactly how scores are calculated, cite all evidence sources, and clearly label confidence levels. When vendor-provided information is unavailable, we display "Insufficient Evidence" rather than fabricating data. This "Boss Test" approach to quality ensures our assessments are trustworthy for enterprise procurement decisions.

The 9th dimension—AI Integration Security—is our industry-first differentiator, assessing whether SaaS applications are safe for AI agent integration (GitHub Copilot, Claude Code, Cursor). This 12-18 month competitive moat addresses the critical need to prevent data exfiltration at machine scale as AI coding assistants become standard development tools.

9 Dimensions with Weighted Scoring

  • AI Integration Security (NEW): 5%
  • Breach History: 20%
  • Encryption: 15%
  • Compliance & Certifications: 15%
  • Authentication & Access: 15%
  • Data Privacy: 10%
  • Incident Response: 10%
  • Vendor Transparency: 5%
  • Security Certifications: 5%

Why Weighted Scoring? Different security dimensions have different impacts on risk. Breach History (20%) carries the highest weight because past breaches are the strongest predictor of future incidents. AI Integration Security (5%) has lower weight as it's a new, emerging risk category. Authentication and Encryption (15% each) are foundational security controls that prevent the majority of attacks.

Dimension Details

Each dimension is evaluated using specific criteria derived from industry best practices (NIST CSF, CIS Controls, ISO 27001, OWASP Top 10). Here's exactly what we assess and how scores are calculated.

AI Integration Security (NEW - 9th Dimension)

5% of overall score

Assesses whether the application is safe for AI agent integration using Anthropic's MCP (Model Context Protocol) standard. Dual scoring: AI Readiness (50%) evaluates integration capability, and AI Security (50%) evaluates safety and data protection.

Evaluation Criteria:

  • MCP server availability (official/community)
  • OAuth 2.0 support and scope-based permissions
  • API documentation quality and code examples
  • Data privacy policies for API calls
  • Observability and monitoring capabilities
  • Rate limiting and anomaly detection
  • Audit logging for AI agent actions

Scoring Methodology:

Dual scoring: AI Readiness (50%) + AI Security (50%). Official vendor MCP servers score highest. OAuth 2.0 with granular scopes is critical. Comprehensive API docs reduce integration risks.
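As a minimal sketch of the 50/50 split (the sub-score values and function name below are illustrative, not our production implementation):

```python
def ai_integration_score(ai_readiness: float, ai_security: float) -> float:
    """Combine the two 0-100 sub-scores with equal weight (50% each)."""
    return 0.5 * ai_readiness + 0.5 * ai_security

# Example: an official MCP server with strong API docs (high readiness) paired
# with weaker monitoring and anomaly detection (lower security) averages out.
print(ai_integration_score(ai_readiness=90, ai_security=60))  # 75.0
```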

Breach History

20% of overall score

Evaluates the application's historical security incidents, breach transparency, and remediation effectiveness. Highest weight because past breaches are the strongest predictor of future incidents.

Evaluation Criteria:

  • Number of confirmed breaches (past 5 years)
  • Severity and scope of breaches (user count, data types)
  • Time to disclosure after incident
  • Quality of remediation actions taken
  • Current breach-free streak length
  • Transparency of incident reporting
  • Post-mortem analysis publication

Scoring Methodology:

0 breaches = 100 points, 1 breach = 80 points, 2+ breaches = weighted penalty based on severity. Recent breaches (< 2 years) weighted more heavily. Transparent disclosure adds +5 points.
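To make the penalty structure concrete, here is a minimal sketch in Python; the exact penalty magnitudes and recency multiplier are illustrative assumptions, not the production weights:

```python
def breach_history_score(breaches: list[dict]) -> float:
    """Illustrative breach-history scoring following the rules above.

    Each breach is a dict such as:
      {"years_ago": 1.5, "severity": 0.7, "transparent": True}
    where severity is normalized to 0.0-1.0.
    """
    if not breaches:
        return 100.0                          # 0 breaches = 100 points
    if len(breaches) == 1:
        score = 80.0                          # 1 breach = 80 points
    else:
        score = 80.0
        for b in breaches:                    # 2+ breaches: severity-weighted penalty
            recency = 1.5 if b["years_ago"] < 2 else 1.0  # recent breaches weigh more
            score -= 15.0 * b["severity"] * recency       # assumed penalty magnitude
    if all(b.get("transparent", False) for b in breaches):
        score += 5.0                          # transparent disclosure bonus
    return max(0.0, min(100.0, score))
```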

Encryption

15% of overall score

Assesses encryption implementation for data at rest and in transit. Critical for protecting sensitive data from unauthorized access.

Evaluation Criteria:

  • TLS 1.3+ for data in transit
  • AES-256 or equivalent for data at rest
  • End-to-end encryption availability
  • Key management practices (rotation, storage)
  • Certificate validity and configuration
  • Perfect Forward Secrecy support
  • Encryption of database backups

Scoring Methodology:

TLS 1.3 = 100 points, TLS 1.2 = 80 points, TLS 1.1 or below = 40 points. Data at rest encryption required for 80+ score. E2E encryption adds +15 points.
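A minimal sketch of how these tiers combine; the specific cap used to enforce the at-rest requirement is an assumed interpretation:

```python
def encryption_score(tls_version: str, at_rest: bool, e2e: bool) -> float:
    """Illustrative encryption scoring based on the tiers above."""
    base = {"1.3": 100.0, "1.2": 80.0}.get(tls_version, 40.0)  # TLS 1.1 or below -> 40
    score = base + (15.0 if e2e else 0.0)                      # end-to-end encryption bonus
    if not at_rest:
        score = min(score, 79.0)   # at-rest encryption is required to reach 80+
    return min(score, 100.0)

print(encryption_score("1.3", at_rest=True, e2e=False))  # 100.0
print(encryption_score("1.2", at_rest=False, e2e=True))  # 79.0 (capped: no at-rest encryption)
```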

Compliance & Certifications

15% of overall score

Evaluates third-party security certifications and compliance with regulatory frameworks. Strong indicator of mature security practices.

Evaluation Criteria:

  • SOC 2 Type II attestation (current within 1 year)
  • ISO 27001 certification
  • GDPR compliance documentation
  • HIPAA compliance (for healthcare apps)
  • FedRAMP authorization (for government apps)
  • PCI DSS compliance (for payment apps)
  • CSA STAR certification level

Scoring Methodology:

SOC 2 Type II = 40 points, ISO 27001 = 30 points, GDPR = 15 points, HIPAA/FedRAMP/PCI = 10 points each, CSA STAR Level 2 = 5 points. Maximum 100 points.
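This dimension, like the point-based dimensions that follow (Authentication & Access, Data Privacy, Incident Response, Vendor Transparency, Security Certifications), reads as an additive checklist capped at 100 points; a minimal sketch, with the dictionary keys as shorthand labels:

```python
COMPLIANCE_POINTS = {
    "SOC 2 Type II": 40, "ISO 27001": 30, "GDPR": 15,
    "HIPAA": 10, "FedRAMP": 10, "PCI DSS": 10, "CSA STAR Level 2": 5,
}

def compliance_score(verified_certs: set[str]) -> float:
    """Sum the points for each verified certification, capped at 100."""
    return float(min(100, sum(COMPLIANCE_POINTS.get(c, 0) for c in verified_certs)))

print(compliance_score({"SOC 2 Type II", "ISO 27001", "GDPR"}))  # 85.0
```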

Authentication & Access

15% of overall score

Assesses identity and access management capabilities including SSO, MFA, and user provisioning. Critical for preventing unauthorized access.

Evaluation Criteria:

  • SSO support (SAML 2.0, OAuth 2.0, OpenID Connect)
  • Multi-factor authentication (MFA) availability
  • SCIM 2.0 provisioning for automated user management
  • Role-based access control (RBAC) granularity
  • Session management and timeout policies
  • Password strength requirements
  • Just-in-time (JIT) provisioning support

Scoring Methodology:

SSO (SAML/OIDC) = 30 points, MFA required = 25 points, SCIM = 20 points, Granular RBAC = 15 points, Session mgmt = 10 points. Maximum 100 points.

Data Privacy

10% of overall score

Evaluates data handling practices, retention policies, and user privacy controls. Essential for GDPR/CCPA compliance.

Evaluation Criteria:

  • Data residency options (geographic storage control)
  • Data retention and deletion policies
  • User data export capabilities (GDPR Article 20)
  • Right to be forgotten implementation (GDPR Article 17)
  • Data processing agreements (DPA) availability
  • Sub-processor transparency
  • Privacy policy clarity and completeness

Scoring Methodology:

Data residency options = 25 points, GDPR export/deletion = 25 points, DPA available = 20 points, Sub-processor list = 15 points, Clear privacy policy = 15 points.

Incident Response

10% of overall score

Assesses the organization's preparedness to detect, respond to, and recover from security incidents.

Evaluation Criteria:

  • Dedicated security team existence
  • Bug bounty program (HackerOne, Bugcrowd, etc.)
  • Incident notification SLA (time to notify customers)
  • Security advisory publication process
  • Post-mortem transparency
  • 24/7 security monitoring
  • Disaster recovery plan documentation

Scoring Methodology:

Bug bounty program = 30 points, Security team = 25 points, Notification SLA < 72 hours = 20 points, 24/7 monitoring = 15 points, Public post-mortems = 10 points.

Vendor Transparency

5% of overall score

Evaluates the vendor's willingness to share security information publicly through trust centers and documentation.

Evaluation Criteria:

  • Public trust center or security page
  • Security whitepaper availability
  • Compliance report sharing (SOC 2, ISO 27001)
  • Penetration test summary publication
  • Infrastructure security documentation
  • Third-party audit results disclosure
  • Responsiveness to security questionnaires

Scoring Methodology:

Public trust center = 40 points, Security whitepaper = 25 points, Compliance reports shared = 20 points, Pentest summaries = 10 points, Quick questionnaire response = 5 points.

Security Certifications

5% of overall score

Evaluates industry-specific security certifications beyond standard compliance (ISO/SOC 2). Indicates advanced security maturity.

Evaluation Criteria:

  • CSA STAR Level 2 or 3 certification
  • HITRUST CSF certification (healthcare)
  • StateRAMP authorization (state government)
  • IRAP assessment (Australian government)
  • Cyber Essentials Plus (UK)
  • TISAX (automotive industry)
  • Common Criteria EAL certification

Scoring Methodology:

Industry-specific cert (HITRUST, StateRAMP, IRAP) = 40 points, CSA STAR Level 3 = 30 points, Cyber Essentials Plus = 20 points, Common Criteria = 10 points.

Scoring Algorithm

Our scoring algorithm combines dimension scores using weighted averaging. Each dimension contributes to the overall score based on its percentage weight. The result is a number from 0-100, which is then converted to a letter grade using lenient percentile-based thresholds.

Overall Score Calculation

overall_score = Σ(dimension_score × dimension_weight)

Example:

  AI Integration Security  (75 × 5%)  =  3.75
  Breach History           (85 × 20%) = 17.00
  Encryption               (90 × 15%) = 13.50
  Compliance               (80 × 15%) = 12.00
  Authentication           (85 × 15%) = 12.75
  Data Privacy             (75 × 10%) =  7.50
  Incident Response        (70 × 10%) =  7.00
  Vendor Transparency      (80 × 5%)  =  4.00
  Security Certifications  (60 × 5%)  =  3.00
  -------------------------------------------
  Overall Score                       = 80.50
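The same calculation as a minimal, runnable sketch (dictionary keys are shorthand labels, not API fields):

```python
WEIGHTS = {
    "ai_integration_security": 0.05, "breach_history": 0.20, "encryption": 0.15,
    "compliance": 0.15, "authentication": 0.15, "data_privacy": 0.10,
    "incident_response": 0.10, "vendor_transparency": 0.05,
    "security_certifications": 0.05,
}

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Weighted average of the nine dimension scores (each 0-100)."""
    return sum(dimension_scores[d] * w for d, w in WEIGHTS.items())

example = {
    "ai_integration_security": 75, "breach_history": 85, "encryption": 90,
    "compliance": 80, "authentication": 85, "data_privacy": 75,
    "incident_response": 70, "vendor_transparency": 80, "security_certifications": 60,
}
print(round(overall_score(example), 2))  # 80.5
```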

Confidence Score

overall_confidence = average(dimension_confidence)

Confidence levels:

  1.0 = Verified from primary source (vendor API, official certification)
  0.8 = Verified from secondary source (public docs, trust center)
  0.6 = Inferred from indirect evidence
  0.4 = Limited evidence available
  0.2 = Insufficient evidence; score is estimated
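The confidence roll-up is a plain mean of the per-dimension confidence values; a sketch with illustrative inputs:

```python
def overall_confidence(dimension_confidences: list[float]) -> float:
    """Simple mean of the nine per-dimension confidence levels (0.0-1.0)."""
    return sum(dimension_confidences) / len(dimension_confidences)

# Example: mostly primary/secondary sources, one dimension with limited evidence.
print(round(overall_confidence([1.0, 0.8, 0.8, 0.6, 1.0, 0.8, 0.6, 0.4, 0.8]), 2))  # 0.76
```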

Grade Thresholds (Lenient Percentile-Based)

Critical: NOT Traditional Academic Grading

We use lenient, percentile-based grading with thresholds 15-40 points lower than traditional academic grading (where 90+ earns an A). An A grade means "Top 10% security" (a score of 60+), NOT "90%+ of perfect security." This reflects the reality that near-perfect scores (90-100) are unrealistic for most SaaS applications. Our stringent scoring algorithm makes it difficult to earn high raw scores, so the grade thresholds compensate to ensure fair competitive comparison.

Grade   Min. Score   Percentile / Tier   Description
A+      70+          Top 5%              Exceptional Security
A       60+          Top 10%             Excellent Security
B+      55+          Top 15%             Very Good Security
B       50+          Top 25%             Good Security
C+      45+          Top 40%             Above Average
C       40+          Average             Meets Baseline
D+      35+          Below Average       Needs Improvement
D       30+          Poor                Significant Gaps
F       0+           Critical            Critical Deficiencies

Example: An application scoring 65/100 receives an A grade (Excellent Security, Top 10%). Under traditional academic grading, 65 would be a near-failing D. Our lenient thresholds reflect that a 65/100 on our stringent scoring algorithm represents an excellent security posture compared to industry peers.
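The threshold table maps to a straightforward lookup; a minimal sketch:

```python
GRADE_THRESHOLDS = [
    (70, "A+"), (60, "A"), (55, "B+"), (50, "B"), (45, "C+"),
    (40, "C"), (35, "D+"), (30, "D"), (0, "F"),
]

def letter_grade(score: float) -> str:
    """Map a 0-100 overall score to a letter grade using the lenient thresholds."""
    for threshold, grade in GRADE_THRESHOLDS:
        if score >= threshold:
            return grade
    return "F"

print(letter_grade(80.5))  # A+
print(letter_grade(65))    # A
```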

Interactive Grade Calculator

Experiment with the scoring algorithm below. Adjust dimension scores to see how the weighted calculation produces the overall security score and letter grade. Notice how Breach History (20% weight) has more impact than any other single dimension.

Worked example (all nine dimensions set to 70):

Overall Score = (70 × 5%) + (70 × 20%) + (70 × 15%) + (70 × 15%) + (70 × 15%)
              + (70 × 10%) + (70 × 10%) + (70 × 5%) + (70 × 5%)
              = 3.5 + 14.0 + 10.5 + 10.5 + 10.5 + 7.0 + 7.0 + 3.5 + 3.5
              = 70.0

Result: 70/100 earns an A+ (Exceptional Security, Top 5%).

32 Enrichment Data Sources

We gather security intelligence from 32 authoritative sources spanning compliance databases, breach trackers, technical scanners, threat intelligence platforms, and AI-specific registries. Multiple sources ensure high confidence and cross-validation of security claims.

Review Platforms

  • G2: User reviews, feature comparisons, security feedback
  • Capterra: Software reviews and ratings

Compliance Databases

  • CSA STAR: Cloud Security Alliance STAR Registry certifications
  • SOC 2 Registry: SOC 2 Type I/II attestation reports
  • ISO 27001 Database: ISO 27001 certified organizations
  • FedRAMP Marketplace: Federal authorization status for government use

Breach Databases

  • Have I Been Pwned: Historical breach data and compromised accounts
  • BreachDirectory: Data breach aggregation and analysis
  • Privacy Rights Clearinghouse: Chronology of data breaches

Public Documentation

  • Vendor Trust Centers: Official security pages and whitepapers
  • Security.txt: Vulnerability disclosure policies

Bug Bounty Platforms

  • HackerOne: Bug bounty program data
  • Bugcrowd: Crowdsourced security testing programs
  • Intigriti: European bug bounty platform

Technical Scanners

  • SSL Labs: TLS/SSL configuration analysis
  • Security Headers: HTTP security header analysis
  • Observatory by Mozilla: Website security scanner

Threat Intelligence

  • VirusTotal: Malware and threat detection
  • URLhaus: Malicious URL tracking
  • AlienVault OTX: Open threat exchange platform

Infrastructure Intelligence

  • Shodan: Internet-connected device search engine
  • Censys: Internet asset discovery and monitoring
  • DNSdumpster: DNS reconnaissance and research

Domain Intelligence

  • WHOIS: Domain registration and ownership data
  • Certificate Transparency Logs: SSL/TLS certificate issuance monitoring

AI Integration

  • Anthropic MCP Registry: Official Model Context Protocol servers
  • GitHub MCP Community: Community-built MCP servers

API Documentation

  • OpenAPI Directories: Public API specifications and docs
  • Vendor API Docs: Official API reference documentation

Vulnerability Databases

  • CVE Database: Common Vulnerabilities and Exposures
  • NVD: National Vulnerability Database
  • Snyk Vulnerability DB: Open source vulnerability database

  • Total Enrichment Sources: 32
  • Source Categories: 12
  • Coverage Verification: 95%+

Source Verification Process: Each data source is evaluated for trustworthiness before inclusion. We prioritize primary sources (vendor APIs, official certifications) over secondary sources (news articles, user reviews). When sources conflict, we display both perspectives with confidence levels to show uncertainty rather than choosing one arbitrarily.

Update Frequency: Data sources are queried on different schedules based on change frequency. Critical security events (breaches, certificate expirations) trigger real-time updates. Compliance certifications are checked monthly. Static vendor information (company size, founding year) is verified quarterly.

Quality Assurance: The Boss Test

We apply the "Boss Test" quality framework to all assessments: every claim must be defensible if questioned by the most skeptical executive. This means zero fabrication, source citations for every data point, and transparent confidence scoring.

Zero Fabrication

When data is unavailable, we display "Insufficient Evidence" and lower the confidence score. We never fabricate scores or invent security features. A low confidence score (0.2-0.4) signals to buyers that additional vendor verification is needed.

Source Everything

Every claim cites its enrichment source (e.g., "SOC 2 Type II verified via CSA STAR Registry"). Users can verify our assessments by checking the same sources. This transparency builds trust and allows vendors to dispute specific claims with counter-evidence.

Confidence Transparency

Each dimension displays a confidence score (0.0-1.0) indicating our certainty in the assessment. High confidence (0.8-1.0) means multiple sources agree. Low confidence (0.2-0.4) means limited evidence. Buyers can weight high-confidence dimensions more heavily in procurement decisions.

Evidence-Based

All scores are derived from verifiable facts (published certifications, public breach disclosures, SSL scan results), not opinions or marketing claims. We prioritize objective evidence (SOC 2 Type II report dated 2024-03-15) over subjective assessments ("good security practices").

Limitations

What We Don't Assess:

  • Internal Security Controls: We cannot assess internal processes (employee security training, access review procedures) without vendor access or on-site audits. Our assessments focus on externally verifiable security posture.
  • Source Code Security: Proprietary codebases are not accessible for vulnerability analysis. We rely on vendor-disclosed penetration test results and bug bounty programs as proxies for secure development practices.
  • Physical Security: Data center physical security (biometric access, surveillance) is typically handled by cloud providers (AWS, Azure, GCP) and not directly verifiable for SaaS vendors.
  • Employee Background Checks: HR security practices (background checks, security clearances) are confidential and cannot be publicly verified.

Our assessments focus on publicly verifiable security intelligence that buyers can use for pre-purchase vendor evaluation. For deeper security validation, we recommend requesting vendor security questionnaires (VSQs) and SOC 2 Type II reports directly from shortlisted vendors.

Questions About Our Methodology?

We're committed to transparency. If you have questions about how we assess security or want to dispute an assessment, we're here to help.

Last Updated: November 21, 2025

Version: 2.0.0 (9-Dimension Framework with AI Integration Security)