Our 9-Dimension Security Assessment Methodology
Comprehensive, transparent, evidence-based security intelligence for 8,131 SaaS applications. Learn exactly how we calculate security grades and confidence scores using our weighted 9-dimension framework.
Framework Overview
We assess SaaS application security across 9 dimensions using a weighted scoring model. Each dimension receives a score (0-100) and a confidence level (0.0-1.0); these are then combined into an overall security grade (A+ through F) using lenient percentile-based thresholds.
Our methodology prioritizes transparency over marketing. We show exactly how scores are calculated, cite all evidence sources, and clearly label confidence levels. When vendor-provided information is unavailable, we display "Insufficient Evidence" rather than fabricating data. This "Boss Test" approach to quality ensures our assessments are trustworthy for enterprise procurement decisions.
The 9th dimension—AI Integration Security—is our industry-first differentiator, assessing whether SaaS applications are safe for AI agent integration (GitHub Copilot, Claude Code, Cursor). This 12-18 month competitive moat addresses the critical need to prevent data exfiltration at machine scale as AI coding assistants become standard development tools.
9 Dimensions with Weighted Scoring
Why Weighted Scoring? Different security dimensions have different impacts on risk. Breach History (20%) carries the highest weight because past breaches are the strongest predictor of future incidents. AI Integration Security (5%) has lower weight as it's a new, emerging risk category. Authentication and Encryption (15% each) are foundational security controls that prevent the majority of attacks.
Dimension Details
Each dimension is evaluated using specific criteria derived from industry best practices (NIST CSF, CIS Controls, ISO 27001, OWASP Top 10). Here's exactly what we assess and how scores are calculated.
AI Integration Security (NEW - 9th Dimension)
Assesses whether the application is safe for AI agent integration using Anthropic's MCP standard. Dual scoring: AI Readiness (50%) evaluates integration capability, AI Security (50%) evaluates safety and data protection.
Evaluation Criteria:
- ✓ MCP server availability (official/community)
- ✓ OAuth 2.0 support and scope-based permissions
- ✓ API documentation quality and code examples
- ✓ Data privacy policies for API calls
- ✓ Observability and monitoring capabilities
- ✓ Rate limiting and anomaly detection
- ✓ Audit logging for AI agent actions
Scoring Methodology:
Dual scoring: AI Readiness (50%) + AI Security (50%). Official vendor MCP servers score highest. OAuth 2.0 with granular scopes is critical. Comprehensive API docs reduce integration risks.
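The 50/50 dual scoring described above can be sketched as a simple equal-weight average. This is a minimal illustration; the function name and sub-score inputs are ours, not part of the published methodology.

```python
def ai_integration_score(readiness: float, security: float) -> float:
    """Combine the two AI sub-scores (each 0-100) with equal 50/50 weight."""
    return 0.5 * readiness + 0.5 * security

# e.g. strong MCP/OAuth support (readiness 90) but limited audit logging (security 60)
print(ai_integration_score(90, 60))  # 75.0
```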
Breach History
Evaluates the application's historical security incidents, breach transparency, and remediation effectiveness. Highest weight because past breaches are the strongest predictor of future incidents.
Evaluation Criteria:
- ✓ Number of confirmed breaches (past 5 years)
- ✓ Severity and scope of breaches (user count, data types)
- ✓ Time to disclosure after incident
- ✓ Quality of remediation actions taken
- ✓ Current breach-free streak length
- ✓ Transparency of incident reporting
- ✓ Post-mortem analysis publication
Scoring Methodology:
0 breaches = 100 points, 1 breach = 80 points, 2+ breaches = weighted penalty based on severity. Recent breaches (< 2 years) weighted more heavily. Transparent disclosure adds +5 points.
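The rules above fix only the 0-breach (100) and 1-breach (80) anchors plus the +5 transparency bonus; the exact severity-weighted penalty for 2+ breaches is not published. The sketch below models it with an assumed per-breach penalty and a doubled weight for breaches within the last 2 years.

```python
def breach_history_score(breaches, transparent_disclosure=False):
    """breaches: list of (years_ago, severity) tuples, severity in 0.0-1.0.

    Illustrative model: only the 0-breach and 1-breach anchors and the
    +5 transparency bonus come from the published rules; the 2+ breach
    penalty (15 points per severity unit, doubled if < 2 years old)
    is an assumption for demonstration.
    """
    if not breaches:
        score = 100.0
    elif len(breaches) == 1:
        score = 80.0
    else:
        penalty = sum(sev * (2.0 if years < 2 else 1.0) * 15
                      for years, sev in breaches)
        score = max(0.0, 80.0 - penalty)
    if transparent_disclosure:
        score = min(100.0, score + 5)
    return score

print(breach_history_score([]))                      # 100.0
print(breach_history_score([(3, 0.5)], True))        # 85.0
print(breach_history_score([(1, 1.0), (4, 0.5)]))    # 42.5
```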
Encryption
Assesses encryption implementation for data at rest and in transit. Critical for protecting sensitive data from unauthorized access.
Evaluation Criteria:
- ✓ TLS 1.3+ for data in transit
- ✓ AES-256 or equivalent for data at rest
- ✓ End-to-end encryption availability
- ✓ Key management practices (rotation, storage)
- ✓ Certificate validity and configuration
- ✓ Perfect Forward Secrecy support
- ✓ Encryption of database backups
Scoring Methodology:
TLS 1.3 = 100 points, TLS 1.2 = 80 points, TLS 1.1 or below = 40 points. Data at rest encryption required for 80+ score. E2E encryption adds +15 points.
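The tiered TLS rules can be sketched as follows. How the "data at rest required for 80+" rule interacts with the TLS base score is not spelled out above, so modelling it as a cap at 79 is our interpretation.

```python
def encryption_score(tls_version: str, at_rest: bool, e2e: bool) -> int:
    """Tiered TLS base score with adjustments, per the rules above.

    Assumption: the 'data at rest encryption required for 80+ score'
    rule is modelled as a hard cap at 79 when at-rest encryption is
    missing; the exact interaction is not published.
    """
    base = {"1.3": 100, "1.2": 80}.get(tls_version, 40)  # 1.1 or below = 40
    if not at_rest:
        base = min(base, 79)
    if e2e:
        base = min(100, base + 15)
    return base

print(encryption_score("1.3", True, False))  # 100
print(encryption_score("1.2", False, False)) # 79
print(encryption_score("1.2", True, True))   # 95
```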
Compliance & Certifications
Evaluates third-party security certifications and compliance with regulatory frameworks. Strong indicator of mature security practices.
Evaluation Criteria:
- ✓ SOC 2 Type II attestation (current within 1 year)
- ✓ ISO 27001 certification
- ✓ GDPR compliance documentation
- ✓ HIPAA compliance (for healthcare apps)
- ✓ FedRAMP authorization (for government apps)
- ✓ PCI DSS compliance (for payment apps)
- ✓ CSA STAR certification level
Scoring Methodology:
SOC 2 Type II = 40 points, ISO 27001 = 30 points, GDPR = 15 points, HIPAA/FedRAMP/PCI = 10 points each, CSA STAR Level 2 = 5 points. Maximum 100 points.
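The point values above sum additively and cap at 100. A minimal sketch of this additive pattern (which several other dimensions also follow):

```python
# Point values copied from the scoring methodology above.
COMPLIANCE_POINTS = {
    "SOC 2 Type II": 40, "ISO 27001": 30, "GDPR": 15,
    "HIPAA": 10, "FedRAMP": 10, "PCI DSS": 10, "CSA STAR Level 2": 5,
}

def compliance_score(certs) -> int:
    """Sum points for held certifications, capped at 100."""
    return min(100, sum(COMPLIANCE_POINTS.get(c, 0) for c in certs))

# 40 + 30 + 15 + 10 + 10 = 105, capped at 100
print(compliance_score({"SOC 2 Type II", "ISO 27001", "GDPR", "HIPAA", "PCI DSS"}))
```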
Authentication & Access
Assesses identity and access management capabilities including SSO, MFA, and user provisioning. Critical for preventing unauthorized access.
Evaluation Criteria:
- ✓ SSO support (SAML 2.0, OAuth 2.0, OpenID Connect)
- ✓ Multi-factor authentication (MFA) availability
- ✓ SCIM 2.0 provisioning for automated user management
- ✓ Role-based access control (RBAC) granularity
- ✓ Session management and timeout policies
- ✓ Password strength requirements
- ✓ Just-in-time (JIT) provisioning support
Scoring Methodology:
SSO (SAML/OIDC) = 30 points, MFA required = 25 points, SCIM = 20 points, Granular RBAC = 15 points, Session mgmt = 10 points. Maximum 100 points.
Data Privacy
Evaluates data handling practices, retention policies, and user privacy controls. Essential for GDPR/CCPA compliance.
Evaluation Criteria:
- ✓ Data residency options (geographic storage control)
- ✓ Data retention and deletion policies
- ✓ User data export capabilities (GDPR Article 20)
- ✓ Right to be forgotten implementation (GDPR Article 17)
- ✓ Data processing agreements (DPA) availability
- ✓ Sub-processor transparency
- ✓ Privacy policy clarity and completeness
Scoring Methodology:
Data residency options = 25 points, GDPR export/deletion = 25 points, DPA available = 20 points, Sub-processor list = 15 points, Clear privacy policy = 15 points.
Incident Response
Assesses the organization's preparedness to detect, respond to, and recover from security incidents.
Evaluation Criteria:
- ✓ Dedicated security team existence
- ✓ Bug bounty program (HackerOne, Bugcrowd, etc.)
- ✓ Incident notification SLA (time to notify customers)
- ✓ Security advisory publication process
- ✓ Post-mortem transparency
- ✓ 24/7 security monitoring
- ✓ Disaster recovery plan documentation
Scoring Methodology:
Bug bounty program = 30 points, Security team = 25 points, Notification SLA < 72 hours = 20 points, 24/7 monitoring = 15 points, Public post-mortems = 10 points.
Vendor Transparency
Evaluates the vendor's willingness to share security information publicly through trust centers and documentation.
Evaluation Criteria:
- ✓ Public trust center or security page
- ✓ Security whitepaper availability
- ✓ Compliance report sharing (SOC 2, ISO 27001)
- ✓ Penetration test summary publication
- ✓ Infrastructure security documentation
- ✓ Third-party audit results disclosure
- ✓ Responsiveness to security questionnaires
Scoring Methodology:
Public trust center = 40 points, Security whitepaper = 25 points, Compliance reports shared = 20 points, Pentest summaries = 10 points, Quick questionnaire response = 5 points.
Security Certifications
Evaluates industry-specific security certifications beyond standard compliance (ISO/SOC 2). Indicates advanced security maturity.
Evaluation Criteria:
- ✓ CSA STAR Level 2 or 3 certification
- ✓ HITRUST CSF certification (healthcare)
- ✓ StateRAMP authorization (state government)
- ✓ IRAP assessment (Australian government)
- ✓ Cyber Essentials Plus (UK)
- ✓ TISAX (automotive industry)
- ✓ Common Criteria EAL certification
Scoring Methodology:
Industry-specific cert (HITRUST, StateRAMP, IRAP) = 40 points, CSA STAR Level 3 = 30 points, Cyber Essentials Plus = 20 points, Common Criteria = 10 points.
Scoring Algorithm
Our scoring algorithm combines dimension scores using weighted averaging. Each dimension contributes to the overall score based on its percentage weight. The result is a number from 0-100, which is then converted to a letter grade using lenient percentile-based thresholds.
Overall Score Calculation
Confidence Score
Grade Thresholds (Lenient Percentile-Based)
Critical: NOT Traditional Academic Grading
We use lenient percentile-based grading where thresholds are 15-40 points lower than traditional academic grading (90+=A). An A grade means "Top 10% security" (60+), NOT "90%+ of perfect security." This reflects the reality that perfect security scores (90-100) are unrealistic for most SaaS applications. Our stringent scoring algorithm makes it difficult to earn high raw scores, so the grade thresholds compensate to ensure fair competitive comparison.
Example: An application with a 65/100 score receives an A grade (Excellent Security, Top 10%). In traditional academic grading, 65 would be a D grade (failing). Our lenient thresholds reflect that achieving 65/100 on our stringent scoring algorithm represents excellent security posture compared to industry peers.
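The weighted averaging and grade conversion described above can be sketched as follows. Only four weights (Breach History 20%, Authentication 15%, Encryption 15%, AI Integration 5%) and the "A = 60+" threshold are stated on this page; the equal 9% split across the remaining five dimensions and the lower grade cut-offs are illustrative assumptions.

```python
# Published weights: Breach History 20%, Authentication 15%, Encryption 15%,
# AI Integration 5%. The remaining five dimensions are ASSUMED equal (9% each)
# for this sketch; weights sum to 1.0.
WEIGHTS = {
    "breach_history": 0.20, "authentication": 0.15, "encryption": 0.15,
    "ai_integration": 0.05, "compliance": 0.09, "data_privacy": 0.09,
    "incident_response": 0.09, "vendor_transparency": 0.09,
    "security_certifications": 0.09,
}

def overall_score(dim_scores: dict) -> float:
    """Weighted average of the nine dimension scores (each 0-100)."""
    return sum(WEIGHTS[d] * dim_scores[d] for d in WEIGHTS)

def grade(score: float) -> str:
    # Only "A = 60+" is stated above; the lower cut-offs are illustrative.
    for threshold, letter in [(60, "A"), (50, "B"), (40, "C"), (30, "D")]:
        if score >= threshold:
            return letter
    return "F"

scores = {d: 65 for d in WEIGHTS}
print(overall_score(scores))  # 65.0 (all dimensions equal)
print(grade(65))              # "A" under lenient percentile-based thresholds
```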
Interactive Grade Calculator
Experiment with the scoring algorithm below. Adjust dimension scores to see how the weighted calculation produces the overall security score and letter grade. Notice how Breach History (20% weight) has double the impact of most other dimensions.
32 Enrichment Data Sources
We gather security intelligence from 32 authoritative sources spanning compliance databases, breach trackers, technical scanners, threat intelligence platforms, and AI-specific registries. Multiple sources ensure high confidence and cross-validation of security claims.
- Review Platforms
- Compliance Databases
- Breach Databases
- Public Documentation
- Vendor Trust Centers: official security pages and whitepapers
- Security.txt: vulnerability disclosure policies
- Bug Bounty Platforms
- Technical Scanners
- Threat Intelligence
- Infrastructure Intelligence
- Domain Intelligence
- AI Integration
- API Documentation
- Vulnerability Databases
Source Verification Process: Each data source is evaluated for trustworthiness before inclusion. We prioritize primary sources (vendor APIs, official certifications) over secondary sources (news articles, user reviews). When sources conflict, we display both perspectives with confidence levels to show uncertainty rather than choosing one arbitrarily.
Update Frequency: Data sources are queried on different schedules based on change frequency. Critical security events (breaches, certificate expirations) trigger real-time updates. Compliance certifications are checked monthly. Static vendor information (company size, founding year) is verified quarterly.
Quality Assurance: The Boss Test
We apply the "Boss Test" quality framework to all assessments: every claim must be defensible if questioned by the most skeptical executive. This means zero fabrication, source citations for every data point, and transparent confidence scoring.
Zero Fabrication
When data is unavailable, we display "Insufficient Evidence" and lower the confidence score. We never fabricate scores or invent security features. A low confidence score (0.2-0.4) signals to buyers that additional vendor verification is needed.
Source Everything
Every claim cites its enrichment source (e.g., "SOC 2 Type II verified via CSA STAR Registry"). Users can verify our assessments by checking the same sources. This transparency builds trust and allows vendors to dispute specific claims with counter-evidence.
Confidence Transparency
Each dimension displays a confidence score (0.0-1.0) indicating our certainty in the assessment. High confidence (0.8-1.0) means multiple sources agree. Low confidence (0.2-0.4) means limited evidence. Buyers can weight high-confidence dimensions more heavily in procurement decisions.
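One way buyers can act on the guidance above is to reweight dimension scores by their confidence, so low-evidence dimensions count for less. This is an illustrative buyer-side calculation, not part of our published grading algorithm.

```python
def confidence_weighted_score(dim_scores: dict, confidences: dict):
    """Buyer-side sketch: average dimension scores weighted by their
    confidence (0.0-1.0). Returns None when no dimension has evidence."""
    total_conf = sum(confidences.values())
    if total_conf == 0:
        return None  # insufficient evidence across the board
    return sum(dim_scores[d] * confidences[d] for d in dim_scores) / total_conf

# A high-confidence 100 outweighs a low-confidence 50:
print(round(confidence_weighted_score(
    {"encryption": 100, "incident_response": 50},
    {"encryption": 1.0, "incident_response": 0.5}), 2))  # 83.33
```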
Evidence-Based
All scores are derived from verifiable facts (published certifications, public breach disclosures, SSL scan results), not opinions or marketing claims. We prioritize objective evidence (SOC 2 Type II report dated 2024-03-15) over subjective assessments ("good security practices").
Limitations
What We Don't Assess:
- Internal Security Controls: We cannot assess internal processes (employee security training, access review procedures) without vendor access or on-site audits. Our assessments focus on externally verifiable security posture.
- Source Code Security: Proprietary codebases are not accessible for vulnerability analysis. We rely on vendor-disclosed penetration test results and bug bounty programs as proxies for secure development practices.
- Physical Security: Data center physical security (biometric access, surveillance) is typically handled by cloud providers (AWS, Azure, GCP) and not directly verifiable for SaaS vendors.
- Employee Background Checks: HR security practices (background checks, security clearances) are confidential and cannot be publicly verified.
Our assessments focus on publicly verifiable security intelligence that buyers can use for pre-purchase vendor evaluation. For deeper security validation, we recommend requesting vendor security questionnaires (VSQs) and SOC 2 Type II reports directly from shortlisted vendors.
Questions About Our Methodology?
We're committed to transparency. If you have questions about how we assess security or want to dispute an assessment, we're here to help.
Last Updated: November 21, 2025
Version: 2.0.0 (9-Dimension Framework with AI Integration Security)