Evaluation
Competitor submissions will be evaluated across four objective areas:
Innovation (20%)
- To what extent does the solution rely on advanced analytics and AI?
- How unique is the solution? Does it improve upon existing models, processes, or analyses, or is it completely novel? Both approaches are acceptable; what will be assessed is the application of data science techniques in innovative ways.
- To what extent has the team identified other datasets and/or types of information that would be useful to further refine their solution following the competition? Creative use of datasets and feature extraction will be considered.
Feasibility (20%)
- Is the proposed solution realistic?
- Is the solution grounded in deployable and sustainable technologies?
- How likely is the proposed solution to succeed in positively impacting health equity?
Impact (20%)
- Has the team selected a defensible health equity issue?
- Does the solution meaningfully impact the defined health equity issue? What is that impact? (Note: impact is not measured simply in absolute terms. A solution that fully resolves a smaller health equity issue may be more impactful than one that has a lesser impact on a major issue, or vice versa.)
- How is the impact measured? Is the process for measuring impact repeatable?
AI Adoption (40%)
- Has the team identified potential roadblocks to implementation and developed approaches to facilitate resolution of such roadblocks?
- Can the AI be explained such that clinicians, patients, and other stakeholders will understand it?
- Is the solution available in the appropriate location, with a README and instructions for interacting with it, as described in the challenge statement?
- To what extent has the team explained how the proposed AI solution will be implemented to achieve the desired results?
- To what extent has the team identified strategies and tools to explain the AI to clinicians and patients to build trust and drive transparency?