Methodology
America’s Emergency Care Environment was based primarily on the 2009 National Report Card on the State of Emergency Medicine, with minor modifications and improvements. The process used to develop the 2014 Report Card involved the steps described below.
1) Assembling the Report Card Task Force and Work Groups
The American College of Emergency Physicians (ACEP) assembled a Report Card Task Force (RCTF) in December 2011 to oversee the development of the 2014 Report Card. ACEP staff conferred with the ACEP President, who appointed the Chair of the RCTF in October 2011. The qualifications of the RCTF Chair included previous experience on the 2009 RCTF, a PhD in Epidemiology, and active clinical practice in emergency medicine. The Chair subsequently worked with the RCTF staff liaison to identify and recommend ACEP members with expertise in subject areas or specific issues directly related to the Report Card to serve on the RCTF. Criteria for selection included topical expertise, geographic location (to ensure that most regions across the country were represented), and research experience. Based on these criteria, Task Force members were appointed by the ACEP President. The RCTF was charged with:
- Selection and oversight of a contractor to conduct the data collection, analysis, writing, and design of the Report Card,
- Providing expert advice and guidance on the selection and definition of indicators that accurately reflect the subject-matter categories being considered,
- Providing guidance on weighting the indicators and creating grades, and
- Carefully reviewing all drafts.
One of the initial tasks of the RCTF was to review and confirm the critical topic areas presented in the 2009 Report Card. The 2009 Report Card marked a significant revision from the previous (2006) Report Card, and so it was critical that the RCTF confirm the specific content areas, consider other potential content areas, and review the quantitative grading weights for each topic category.
At the RCTF’s May 2012 meeting, the full RCTF ultimately decided to keep the five topic areas used in the 2009 Report Card: Access to Emergency Care, Quality and Patient Safety Environment, Public Health and Injury Prevention, Medical Liability Environment, and Disaster Preparedness. The RCTF also voted to maintain the relative weights for each of these major categories in calculating the overall grade, since the weights already reflected the importance of each category in improving and supporting the emergency medicine environment, as well as to maintain comparability across the current and previous Report Cards. The weights for each category are in Figure A-1. RCTF members were asked to volunteer to chair and/or participate in work groups for each of the five topic areas based on their area of expertise.
In order to accomplish the tasks described above, the full RCTF met in person three times and by conference call five times between its inception in December 2011 and the completion of the Report Card. In addition, there were frequent and timely communications via telephone and e-mail among the RCTF, the work groups, and the contractor during this period.
For many of the deliberations described in the following sections, the contractor worked directly with the work group members when making decisions specific to their subject areas (e.g., adding or removing indicators, assigning weights to the individual indicators). When necessary, the contractor and/or work group leader consulted with the Task Force Chair or the full RCTF. These exchanges between the contractor and the work groups typically took place via e-mail or phone. For more complicated decisions, such as finalizing indicator weights, the Task Force discussed the issue during in-person meetings or via telephone until the group reached consensus. Some specifics on the timing and frequency of communication are included in the sections that follow.
2) Selecting Specific Indicators
During the summer of 2012, each work group met via webinar to discuss the 2009 indicators, to propose new indicators, and to consider retiring indicators for which data were no longer available or that were no longer pertinent to the overall Report Card. The selected contractor, Altarum Institute, contributed background research on the feasibility of measuring potential new indicators consistently on the state level, and shared these findings with the work groups. Based on this information, the work groups reconsidered the current and new indicators, modified definitions when necessary, and finalized the draft indicator list for each section. The draft sets of indicators were then presented to the full RCTF at an in-person meeting in October 2012 and tentatively finalized (with a small number of indicators pending a few additional questions regarding data availability).
Overall, the vast majority of the 2009 Report Card’s 116 indicators were maintained, with only 8 indicators removed or replaced due to lack of current data, 3 indicators redefined, and 4 indicators removed due to a high proportion of positive responses in the 2009 Report Card. The work groups also added a number of indicators in each topic area: 4 in Access to Emergency Care; 9 in Quality and Patient Safety Environment; 2 in Medical Liability Environment; 12 in Public Health and Injury Prevention; and 7 in Disaster Preparedness.
The selection of new indicators ultimately depended not only on their relative importance as determined by the RCTF, in consultation with ACEP Sections, Committees, and topic area experts as needed, but also on the availability of data. Therefore, to be included, data elements needed to be: 1) relevant, 2) reliable, 3) valid, 4) consistent across the states, and 5) current (collected within the past 3 years).
Similar to the 2009 Report Card, it was necessary to conduct a survey of EMS and Disaster Preparedness state officials in order to acquire data to inform indicators in the Quality and Patient Safety and Disaster Preparedness topic areas. Unlike in 2009, two surveys (instead of one) were developed and fielded in the first quarter of 2013. Work group members, consulting topic experts as needed, reviewed and approved the surveys. The ACEP Survey of EMS Practices and Policies contained 11 questions and was sent to EMS directors in the 50 states, the District of Columbia, and Puerto Rico. The overall response rate was 100%. The ACEP Survey of Disaster Preparedness Practices and Policies contained 19 questions and was first sent to Assistant Secretary for Preparedness and Response (ASPR) grantees throughout the states via a listserv. Individual follow-up was made with Disaster Preparedness officials in the states that did not respond to the initial request. The overall response rate for this survey was also 100%.
Overall, the RCTF maintained and/or developed 136 indicators across the 5 topic areas.
3) Assigning Indicator Weights
Once the set of indicators for each topic area was finalized, the work groups addressed the relative importance of the individual indicators by assigning each indicator a weight within its category. In doing so, the work groups considered the weights used in 2009, in part to maintain comparability between Report Cards, along with the deletion of old and addition of new indicators within each topic area. Each of the five broad topic areas consists of sub-categories that were developed in 2009 and assigned sub-category weights. The work groups independently discussed and proposed maintaining the same sub-category weights that were used in the 2009 Report Card. Within each of these sub-categories, weights were assigned to each indicator reflecting the overall importance of that indicator within the sub-category. Again, these decisions were made during conference calls between the work group members and Altarum Institute. The draft weights assigned by the work groups were then presented to the full RCTF during a webinar/conference call in February 2013. Once finalized, these individual weights were used to score and grade the states within each category.
In addition to approving the indicator sets and weights proposed by the work groups, the RCTF was also responsible for revisiting the scoring and grading scheme used in the 2009 Report Card. Ultimately, the RCTF determined that the same methods should be applied as were used in 2009 to ensure consistency over time in the grading calculations.
4) Comparing and Scoring States
The indicator weights added up to a total of 100 points for each category. The percentage of available points scored by each state was calculated by comparing the states on each indicator, assigning them a fraction of the indicator’s weight, and summing these values. The scoring convention depended on the type of data element, of which the Report Card includes three: continuous, categorical, and binary. Each is described below, and a brief computational sketch follows the list:
- For continuous indicators, the states’ values were ranked from best to worst and assigned a rank from 51 (best) to 1 (worst). These ranks were used to apportion to each state a fraction of the indicator’s total weight. For example, for an indicator weighted as 5 percent of the category, a state assigned a rank of 15 out of 51 (50 states plus the District of Columbia) would receive 1.47 points out of 5 (15/51 x 5). In the case of a tie, each state was assigned the best rank among the tied states. In other words, if the 14th, 15th, and 16th worst states in the ordered list were tied, each would be assigned a rank of 16 and allotted an identical number of points.
- For categorical indicators, states were not ranked against one another but rather assigned a fraction of the total possible points scored. For example, for an indicator worth 5 percent of the category, a state that scored 6 out of a possible 6 points would receive 5 points (6/6 x 5).
- For binary responses, the state received either the full weight or none of the weight.
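To make these conventions concrete, the following sketch shows how each type of indicator might be scored. The function names and data structures are illustrative assumptions for this appendix, not the code actually used to produce the Report Card.

```python
def score_continuous(values_by_state, weight):
    """Apportion an indicator's weight by rank: 51 (best) to 1 (worst).

    values_by_state: dict mapping state -> indicator value, where a higher
    value is assumed to be better (for "lower is better" indicators the
    ordering would be reversed). Tied states all receive the best rank
    within their tie group.
    """
    ordered = sorted(values_by_state, key=values_by_state.get)  # worst first
    n = len(ordered)  # 51: 50 states plus the District of Columbia
    ranks = {state: i + 1 for i, state in enumerate(ordered)}  # 1 = worst
    for state in ordered:
        tied = [s for s in ordered if values_by_state[s] == values_by_state[state]]
        best_rank = max(ranks[s] for s in tied)
        for s in tied:
            ranks[s] = best_rank
    # Example: a rank of 15 out of 51 on a 5-point indicator yields
    # 15/51 x 5 = 1.47 points.
    return {state: ranks[state] / n * weight for state in ordered}


def score_categorical(points_scored, points_possible, weight):
    """Assign a fraction of the weight, e.g., 6 of 6 possible points
    earns the full 5-point weight (6/6 x 5)."""
    return points_scored / points_possible * weight


def score_binary(meets_criterion, weight):
    """All-or-nothing: full weight if the criterion is met, otherwise zero."""
    return weight if meets_criterion else 0.0
```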
In addition, missing data were handled in one of two ways, depending on the data source. For data collected from publicly available sources, missing data did not count against the states. The RCTF believed that penalizing states in these cases would place too great an emphasis on missing data that may have been the result of inadequate data collection efforts (not the fault of the states) or of too few cases yielding unreliable estimates. For this reason, not all states had the same denominator (or maximum total points possible). If a state was missing data from a publicly available data source on a particular indicator, the weight for that criterion was excluded from its denominator. For example, if a state was missing data on an indicator worth 5 percent of the category (for instance, HIV diagnoses disparity ratio), then its denominator would be 95, not 100.
On the other hand, missing data on elements collected from the ACEP state surveys did count against the states. If, after multiple requests by e-mail and telephone, a state health official did not provide a response to a survey question that was answered by the vast majority of other state respondents, the weight for that indicator was still included in the state’s denominator and the numerator (or points earned) was set to zero. The rationale for treating missing responses in this way was that the answers to these questions should be known, available, and tracked by the state or its health and preparedness officials. For this reason, the Task Force felt that such responses should be counted against the non-responding state.
The number of points earned was then summed (numerator) along with the number of possible points (denominator) across each topic area’s indicators. The percentage of points scored was calculated for each of the states by dividing the number of points earned by the number of possible points. These values were ranked and used to calculate the state’s grades described in the next section. The state rankings for each of the categories can be found on the state pages adjacent to their grades.
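The sketch below illustrates how these totals might be computed for one state in one category, applying both missing-data rules. The field names ('weight', 'points', 'source') are assumptions made for illustration.

```python
def category_percent(indicator_results):
    """Percentage of possible points scored by one state in one category.

    indicator_results: list of dicts with keys
      'weight' - the indicator's weight within the category,
      'points' - points earned, or None if data were missing,
      'source' - 'public' or 'survey' (determines missing-data handling).
    """
    earned = 0.0
    possible = 0.0
    for ind in indicator_results:
        if ind['points'] is None:
            if ind['source'] == 'public':
                # Missing public data: exclude the weight from the
                # denominator so it does not count against the state.
                continue
            # Missing survey data: the weight stays in the denominator
            # and the state earns zero points for the indicator.
            possible += ind['weight']
        else:
            earned += ind['points']
            possible += ind['weight']
    # Example: missing public data on a 5-point indicator leaves a
    # denominator of 95 rather than 100.
    return earned / possible * 100.0
```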
5) Assigning Grades Using a Modified Curve
State level grades
This section describes the methodology used to calculate category-specific and overall grades at the state level – the same methods that were employed in the 2009 Report Card. The basis for all calculations related to category-specific grades is the percentage of points that states scored in each category. As described in the previous section, the denominators (or total possible points) may vary across states because of missing data.
Category-specific grades. Overall, the category-specific grades were based on the number of standard deviations each state’s score fell from the maximum value. Increments of 0.25 standard deviations were used, based on conventions developed for and used in the 2009 Report Card. Because grading is done on a curve, and no state scored the maximum possible number of points, the ‘A+’ and ‘A’ categories were collapsed into one group and presented as a straight ‘A’.
Below is a step-by-step description of how the category-specific grades were calculated.
Step 1. Using the percentage of points scored for each category, the maximum value, the mean, and the standard deviation were calculated.
Step 2. The letter grades, including pluses and minuses, were calculated based on the number of standard deviations that each state’s score fell from the maximum value (as listed in Figure A-2).
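A simplified sketch of the category-grade calculation follows. The cutoff table below is only a placeholder for Figure A-2, which defines the actual letter-grade boundaries in 0.25 standard-deviation increments.

```python
from statistics import stdev

def grade_category(percent_scores):
    """Assign letter grades by distance from the maximum category score.

    percent_scores: dict mapping state -> percentage of points scored.
    The cutoffs below are illustrative placeholders for Figure A-2; each
    step corresponds to a 0.25 standard-deviation increment, with 'A+'
    and 'A' collapsed into a single 'A'.
    """
    scores = list(percent_scores.values())
    max_score = max(scores)
    sd = stdev(scores)  # standard deviation calculated about the mean
    cutoffs = [(0.25, 'A'), (0.50, 'A-'), (0.75, 'B+'), (1.00, 'B'),
               (1.25, 'B-'), (1.50, 'C+'), (1.75, 'C'), (2.00, 'C-'),
               (2.25, 'D+'), (2.50, 'D'), (2.75, 'D-')]
    grades = {}
    for state, score in percent_scores.items():
        deviations = (max_score - score) / sd
        grades[state] = next((g for c, g in cutoffs if deviations <= c), 'F')
    return grades
```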
Overall grades. States’ overall grades were calculated as a weighted average of the grades in each category. Similar to calculating a high school grade point average, letter grades for each category were converted to numbers, then multiplied by their relative weights (contribution to the overall grade), as previously described, and then summed. The total numeric values were then converted back to letter grades.
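A sketch of the grade-point-style overall calculation appears below. The 4.0-style numeric scale, the conversion back to the nearest letter grade, and the weighting interface are assumptions for illustration; the actual category weights are those shown in Figure A-1.

```python
# Illustrative 4.0-style scale for converting letter grades to numbers.
GRADE_POINTS = {'A': 4.0, 'A-': 3.7, 'B+': 3.3, 'B': 3.0, 'B-': 2.7,
                'C+': 2.3, 'C': 2.0, 'C-': 1.7, 'D+': 1.3, 'D': 1.0,
                'D-': 0.7, 'F': 0.0}

def overall_grade(category_grades, category_weights):
    """Weighted average of category grades, converted back to a letter.

    category_grades:  dict mapping category -> letter grade
    category_weights: dict mapping category -> weight (fractions summing
                      to 1.0), i.e., the relative weights in Figure A-1.
    """
    total = sum(GRADE_POINTS[category_grades[c]] * w
                for c, w in category_weights.items())
    # The conversion back to a letter is not spelled out here, so the
    # sketch simply takes the closest grade on the numeric scale.
    return min(GRADE_POINTS, key=lambda g: abs(GRADE_POINTS[g] - total))
```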
National level grades
Category-specific grades. The national level grades are based on population-weighted averages for each of the categories. The steps taken to determine national level grades in each category are:
Step 1. Calculate a weighted percent of points scored for each state, by multiplying each state’s percent of points scored in the category by the percentage of the U.S. population that resides in the state.
Step 2. Calculate a weighted national average by summing these population-weighted values across the 50 states and the District of Columbia.
Step 3. Applying the same methodology used above for the states, calculate how many standard deviations this national average fell from the maximum state value in each topic area, and use that number to determine which letter grade should be assigned.
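A brief sketch of Steps 1 and 2, under the assumption that population shares are expressed as fractions that sum to one across the 50 states and the District of Columbia:

```python
def national_percent(state_percents, population_shares):
    """Population-weighted national average for one category.

    state_percents:    dict mapping state -> percent of points scored
    population_shares: dict mapping state -> share of the U.S. population
                       (fractions summing to 1.0 across all 51 entries)
    """
    return sum(state_percents[s] * population_shares[s] for s in state_percents)

# The resulting national average is then graded by measuring how many
# standard deviations it falls below the maximum state value (Step 3).
```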
Overall grades. The overall grade for the nation was calculated using the same methodology described above for the overall state grades. The overall grade for the nation is a weighted average of the nation’s category-specific grades.