Institutional Research FAQ
- What's our enrollment?
- How do we track Carroll transfers to Maryland Public four-year colleges and universities?
- What is the source for tracking Carroll transfers to Maryland Public four-year colleges and universities?
- What is the Community College Survey of Student Engagement?
- What is the National Community College Benchmark Project?
1. What's our enrollment?

This simple question requires a complex answer, because enrollment can be measured in a number of ways. Decisions have to be made, such as whether to count credit students, noncredit students, or both. Are you interested in enrollment in a particular term (the fall semester, for example) or for an entire year? Do you want to know how many different individuals enrolled, how many class seats were filled, how many credits were attempted, how many contact hours of instruction took place, how many full-time students attended, or another measure of enrollment? These clarifying questions are necessary, particularly when data are being collected from more than one institution. Enrollment comparisons that mix different definitions are meaningless and misleading. Never provide enrollment data to external parties if they cannot supply a precise definition of the measure they are requesting.
A common enrollment measure for credit enrollment is fall headcount. This is the number of different people ("heads") enrolled for fall term classes as of the official, statistical "freeze" date. A specific, agreed-upon date for counting enrollment is necessary since enrollment fluctuates due to add/drops, withdrawals, late-start classes, open-enrollment classes, and other enrollment activity. In Maryland, the official freeze date is after 20 percent of the scheduled class time has elapsed. In a standard 15-week fall or spring class, this falls at the end of the third week, so the phrase "third week enrollment" is often used to mean the official, 20-percent figure.
Non-credit, continuing education courses typically have more variability in course duration and in start and end dates than credit courses, so an equivalent of a "fall third week" figure is less meaningful for noncredit enrollment. Noncredit enrollment is often reported as a "year-to-date" figure as of a particular date, compared to the same date a year before. Data are usually reported as a number of course registrations or as full-time-equivalents (FTEs). FTEs are also calculated and reported for credit enrollment, so FTEs are a convenient measure for compiling and analyzing credit and noncredit enrollment, and for calculating an institution's total enrollment. FTEs are also used in Maryland as part of the community college funding formula.
Full-time-equivalent (FTE) enrollment converts part-time student attendance into its equivalent in full-time students. There are a number of different formulas for calculating FTEs. The basic concept starts from the idea that a full-time credit student would enroll in ten courses during a year. Assuming a typical three credits per course, this equates to 30 credits annually. So one formula for calculating credit FTEs is to take total annual credit hours and divide by 30.
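The annual credit-hour formula above can be sketched in a few lines of Python. The function name and the sample credit-hour total below are illustrative, not actual Carroll figures:

```python
def credit_ftes(total_annual_credit_hours: float) -> float:
    """Convert total annual credit hours to full-time-equivalent (FTE)
    students, using 30 credit hours per year as one FTE."""
    return total_annual_credit_hours / 30

# A hypothetical college generating 45,000 credit hours in a year:
print(credit_ftes(45000))  # 1500.0
```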
A three-credit course traditionally met for three hours a week for 15 weeks, for a total of 45 hours. Ten three-credit courses would meet a total of 450 hours. So another way of looking at an FTE is to equate it to 450 hours of instructional time. This is how an FTE is calculated for noncredit, continuing education courses. An FTE can be generated in many different combinations. One student enrolled in an extended training program of 450 hours would generate an FTE. Thirty students enrolled in a 15-hour class would generate an FTE. Ten students enrolled in three classes consisting of 15 hours each would generate an FTE. The median duration of a continuing education course is 12 hours.
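As a sketch, the 450-hour noncredit equivalence and the three combinations above can be checked in Python (the function name is illustrative):

```python
def noncredit_ftes(students: int, contact_hours_per_student: float) -> float:
    """Convert noncredit instructional contact hours to FTEs,
    using 450 hours of instruction as one full-time-equivalent."""
    return students * contact_hours_per_student / 450

# Each combination from the text generates exactly one FTE:
print(noncredit_ftes(1, 450))      # one student in a 450-hour program -> 1.0
print(noncredit_ftes(30, 15))      # thirty students in a 15-hour class -> 1.0
print(noncredit_ftes(10, 3 * 15))  # ten students in three 15-hour classes -> 1.0
```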
The different measures of enrollment are all valuable, depending on the purpose of measurement. Headcount says how many different people were enrolled. Course enrollment says how many seats or class openings were filled. Credit hours reflect progress towards degree completion. Contact hours are meaningful in facility and space utilization studies and in faculty workload analyses. Full-time-equivalent is a way of accounting for full- and part-time students and allows comparisons and aggregation of credit and noncredit enrollment.
An example illustrates different measures of credit enrollment:
|Student|Course(s) Taken|Credit Hours|Weekly Contact Hours|
|---|---|---|---|
|Andrew|MAT 097|4 *|4|
The above three students represent a headcount of 3, accounting for 8 course enrollments, generating 30 credit hours, in class or laboratory for 38 instructional contact hours. The 30 credit hours equate to one FTE.
Annual headcount and FTEs are regularly reported for credit and noncredit enrollment. To coincide with other annual data, they are usually reported for fiscal years beginning on July 1 and ending June 30. The annual headcounts are "unduplicated," meaning that an individual is counted just once regardless of how many courses they may have enrolled in during the year. Thus, you cannot add term headcounts (fall, spring, etc.) to arrive at annual headcount.
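The unduplicated-headcount caveat can be illustrated with a small Python sketch; the student IDs and term rosters below are invented for illustration:

```python
# Hypothetical IDs of students enrolled in each term of one fiscal year.
fall = {"S01", "S02", "S03"}
spring = {"S01", "S03", "S04"}
summer = {"S04", "S05"}

# Adding term headcounts double-counts students who enrolled in
# more than one term:
summed_term_headcounts = len(fall) + len(spring) + len(summer)

# Unduplicated annual headcount counts each individual only once:
annual_headcount = len(fall | spring | summer)

print(summed_term_headcounts, annual_headcount)  # 8 5
```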
Although credit students account for three-fourths of the college's total FTEs, the college enrolls more noncredit students than credit students. This reflects the relatively shorter duration of the average noncredit course compared to the average credit course. Credit courses generate more FTEs per student than noncredit courses.
2. How do we track Carroll transfers to Maryland Public four-year colleges and universities?

We track the number of students enrolled at Carroll Community College in a fall semester who are reported enrolled at a Maryland four-year college or university the following fall. The data reveal fall-to-fall enrollment movement within the public sector of Maryland higher education. (For example, 67 students reported on the Carroll Community College fall 2001 third-week enrollment file submitted to the Maryland Higher Education Commission (MHEC) were on the fall 2002 enrollment file of Towson University.) The source is Undergraduate Transfers, Maryland Public Institutions of Higher Education, published annually by MHEC. The other institutions, not individually identified due to consistently low enrollments of Carroll students, include Bowie State University, Coppin State College, St. Mary's College of Maryland, Morgan State University, the University of Maryland at Baltimore, and the University of Maryland Eastern Shore.
3. What is the source for tracking Carroll transfers to Maryland Public four-year colleges and universities?
The source is Performance of Maryland Community College Transfer Students at Public Four-year Campuses published annually by the Maryland Higher Education Commission. The data may underestimate the number and percentage of students who have earned a bachelor's degree, since only graduation from the first institution to which a community college student transferred is reported. Students who moved on to another campus and graduated are not reflected in these data. The data are part of the Transfer Student System (TSS); transfer entrants are identified by the reporting four-year campus as students transferring in a minimum of 12 credit hours and listing Carroll Community College as the last institution they attended prior to transfer. Since some students transfer with fewer than 12 credits applicable to their baccalaureate degree, total transfers are undercounted.
4. What is the Community College Survey of Student Engagement?

The Community College Survey of Student Engagement (CCSSE) provides information about effective educational practices in community colleges, with the aim of helping institutions promote improvements in student learning and persistence. CCSSE defines student engagement as the amount of time and energy that students invest in meaningful educational practices. CCSSE believes that student engagement captures a key measure of institutional quality.
The CCSSE instrument was adapted from the National Survey of Student Engagement (NSSE) with the permission of Indiana University. The NSSE was developed in 1999 for use in four-year colleges and universities. CCSSE is coordinated by the Community College Leadership Program at the University of Texas at Austin.
CCSSE began in 2001 with a pilot study, followed in 2002 with a broader field test, and had its first national administration in 2003. Carroll participated in CCSSE's second national administration in spring 2004. A total of 152 community colleges in 30 states participated. Seventy-five colleges were classified as small colleges, with credit headcount enrollment less than 4,500. Nationally, a total of 92,301 students submitted usable surveys. The small college sample totaled 32,842.
CCSSE has several advantages over institutionally developed surveys. The large national sample provides norms and benchmarks to help colleges interpret their performance. The psychometric properties of the survey have been extensively tested to ensure that the instrument is statistically valid and reliable. Sampling is centrally controlled, so institutions can be confident that the results generalize to their population of students and are comparable to results from other institutions. Through the use of correlation matrices, exploratory factor analysis, confirmatory factor analysis, and multiple group analysis, five meaningful composite benchmark scores were developed.
5. What is the National Community College Benchmark Project?

The National Community College Benchmark Project (NCCBP) is a national data collection effort coordinated by Johnson County Community College in Kansas. The NCCBP provides national and peer college effectiveness indicator data in a variety of assessment domains, enabling participating institutions to interpret and benchmark their performance. Initial implementation began in March 2004. Carroll participated for the first time in the 2005 data collection. A total of 109 community colleges participated nationally in 2005; Carroll was the only Maryland college to do so.
Participating colleges varied greatly in size, demographics, resources, and other ways; caution must be used in interpreting Carroll data against these national medians. As national participation grows and the sample of colleges similar to Carroll increases, peer college medians will also be presented.
NCCBP definitions for indicators may differ from definitions used in other studies reported by Institutional Research.