Professor Srinivasan S. Iyengar
Professor Iyengar's research lies at the interface of chemistry, computational physics, and applied mathematics, dealing with the development of new theoretical methods and their implementation in efficient computational models. These methods are derived with the aim of solving problems in biophysical chemistry, atmospheric chemistry, and nano-material science.
The main themes for the adaptation and new development envisioned in CALM are outlined briefly below.
The bulk of the present questions in CALM are single-stage; fewer than 5% of the total questions are multi-stage. Development of critical-thinking skills requires presentation of concepts individually, followed by more complex questions that link related concepts. Developing good questions that tie together concepts is understandably more time-consuming and resource-intensive. We anticipate increasing the proportion of multi-stage questions to 30% of the total pool.
We intend to extend CALM's capabilities by introducing adaptive learning that involves whole sets of questions. This development would allow presentation of a series of related problems to students in sequences that reflect both the relationships among concepts and each student's demonstrated level of mastery.
As the program is more fully developed, the pool of questions in CALM will be organized by both concept and level of difficulty. For example, under the topic of Electrochemistry one has the concepts of: balancing oxidation-reduction reactions by the half-reaction method, the electromotive force, the calculation of the standard cell potential, the Nernst equation, the relation between E° and ΔG°, and the relationship between E° and the equilibrium constant K.
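These electrochemistry concepts lend themselves well to algorithmically generated numerical problems, since they are tied together by a few standard relations: ΔG° = -nFE°, the Nernst equation E = E° - (RT/nF) ln Q, and ln K = nFE°/RT. A minimal sketch of these relations (function names are illustrative, not CALM's actual code):

```python
import math

F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J/(mol*K)

def delta_g_standard(n, e_standard):
    """Standard free energy change: dG0 = -n * F * E0, in J/mol."""
    return -n * F * e_standard

def nernst_potential(e_standard, n, q, temp=298.15):
    """Cell potential under nonstandard conditions: E = E0 - (RT/nF) ln Q."""
    return e_standard - (R * temp) / (n * F) * math.log(q)

def equilibrium_constant(n, e_standard, temp=298.15):
    """Equilibrium constant from E0: ln K = n F E0 / (R T)."""
    return math.exp(n * F * e_standard / (R * temp))

# Example: the Daniell cell, Zn + Cu2+ -> Zn2+ + Cu, with E0 = +1.10 V, n = 2
dg = delta_g_standard(2, 1.10)           # -212267 J/mol, i.e. about -212 kJ/mol
e_cell = nernst_potential(1.10, 2, 10.0)  # slightly below 1.10 V when Q = 10
```

A multi-stage CALM question could walk a student through exactly this chain, from balancing the redox reaction to computing E°, ΔG°, and K.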
For each concept, problems of different levels of difficulty will exist, e.g., sets of easy, moderately difficult, and challenging problems. Initially, all students are presented with problems of average difficulty. To proceed from one concept to another, a student must successfully complete one question of a specified difficulty level. As students continue to work, they are presented with problems whose difficulty reflects their previous success in CALM. If a student answers a given problem incorrectly, an easier problem on the same concept is presented. Successful completion of that problem results in a more difficult problem being presented, until a problem of the specified difficulty level is passed. The student can then either request a question of equal or greater difficulty to deepen understanding of the present concept or turn to the next concept. Work on the next concept begins with a question of the same difficulty level as the last question successfully answered on the previous concept.
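The progression logic just described can be sketched compactly; this is an illustrative model under the assumption of three discrete difficulty levels, not CALM's actual implementation:

```python
DIFFICULTIES = ["easy", "moderate", "challenging"]

def next_difficulty(current, was_correct):
    """Step up one level after a success, down one after a miss, within bounds."""
    i = DIFFICULTIES.index(current)
    i = min(i + 1, len(DIFFICULTIES) - 1) if was_correct else max(i - 1, 0)
    return DIFFICULTIES[i]

def concept_complete(current, was_correct, required="moderate"):
    """A concept is passed once a question at (or above) the required level
    is answered correctly; students start each concept at 'moderate'."""
    return was_correct and DIFFICULTIES.index(current) >= DIFFICULTIES.index(required)
```

For example, a student who misses a moderate question is stepped down to an easy one, and must climb back to the required level before moving to the next concept.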
This adaptive capability would effectively tailor the difficulty of the problems presented to the individual student, challenging the more capable student while supporting the weaker one. The requirement of a specified difficulty level ensures that weaker students ultimately attain sufficient mastery to proceed. (The initial difficulty level of each problem will be assigned by a group of instructors and subsequently revised as we obtain data on students' success in solving it.) An additional benefit of generating such sets of inter-related questions is that the inter-relation of concepts can be stressed to a significantly greater degree than is presently found in textbooks.
At present, only student accesses and successes in solving problems are recorded. Tracking student pathways through multi-part problems, as well as recording student responses on less complex problems, requires a more sophisticated database management system. Such a database will allow us to accumulate a complete history of a student's interaction with any particular question. Analysis of the collected data will, for example, yield empirically determined difficulty levels for individual questions.
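One way such an interaction history might be structured is sketched below; the table and column names are our own illustrative assumptions, not CALM's actual schema. Storing every attempt, rather than only successes, is what makes empirical difficulty estimates possible:

```python
import sqlite3

# Illustrative schema: one row per attempt, keyed by student and question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE attempts (
    id           INTEGER PRIMARY KEY,
    student_id   TEXT NOT NULL,
    question_id  TEXT NOT NULL,
    part         TEXT,               -- sub-part of a multi-part problem, if any
    response     TEXT,               -- the answer the student submitted
    correct      INTEGER NOT NULL,   -- 1 = success, 0 = failure
    attempted_at TEXT DEFAULT CURRENT_TIMESTAMP
);
""")

# Record two attempts, then compute an empirical difficulty (failure rate).
conn.executemany(
    "INSERT INTO attempts (student_id, question_id, response, correct) VALUES (?, ?, ?, ?)",
    [("s1", "nernst-01", "1.02 V", 0), ("s1", "nernst-01", "1.07 V", 1)],
)
row = conn.execute(
    "SELECT question_id, AVG(1 - correct) FROM attempts GROUP BY question_id"
).fetchone()
# row -> ('nernst-01', 0.5): half of the recorded attempts on this question failed
```

Aggregating such failure rates over many students is one route to the data-driven difficulty assignments described above.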
CALM was developed from the outset with a "learning" rather than a "testing" pedagogy in mind. Within this context, students may attempt a question multiple times (until the deadline) without penalty in order to arrive at the correct answer. However, we now believe it valuable to supplement the existing "homework mode" with a "testing mode" in which students would have only one opportunity to answer each question correctly and must complete the test within a specified time period. As in homework mode, questions would be algorithmically generated and individualized to each student. We envision testing mode as a pre-test that students use to assess their preparedness for an upcoming exam. While this mode cannot supplant administered, proctored exams, we believe it is an important component presently missing from CALM, one that bridges the gap between homework and exams.
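Algorithmic individualization of questions can be made reproducible by seeding a random generator with the student and exam identifiers, so that a student who reloads the test sees the same numbers. A sketch under that assumption (the question template, identifiers, and grading tolerance are all illustrative):

```python
import random

CACL2_MOLAR_MASS = 110.98  # g/mol

def generate_question(student_id, exam_id):
    """Deterministically individualized parameters: the same student always
    sees the same numbers for a given exam."""
    rng = random.Random(f"{exam_id}:{student_id}")
    mass = round(rng.uniform(1.0, 5.0), 2)   # grams of CaCl2
    volume = rng.choice([100, 250, 500])     # mL of solution
    prompt = f"What is the molarity of {mass} g of CaCl2 dissolved in {volume} mL of water?"
    answer = round(mass / CACL2_MOLAR_MASS / (volume / 1000.0), 4)  # mol/L
    return prompt, answer

def grade_single_attempt(submitted, answer, tol=0.01):
    """Unlike homework mode, testing mode accepts exactly one response,
    graded here within a 1% relative tolerance."""
    return abs(submitted - answer) <= tol * abs(answer)
```

The single-attempt grading function, combined with a server-side time limit, captures the two constraints that distinguish testing mode from homework mode.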
A significant obstacle to the efficient generation of CALM questions is the lack of a tool that facilitates the programming of problems. In constructing the existing set of chemistry problems, we have learned that the code needed to "translate" questions into problems delivered to students requires far more programming knowledge (Perl, Java, etc.) than most instructors can be expected to acquire. Nor is it feasible to tie up a talented programmer's time with the task of typing in content supplied by instructors. Generation of science problems requires a combination of programming and content knowledge that both programmers and most faculty lack. An authoring interface is needed to simplify the design of problems for faculty/teaching assistants who are computer literate but do not have extensive programming experience.
This need for an authoring interface is particularly evident when it comes to the construction of 'directed-learning', multi-part questions. We envision that the first of several steps in this development is a graphical interface that instructors can use to write multi-part questions and describe the relationships between the different parts of a given problem. In effect, an instructor using this interface would map out a flow-chart-like structure among the questions that comprise parts of the larger question. The instructor would also define how a student progresses through this diagram based upon the particular answers the student provides. This interface would simplify the design of problems, particularly for instructors with limited programming experience. The required capability goes well beyond that of an HTML editor, since the dynamic relationships between the parts of a question must be specified. We have begun initial development of such a graphical user interface (GUI). While the most complex questions available in CALM might not lend themselves to an authoring tool, we have determined that it is feasible to develop authoring tools covering many of the existing types of CALM questions.
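The flow-chart structure an instructor would author amounts to a small directed graph: nodes are question parts, and edges specify which part to present next depending on the student's answer. A sketch with hypothetical part names (the data structure, not the GUI itself):

```python
# A multi-part electrochemistry question as a directed graph; "None" marks
# completion of the whole problem. All part names are illustrative.
flowchart = {
    "balance-redox":        {"correct": "cell-potential", "incorrect": "half-reaction-review"},
    "half-reaction-review": {"correct": "balance-redox",  "incorrect": "half-reaction-review"},
    "cell-potential":       {"correct": "nernst",         "incorrect": "cell-potential-hint"},
    "cell-potential-hint":  {"correct": "nernst",         "incorrect": "cell-potential-hint"},
    "nernst":               {"correct": None,             "incorrect": "nernst"},
}

def next_part(current, was_correct):
    """Follow the edge the instructor drew for this answer outcome."""
    return flowchart[current]["correct" if was_correct else "incorrect"]
```

A graphical authoring tool would let the instructor draw these nodes and edges directly, emitting such a structure without requiring any Perl or Java programming.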
At present, students work cooperatively (on their own initiative) to solve problems. We intend to develop a new problem-solving environment to encourage and monitor such cooperation. Members of small groups (~4 students) would first work individually on a set of individual but related problems. Then, as part of a 'jigsaw strategy'18, group members would work together on a common, complex problem whose solution requires correctly understanding the concepts presented in the individual problems. Because CALM tracks performance at both the individual and group stages of work, students are held accountable for their own work. Individual accountability is one of the basic principles of collaborative learning. The fact that solving the second-stage problem depends on their work on the individual questions builds in another characteristic of sound collaborative learning: positive interdependence.18
As student groups work cooperatively on the more complex problem, they might utilize electronic conferencing tools19,20 to facilitate their discussion. In large-scale scientific research and in international corporations, asynchronous and geographically dispersed collaboration is increasingly common. While such problems would comprise a minority of those the students encounter, they nevertheless form an important part of the learning experience.
It is also important to extend CALM to allow free-form textual answers. An example of this kind of question might be: "What happens when 1.0 M solutions of CaCl2 and Na2CO3 are mixed?" While this question could be posed as a multiple-choice question, that is not the most effective means of assessing student understanding. A great deal of science is descriptive; omission of this capability would be a significant deficiency. We have no intention of developing an automated means of grading such free-form textual answers, since automated grading is inconsistent with the detailed assessment of problem-solving skills. Instead, this feature helps faculty assess student understanding by collecting and presenting student responses in a way that facilitates their review, critiquing, and grading by the instructor.
As described below, we intend to implement CALM in mainstream physics courses as well as biology courses. Doing so would allow us to track a significant number of students who take these courses either contemporaneously or in subsequent semesters. When students take the courses contemporaneously, tracking them would allow us to recognize common problems. For example, a student who has difficulty in "setting up a problem" in one subject is likely to face the same difficulty in another; a student careless about units in chemistry is likely to be careless about units in physics. We propose to develop the capability of CALM to categorize the errors made by a particular student and present a summary of these errors to instructors. A student's summary CALM performance would be available to all instructors across disciplines, since remedying a weakness in one discipline is likely to help the student in many. For students who take CALM-based courses in subsequent semesters, it would be possible to track the evolution of a student's performance, given the development of appropriate assessment tools.
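The cross-disciplinary error summary described above reduces, in its simplest form, to tallying categorized errors per student across course boundaries. A minimal sketch, with a hypothetical error log and category names of our own choosing:

```python
from collections import Counter

# Hypothetical error log: (student_id, course, error_category)
error_log = [
    ("s1", "chemistry", "units"),
    ("s1", "physics",   "units"),
    ("s1", "chemistry", "setup"),
    ("s2", "physics",   "arithmetic"),
]

def error_summary(student_id, log):
    """Tally one student's error categories across all courses, so that a
    recurring weakness (e.g. units) surfaces regardless of discipline."""
    return Counter(cat for sid, _course, cat in log if sid == student_id)

summary = error_summary("s1", error_log)
# summary.most_common(1) -> [('units', 2)]: careless with units in both courses
```

An instructor viewing this summary in any course would see that the student's dominant weakness is unit handling, even if only one of that student's unit errors occurred in that instructor's own course.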
It should be understood that the development of new content, of an architecture capable of delivering sequences of adaptive questions, and of the assessment tools described above are inextricably intertwined. Development in one area will often expose previously unforeseen opportunities or drawbacks in another.