
Success: B2 or not B2 – that is the question

(The Ello Project - Étude longitudinale sur la langue orale)
Réussite : Être, ou ne pas être B2 : telle est la question (Le projet Ello - Étude longitudinale sur la langue orale)
Dan Frost and Jean O’Donnell

Abstract

Much of the syllabus in Applied Foreign Language (LEA) departments in French universities is devoted to LSP (Languages for Specific Purposes), especially language for business. We were aware of the lack of valid data concerning the level of the students’ language, both on arrival at the university and upon completion of their Bachelor’s degree. We noticed a discrepancy between the level expected of the students by the ministry (B2 on arrival according to the CEFRL, Goullier 2007) and the level which we observed in the classroom (A2-B1 on arrival), especially regarding oral production. The same ministerial texts specify C1 on graduation, but what tools are available to measure our students’ oral production? In this paper, we will present the Ello project (Étude longitudinale sur la langue orale), which compares self-, peer and expert assessment of oral production in English (video recordings) for an entire cohort over three years. The results of these assessments, together with the data obtained from questionnaires and interviews, enable us to explore the concepts of success and failure among our students.


Introduction

Like many research projects in this field, the impetus for the Ello project (Étude longitudinale sur la langue orale) came from observations in the classroom and from teachers’ comments in meetings and staff rooms. In the Applied Foreign Languages (LEA, Langues étrangères appliquées) department of the University of Savoie Mont Blanc, France, the following points were repeatedly raised: the level of the students’ language skills did not correspond to teachers’ expectations; many students were not personally involved in improving their language skills; a lack of awareness prevailed, not only among students but also among colleagues, about the potential use of the Common European Framework of Reference for Languages (CEFRL); and there was no reliable data available to track the progress of our students’ language skills over the three years they spent studying for a degree. Moreover, as considerable emphasis is placed on oral production within the LEA syllabus, this study was an attempt to address this particular issue.

Ello, carried out with local university funding, was consequently launched to address a wide range of pedagogical and research issues by creating a longitudinal database of students’ oral performances. Pedagogically, this database would be invaluable for learners and teachers alike. Learners would become aware of the level of their performances when confronted with accessible samples. Teachers could analyse these samples to gauge the effect of the courses taught. It would also provide crucial information from an institutional perspective, enabling courses to be adapted or tailored according to our students’ outcomes and needs. From a research perspective, the concepts of success/failure and motivation, as well as the students’ perception of these concepts in relation to their oral performances in English over their three-year degree, could be explored. We aimed to contribute to furthering research in the field of oral production, as there is a limited number of large-scale longitudinal studies currently available.

In this paper, after a brief discussion of the theoretical background and practical context of this study, we will outline the experimental protocol adopted. We will then present and analyse some of our results, mainly from the first cohort of students between 2011 and 2013.

1. Background and context


The Internet, especially in its current form, increasingly dominated by user-generated content, has led to our personal and professional successes and failures being displayed in the public arena. The use of evaluations of individuals and of institutions is increasingly prevalent, as witnessed by published league tables for universities, for example. In France, the marking system out of twenty, and the heavy reliance on marks in general in education, are currently being called into question by the French Ministry of Education itself¹.

1.1. Assessing oral skills

To measure one’s success in acquiring a second language, there are several broad yardsticks which both learners and teachers of foreign languages instinctively use. Although not always explicitly defined, one is a personal perception of the model of the native speaker’s language skills (Luoma 2004: 10). This can be the almost unattainable ideal to which both parties may frequently aspire in the language learning and teaching process. However, the text describing the CEFRL global scales clearly states that the highest level, C2, does not refer to native-like control of the language but rather to “the degree of precision, appropriateness and ease with the language which typifies the speech of those who have been highly successful language learners” (Council of Europe 2001: 36). Moreover, as John Osborne (2011a) interestingly points out in a study comparing native and non-native fluency in a controlled situation, even fluent native speakers can hesitate more than non-native speakers of the given language.

The other instinctive yardstick is to gauge one’s level based on how one speaks. Until recently, when it came to assessment/evaluation, speaking had received the least attention in terms of weighting within the French educational system. However, with the professional requirements of a globalized workplace and an increasing reliance on technology, there is growing interest in the assessment/evaluation and teaching of this skill and in research into its possible components.

Referred to variously as speaking, oral production, speaking proficiency, fluency, speaking skills, oral expression, communicative language competence or speech, the definitions and limits of this skill are not clearly established. What comprises fluency, for example, has long been a subject of debate, but it is generally agreed that the temporal, phonetic and acoustic features of speech, be it native or non-native, are usually associated with this concept. Accuracy and lexical components are also considered essential elements. Matti Koponen and Heidi Riggenbach (2000) point out that the concept of fluency is not used consistently. Sandra Götz (2013) proposes a model of fluency in English speech that takes in productive fluency, perceptive fluency and nonverbal fluency. John Osborne (2011b: 276) sets out to “identify which features, or combination of features, are common to more fluent speakers, and which are more idiosyncratic in nature”. For many, fluency is a component of a more general skill or proficiency. Alex Housen and Folkert Kuiken (2009) try to determine what makes an L2 user or a native speaker a more or less proficient language user by dissecting various researchers’ attitudes to the CAF (Complexity, Accuracy and Fluency) triad. Nivja De Jong et al. (2012), for example, analyse the componential structure of second language speaking proficiency and seek to produce a definition of this construct.

1.2. The CEFRL

The CEFRL is often thought of as being primarily a tool for measuring language proficiency, whereas this is in fact the last objective mentioned in the first paragraph of the CEFRL text (Council of Europe 2001). The CEFRL is not a perfect tool; however, it is gaining ground not only in Europe but throughout the world (Brudermann et al. 2012; Krumm 2007; McBeath 2011; McNamara & Elder 2010; Mejía 2012; Noriyuki 2009). A central part of the CEFRL is, of course, its set of descriptors of language performance. These descriptors are themselves simply guidelines and we are encouraged to adapt and rewrite them according to the context in which we work. As was apparent at the conference hosted by the University of Antwerp in 2013 entitled “Language Testing in Europe: Time for a New Framework?”, where several of the CEFRL’s authors were present, most of the criticisms levelled at the framework are due to teachers’ ignorance of how to use it as it was designed. It is a framework, not a set of stone tablets; it exists primarily to help language professionals and language learners achieve their goals more successfully, and to help us think about how and what we teach and learn.

1.3. France within the European context: some figures

Although the following studies do not measure oral production, we consider them relevant because they deal with French learners’ levels in English using the CEFRL; they give us a partial profile of French learners of English which the present study aims to complete.


The European Survey on Language Competences, published in 2013², is one of the few large-scale studies related to the level of language learners’ skills. It was carried out among 53,000 16-year-old pupils in 16 countries in Europe and concerns their listening, reading and writing skills. If we compare the results obtained by French pupils (Hilton 2001) and by those in Europe (overall) with regard to English, at a stage in their schooling when they are supposed to have reached B1, we can see that for:

- listening, 14% of French pupils have reached B1 or above, compared with 33% in Europe;
- reading, 13% of French pupils have reached B1 or above, compared with 30% in Europe;
- writing, 16% of French pupils have reached B1 or above, compared with 33% in Europe.


A French survey by ETS Global, among a smaller sample of 2,150 French high school students, found that for listening and reading skills in English, only 28.5% of students had reached B2 or above, i.e. the level required upon completing the Baccalauréat, despite having had 1,000 hours of language instruction³.

At university level in France, a study at Nantes University (Narcy-Combes & McAllister 2011) determined that the overall level of linguistic skills in English among a sample (159 students out of 660) of incoming 1st year LEA students was as follows:
- A1: 3%, A2: 26%, B1: 43%, B2: 23%, C1/C2: 5%.

On a more modest scale, an in-house study (Hilton 2001) related to listening and vocabulary revealed the following levels in English for students commencing first year LEA at the University of Savoie:
- A1: 29%, A2: 33%, B1: 25%, B2: 12%, C1: 1%, C2: 0%.

For non-specialist students studying English at French universities, Pierre Frath (2012) comes to the general conclusion that most students’ level in English is A2-B1 and that there is little improvement during their years of study.

The consensus is that success as defined officially does not appear to be a frequent phenomenon when it comes to learning English in a French educational context; however, these results were obtained from single assessments and did not include oral production. One of the main objectives of Ello is to fill this gap in the research data.

The status of LEA departments within the landscape of language teaching in France is rather a moot point. LEA departments typically teach two foreign languages applied to a particular domain, and the proportion and nature of the language content varies from course to course. In the 2011 document drawn up by the Commission formations under the aegis of the SAES, comprising representatives of several French LSP/LAP associations, LEA is not mentioned, whereas the terms Lansad (Langues pour spécialistes d’autres disciplines), LSp (Langues de spécialité) and ASp (Anglais de spécialité) are clearly defined. In a recent article, Michel Van der Yeught echoes the opinion of others in the field, stating:


There are many similarities between LEA (Applied Foreign Languages) and Lansad (Languages for Specialists of Other Disciplines ≈ LSP/LAP); however, LEA is not part of Lansad, as LEA departments primarily provide language degrees and as the professional domains to which these languages are applied are quite varied and do not result in teaching languages for specific purposes.⁴ (2014: 17)

1.4. Pedagogical considerations: noticing and motivation

While this study is in part an attempt to address the lack of data on oral production in this context, there are also pedagogical considerations. As our research questions in the next section make clear, we believe that our protocol may have a positive impact on the students’ motivation. Key to this belief is the notion of noticing.

When language learners are asked to watch their own oral performances in a foreign language and assess those performances, they are affected in several ways, on both a cognitive and an affective level; we hope this will also have an effect on their performance over time. On a cognitive level, being aware of one’s performance may involve the phenomenon referred to as “noticing” (Schmidt 1990, 2001, 2010), which has been much written about and studied (see Truscott 1998 for a critical review of noticing in SLA), including in a French university Lansad context (Guichon & Cohen 2012). The degree to which a learner’s attention must be drawn to a particular problem is an open question, but by asking learners to assess what they can do according to the CEFRL descriptors for oral production and interaction, we hope to have a positive impact. One may therefore talk not just about noticing one’s own language errors (or, as we prefer to say to our subjects, “noticing what you are good at and noticing where you may improve”), but about “noticing the gap” (Ellis 2003), i.e. the gap between a learner’s current performance and the performance they or others desire of them. Noticing this gap is the first step to developing the metacognitive skills and strategies which are essential for learning languages effectively (Oxford 1990; Ellis 2003).

“Noticing the gap” is a term which may also be applied to affective factors. In a self-assessment study involving learners watching videos of their own performances and seeing them evolve (or not) over time, the impact on their motivation is very evident in the qualitative data we have collected, as we shall see below. As Zoltán Dörnyei’s process model of motivation (1998, 2001) and his motivational self-system model (2005, 2009) make clear, motivation varies over time. Elizabeth Campbell and Neomy Storch’s (2011) study explores this theme; they discuss how noticing the gap between a learner’s actual L2 self and their “ideal L2 self” leads to changes in motivation over time. Most importantly, learners who are able to visualise their future ideal self clearly have greater motivation, despite the potentially demotivating factors associated with negative language-learning experiences in their current learning environment. In the French context, Claire Tardieu (2009) argues for the importance of developing learners’ motivation and confidence in their own competencies in all fields, and specifically in language learning, as this can lead to positive learning outcomes.

2. Research questions

As we saw above, the study began with a set of problems (the lack of an accurate and consistent measure of the students’ level, a need to improve oral English skills beyond B2, etc.) which we have attempted to address with a variety of assessment tasks and other measures.

The study has both pedagogical and research objectives. Firstly, our pedagogical aims are the following:

  • increase students’ awareness of level and progress through using the CEFRL;

  • increase students’ autonomy and investment in their own learning;

  • improve our colleagues’ awareness of the CEFRL;

  • encourage success by enabling students to improve their oral production through noticing their strengths and identifying areas they can build on.

The research questions which we explore in this study are as follows:

  • R1. What are the students’ actual levels in oral production?

  • R2. How will expert, peer and self-assessments compare?

  • R3. How does our protocol impact on students’ attitudes and motivation and their perception of success?

3. Methodology

3.1. Why a longitudinal study?

The research protocol used in this study bears many of the hallmarks of an action-research programme (Catroux 2002). It is also a longitudinal study and does not involve the development of pedagogical resources per se. A longitudinal protocol was chosen in order to measure the changes in students’ performance, attitudes and motivation over the three years. The decision to apply the protocol to an entire cohort was mainly an ethical consideration, as it would be unfair to give students different opportunities and teaching in these areas, but the question of having as large a corpus of data as possible was also a consideration. We are interested to see whether the subjects can accurately assess their own performances and those of their peers; we expect their performances to improve over three years, but we also hope that, with time, their ability to assess themselves and their peers will improve. The students have their own perceptions of whether they are succeeding or failing, which usually (although not always) correspond to their institutional grades. The motivation of any given individual varies over time (see the process model of motivation, Dörnyei 1998, 2001, 2005, 2009): the current paper addresses the motivation of the subjects at a given time and future publications will examine this aspect in more depth.

3.2. The use of the CEFRL

In this study we chose to use the CEFRL to examine the success and perceived success of the subjects based on the assessment of their oral proficiency for two reasons.

Firstly, as it stands, in our university and in universities and schools all over France, grades are all out of twenty and there is little or no agreement on what these grades mean in terms of actual language level. Moreover, a student may compensate for failed modules by averaging them with higher marks in other modules, be those modules other languages or even non-linguistic subjects. We therefore consider, both as teachers and as researchers, that the CEFRL is an important and useful tool for measuring success in language learning and for achieving success, for learners, for teachers and for all professionals for whom measuring language proficiency is necessary. Secondly, apart from the fact that transcribing and analysing the componential structure of hundreds of oral recordings is extremely time-consuming and not a feasible solution in our current situation, the holistic-style descriptors of the CEFRL, which require one to attribute an overall level to a given recording, lend themselves to assessment in a classroom situation as they do not involve complicated coding or statistics.
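As a purely illustrative aside on the compensation mechanism mentioned above, the following minimal sketch shows how a failed language module can disappear into a passing average; the module names, marks and equal weighting are invented for the example.

```python
# Invented example: compensation between modules, all marked out of 20
# and assumed here to carry equal weight.
marks = {"English oral": 8, "Spanish": 13, "Economics": 12}

average = sum(marks.values()) / len(marks)
print(f"Average: {average:.1f}/20")          # Average: 11.0/20
print("Pass" if average >= 10 else "Fail")   # Pass, despite the failed English module
```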

As we saw in the previous section, the descriptors are intended to be modified according to the context in which they are used. This was the approach adopted for the Ello project, using as a starting point the descriptors developed by our colleagues on the WebCEF project (Bijnens 2009).

Finally, although we will refer to the level of a given production in this article, we are constantly aware of the fact that, for each student assessed, the video recording is a sample of a performance at a given time, or “usage on particular occasions” (McEnery & Wilson 2001), and might not, for various reasons, reflect the true level of competence of the student under scrutiny.

3.3. Data collection


The study is planned to follow three consecutive cohorts for three years each, making a total of five years of data collection⁵. Each subject makes two recordings per year: one monologue and one interaction. A typical cohort is composed of approximately 160-200 subjects participating in the first year, 80-90 in the second year and 60-70 in the third year, which produces the following corpus over the full five years:

Table 1. Number of students and number of recordings per cohort and per year.

| Year  | N° of 1st year subjects | N° of 2nd year subjects | N° of 3rd year subjects | N° of monologues | N° of interactions | Total videos |
|-------|-------------------------|-------------------------|-------------------------|------------------|--------------------|--------------|
| 2011  | 129                     |                         |                         | 129              | 65                 | 194          |
| 2012  | 194                     | 90                      |                         | 284              | 142                | 426          |
| 2013  | 157                     | 75                      | 61                      | 293              | 147                | 440          |
| 2014  |                         | 100                     | 70                      | 170              | 85                 | 255          |
| 2015  |                         |                         | 70                      | 70               | 35                 | 105          |
| TOTAL | n = 480                 | n = 265                 | n = 201                 | 946              | 473                | 1419         |

There are two data collection phases each year, the first centred on making the recordings and the second on the grading, with questionnaires on language ability and motivation at the beginning of the first year and a questionnaire on attitudes and perceptions accompanying each grading phase at the end of each year.

In order to obtain speaking samples from the participants, two different activities were selected. The “monologue” consists of describing a short video (a television advertisement) with a storyline, and the “interaction” in pairs consists of a conversation on a topic of interest to this age group. Both tasks proved to be successful in eliciting oral production in the WebCEF research programme.

Students were filmed with webcams and digital microphones using the Windows Movie Maker software, in their usual multimedia classroom setting with their regular teachers. All recordings were uploaded to Moodle at the end of each session and were also backed up on an external hard drive. The only exception to the recording protocol was for the 28 subjects who were studying abroad for their third year; we asked these subjects by email to record only monologues and not interactions. From a technical perspective, the quality of these recordings made using laptops, tablets or smartphones was often better than that of the videos made in the multimedia laboratories using 8-year-old computers. “Phase 2” for Erasmus students (grading and questionnaires) was completed using Moodle, as for the rest of the cohort.

Complementary information, to be used at a later date and not within the scope of this article, was obtained for each student and includes online language-motivation and language-profile questionnaires, the Oxford Placement Test listening test, the Cambridge FCE listening test and the Dialang placement, listening and vocabulary tests.

Prior to assessing the videos, students were familiarised with the CEFRL scales during a two-hour session and practised applying them. During a further two-hour session, each student evaluated a series of recordings using the scales: a) his/her monologue; b) his/her interaction; c) his/her partner’s interaction; d) five monologues representative of the first five levels of the CEFRL. The students were not informed that the samples represented the first five levels. The data obtained from these evaluations and from those of the experts were subjected to a preliminary analysis.
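To give a concrete idea of how such assessment records can be organised for later comparison, here is a minimal sketch in Python; the field names and the CEFR-to-number mapping are our own illustrative assumptions, not the actual Ello data model.

```python
from dataclasses import dataclass

# Hypothetical ordinal mapping of CEFR levels (A1=1 ... C2=6), used only to
# make assessments comparable numerically.
CEFR_ORDER = {"A1": 1, "A2": 2, "B1": 3, "B2": 4, "C1": 5, "C2": 6}

@dataclass
class Assessment:
    """One rating of one recording (field names are illustrative, not the Ello schema)."""
    subject_id: str   # anonymised student identifier
    year: int         # 1, 2 or 3 (year of the degree)
    task: str         # "monologue" or "interaction"
    assessor: str     # "self", "peer" or "expert"
    level: str        # CEFR level attributed, e.g. "B1"

    def as_number(self) -> int:
        # Ordinal value, convenient for computing agreement or progression later.
        return CEFR_ORDER[self.level]

# Example: the same recording rated by the student and by an expert.
ratings = [
    Assessment("S042", 1, "monologue", "self", "B2"),
    Assessment("S042", 1, "monologue", "expert", "B1"),
]
print([r.as_number() for r in ratings])  # [4, 3]
```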

The expert assessments were carried out initially by a group of six teacher-researchers, including the authors, in order to achieve a consensus on the application and reliable use of the descriptors. For the remainder of the project, each video was assessed by the authors.

In order to explore some of the points raised by the quantitative and qualitative data obtained from the various questionnaires, semi-structured interviews were carried out with a sample of the subjects after the first three years of the study. The interviews were conducted in French by junior members of staff not known to the subjects; the atmosphere was informal and each interview lasted about ten minutes. Of the nine subjects chosen, three were A2, three were B1-B2 and three were B2 or above at the beginning of the study. The subjects were often very candid about their performances, opinions and attitudes, both positive and negative.

The entire data collection process is composed of the elements described below.

Data collection phase 1 (1st and 2nd year students: September; 3rd year students: December):
- Questionnaire: language-learning experience (1st year students only)
- Questionnaire: language-learning attitude and motivation (1st year students only)
- Dialang: listening and vocabulary tests (1st year students only)
- Oxford Placement Test and Cambridge FCE listening test (1st year students only)
- Video recordings (1st, 2nd and 3rd year students)

Data collection phase 2 (end of spring term):
- Grading videos (1st, 2nd and 3rd year students):
  * Expert assessment (all monologues and interactions)
  * Self-assessment (subjects grade their own monologues)
  * Self-assessment (subjects grade their own interactions)
  * Peer assessment (subjects grade their interaction partners)
- Peer assessment (subjects grade 5 sample monologues, A1-C1)
- Questionnaire: “How this makes me feel” (1st, 2nd and 3rd year students)
- Semi-structured interviews of a sample of 9 subjects (for later case studies)

In the next section, we will present and discuss some of our more salient results, both quantitative and qualitative, in an attempt to shed some light on the concepts of success and failure within the context of this study.

4. Results and discussion

4.1. Quantitative data

The quantitative data concerning the monologues will be presented as follows:

  • expert assessments for 1st year cohorts in 2011 and 2012;

  • longitudinal data from 2011 to 2013; 1st year, 2nd year and 3rd year cohorts over their three-year degree;

  • samples of trajectories from 2011 to 2013;

  • and comparison between self-, peer and expert assessments for 1st year cohort in 2011.

Figure 1, below, represents the profiles of 1st year students upon arrival in 2011 and 2012 respectively and their level in spoken English (monologue) according to expert assessments.

Figure 1. 2011 and 2012 1st year students’ levels: monologues (expert assessments). 2011, n=104. 2012 n=149.

Clearly, most of the students failed to meet the ministerial guidelines, i.e. they were below B2 according to expert assessments: 89% were B1 or below in 2011, and 78% in 2012. More than 50% were A2 or below, a figure which is surprising when one considers not only that the subjects had studied English for at least 8 years at school, but also that they had chosen to study languages at university level. Stated more positively in terms of success, 11% had reached the B2 level stated in the official ministerial guidelines in 2011, and 23% in 2012.

It must be noted that this study only examines oral production and therefore paints a slightly more negative portrait of the students’ level than we may find in other French studies examining all the skills. However, our results are in keeping with those of other European studies (such as the European Survey on Language Competences mentioned above), where French learners are consistently placed among the least successful linguists in Europe.

Figure 2. 2011-2013 students’ levels: monologues (expert assessments), full cohort. 1st year 2011, n=104. 2nd year 2012, n=80. 3rd year 2013, n=61.

Figure 2 represents the breakdown of the level of students’ oral monologues according to expert assessments for one cohort over a complete three-year degree cycle, from 2011 to 2013. Although, once again, the levels reached do not correspond to official guidelines, there is a gradual positive shift over this period: 42% reached B2 in 3rd year.


This impression of overall gradual improvement led us to analyse the trajectories of the cohort over the three-year period. How many students actually improved? How many stagnated? Did any regress? Theoretically, with six levels and taking all possible combinations and permutations of succeeding, stagnating and regressing into account, there are potentially 46,656 trajectories⁶. There are 720 potential trajectories for improving⁷. Of the 42 subjects for whom we had a complete dataset from 1st year through to 3rd year, there were in all 17 different trajectories. The most frequent were the following:
- B1 → B2 → B2 (6 subjects)
- B1 → B1 → B1 (5 subjects)
- A2 → B1 → B2 (4 subjects)

From the above we can see that the most frequent case (six subjects) is to start out at B1 in 1st year, remain at B1 in 2nd year and then progress to B2 in 3rd year. The second most frequent case (five subjects) is to remain at B1 throughout the three years. The third most frequent trajectory was to move from A2 in 1st year to B1 in 2nd year and then to B2 in 3rd year (four students). Of the remaining 14 trajectories, there was overall a pattern of progression, while one showed regression. Interestingly, the rare C1s upon entry in 1st year (2 students) were also C1 in 3rd year. The experts assessed one of the latter 3rd year performances as C1+, but although the “+” category was used for ratings, we did not take it into consideration in our calculations, in order to provide clearer overall portraits of the subjects. It must be remembered that the CEFRL document suggests the inclusion of “+” levels from A2 to B2 only. It may be argued that the “+” does not have the same value at the top end of the scale, because it takes longer to progress at more advanced levels.
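To illustrate how such trajectory counts can be tallied once yearly expert levels are available, here is a minimal sketch; the subject IDs and levels are invented for the example and the helper function is ours, not part of the Ello tools.

```python
from collections import Counter

# Invented example data: expert-assessed CEFR level per subject and per year
# (subject IDs and levels are illustrative only).
levels_by_subject = {
    "S001": ["B1", "B2", "B2"],
    "S002": ["B1", "B1", "B1"],
    "S003": ["A2", "B1", "B2"],
    "S004": ["B1", "B2", "B2"],
}

def trajectory(levels):
    """Represent a subject's three yearly levels as a single trajectory string."""
    return " -> ".join(levels)

# Count how many subjects follow each distinct trajectory, keeping only
# subjects with a complete three-year dataset.
counts = Counter(
    trajectory(levels) for levels in levels_by_subject.values() if len(levels) == 3
)

for traj, n in counts.most_common():
    print(f"{traj}: {n} subject(s)")
# B1 -> B2 -> B2: 2 subject(s)
# B1 -> B1 -> B1: 1 subject(s)
# A2 -> B1 -> B2: 1 subject(s)
```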

The next step was to compare the expert assessments of the monologues with the self-assessments. For the 1st year students (2011-12), 46% of the self-assessments were exactly the same as those of the experts. A total of 93% fell within one CEFRL level of the expert assessments in 1st year, and 97% in 3rd year. It would seem that students are competent users of the CEFRL, in particular when it comes to assessing their own performances, and that they might become slightly more proficient over time.
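As an indication of how such agreement figures can be computed once the levels are available, here is a minimal sketch comparing two lists of ratings; the mapping of CEFR levels to numbers and the example data are assumptions made for the illustration.

```python
# Ordinal mapping of CEFR levels, assumed for the purposes of this sketch.
CEFR_ORDER = {"A1": 1, "A2": 2, "B1": 3, "B2": 4, "C1": 5, "C2": 6}

def agreement(self_levels, expert_levels):
    """Return (exact agreement, agreement within one CEFR level) as percentages."""
    pairs = list(zip(self_levels, expert_levels))
    exact = sum(s == e for s, e in pairs)
    within_one = sum(abs(CEFR_ORDER[s] - CEFR_ORDER[e]) <= 1 for s, e in pairs)
    n = len(pairs)
    return 100 * exact / n, 100 * within_one / n

# Invented example: self-assessments vs. expert assessments of the same monologues.
self_ratings = ["B1", "B2", "A2", "B1", "B2"]
expert_ratings = ["B1", "B1", "A2", "B2", "C1"]
exact_pct, within_one_pct = agreement(self_ratings, expert_ratings)
print(f"Exact: {exact_pct:.0f}%, within one level: {within_one_pct:.0f}%")
# Exact: 40%, within one level: 100%
```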

Figure 3. 2011 and 2012. 1st year students’ levels: monologues (expert, self-predicted and actual self-assessment).

Comparing the students’ perception of their level before and after viewing their performance revealed that they initially overestimated their level slightly and consequently adjusted it. Figures for the 1st year students’ 2011 monologue show that 21% (self-predicted) thought they had reached at least B2 before they viewed their performance. After viewing the recordings, this dropped to 13% (self-real). However, the experts judged that only 11% (expert) were at least B2. 64% (self-predicted) thought they were at least B1 before viewing themselves, and this dropped to 54% (self-real) after viewing; the experts found 49% were at least B1. Figures for the 1st year students’ 2012 monologue show that 15% (self-predicted) thought they had reached at least B2 before they viewed their performance. After viewing the recordings, this dropped to 11% (self-real). However, the experts judged that 23% (expert) were at least B2. 64% (self-predicted) thought they were at least B1 before viewing themselves, and this dropped to 48% (self-real) after viewing; the experts found 52% were at least B1. The self-real assessments for 2011 were still slightly inflated, whilst those for 2012 were rather harsh. No strict pattern emerges over the two years for the level of adjustment.

Figure 4. 2011. Correct peer assessment of 5 monologues A1-C1 n=74 (assessors).

As with self-assessment, peer assessment also proved accurate. For the 1st year students (2011-12), 87% of the peer assessments (of the five sample levels) fell within one CEFRL level of those carried out by the experts. In 3rd year (2013), this figure was 88%. More precisely, for the 1st year students (2011), 57%, 53% and 49% respectively gave exactly the same levels as the experts for the A2, B1 and B2 samples. At both ends of the scale, peer assessments were the least accurate, with 70% rating the A1 subject as A2 and 16% rating the C1 as C2, so subjects appear better able to assess peers within or close to their own level.

4.2. Qualitative data

The qualitative data in this study comes from two sources: questionnaires and semi-structured interviews. All subjects completed a language profile and a motivation questionnaire upon entering the first year, but the questionnaire which interests us in this paper is the one administered immediately after the subjects watched their own performances and those of their peers in years one, two and three of the three-year licence degree course. The data with which we are concerned comes from the following items:

  • 8. This process (listening to ourselves speaking English in a monologue and an interaction, evaluating ourselves, comparing ourselves with others) is useful;

  • (7-point Likert scale: well below average / below average / slightly below average / average / slightly above average / above average / well above average);

  • 9. Why? (vous pouvez répondre en français si vous le voulez) (open answer);

  • 12. How do you feel when you listen to your own videos? (Vous pouvez répondre en français si vous le voulez) (open answer);

  • 13. Comments/commentaires? (Vous pouvez répondre en français si vous le voulez) (open answer).


Let us begin with item 9, which depends on the answer to the previous item. All of the answers were put in a text file which was run through a concordancer to determine which words came up most frequently. These words were then grouped into three categories, “can’t do”, “can do” and “awareness”⁸, according to the inherent semantics of the key words and the context in which they occurred. As mentioned above, the marking system in France is based on a mark out of twenty: the pass mark is usually ten (la moyenne) and very often the mark is obtained by simply deducting marks from twenty. With many students in France, especially language learners, this negative marking scheme leads to much reticence when it comes to using the language, especially in oral production (Taillefer 2007; Frost & O’Donnell 2013). It is therefore interesting to note that the subjects found the experience more positive than negative, as can be observed in figure 5 below.
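For readers interested in reproducing this kind of count, the sketch below shows one simple way of tallying word frequencies and grouping them into categories; the answers and word lists shown are illustrative assumptions, not our actual data or the concordancer we used.

```python
import re
from collections import Counter

# Invented sample answers; in the study the real answers were examined with a
# concordancer so that each occurrence could be checked in context.
answers = [
    "It helps me to notice my mistakes and improve.",
    "Je peux voir mes erreurs et progresser.",
    "It makes me aware of my level, I want to improve.",
]

# Illustrative key-word categories (a small subset, mixing English and French,
# as the subjects answered in either language).
categories = {
    "can't do": {"mistakes", "erreurs", "bad", "ashamed"},
    "can do": {"improve", "progress", "progresser", "better"},
    "awareness": {"aware", "notice", "realize"},
}

# Tokenise and count all words across the answers.
tokens = [w.lower() for a in answers for w in re.findall(r"\w+", a)]
freq = Counter(tokens)

# Tally frequencies per category.
totals = {cat: sum(freq[w] for w in words) for cat, words in categories.items()}
print(freq.most_common(5))
print(totals)  # {"can't do": 2, 'can do': 3, 'awareness': 2}
```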

Figure 5. Number of positive / negative / “awareness” (“noticing”) words from subjects’ answers to items 8 and 9 (“How do I feel…?” questionnaire): why they do / do not find this procedure useful.

The words in question are detailed in table 3 below.

Table 3. Word lists from subjects’ answers to item 9, (“How do I feel…?” questionnaire): why they do / do not find this procedure useful.

| can't do     | n  | can do     | n   | awareness  | n  |
|--------------|----|------------|-----|------------|----|
| mistakes     | 26 | improve    | 37  | rend       | 7  |
| bad          | 13 | Progress   | 31  | rendre     | 6  |
| difficile    | 9  | better     | 27  | realize    | 4  |
| ashamed      | 7  | plus       | 20  | reflect    | 4  |
| erreurs      | 7  | améliorer  | 9   | aware      | 3  |
| horrible     | 6  | meilleur   | 6   | comprendre | 3  |
| bas          | 5  | progrès    | 7   | comprends  | 3  |
| awful        | 3  | motivation | 7   | critical   | 3  |
| weaknesses   | 3  | amélioré   | 3   | realise    | 3  |
| fautes       | 3  | improved   | 3   | réflexion  | 3  |
| stressed     | 3  | evolution  | 2   | reflète    | 3  |
| disappoint   | 3  |            |     |            |    |
| disappointed | 2  |            |     |            |    |
| déteste      | 2  |            |     |            |    |
| hate         | 2  |            |     |            |    |
| Total        | 94 | Total      | 152 | Total      | 42 |

The words are a mixture of French and English, as we left the choice of language up to the subjects so that they could express their thoughts and feelings more easily. The variety of words used to express negative feelings was impressive, but the total number of positive words (in positive contexts) was still considerably higher. A representative comment on why most subjects found the protocol we adopted useful is the following statement:


“It permits us to notice our mistakes, so we can improve better by knowing what is wrong.”⁹

There is clearly noticing, as defined by Schmidt, at work here, at least in the perception of the many subjects who made similar comments.

For item 12, in which we openly asked the subjects about their feelings, we compiled a list of positively and negatively connoted words and ran them through the concordancer, taking the context into account. The results may be seen in table 4 below.

Table 4. “How do you feel when you listen to your own video?” (“How do I feel…?” questionnaire, item 12).

| Word         | n  | Word       | n |
|--------------|----|------------|---|
| bad          | 31 | awful      | 7 |
| ashamed      | 23 | horrible   | 7 |
| good         | 19 | weird      | 7 |
| progress     | 25 | honte      | 7 |
| improve      | 16 | happy      | 5 |
| mistakes     | 16 | ridiculous | 5 |
| disappointed | 15 | shame      | 5 |
| mal          | 9  | stress     | 5 |
| difficile    | 8  | ameliorer  | 5 |
| hard         | 8  | amelioré   | 5 |

The number of times that the words “bad” and “ashamed” appear in the answers is quite remarkable, as is how few positive words we found at all. On a much more encouraging note, however, many subjects reported that the performance which they perceived as poor, often accompanied by feelings of shame, spurred them on to greater things. The following is a typical comment:

“I feel embarassed and unconfortable, but it helps me to reach an higher level.”

The subjects who made this sort of observation were not only noticing the points where they needed to improve, but also, as Dörnyei puts it, the gap between their current L2 selves and their ideal L2 selves. In accordance with Campbell and Storch’s findings discussed in section 1, this visualisation of their future L2 selves, according to them, increased their motivation to improve.

When we ran the data from the final item (item 13, “comments”) through the concordancer, the most frequent content words were quite a mixed bag, as may be seen in table 5 below.

Table 5. Word frequency list compiled from subjects’ answers to the item “comments” (“How do I feel…?” questionnaire, item 13).

| Word     | n  | Word        | n  |
|----------|----|-------------|----|
| level    | 28 | progrès     | 10 |
| good     | 25 | bad         | 9  |
| improve  | 22 | vocabulary  | 9  |
| progress | 18 | vocabulaire | 9  |
| better   | 17 | bon         | 7  |
| useful   | 14 | utile       | 7  |
| evaluate | 12 | évaluer     | 7  |

Regarding our third research question (R3: How does our protocol impact on students’ attitudes and motivation and their perception of success?), there were, on balance, far more positive terms than negative ones, and again, many subjects expressed the idea of increased motivation arising from a potentially negative experience, thanks to noticing the gap between their current and future L2 selves, as evinced by the following comment:

“This exercise is very terrible for us but I think it's very useful to improve our English again and again.”

Conclusion

Although this study is limited to one particular context, we feel that the size of the cohorts and the study’s longitudinal nature lend the results a little more weight than many smaller-scale studies assessing speaking. We also feel that the added dimension of students actually seeing themselves enhances the effects of noticing as discussed in part 1.

The first and most obvious conclusion from our data is the yawning gap between the level that French students should have according to the Ministry of Education upon completing their Baccalauréat, especially those intending to study applied foreign languages, and the actual level of oral production we observed in this context. As in many French universities, our department offers elective courses to help bring weaker language students up to speed, but perhaps we should consider making these courses obligatory, based on entrance test results, as some universities do. And what of the stronger students? Our data shows that they make little, if any, progress, so perhaps we are doing them a disservice. Or perhaps it is time to be slightly less ambitious and more modest when setting our goals. For example, we could aim to ensure that a student moves up at least one level of the CEFRL scales over the three years spent in LEA.

We can say that the subjects in this study have achieved success on several fronts: they have become familiar with the CEFRL scales and can apply them very accurately, not only to their own performances but also to those of their peers. They have, in other words, become successful assessors. This finding supports what many experts in the field have found (see e.g. O’Sullivan 2012), namely that, with appropriate training, language learners are much better at assessing their own level than some people think. But perhaps most important are the effects of this process on the subjects’ motivation; despite many negative reactions upon viewing their performances, there is very often enhanced motivation to improve when subjects notice not only their strengths and weaknesses, but also the gap between their current L2 selves and the L2 selves they would like to become.

The data which we have compiled over the three years of this study constitutes an enormous and multifarious corpus, much of which warrants further exploration. First, throughout the preliminary analysis of our data, several interesting issues arose which require further investigation. There appears to be a “comfort zone” within which subjects are more accurate assessors. This zone would seem to be around the A2/B1 level, which is, not surprisingly, where the majority of students are situated when starting out, so perhaps they best recognise and can identify with the characteristics of performances similar to their own. Next, the concept which the authors term “partner clemency” is also worth exploring: although not discussed in this paper, in the peer evaluations of interactions we noted at times a tendency for partners to over-evaluate each other, which we will investigate further. Finally, we did not have sufficient subjects to confirm the hypothesis that subjects with high-level performances (C1) upon arrival in first year do not progress greatly beyond this point over their three years, but the samples at our disposal and the results of the semi-structured interviews certainly give this impression. The corpus of oral productions is also an extremely valuable resource for comparison with other data and for calibrating other assessment tools for various aspects of spoken discourse, and we are currently working on several such projects.

As the data collection phase of this study draws to a close, we are acutely aware of why more studies of this sort are not undertaken. Issues such as the amount of time we have devoted to compiling the data, the technical problems we have encountered and the management of a large corpus containing so many video files would make us think hard before undertaking such a study again, at least not without considerable funding and more manpower. However, we will certainly incorporate many elements of this study into our teaching, as the results we obtained by training our subjects to assess using the CEFRL and requiring them to watch and assess their own performances were extremely positive for nearly all the subjects. We would therefore encourage any teachers who have access to computers, microphones and webcams (or who can, failing that, simply ask students to use their own laptops, tablets or smartphones) to do likewise. The students will become more aware of their own strengths and weaknesses, will be motivated to improve, and will become more autonomous and more effective language learners.


Bibliography

Bijnens, H. (ed.). 2009. WebCEF. Collaborative Evaluation of Oral Language Skills through the Web. Leuven: AVNet, K.U.Leuven.

Brudermann, C., C. Demaison & F. Benderdouche. 2012. « Le CECRL : un outil pour construire une politique des langues ? Retour d’expérience sur l’évaluation et la certification à l’UPMC (2009/2011) », Recherche et pratiques pédagogiques en langues de spécialité, 31/3 : 31-41.

Campbell, E. & N. Storch. 2011. “The changing face of motivation: A study of changes to second language learners’ motivation over time”, Australian Review of Applied Linguistics, 34/2: 166-192.

Catroux, M. 2002. « Introduction à la recherche-action: modalités d’une démarche théorique centrée sur la pratique », Les Cahiers de l’APLIUT, 21/2 : 8-20.

Council of Europe. 2001. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge: Cambridge University Press.

De Jong, N.H., M.P. Steinel, A.F. Florijn, R. Schoonen & J.H. Hulstijn. 2012. “Facets of speaking proficiency”, Studies in Second Language Acquisition, 34: 5-34.

Dörnyei, Z. 1998. “Motivation in second and foreign language learning”, Language Teaching, 31: 117-135.

Dörnyei, Z. 2001. Teaching and Researching Motivation. Harlow: Longman.

Dörnyei, Z. 2005. The Psychology of the Language Learner. Mahwah, New Jersey: Lawrence Erlbaum Associates.

Dörnyei, Z. 2009. “The L2 motivational self-system”. In Z. Dörnyei, & E. Ushioda (eds.), Motivation, Language Identity and the L2 Self. Bristol: Multilingual Matters, 9-42.

Ellis, R. 2003. Task-based Language Learning and Teaching. Oxford: Oxford University Press.

Frath, P. 2012. « Évaluation des étudiants non-spécialistes en langues à l’aide du CECR », Les Langues Modernes, 1 : 56-64.

Frost, D. & J. O’Donnell. 2013. “Combatting the ‘can’t do mentality’: expert, peer & self-assessment in a French university context.” In J. Colpaert, M. Simons, A. Aerts & M. Oberhofer (eds.), Proceedings of the 2nd International Conference “Language Testing in Europe: Time for a New Framework?” May 2012. Antwerp: University of Antwerp, 104-109.

Götz, S. 2013. Fluency in Native and Nonnative English Speech. Amsterdam: John Benjamins.

Goullier, F. 2007. « Le Cadre européen commun de référence pour les langues, instrument de normalisation ou document instrumentalisé pour une normalisation de l’enseignement et de l’évaluation ? », Les Cahiers de l'APLIUT, 26/2 : 12-22.

Guichon, N. & C. Cohen. 2012. “Enhancing L2 learners’ noticing skills through self-confrontation with their own oral production performance”, Les Cahiers de l’APLIUT, 31/3 : 87-104.

Hilton, H. 2001. « La Compétence lexicale des étudiants français en anglais L2 : quelques éléments ». Atelier « Acquisition en milieu universitaire », Congrès de la SAES (Montpellier, 5 mai 2001).

Housen, A. & F. Kuiken. 2009. “Complexity, accuracy, and fluency in second language acquisition”, Applied Linguistics, 30/2: 461-473.

Koponen, M. & H. Riggenbach. 2000. “Overview: varying perspectives on fluency”. In H. Riggenbach (ed.), Perspectives on Fluency. Ann Arbor: University of Michigan Press, 5-24.

Krumm, H-J. 2007. “Profiles instead of levels: The CEFR and its (ab)uses in the context of migration”, The Modern Language Journal, 91/4: 667-669.

Luoma, S. 2004. Assessing Speaking. Cambridge: Cambridge University Press.

McBeath, N. 2011. “The Common European Framework of Reference for Languages: learning, teaching, assessment”, Arab World English Journal, 1: 186-213.

McEnery,  T. & A. Wilson. 2001. Corpus Linguistics. Edinburgh: Edinburgh University Press.

McNamara, T. & C. Elder. 2010. “Beyond scales”. In A. Liddicoat & A. Scarino (eds.), Languages in Australian Education: Problems, Prospects and Future Directions. Cambridge: Cambridge Scholars Publishing, 193-201.

Mejía, A-M. de. 2012. “Perspectives from Colombia”. In M. Byram (ed.), The Common European Framework of Reference: The globalisation of language education policy, 23. Clevedon: Multilingual Matters, 140-157.

Narcy-Combes, M-F. & J. McAllister. 2011. “Evaluation of a blended language learning environment in a French university and its effects on second language acquisition”, ASp, 59: 115-138.

Noriyuki, N. 2009. « L’impact du Cadre européen commun de référence pour les langues dans l’Asie du Nord-Est: pour une meilleure contextualisation du CECR ». Revue japonaise de didactique du français, 4/1 : 54-70.

O’Sullivan, B. 2012. “Assessing speaking”. In C. Coombe, P. Davidson, B. O’Sullivan & S. Stoynoff (eds.), Second Language Assessment. Cambridge: Cambridge University Press, 234-246.

Osborne, J. 2011a. “Oral learning corpora and assessment of speaking skills”. In A. Frankenberg-Garcia, L. Flowerdew & G. Aston (eds.), New Trends in Corpora and Language Learning. London: Continuum, 181-197.

Osborne, J. 2011b. “Fluency, complexity and informativeness in native and non-native speech”. International Journal of Corpus Linguistics 16/2: 276-298.

Oxford, R. 1990. Language Learning Strategies: What Every Teacher Should Know. Boston: Heinle & Heinle.

Schmidt, R. 1990. “The role of consciousness in second language learning”, Applied Linguistics, 11: 129-158.

Schmidt, R. 2001. “Attention”. In Robinson, P. (ed.). Cognition and Second Language Instruction. Cambridge: Cambridge University Press, 3-32.

Schmidt, R. 2010. “Attention, awareness, and individual differences in language learning”. In W. M. Chan, S. Chi, K. N. Cin et al. (eds.), Proceedings of CLaSIC 2010, Singapore, December 2-4. Singapore: National University of Singapore, Centre for Language Studies, 721-737.

Taillefer, G. 2007. « Le défi culturel de la mise en œuvre du Cadre européen commun de référence pour les langues : implications pour l’enseignement supérieur français ». Les Cahiers de l’APLIUT 16/2 : 33-49.

Tardieu, C. 2009. « Corriger ou évaluer », Recherche et pratiques pédagogiques en langues de spécialité : Cahiers de l’APLIUT, 28/3 : 9-25.

Truscott, J. 1998. “Noticing in second language acquisition: a critical review”, Second Language Research, 14/2 : 103-135.

Van der Yeught, M. 2014. « Développer les langues de spécialité dans le secteur LANSAD – Scénarios possibles et parcours recommandé pour contribuer à la professionnalisation des formations », Recherche et pratiques pédagogiques en langues de spécialité : Cahiers de l’APLIUT, 33/1 : 12-32.


Notes

1 http://www.education.gouv.fr/cid80650/lancement-de-la-conference-nationale-sur-l-evaluation-des-eleves.html.

2 http://ec.europa.eu/languages/policy/strategic-framework/documents/language-survey-final-report_en.pdf

3 http://www.capital.fr/content/download/959695/5258801/version/1/file/Abstract%20-%20Observatoire%20TOEIC%202009%20des%20niveaux%20d%27anglais%20en%20France.pdf

4 Our translation for : « Les synergies entre les LEA et le LANSAD sont nombreuses, mais les LEA ne sont pas assimilables au LANSAD car elles restent des formations de langues et parce que les domaines professionnels auxquels elles sont appliquées sont assez divers et n’aboutissent pas à une formation en langues de spécialité. »

5 The Ello study was in its third year at the time this paper was submitted.

6 6^6 i.e. 6*6*6*6*6*6 = 46656.

7 6! (factorial 6) i.e. 6*5*4*3*2*1= 720.

8 NB. We used the term “awareness” here, because a subject may be aware of an issue, but this is not necessarily “noticing” as defined by Schmidt 1990, 2001, 2010.

9 Students’ comments are left in their original version and may therefore contain language mistakes.


References

Bibliographical reference

Dan Frost and Jean O’Donnell, “Success: B2 or not B2 – that is the question”, Recherche et pratiques pédagogiques en langues, Vol. XXXIV N° 2 | 2015, pagination in progress.

Electronic reference

Dan Frost and Jean O’Donnell, “Success: B2 or not B2 – that is the question”, Recherche et pratiques pédagogiques en langues [Online], Vol. XXXIV N° 2 | 2015, online since 30 June 2015. URL: http://journals.openedition.org/apliut/5195; DOI: https://doi.org/10.4000/apliut.5195


About the authors

Dan Frost

Dan Frost studied languages and linguistics at the University of York in the UK and at Strasbourg and Aix-en-Provence in France. His doctorate is in English for Specific Purposes and Teaching Theory. He taught English in the UK, Thailand and Sweden before settling in France, where he taught in secondary schools and then ESP & EAP for 10 years in the IT department of Grenoble IUT. He was a senior lecturer in the Applied Foreign Languages department of the University of Savoie in Chambéry, France from 2010 to 2014 and now works at the University of Grenoble. His main research interests are teaching pronunciation, oral English and computer-mediated learning.
daniel.frost@u-grenoble3.fr

Jean O’Donnell

Jean O’Donnell studied at St Patrick’s College, Maynooth, Ireland, where she graduated with a primary degree in mathematics and languages (French/Irish) and obtained the national Irish teaching qualification for second-level education. This was followed by a period in France as an assistante, maître de langue and ATER. She obtained a degree in English, a Masters in Applied Languages, the CAPES in English and a French doctorate in Applied Linguistics from the University of Stendhal, Grenoble 3. She is now a senior lecturer (maître de conférences) in the Department of Applied Foreign Languages (LEA) at the University of Savoie in Chambéry, France. Her main research interests are language testing and computer-mediated language teaching and learning.
jean.o-donnell@univ-savoie.fr


Copyright

The text and other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
