Assignment #3
TV as Language Teacher?
Please note: the computer programs used in this exercise will only work on the PC (Windows) operating system; Mac users should collaborate with classmates who have a PC.
You may work in groups for this assignment, but individual work is encouraged so that you get hands-on experience with the program.
There are two goals to this exercise. The first is to develop an appreciation of the complexity of vocabulary: function words (the, of, at, that), common content words (hat, sleep, soda, room), and academic words (define, analyze, process, interpret). (In the future, we will also visit another category, disciplinary words [circumference, genre, conjugation, inflation, force].) The second is to give you a valuable software tool to help you analyze the vocabulary components of texts. We will conduct this exercise in the context of two questions: what kind of exposure to English vocabulary does watching TV provide, and does the type of program that students watch matter in supporting their English development?
The first thing to do is to go to the class syllabus and download the file labelled Nation vocabulary program (click to begin download). This is a zipped folder; unzip it and save it in your directory of choice. Once unzipped, the folder should contain the following:
Now we are ready to fly.
First, try a frequency analysis of words in one of the texts. Choose any that suits your fancy. Open the FREQUENCY program by double-clicking on the icon, which should get you into the program. Go to FILE => Open, search in the DATA folder for a file you would like to analyze, and choose it. This should place the file name in the Files to Process box. Before you hit the Process Files button, you need to tell the program to save the results into a file [forgetting this step will result in an error!]. So, go back to FILE => Save and give the output file whatever name you like, e.g., Glenn Frequency Test1. Now hit Process Files, and the right part of the dialog box should tell you that the job is finished. Your results should be in your destination file (Glenn Frequency Test1 or whatever you indicated earlier), which can be viewed through your word processor.
Inspect the output by opening the output file (e.g., Glenn Frequency Test1). There should be a list of words with their frequency of occurrence, sorted by frequency rank. You do not need to print this out, but scroll up and down the file, noting what words appear frequently, and what words occur just once or a few times.
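If you are curious what a frequency analysis like this is doing under the hood, here is a minimal sketch in Python (the sample sentence is made up; the actual FREQUENCY program has its own tokenization rules):

```python
from collections import Counter
import re

def word_frequencies(text):
    # Split into lowercase word tokens (letters and apostrophes only)
    words = re.findall(r"[a-z']+", text.lower())
    # Count each word and sort by descending frequency
    return Counter(words).most_common()

sample = "The cat sat on the mat. The mat was flat."
for word, count in word_frequencies(sample):
    print(word, count)   # "the" appears 3 times, "mat" 2 times, the rest once
```

The list you see in your output file is essentially this: every distinct word in the text, ranked from most to least frequent.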
Do another frequency run, using one of the Art Mann shows [Drunken Beach Bask?]. You should clear the left-hand side of the dialog box before selecting the file. You should also give a new destination file name, again by going to FILE => Save and giving it a new name, e.g., Drunken1. Then hit Process Files and inspect the contents of the output.
Compare the frequency outputs of the Glenn Commission and the Art Mann show. What can you say about the frequencies of the different words in the two shows? What can you say about the less frequent words? List some (up to 3) conclusions about the vocabulary in the two different shows, and write them down on a single page (PowerPoint slide, or whatever). Bring this to class.
Now, try the RANGE program. This program counts frequencies of words and word families (variants of the same word, e.g., analyze, analysis, analytic) in the text file that are contained in the BASEWRD files. [The BASEWRD files we are using are academic words, but note that you can tailor this file to suit any purpose, for example, you can create your own list of discipline-specific words, or vocabulary words that you plan to teach in ELD].
Open the RANGE program, and as in the frequency analysis, choose a file to analyze (for starters, choose the Glenn commission file, which has the densest concentration of academic words), designate a destination file where the output will be saved, and hit Process Files. The destination file will add "_range" to the name you give it. Open up this file, and take a look. The first set of information in the output tells you the file that was read, i.e., that it processed the Glenn Commission press conference.txt file. It then tells you how many lines it read, and how many words it counted (should be 11376). Then it tells you that it searched and counted the occurrence of words and families from three BASEWRD files.
Then there is information about how many words it found of each word type, how many word families, and how many words not in the families on the BASEWRD files. [Word type distinguishes between related words within the family, e.g., it separately counts analysis from analytic, whereas word family collapses across these related words.] The first list of frequencies reports on TYPES, and begins with:
Types Found In Base List One
This tells you how many types from BASEWRD1 were found. Note that it counts constituents and constituencies separately. It then does the same for BASEWRD2 and BASEWRD3.
Scroll down further to
LIST OF FAMILY GROUPS
Pay attention just to the column that says FAFREQ. This shows the frequency of occurrences of family members of the word (you can look at the manual for additional information if you are really curious -- we will get to F1, which should be the same as FAFREQ, later when we compare texts).
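To make the TYPE vs. family (FAFREQ) distinction concrete, here is a toy sketch in Python. The family groupings below are invented for illustration; they are not the contents of the actual BASEWRD files:

```python
from collections import Counter

# Hypothetical word-family map: each variant points to its headword
FAMILY = {
    "analyze": "analyze", "analysis": "analyze", "analytic": "analyze",
    "define": "define", "defined": "define",
}

tokens = ["analysis", "analytic", "analysis", "define", "cat"]

# TYPE counts: each variant is counted separately
type_counts = Counter(t for t in tokens if t in FAMILY)

# Family counts (like FAFREQ): variants collapse into one headword
family_counts = Counter(FAMILY[t] for t in tokens if t in FAMILY)

# Words not found in any list (like the alphabetized leftover list)
not_in_list = [t for t in tokens if t not in FAMILY]

print(type_counts["analysis"])   # 2: analysis counted apart from analytic
print(family_counts["analyze"])  # 3: analysis (x2) + analytic together
print(not_in_list)               # ['cat']
```

So a TYPE tally keeps analysis and analytic separate, while the family tally rolls them up under one headword, which is why the family frequencies are always at least as large.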
The program then spits out an alphabetized list of words that were not in the 3 BASEWRD files.
A convenient feature of the RANGE program is that it compares multiple files. So, we can compare the academic word characteristics of the Glenn commission and one of the Art Mann Presents shows. To do this, clear the dialog box, and choose the two (or more) files that you would like to compare -- these files should again appear in the left portion of the dialog box as you select them. The output now should have columns that enable you to compare across the files (F1, F2). You should readily see the differences in the frequencies of academic words in these programs. Play around with comparing different text files, and if you like, find some texts from your own data or from the web. What can you say about the use of academic words in the programs? Write this down, and bring the page to class.
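The multi-file comparison amounts to running the same count on each file and lining the results up side by side. A rough sketch, using a made-up stand-in word list and made-up snippets of text in place of the real transcripts:

```python
from collections import Counter
import re

# Stand-in for a BASEWRD academic word list (invented for illustration)
ACADEMIC = {"analyze", "define", "interpret", "process"}

def academic_counts(text):
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w in ACADEMIC)

# Hypothetical file contents; real input would be the transcript files
texts = {
    "glenn.txt": "We must analyze and interpret the data, then define standards.",
    "artmann.txt": "Dude, that wave was huge.",
}

counts = {name: academic_counts(t) for name, t in texts.items()}
for word in sorted(ACADEMIC):
    # The two count columns mirror the F1, F2 columns in the RANGE output
    print(word, counts["glenn.txt"][word], counts["artmann.txt"][word])
```

Each word gets one row with a frequency column per file, which is exactly what makes the academic-word gap between the two shows easy to spot at a glance.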
One more feature, which you are welcome to try: You can change the number of BASEWRD files that the program inspects. Notice on the dialog box for the program that there is a setting for the number of BASEWRD files it analyzes, where it says "Number of Baseword Files". The default is 3, i.e., the program counts words in the first 3 BASEWRD files, i.e., BASEWRD1, ..2, ..3, but you can set it for up to 10. Try re-setting and re-running the program. The output should be longer, counting through all 10 BASEWRD files.
Based on this exercise, do you recommend watching TV to your students for purposes of English language development?
In a future exercise, you will be asked to analyze for academic words on some of your own textbook materials, so do not delete this folder of programs.
Evaluation Criteria:

Evaluation Criteria | Score (5 pts. each) | Comments
Relevant conclusions about the frequencies of vocabularies in the two shows compared | |
Relevant conclusions about use of academic words in the two or more shows compared | |
Last updated January 25, 2006 by Kenji Hakuta