Author Archives: Maria Gouskova

Gouskova 2021

Gouskova, Maria. 2021. Phonological asymmetries between roots and affixes. Submitted to the Blackwell Companion to Morphology, Eds. Peter Ackema, Sabrina Bendjaballah, Eulàlia Bonet, and Antonio Fábregas.

This review surveys the phonological asymmetries between roots and non-roots (affixes, clitics). It starts with an extraphonological, structural definition of roots, and considers those non-phonological properties that are phonologically relevant: they are easily borrowed, and they are most deeply embedded. The empirical portion of the review concentrates on templaticism and size restrictions, asymmetries in segmental contrast/inventories, the properties of multi-root words (compounds), and accentual characteristics that differ between roots and affixes. The theoretical section surveys theories that account for these properties: Prosodic Morphology, Positional Faithfulness, the cycle and its analogs, and Anti-Faithfulness. I then critically review several recent and not-so-recent proposals that blur the line between affixes and roots, using the ‘root’ designation diacritically or recasting diacritic distinctions as structural distinctions. The concluding section discusses the role of roots in phonological learnability.

Comments Off on Gouskova 2021

Filed under Uncategorized

Gouskova and Bobaljik 2021

Gouskova, Maria and Jonathan David Bobaljik. 2021. Russian baby diminutives: Heading toward an analysis. Manuscript, NYU and Harvard. [pdf]

The Russian suffix -onok has two functions. First, it can be a size diminutive in nouns denoting baby animals. Second, it can be an evaluative with a dismissive/affectionate flavor. Various grammatical properties of this suffix differ between the two uses: gender, declension class, and interaction with suppletive alternations, both as target and trigger. We explore a reductionist account of these differences, on the assumption that there is a single vocabulary item that may realize either a head or a non-head morpheme. In doing so, we attempt to spell out theoretical assumptions that would be needed to reduce the observed grammatical differences to this structural distinction, and to situate our account with relation to other current approaches to diminutives.

Comments Off on Gouskova and Bobaljik 2021

Filed under Uncategorized

Sounds of the World

These are some resources for phonetics students who want to know what languages have certain sounds, how these sounds are produced, and where in the world the languages are spoken.

  • World Atlas of Linguistic Structures: this is a resource on linguistic typology–classification of languages according to various characteristics. There is a page listing features of interest, and the atlas can be searched for specific language names, as well. Here is the page on the velar nasal, for example:

As with any typological resource, it is a good starting point, but you should always look at primary sources for further research.

  • The UCLA Phonetics Vowels and Consonants page: A classic resource that goes with Peter Ladefoged’s books A Course in Phonetics and Vowels and Consonants. For many languages, there are audio files of minimal pairs illustrating unusual contrasts. The audio was often recorded in the field so the quality is sometimes fuzzy. A newer version of the same materials can be accessed on Keith Johnson’s website.
  • International Dialects of English Archive: this has recordings of English speakers reading the same two texts. For American dialects, there are multiple speakers from each state, and their age and some other demographic information is given:

screenshot of dialect archive page

  • Articulatory IPA: A great collection of short MRI, ultrasound, and schematic videos illustrating various sounds.

  • Illustrations of the IPA: From the Cambridge University Press Journal of the International Phonetic Association, a series of articles that do sketches of individual languages’ sound systems. Search the journal contents by language name or sound type. Many of the articles are open access, and they come with high-quality audio files that go with the transcriptions. To see the audio files for an article, click on its “Supplementary Materials” tab.
  • UPSID: the UCLA Phonological Segment Inventory Database. This is one of the older databases, with just 451 languages, but it is supposed to be balanced geographically and genetically (that is, related languages are not overrepresented). It’s a good starting point for researching the typology of sound inventories.

Comments Off on Sounds of the World

Filed under Uncategorized

PhoNE workshops

PhoNE (Phonology in the NorthEast) is the current incarnation of a series of annual workshops, mostly on phonology, which have been meeting on the East Coast for over two decades.

Historically, the names were acronyms based on the participating schools:

RUMMIT was the Rutgers-UMass-MIT phase of the meeting. This name was used from 2009 until 2014 or so.
UMMM was the UMass-MIT Meeting on phonology, a.k.a. MUMM. These names were used 2008-9.
HUMDRUM stood for “Hopkins, U of Maryland, Rutgers, Umass”. This name was used in 2000-2009.
RUMJCLaM was the “Rutgers-UMass Joint Class Meeting”. Before that, RUMD. These names were used in the 1990s.

Here are the locations and dates of previous meetings. Corrections are welcome, and thanks to Juliet Stanton for help in tracking these down!

2020: NYU
2019: Yale, April 13
2018: MIT, March 31
2017: UMass, April 8
2016: NYU, April 9
2015: Yale, April 2
2014: MIT, April 26
2013: UMass, April 6
2011: Rutgers, May 16
2010: MIT, December 4
2009: UMass, November 1
2009: MIT, May 9
2008: UMass, November 22
2008: MIT, March 29
2008: Rutgers, April 26
2006: Hopkins
2006: MIT
2005: UMass
2004: Rutgers
2002: UMass
2001: Hopkins
2000: Rutgers
1999: Rutgers
1998: UMass (RUMJClaM)
1997: MIT (as Bay and Berkshires Phonology)

Comments Off on PhoNE workshops

Filed under Uncategorized

Features in phonology

This is a 30-page overview of phonological features, which I wrote for the phonology classes I teach at NYU. It is intended to be accessible to both undergraduates and graduates; I usually ask the undergrads to read sections 1-4 and 9, and the grads to read the whole thing. If you would like to cite this review in your work, refer to it as follows:

Gouskova, Maria. 2016. Features in Phonology. [pdf] Ms., New York University.

Comments Off on Features in phonology

Filed under Uncategorized

Drawing linguistic structure trees

The LaTeX way

I use LaTeX to make my trees—the qtree package for simple trees, and xyling for anything more complicated like prosodic structures, Hasse diagrams, etc. For detailed instructions and links, see the LyX Wiki for Linguists. This page explains not only how to do syntax trees but also how to draw moraic structures, how to include IPA in a syntax tree, and so on.
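
For example, here is a minimal qtree sketch of a tree (assuming `\usepackage{qtree}` in your preamble): node labels are prefixed with a dot right after the opening bracket, leaves are plain words, and closing brackets need a space before them.

```latex
% in the preamble: \usepackage{qtree}
\Tree [.S [.NP [.N Trees ] ]
          [.VP [.V grow ] [.PP in apps ] ] ]
```

The bracketing mirrors the structure directly, so once you can read labeled brackets, you can type trees about as fast as you can think of them.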

Standalone applications

There are several standalone programs, both webapps and desktop apps, that you can use to enter tree structures, and they will render the structures as pictures for you. I like phpSyntaxTree: given the input [S [NP [N Trees]] [VP [V grow] [PP in apps]]], it produces a PDF with the following image.

php syntax tree demo

It also supports Unicode now, so you can use IPA in your trees as well as math symbols in your labels.

For desktop apps specific to your OS, do a search–there is definitely stuff out there for Windows and Macs, although there may no longer be support for the apps because they tend to be a labor of love sort of thing.

The Fingerpainting Method

In Libre Office, Microsoft Office, and no doubt other programs, you can draw the trees by tabbing the words into position and adding the lines using the line drawing tool. Here is what I got when I did this in Libre Office. (In order to make it look like that, I had to individually right-click on every line and change the color to black, because it defaults to blue and I couldn’t stand that and I could not be assed to figure out where the defaults are.) So, as you can see, the method is slow and ugly, but it has some benefits.

libre office line drawn tree

  1. It works when you do not have an internet connection.
  2. The tree is entirely contained in your document and uses the same fonts as your text, so even though you get an ugly tree, the fonts match.
  3. There is no need to worry about embedding fonts in your PDF file, unlike the Arboreal method (next).
  4. You do not have to learn even the rudimentary bracketing syntax that phpSyntaxTree and others require, so if you are really afraid of any sort of structure and notations, this method is for you. Then again, if you are this afraid of structure, you should probably give up because linguistics might not be for you.

Arboreal and Moraic

This is an old approach to drawing trees. You tab or space your words into position and switch to the Arboreal (or Moraic) font, which provides characters that look like lines at different angles. There’s a triangle or two. The output looks more consistent than the fingerpainting method above. Notice that the screenshot on the right has the red spellchecker underline–it’s because the lines are actually characters, and the LibreOffice spellchecker treats them as (ill-formed) text. The red lines would not be visible in the PDF.

This method has many downsides, however.

  1. The fonts are proprietary and cost $20 each.
  2. These predate Unicode so modern computers don’t know what to do with them unless they have been converted to pictures first. So these fonts need to be explicitly embedded in the PDF or else they render as capital Gs, etc.
  3. The limitations of the characters restrict what types of trees you can draw—for example, the triangles only come in a few sizes.
  4. I spent about 10 minutes hunting and pecking around the keyboard looking for the right characters to show you, and the font kept switching back to the default text font. The key map on my OS did not even know how to render these characters, so I was flying blind.


Comments Off on Drawing linguistic structure trees

Filed under international phonetic alphabet, trees, tutorials

International Phonetic Alphabet fonts and keyboards


This page explains how to set up International Phonetic Alphabet (IPA) fonts and keyboard layouts on your computer.

Keyboard layouts allow you to enter IPA characters directly by hitting key combinations, as explained below. I work with the IPA a lot, so I find keyboard entry indispensable.

Which font to get?

  • Most modern computers should already have some fonts that can display IPA characters. But what you sometimes see is some characters appearing in a different font than the rest of the text, like this:


  • This is because many fonts only have a partial Unicode character set, which covers the standard (Latin alphabet) ASCII set but not much more. Your computer will substitute a more comprehensive font if its default font lacks the IPA characters. On Mac OS, this is Lucida Grande, for example.
  • I like Linux Libertine and Linux Biolinum. The fonts are freely distributed under a GNU Public License, and they work on any OS.


  • Another popular font is Doulos SIL. See the SIL webpage for details. I think it’s kind of ugly.

How to use the fonts: Web interface

  • This is the slow but universal way. It even works on iOS and Android.
  • Go to an online IPA chart site. You can point and click on any IPA symbol with your mouse or finger, and the character will appear in the text box.
  • From there, you can copy and paste the transcriptions into your document writing program, then change the font of the doc to Linux Libertine if you like, and you’re done.


OS-specific Methods

These are much faster and more efficient. Invest some time into learning them and you will save yourself time in the long run.


Windows

  • Check out this page, ipa4linguists. It explains how to use the Character Map, and also covers the various quirks of Windows.
  • I found one IPA keyboard layout for Windows. I cannot vouch for this thing, but if it works, that is by far the most efficient way of working with IPA fonts (see the next section on Macs).

Mac OS

The IPA-SIL keyboard layout

  • These instructions should be current as of Mac OS 10.8x and 10.9x.
  • I use the IPA-SIL keyboard layout, which I am providing for download here since SIL is no longer distributing or supporting it. The zip file includes a PDF with more detailed documentation.
  • The keyboard layout allows you to type in IPA without using “dead keys” (keystroke sequences that turn into a single character, for example, typing “i” and then “=” gives you “ɪ”). Ain’t nobody got time for that.
  • Here is the workflow:
    • hitting Cmd+Space switches the input method from English to IPA-SIL.
    • Once in IPA-SIL mode, you can type normal lowercase Latin characters without doing anything.
    • If you press the Shift key while typing s, i, f, d, t, q, n, you get  ʃ, ɪ, ɤ, ɾ, ð, θ, æ, ŋ respectively (The keyboard map shows you which keys do what, though you do not need to use the keyboard map to enter the keys):

What your keyboard will type in IPA-SIL when "Shift" is depressed

  • Pressing Alt+Shift accesses these:

Alt+Shift on the IPA-SIL keyboard layout

  • And Alt alone accesses these:

Alt on the IPA-SIL keyboard

  • For anything else, you can use the Character Viewer on Mac OS. Just open it from your Input Methods menu (which can be enabled from Settings>Keyboard).

Installing the IPA-SIL Keyboard

  • There are more detailed instructions in the PDF inside the zip file linked above.
  • The basic procedure is:
    • put the IPA-SIL.keylayout file into Users/yourusername/Library/Keyboard Layouts. Put the IPA-SIL.icns icon file there as well.
    • then enable the IPA-SIL keyboard by going to Settings>Language & Text > Input sources. Scroll down the long list of keyboards until you see IPA-SIL; if you don’t see it, you might need to restart the machine. Then check it.
    • Check on “Show input menu in menu bar”.
  •  Mac OS likes to take away features, especially ones that allow you to customize the system. If you cannot see the Library folder inside your Users/yourusername directory, go to your favorite search engine and look for the solution that is specific to your version of Mac OS.


Linux

  • The only method I was able to get going on Linux is ipa-x-sampa. It’s reasonably user-friendly and does not require a million steps that result in failure, like SIL’s Keyman thing. In order to use ipa-x-sampa, you need to enable IBus, and then install the table:
    sudo apt install ibus-table-ipa-x-sampa

    If that fails, there is also a Character Map type of thing that you can use.

  • It was surprisingly difficult to figure out how to enter some non-IPA characters that linguists use on Linux. The crucial bit (on Linux Mint, anyway, but probably others too) is to enable a “Compose Key” in your keyboard layout. Go to Preferences>Keyboard>Layouts, select “English (US)”, then Options, then find “Position of Compose Key”. I chose Caps Lock for mine. This allows you to type various characters like é and ü by pressing the Compose Key together with ‘ or “, etc. A list of all the default key combinations enabled in Linux can be downloaded as a tab-separated text file here.

iOS and Android

  • Go to your “App store” or whatever and search for “IPA keyboard”, or “IPA phonetics”.

Comments Off on International Phonetic Alphabet fonts and keyboards

Filed under international phonetic alphabet, tutorials

Praat tutorial


  • Praat is a freely available program written by Paul Boersma and David Weenink.
  • It is primarily intended for acoustic analysis of speech, but it has some additional functions such as speech synthesis and some constraint-based grammar learners. It can even run some basic perceptual experiments.
  • The program is very powerful and has many features, with new ones being added all the time. There are only a few features that a beginning phonetician would need; this tutorial covers them.

Installing Praat

  • Go to the Praat website and follow the instructions for your operating system.
  • Mac users–drag the file into your Applications folder. You may then add a link to the program onto your dock so you can enjoy looking at this icon every day.


  • Not enough space on your disk? This is an issue that Chrome OS users sometimes report. (Chrome OS stores everything in the cloud so the machines often have very little physical storage.)
    • Try to free up space on your machine. I would start with your browser’s data, which can be huge.
    • Chrome OS runs Praat inside a Linux installation, which you might already have. If not, see the instructions here for how to do it.
    • My Linux installation of Praat is 25MB, plus another 25 for the no-GUI version (my installation is on Linux Mint, not inside Chrome OS, so your mileage might vary).

Praat basics

The two windows

  • When you open Praat, two windows appear: the Objects window and the Picture window. You won’t need the Picture window most of the time, so close it. We’ll come back to it at the end of this tutorial.
  • The Objects window starts out empty, but once you open sound files and manipulate them, it will contain sounds, spectrograms, text grids and any other objects that you work with:

The Praat object window on a Mac.

  • Important: the objects in the Object window are temporary and only exist in Praat’s working memory. If you change the content of an audio file using Praat, it won’t automatically save the changes. If you try quitting without saving the objects, Praat will prompt you to do so.

Opening, playing, recording, and editing audio files in Praat

Opening an existing sound file

  • Open Praat, click on “Open”, then “Read from file”. You will see a “Sound” object appear in the window, which you can then “View and Edit”.
  • In Mac OS, you can also drag your audio file or files onto the Praat icon. All of the files will then appear as sound objects in the list at once. See if your OS supports drag-and-drop opening of files.
  • Depending on the length of the recording, you will see either the waveform with an empty window below it or a waveform above the spectrogram.

Converting stereo to mono

  • If you are seeing two waveforms, your file is in stereo (was recorded with two microphones):


The two lines of black squiggles labeled “Channel 1” and “Channel 2” are your two stereo channels.


Here, we extracted just one channel (the top one, recorded with the “left” microphone). Now we have a mono sound.

  • For speech analysis, you do not need stereo, since the vast majority of humans have only one mouth.
  • To get a mono file, you can extract one of the audio channels, like this:
  1. Return to the Objects window.
  2. Select the stereo Sound object.
  3. Click on “Convert”.
  4. Select “Extract one channel”. Unless the two channels are really different from each other, you can just accept the default, “1/left” channel.
  5. The new object will have the same name as the old but with “_ch1” appended at the end. Don’t forget to save it if you want to use it again.
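
What “Extract one channel” does can be sketched in a few lines of Python (an illustration of the idea, not Praat’s actual code): a stereo signal is a sequence of (left, right) sample pairs, and extracting the “1/left” channel just keeps the first member of each pair.

```python
# A stereo recording as (left, right) sample pairs; the values are made up.
stereo = [(0.1, 0.4), (0.2, 0.5), (-0.1, 0.3)]

# "Extract one channel" with the default "1/left" setting keeps channel 1.
mono = [left for (left, right) in stereo]

print(mono)  # [0.1, 0.2, -0.1]
```

The result is half the data, which is one more reason to go mono for speech work: smaller files, same information.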

Recording an audio file right into Praat

  • You can record right into Praat, as long as your computer has a built-in microphone. Most likely the recording will not be of awesome quality, but it’s fine for practicing with the program.
  • To record a sound, click on “New>Record Mono Sound”, and hit “Record” in the window that opens. You can accept all the defaults in that window.
  • One tip about recording: if you are using your laptop, you might not know exactly where the microphone is on it. I have no idea where the mic is on my laptop, actually. I just leaned in and talked close to the laptop. Here is the resulting recording of me saying a sentence in Russian, [napʲisənə lʲdotʲexnʲikə pʲatʲ ras] “The word ‘ice technology’ is written five times.”


A waveform and spectrogram of a sentence I recorded straight into my MacBook Air using Praat.

Recording: a note about clipping

  • When you record audio for speech analysis, you want the signal to be as loud as possible without exceeding the range of your microphone’s sensitivity.
  • Look at the black number in the upper left-hand corner of the screen, next to the waveform. Your recording should get as close to 1 as possible, but the waveform should not protrude above it. If the amplitude of the recording exceeds the range of the microphone, you get clipping.
  • A clipped recording is missing parts of the signal, and it sounds awful. Avoid.
  • Here is what clipping looks and sounds like. I had to pretty much yell at my laptop to get this to happen, so you’ve been warned. Your Praat recording widget has a meter display that stays green while you’re in good range and turns red when you are in the clipping range.

The last "clipping" is clipped. See how the waveform extends outside the waveform window?
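
Clipping is also easy to detect digitally: in a normalized recording, samples live in the range −1 to 1, and a clipped recording has samples pinned at (or right next to) the extremes. Here is an illustrative Python sketch (the function and threshold are my own, nothing Praat-specific):

```python
def is_clipped(samples, threshold=0.99):
    """Flag a recording as clipped if any sample is pinned near the extremes.

    Samples are assumed to be normalized to the range [-1, 1].
    """
    return any(abs(s) >= threshold for s in samples)

clean = [0.0, 0.5, -0.6, 0.8]    # stays comfortably inside the range
yelled = [0.0, 0.99, 1.0, -1.0]  # pinned at the edges: clipped

print(is_clipped(clean))   # False
print(is_clipped(yelled))  # True
```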

Playing audio

  • Once you have a Sound object, you could just hit “Play”. Usually, we want to play only portions of a file, sometimes repeatedly as we try to transcribe or determine the boundaries of a segment.
  • To play portions of a file, click on “View & Edit”, and make a selection with your mouse.
  • The playback options are in the “View” menu. Yep. I actually had to look for this just now because I usually play back the selection using the Tab key. Tab will also stop playback. Shift+Tab plays the visible window.
  • For Mac users: I’ve used a Mac for over a decade now but I still cannot keep track of the little symbols that apps use for keys. Here is a reference.

Editing a file

  • There are many things you can do to edit a file. Perhaps the most basic function, and one that you might find useful long after this class ends, is to cut out parts of a file.
  • First, open the Sound object of your file in the View & Edit window.
  • Make a selection you want to keep, using the mouse.
  • If you want to make a really neat cut, you can “Move start of selection to nearest zero crossing”–this is an option at the bottom of the “Select” menu. Then do the same for the end of the selection. This adjusts the selection so that it starts and ends at points where the waveform crosses zero amplitude, which avoids audible clicks at the cut points.
  • Then click on “File”, and you have several options here.
    • You can put the selected sound into its own sound object, if you want to keep doing things to it (“Extract selected sound”, either preserving the time markings from the original file or resetting them to zero seconds).
    • You can also save the file to disk. There is a range of options, but a .WAV extension is the basic one.
  • The options above do not alter the original file or the Sound object in Praat’s memory.
  • If you want to modify the Sound object or the file, you can cut a portion of it out–useful if you have a long period of silence, or if you want to make someone say “got” instead of “Scott” or whatever. This is done via “Edit>Cut”.
  • Once you cut a portion out, it is placed on your clipboard (computer’s working memory); if you then save the Sound object to the original file again, the file will be permanently altered. If you do not want that, save it under a new name instead.
  • You’ll see other options in the menus, which are more or less self-explanatory. Feel free to play around with them, and remember that nothing is permanent until you save to disk.
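
The zero-crossing trick can be sketched in a few lines of Python (an illustration of the idea, not Praat’s implementation): scan the waveform for points where adjacent samples change sign, and snap the cut point to the closest one.

```python
def nearest_zero_crossing(samples, index):
    """Return the sample index closest to `index` where the signal crosses zero.

    A crossing is a point where adjacent samples change sign (or a sample is 0).
    Returns None if the signal never crosses zero.
    """
    best = None
    for i in range(len(samples) - 1):
        if samples[i] == 0 or (samples[i] < 0) != (samples[i + 1] < 0):
            if best is None or abs(i - index) < abs(best - index):
                best = i
    return best

wave = [0.2, 0.5, 0.1, -0.3, -0.2, 0.4]
print(nearest_zero_crossing(wave, 1))  # 2: the sign flips between 0.1 and -0.3
```

Cutting at such points means the edited pieces start and end at zero amplitude, so they join without an audible pop.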

Viewing spectrograms, pitch tracks, formants

  • Praat can only display spectrograms for relatively small chunks of audio, so if you want to see a spectrogram for a word, zoom in on it.
  • You can select a part of the recording with the mouse, and then use the View menu to zoom to that selection. The View menu is fairly self-explanatory.
  • There are keyboard shortcut hints in the View menu and many other places in Praat! Use them. I use Cmd+N to view the selections on Mac OS.
  • Here is a waveform and a spectrogram of a female Russian speaker (not me) saying [napʲisənə lʲdotʲexnʲikə pʲatʲ ras] “The word ‘ice technology’ is written five times.” This sentence is a bit over 2 seconds long.


A waveform and a spectrogram of a 2-second Russian sentence.

Making a spectrogram look good

  • If you are working with a fresh install of Praat, your spectrograms most likely will look a lot more gray than the ones you see above. This is because the dynamic range is set very high in Praat by default–at 70 dB. You want something like 30-50 dB for a recording that has some background noise.
  • The obligatory metaphor: Dynamic range refers to how low the cut-off is for the volume of frequencies that the spectrogram visualizes. The lower the number, the less you see. Think of it as scooping water out of a pond with a bucket. The deeper you dip, the more muck you’ll scoop up. If your pond (=recording) is very clean, then you can dip pretty low (i.e., have a high number in your dynamic range). If your pond is mucky and dirty, then you had better skim from the top (i.e., have a low number in your dynamic range).
  • Of course, just because you are skimming from the top doesn’t mean you have clean water. Here is what the laptop audio I recorded looks like with the defaults. You can clearly see two bands of air conditioner noise, the lower of which is around 2400 Hz. This kind of noise really interferes with acoustic analysis of speech:

The same recording of me saying that "ice technology" sentence, with a default dynamic range of 70 dB. The two bands of noise are from the air conditioner in the background.

  • To set the dynamic range, click on “Spectrum>Spectrogram settings”. Change it in 5 dB increments until it looks good.
  • You can also change how high the frequencies go in the spectrogram display. The default is 0-5000 Hz. You can expand it quite a bit–some fricatives have noise at frequencies above 12000 Hz.

Viewing pitch tracks, intensity, and formants

  • Pitch.
    • This is pretty simple. While you have the Sound object open, click on “Pitch>Show pitch”. You will see a curvy blue line appear in the spectrogram window.
    • In Pitch Settings, click on “drawing method” and select “speckles”. I think it looks better than Praat’s default, “automatic”.
  • Intensity.
    • Click on “Intensity>Show intensity”. A yellow line will appear in the spectrogram window.
  • Formants.
    • Praat can also show you formants, and you can probably figure out the procedure for those on your own.
    • There is one thing you will have to change in the Formant settings depending on whether you are looking at a male or female voice: the maximum formant should be set to 5500 Hz for female speakers, and 5000 Hz for male ones.
    • These formant dots are estimated by Praat; you cannot always trust them.
  • Pulses.
    • This method visualizes glottal pulses that show up in voicing. If you turn “view pulses” on, you’ll see vertical blue lines wherever Praat thinks the glottal pulses occur.
  • Here is the Russian word [bʲitonəmʲiʂalkə] ‘concrete mixer’ with the pitch track, intensity, formants, and pulses turned on. You would rarely need to see all of these things at once; this is just for demonstration.


Pitch track: blue line, intensity: yellow line/green numbers, formants: red dots, glottal pulses: blue vertical lines in the waveform window.

Annotating an audio file with TextGrids

  • A TextGrid object allows you to mark certain periods or time points in a sound file.
  • You can have several tiers in a TextGrid: one to mark word boundaries, another to mark consonants, vowels, whatever you want.
  • You can type into the TextGrid using IPA fonts. See this page for more information on how to set up your computer so that you can do this painlessly and quickly.
  • Praat distinguishes between “point tiers” (which mark single moments in time) and “interval tiers” (which mark stretches with a beginning and an end).
  • To create a TextGrid, start from the Objects window. Select your sound object and click on the “Annotate” button to the right.
  • You’ll see this window. Why the program suggests “Mary John bell” as the default tier names is a mystery to me.

Default TextGrid dialog

  • You can either name all your tiers at once, as shown here, or name the first one and add more later.
  • I named my three tiers “word, segments, vowels”–you see them in the screenshot below.
  • Now comes one of the Praat gotchas: “View & edit with sound” is highlighted, and you would think that this would allow you to view your sound file and edit the TextGrid at the same time, but no. Clicking on that button just tells you that in order to do what you want to do, you have to select both the sound and the TextGrid in the objects window and click on the “View & Edit” button.
  • You can select the TextGrid and Sound objects with the mouse or with your keyboard keys. On a Mac, Shift + arrow (up, down) will let you select two adjacent objects in the window.
  • If you have more than one object in the list, make sure you select the TextGrid that goes with your sound file!
  • Once you are in TextGrid edit mode, you can add text on tiers, copy interval boundaries from one tier to another, and navigate between tiers and between intervals using either the mouse or just your keyboard–make sure to poke around the “Select”, “Interval”, and “Boundary” menus to see all the options.


A TextGrid with three interval tiers, labeled in the International Phonetic Alphabet.

  • Make sure you save your TextGrid when you are done. By default, the TextGrid will be given the same name as your sound file, and the extension is .TextGrid.
  • Advanced note for the computationally curious: open a TextGrid in a text editor such as TextWrangler, and you’ll see that it’s just a Unicode text file with detailed information about the time points when a tier begins and ends, and its label and type. It looks like this:
File type = "ooTextFile"
Object class = "TextGrid"

xmin = 0 
xmax = 1.2121237048836804 
tiers? <exists> 
size = 3 
item []: 
    item [1]:
        class = "IntervalTier" 
        name = "word" 
        xmin = 0 
        xmax = 1.2121237048836804 
        intervals: size = 2 
        intervals [1]:
            xmin = 0 
            xmax = 1.1468403366828597 
            text = "bʲitonəmʲiʂalkə" 
        intervals [2]:
            xmin = 1.1468403366828597 
            xmax = 1.2121237048836804 
            text = "" 
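
Because a TextGrid is just structured text, you can pull information out of it with a few lines of code. Here is a Python sketch (the function name is my own, and it assumes the long, human-readable format shown above):

```python
import re

def interval_labels(textgrid_text):
    """Collect every interval label from a long-format TextGrid, in order."""
    return re.findall(r'text = "([^"]*)"', textgrid_text)

# A fragment in the long TextGrid format, like the listing above.
sample = '''
        intervals [1]:
            xmin = 0
            xmax = 1.1468403366828597
            text = "word one"
        intervals [2]:
            text = ""
'''

print([label for label in interval_labels(sample) if label])  # ['word one']
```

A real parser would also track tier names and time stamps, and would need more care with labels that contain quotation marks, but the point stands: the format is friendly to quick scripting.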

That Picture window

  • Finally, we get to the mysterious Picture window. The point of the Picture window is to make professional, publication-quality images from your spectrograms, waveforms, and whatever other aspects of speech you use Praat to visualize.
  • Whenever you see a “Draw” or “Paint” option associated with an object, it refers to the Picture window.
  • For example, open a sound file and click on the Spectrum menu–you’ll see “Paint visible spectrogram” as an option. The same “paint” option is available for intensity, pitch, formants, and other views.
  • To make a spectrogram picture with a pitch track overlaid on top, I “painted the visible spectrogram” and then “painted the visible pitch” while unchecking the “erase first” box. This superimposes the pitch track on top of the spectrogram picture.
  • Poke around the menus, check out the options, and see what “Garnish” does.


Screen shot of the picture window in action


A nice picture of the word [bʲitonəmʲiʂalkə], with a speckled pitch track superimposed in black.

Beyond basics

  • To get a sense of the full power of this program, you can just look at the various collections of Praat scripts that people have made available.
  • Praat uses its own scripting language, which is based on the commands in the program’s menus.
  • You can automate a lot of tasks:
    • Record a word list, cut it up into smaller files at silences automatically and label all the smaller files from a text file you specify.
    • Normalize the intensity of a bunch of different audio files, so they all sound approximately equally loud
    • If you have to label a lot of audio files, you can automate opening and TextGrid creation.
    • You can also automate the collection of durations, intensities/pitch at various time points, Praat-estimated formant values, and so on.
  • To get a sense of all the options, do a web search for “Praat scripts”. I really like Mietta Lennes’ page, but there are many others, such as this Google Sites archive.
  • There is also the actual Praat Help, which you can search.
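To get a flavor of the scripting language, here is a minimal sketch (it assumes a TextGrid is selected in the Objects window, with your labels on tier 1) that prints each labeled interval and its duration to the Info window:

```
# print the duration of every labeled interval on tier 1 of the selected TextGrid
n = Get number of intervals: 1
for i to n
    label$ = Get label of interval: 1, i
    if label$ <> ""
        start = Get start time of interval: 1, i
        end = Get end time of interval: 1, i
        appendInfoLine: label$, tab$, end - start
    endif
endfor
```

Notice that each line is just a menu command from the TextGrid object, with its arguments after a colon; this is how most Praat scripts are built.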

Comments Off on Praat tutorial

September 3, 2016 · 14:53

Git configuration

Coming to GitHub from Dropbox, Google Docs, etc. can be a rough ride, since some aspects of the system are a bit counterintuitive. There are numerous git cheat sheets out there, and I recommend finding one and keeping it handy.

The basic thing to keep in mind is that the system allows you a lot of control over what gets synced and when, and things aren’t called what you might expect them to be called. Also, a lot of things that you might expect to happen in a single step require multiple steps (e.g., there is no “sync” in the command line version; you have to commit, pull, and push separately).

Initial git setup

Once you have created a directory called “git” and navigated into it:

$ git init

Set up SSH and add an SSH key for the new machine:

On a Mac:

$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

When prompted, accept the default location for saving the key (just hit Enter):


Start the ssh-agent and add the key, entering your passphrase when prompted:

$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_rsa

Copy the public SSH key from the terminal and paste it into the GitHub site. On Mac OS, do this:

$ pbcopy < ~/.ssh/id_rsa.pub

On Linux, they have this xclip thing, but the SSH key is just a block of meaningless text so you can use your preferred text editor to copy and paste.

$ xclip -sel clip < ~/.ssh/id_rsa.pub

Once this is done, test that SSH is working:

 $ ssh -T git@github.com

Cloning a repo

Once you have git set up on your machine, navigate to the git directory and copy (clone) your repositories into it one by one. The repos have to exist, and the easiest way to make them is through the GitHub website.

$ pwd
$ git clone git@github.com:yourusername/yourrepo.git

Some useful global settings

The global settings are kept in the git config file.

$ vim ~/.gitconfig

You can edit that file directly, or set the basic options from the command line:

 $ git config --global user.name "Your Name"
 $ git config --global user.email "your_email@example.com"
 $ git config --global core.editor vim

This sets up TextWrangler as your diff and merge tool, so you can compare versions and decide which to keep. In ~/.gitconfig:

[diff]
     tool = "twdiff"
     prompt = false
[difftool "twdiff"]
     cmd = /usr/local/bin/twdiff --wait --resume "$LOCAL" "$REMOTE"
[merge]
     tool = "twdiff"
     prompt = false
[mergetool "twdiff"]
     cmd = /usr/local/bin/twdiff --wait --resume "$LOCAL" "$REMOTE"

For Linux, a good GUI diff tool is meld:

[core]
    editor = vim
[diff]
    tool = "meld"
    prompt = false
[merge]
    tool = "meld"
    prompt = false

In addition to this, each repository should have a .gitignore file, which lists patterns for files that git should not track. You can look around for other people’s .gitignore files to get a sense of what belongs there.


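For example, a .gitignore for a typical LaTeX/R project might contain patterns like these (one pattern per line; adjust to your own work):

```
.DS_Store
*.log
*.aux
*.bbl
*.blg
*.synctex.gz
.Rhistory
```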
Standard commands

Find out what’s new, if anything:

$ git status

^ All this does is compare the current state of your working directory to the last commit in your local repository; it does not actually connect to the internet. For that, you’d want to do a dry run (see below).

If checking git status turns up an untracked file or several, that means you haven’t committed certain changes, so do this:

$ git add path/to/your/file.py
$ git commit -m 'removed a TypeError bug from file.py'

You can commit them individually or do the following. If you take this step, make sure your .gitignore file has at least some stuff in it–otherwise all sorts of garbage will turn up in the master version of your repository.

$ git add --all

If you run git status again, you will see that the file (or all the files, if you did --all) appears as “new file” under “changes to be committed”. Once you’ve committed, git status will tell you that there is nothing to commit. This does not mean your stuff is online yet, because that requires a separate step:

$ git pull origin master

^ This gets stuff from the online repo into your local git copy. Do this first before attempting to push, in case new changes have been introduced by others.

$ git push origin master

^ This puts stuff into the official master copy of the repository online.
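The status/add/commit part of this cycle can be tried out safely in a throwaway repository (the path, file name, and messages below are invented for the demo):

```shell
# demo of the status/add/commit cycle in a throwaway repository
rm -rf /tmp/gitdemo && mkdir /tmp/gitdemo && cd /tmp/gitdemo
git init -q
git config user.name "Demo"; git config user.email "demo@example.com"
echo 'x <- 1' > analysis.R          # a new, untracked file
git status --short                  # prints "?? analysis.R": untracked
git add analysis.R                  # stage it ("changes to be committed")
git commit -q -m 'added analysis script'
git status --short                  # prints nothing: everything is committed
```

From a clone of a real GitHub repo, git pull origin master and then git push origin master would complete the round trip to the server.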

$ git rm path/filename

^ This removes the file from the list of files that git will track. It does not remove the file from your repository; if you want it there but untracked, you might want to put the partial path to it in your .gitignore file.

$ git add path/filename

^ If a new file has been added or a file has been modified, this will add it to the git status record so it will be included in future sync attempts.

$ git mv path/filename newpath/filename

^ This is the way to move things around to organize them. If files are moved using the regular file manager methods instead, you can end up with duplicate files on unsynced machines. See “Further counterintuitive notes” below.

$ git diff 

^ This is only relevant if you are collaborating with people on code; they might have introduced a change that conflicts with yours. You have to reconcile the differences using a merge tool (see the git config stuff earlier). The other time this can arise is if you have multiple machines and didn’t pull from the repository before adding changes; the conflicts will show up as if two people made changes. If you have used Dropbox or similar in the past, this should look familiar.
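You can reproduce such a conflict with two throwaway copies of one repository standing in for two collaborators (or two of your machines); all names and paths here are invented for the demo:

```shell
# reproducing a merge conflict with two copies of one repository
rm -rf /tmp/conflictdemo && mkdir /tmp/conflictdemo && cd /tmp/conflictdemo
git init -q --bare origin.git                 # stands in for the GitHub server
git clone -q origin.git copy1; git clone -q origin.git copy2
cd copy1
git symbolic-ref HEAD refs/heads/master       # pin the branch name to master
git config user.name A; git config user.email a@example.com
git config pull.rebase false                  # reconcile divergence by merging
echo "first version" > notes.txt
git add notes.txt; git commit -q -m 'initial'; git push -q origin master
cd ../copy2
git symbolic-ref HEAD refs/heads/master
git config user.name B; git config user.email b@example.com
git pull -q origin master
echo "edit from copy2" > notes.txt
git commit -q -am 'edit in copy2'; git push -q origin master
cd ../copy1                                   # copy1 never pulled copy2's edit
echo "edit from copy1" > notes.txt
git commit -q -am 'edit in copy1'
git pull origin master || true                # git reports the conflict here
grep '<<<<<<<' notes.txt                      # conflict markers are now in the file
```

At this point you would run your merge tool (twdiff or meld, configured above), then git add and git commit the reconciled file.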

Branches: creation and deletion

Add a new branch called “test”:

$ git branch test

Switching between branches:

$ git checkout test
$ git checkout master

Syncing the new branch to the remote server:

$ git push -u origin test

Deleting a branch locally:

$ git branch -d test

And on the git server:

$ git push origin --delete test
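The local part of the branch life cycle can be rehearsed in a throwaway repository (path and names invented; the two push commands above are omitted here because they need a configured remote):

```shell
# branch creation, switching, and deletion in a throwaway repository
rm -rf /tmp/branchdemo && mkdir /tmp/branchdemo && cd /tmp/branchdemo
git init -q
git symbolic-ref HEAD refs/heads/master   # pin the branch name to master
git config user.name Demo; git config user.email demo@example.com
git commit -q --allow-empty -m 'initial commit'
git branch test                # create the branch
git checkout -q test           # switch to it
git checkout -q master         # and back
git branch -d test             # delete it locally
git branch --list              # prints only "* master"
```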

Undoing a push

$ git revert HEAD

This does not rewrite history: it creates a new commit that reverses the last one. Push that commit to undo the change on the server as well.
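To see what revert actually does, try it in a throwaway repository (file name, contents, and messages invented for the demo):

```shell
# git revert adds a new commit that reverses the previous one
rm -rf /tmp/revertdemo && mkdir /tmp/revertdemo && cd /tmp/revertdemo
git init -q
git config user.name Demo; git config user.email demo@example.com
echo "good version" > paper.txt
git add paper.txt; git commit -q -m 'good'
echo "bad version" > paper.txt
git commit -q -am 'bad'
git revert --no-edit HEAD      # makes a third commit undoing 'bad'
cat paper.txt                  # prints "good version" again
git log --oneline              # history keeps all three commits
```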

Further counterintuitive notes

GitHub’s web interface tolerates directory structures but does not directly support them: it preserves paths when syncing repository content, but it has no native way to create folders or move files up or down. If you want things to appear in folders, do it through your OS, then do the “git add” or “git rm” steps as applicable above.
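For example, to move a tracked file into a new folder (throwaway repository, invented names):

```shell
# moving a tracked file into a newly created folder
rm -rf /tmp/movedemo && mkdir /tmp/movedemo && cd /tmp/movedemo
git init -q
git config user.name Demo; git config user.email demo@example.com
echo "draft" > draft.txt
git add draft.txt; git commit -q -m 'add draft'
mkdir papers                   # create the folder through the OS
git mv draft.txt papers/       # let git record the move itself
git commit -q -m 'moved draft into papers/'
git ls-files                   # prints "papers/draft.txt"
```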

“Commit” and “push” are separate steps (unlike the little “sync” button in the GitHub Desktop app). Commit just records changes in your local repository’s version history; this is why it requires a comment. “Push” actually puts things in the online repo, via SSH in my case. Ditto for “pull”.

To see what the differences are between the server repo and your local repo, do a dry run:

git fetch && git diff --stat origin/master

If GitHub is refusing to let you commit or push even though the file differences have been reconciled, git add the file. That should fix it.


Comments Off on Git configuration

Filed under git

Running R on multiple cores, Mac OS

If you do something computationally intensive, such as fitting a hierarchical/mixed effects model with random slopes in the lme4 package, you might find that R takes hours and sometimes even days just to tell you that it didn’t converge. In my struggles with R, I figured out this way to run several models at a time on several CPU cores. Here is how I did it.

When invoked from the graphical application, R runs on just one CPU at a time in Mac OS. But if you run R from the command line, you can assign different R processes to different cores:

  1. Open Terminal (Macintosh HD > Applications > Utilities > Terminal).
  2. Start screen by typing screen at the command prompt.
  3. Start R by typing R at the command prompt in the screen-emulated terminal. You might have to hit space to get to the prompt itself–check the screen manual for more.
  4. Paste in your R commands from wherever you keep them. Alternatively, run an R script using the source() command. Here’s a small example:
  5. setwd("/blah/blah/blah/place_you_want_your_output/")
    library(lme4) #lmer() lives in the lme4 package
    exp = read.csv("your_dataframe.csv") #Make sure it's in the working directory
    Sys.time() #this tells you when R started running the model
    model1<-lmer(rt ~ condition + (1+condition|subject) + (1+condition|word), data=exp); Sys.time(); save(model1, file = "model1.Rda") #this is your huge fully crossed model; the response and predictor names here are placeholders.

  6. Since R can take a while to fit an lmer model (I've had models run for 91 hours before failing to converge!), you might want to let R run in the background while you are doing other things. Running R in screen allows you to do that. Disconnect from the screen while R is running by hitting Ctrl+A and then Ctrl+D.
  7. You can reconnect to the R screen by entering screen -R at the command line.
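Condensed into commands, the whole routine looks like this (the session name is invented for illustration):

```
$ screen -S model1    # start a named screen session
$ R                   # launch R inside it; paste commands or source() a script
                      # Ctrl+A, then Ctrl+D detaches; R keeps running
$ screen -ls          # list running sessions and their IDs
$ screen -R model1    # reattach to check on the model
```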



(These instructions were current as of R 2.14 on Mac OS 10.6.8, and my iMac has a 3.06 GHz Intel Core i3 processor and 4 GB of 1333 MHz RAM. If you know that something has changed, please tell me!)

Once your .Rda file is saved, you can open it in R and inspect the model using summary(model1). If you get a message about non-convergence, use the model you did get to decide which random slopes to remove:


sort(sapply(ranef(model1)$subject, sd)) #by-subject random effect sds, smallest first
sort(sapply(ranef(model1)$word, sd)) #by-word random effect sds, smallest first


Take the random effect term with the smallest standard deviation out of the model and try running the model again.

Since there is a chance that your next model won't converge, either, you can run multiple instances of R on the same Mac by repeating the steps in 1-6 for different models. When you run the screen -R command, you'll see that you have multiple screens running; connect to each of them separately by using the screen ID number you see.

You can of course connect to your Mac remotely using SSH and connect to the R-running screens to check on whether the models are still running, or use top to check how much CPU % your various instances of R are using.

Comments Off on Running R on multiple cores, Mac OS

Filed under R, tutorials