An Application of PyQt: Part 1

I have been working with a research group in physical therapy and exercise science at the university to develop a data collection and file management application for their existing Python test interface. The existing interface uses Pygame to capture reaction and action times to supplied stimuli.

Previously they were typing in file names by hand, e.g. ‘subject4condition1trial2.csv’. From my experience with participant data collection I’ve learned that the fewer mistakes you make possible, the better. With this in mind I suggested semi-automated file naming with the usual warning messages: overwrite, fields missing, etc.
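As a rough sketch of what I mean by semi-automated naming, here is one way to build the filename from the form fields and collect warnings before saving. The field names and the exact warning strings are my own illustration, not the application's actual code:

```python
import os

def build_filename(subject, condition, trial):
    """Build a data filename from the form fields,
    e.g. subject4condition1trial2.csv."""
    return f"subject{subject}condition{condition}trial{trial}.csv"

def check_save(subject, condition, trial, existing=os.path.exists):
    """Return a list of warnings to show before saving:
    missing fields first, then a possible overwrite."""
    warnings = []
    for name, value in [("subject", subject),
                        ("condition", condition),
                        ("trial", trial)]:
        if value in (None, ""):
            warnings.append(f"field missing: {name}")
    # Only check for overwrite once all fields are filled in
    if not warnings and existing(build_filename(subject, condition, trial)):
        warnings.append("file exists: confirm overwrite")
    return warnings
```

The point is that the participant/condition/trial numbers come from UI widgets, so the filename is always well-formed and the warnings fire automatically instead of relying on the experimenter's typing.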

To implement this I wanted to try PyQt. I have used Qt in the past, hooking into a C++ application, and figured PyQt should work well for this project.

My quick get-it-going process was to use Qt Designer to generate the .ui file. I used the Qt Open Source edition, which you can get here. Qt Designer allows drag-and-drop placement of your typical UI elements. Below I show the starting template “Dialog with Buttons Bottom” with a list widget dropped in.

[Screenshot: Qt Designer with the “Dialog with Buttons Bottom” template and a list widget]

We can then add some selections to the mix by double-clicking the list widget and using the dialog that opens, shown below.


You can now preview your UI through Form > Preview in [one of the available dialog styles].


In the next post I’ll show the .ui-to-.py conversion and connect things up to make them do some work.

Permutation Test Example

The permutation test is a statistical test for outcome differences (a continuous dependent variable) between groups (a categorical independent variable). For example, the hypothesis might be: does the number of minutes of exercise per week (a continuous numerical outcome) differ between men and women (categorical groups)?

The permutation test is an alternative to, say, a Student's t-test. Its benefit is that it requires no assumptions about the outcome's distribution (e.g. that the outcome variable comes from a normal distribution), because you generate the reference distribution from the data itself.

So how do we generate that distribution? The permutation test essentially mixes up the group labels of the data. If there really were no difference between the two groups, then a difference at least as large as the one observed should be fairly likely when we randomly shuffle the labels; if shuffled labels almost never reproduce it, the observed difference is evidence of a real group effect.
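The label-shuffling procedure is short enough to sketch in plain Python. This is a generic two-sided permutation test on the difference in group means, not the exact code behind the app below; the function name and permutation count are my own choices:

```python
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Shuffling the pooled values and re-splitting them is equivalent to
    randomly reassigning the group labels. The p-value is the fraction
    of shuffles whose mean difference is at least as extreme as the
    observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # mix up the labels
        perm_a, perm_b = pooled[:n_a], pooled[n_a:]
        diff = abs(sum(perm_a) / n_a - sum(perm_b) / len(perm_b))
        if diff >= observed:
            count += 1
    # +1 in numerator and denominator so the p-value is never exactly 0
    return (count + 1) / (n_permutations + 1)
```

With well-separated groups almost no shuffle matches the observed difference and the p-value is small; with overlapping groups many shuffles do, and the p-value is large.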

To get a sense of this, you can play around with the app below or, for a little better visibility, here. (This post was partly created so I could try embedding a Shiny app in a WordPress post.)

Try playing around with how the mean difference changes with the number of labels swapped, and how the p-value, roughly the estimated likelihood of an outcome that extreme, changes with more or fewer samples.