Technical conferences live and die by their community and their content. A community conference like PASS Summit succeeds when it helps its presenters.
Several weeks ago, PASS ran a survey asking the community which topics and sessions they would be interested in. The survey link was sent out with the PASS Connector to 122.5K subscribers and drew 85 responses, which is roughly typical for large email surveys.
The raw survey results are available. However, one limitation of survey responses is sampling bias: results may not reflect the audience they represent, because some types of people respond more readily than others.
For example, 10.4% of PASS Summit 2014 attendees were consultants, yet only 3.6% of survey respondents were. To compensate for this, I weighted the survey results to match the distribution of Summit attendees. In this example, consultant responses would be weighted by about 2.89 (10.4% / 3.6% ≈ 2.89).
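The reweighting step above can be sketched in a few lines of Python. The 10.4% and 3.6% consultant shares come from the post; everything else here is illustrative.

```python
# Weight that makes a response group's share match its attendee share.
# Shares from the post: consultants were 10.4% of attendees but only
# 3.6% of survey respondents.

def response_weight(attendee_share: float, response_share: float) -> float:
    """Multiplier applied to each response in an under-represented group."""
    return attendee_share / response_share

w = response_weight(0.104, 0.036)
print(round(w, 2))  # consultants' responses count roughly 2.89x
```

The same ratio is computed per respondent category, so over-represented groups get weights below 1.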
The weighted survey responses are on GitHub.
On a scale of 1-4 (4 meaning highest interest), the most popular session content covers new SQL Server features, demos, and script examples.
[Chart: interest scores by track, including BI Information Delivery and BI Platform Architecture]
Looking at the data, I find a few interesting narratives:
There are many different lessons and conclusions you can draw from this data, so I'll let you speculate about the rest. Enjoy!
In my last blog post, I explored some of the patterns in attendance at PASS Summit 2014 sessions. Attendees also left feedback, which is what I'll dig into here.
Note: due to PASS policy, I am not allowed to release session rating information that can identify a particular session or speaker. I have done my best to anonymize session data while retaining its full analysis information.
Feedback for Summit sessions this year was collected through an online form built into the PASS Summit app. People attended sessions 36,445 times and filled out 5,382 feedback surveys, a response rate of 14.8%. That's a pretty low percentage, and I've heard that's partly because of spotty Wi-Fi and cell connectivity.
How much can we trust this data? How closely does it reflect reality?
We Don't Know
This is Statistics 101: sample sizes and populations. If we assume that the feedback is broadly representative of all attendees, then our margin of error is 1.62% (at 99% confidence).
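That 1.62% figure can be reproduced with the standard margin-of-error formula for a proportion, plus a finite-population correction for the 36,445 session attendances. This is a sketch of the textbook calculation, not necessarily the author's exact method.

```python
import math

def margin_of_error(n: int, population: int, z: float = 2.576) -> float:
    """Worst-case (p = 0.5) margin of error at 99% confidence (z ~ 2.576),
    with a finite-population correction."""
    p = 0.5
    se = z * math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((population - n) / (population - 1))
    return se * fpc

# 5,382 feedback surveys out of 36,445 session attendances
print(f"{margin_of_error(5382, 36445):.2%}")  # ~1.62%
```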
The true margin of error is higher: the people who provide feedback are often the ones who loved a session, or hated it. Session feedback is anonymous, and without demographic and longitudinal data for each response, there's no way to know how skewed the sample is.
If I were a dyed-in-the-wool statistician, I'd stop here. Instead, I'll continue under the assumption that the data represents all attendees' opinions.
What's the Distribution of Feedback for Each Question?
Presenters get high marks.
Session speakers are often keenly interested in their ranking. Did they take the #1 most highly rated spot, or #3?
Due to privacy concerns, I can't release ratings with session names or speakers. However, I can tell you the percentile rankings.
| Percentile | Overall | DBA Track | BI Info Track | BI Platform Track | AppDev Track | ProfDev Track |
| --- | --- | --- | --- | --- | --- | --- |
Note: rankings do not include environment scores, since those are outside a speaker's control.
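A percentile ranking like the one in the table above can be computed as the share of sessions scoring strictly below a given session. The ratings in this sketch are made up, since the real per-session scores can't be released.

```python
def percentile_rank(scores, value):
    """Percentage of scores strictly below `value`."""
    below = sum(1 for s in scores if s < value)
    return 100.0 * below / len(scores)

# Hypothetical session ratings, purely for illustration
scores = [4.1, 4.4, 4.6, 4.7, 4.8, 4.9]
print(round(percentile_rank(scores, 4.8), 1))
```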
A few weeks ago I asked folks on Twitter what questions they had that could be answered from session feedback.
A few things to remember:
Environment Score and Speaker Performance
Is there a correlation between the "environment" score given by attendees and the speaker rating?
There's a weak correlation (R² = 0.377). There are also many potential reasons for this.
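For reference, an R² like the 0.377 above is just the squared Pearson correlation of the two score series. The paired scores below are invented purely to show the mechanics.

```python
def r_squared(xs, ys):
    """Squared Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov ** 2 / (vx * vy)

env = [4.2, 4.5, 4.1, 4.8, 4.6]  # hypothetical environment scores
spk = [4.4, 4.6, 4.3, 4.7, 4.9]  # hypothetical speaker scores
print(round(r_squared(env, spk), 3))
```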
Enough Material and Session Length
Is there a correlation between the enough-material question and the session length?
I don't know. There's no information about which sessions ended early or late, unless you want to measure them using the online session recordings. There's not enough information when comparing regular sessions to half-day sessions to derive anything useful.
Cynicism and Timing
Do certain time slots produce higher scores?
There's no real correlation between time slots and scores. There is some variation of scores between times, but there's no pattern I can find to it.
Speak Up, Speak Up
Do certain times of day have higher completion rates for feedback?
Feedback is higher in the morning, but the pattern isn't consistent. There's also an outlier for the vendor-sponsored sessions that include breakfast.
The Packed-Room Effect
Does a packed room (room % full) correlate with higher or lower ratings overall? No! The correlation is very weak (R² = 0.014), and it's not statistically significant (p-value 0.09).
The Bandwagon Effect
Do popular sessions (total session attendance) correlate with higher scores?
Sort of. The linkage is very weak, with a correlation of 0.031 (p-value 0.012).
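One way to attach a p-value to a weak correlation like this is a permutation test: shuffle one series many times and count how often a correlation at least as large shows up by chance. The attendance and score data below is synthetic, not the real session data.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

def permutation_p(xs, ys, trials=2000, seed=42):
    """Two-sided permutation p-value for the correlation of xs and ys."""
    random.seed(seed)
    observed = abs(pearson_r(xs, ys))
    shuffled = list(ys)
    hits = 0
    for _ in range(trials):
        random.shuffle(shuffled)
        if abs(pearson_r(xs, shuffled)) >= observed:
            hits += 1
    return hits / trials

attendance = [120, 250, 90, 310, 180, 270, 60, 220]  # hypothetical
ratings = [4.4, 4.6, 4.3, 4.5, 4.7, 4.4, 4.2, 4.6]   # hypothetical
print(permutation_p(attendance, ratings))
```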
Is past performance an indicator of future success? In other words, did repeat performers improve, stay the same, or get worse?
Let's compare each year's data with the speaker's overall score:
Most speakers stay within 0.5 of their average score; wild improvements and wild disappointments are rare. However, a 0.5 difference is the difference between a 4.95 rating (amazing) and a 4.45 (around the 40th percentile).
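The year-over-year comparison can be sketched as each repeat speaker's per-year score minus their overall average. The speaker names and scores here are invented.

```python
# Hypothetical rating history for repeat speakers, one entry per year
history = {
    "speaker_a": [4.5, 4.6, 4.4],
    "speaker_b": [4.0, 4.7],
}

def deltas_from_average(scores):
    """Each year's score relative to the speaker's overall average."""
    avg = sum(scores) / len(scores)
    return [round(s - avg, 2) for s in scores]

for name, scores in history.items():
    print(name, deltas_from_average(scores))
```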
This work is not done. Data always leads to more interesting questions. There are many places to take this information, including:
There's no reason I should be the only person looking at this data. The #sqlpass community is large and technically savvy. To that end, I've made almost all of the raw data public. The only piece missing is the speaker ratings for individual sessions and speakers; those have been anonymized as much as possible at PASS's request.