CoDS Data Challenge

In this data challenge, we released AMEO 2015, our data set profiling undergraduates from varied backgrounds along with the jobs and salaries they obtain upon graduating.

The following tasks were posed -
  • Predictive Modeling - predicting an undergraduate's annual salary from historical data
  • Recommendation - identifying the key set of parameters in an undergraduate's profile for earning a better salary
  • Visualization - understanding which factors in the labor market determine an undergraduate's salary, and where and what jobs graduates get.
For more info, see here
To see how participating teams fared, see the contest leaderboard
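The predictive-modeling task above amounts to a regression problem: fit a model mapping profile features to annual salary, then predict for unseen profiles. As a minimal sketch, here is an ordinary-least-squares fit in Python; the feature names and all numbers are hypothetical illustrations, not taken from the AMEO 2015 data set.

```python
# Hedged sketch of the predictive-modeling task: ordinary least squares
# mapping (made-up) profile features to annual salary.
import numpy as np

# Hypothetical profiles: [GPA, English score, Logic score]
X = np.array([
    [7.2, 450.0, 500.0],
    [8.1, 520.0, 560.0],
    [6.5, 400.0, 430.0],
    [9.0, 600.0, 610.0],
    [7.8, 480.0, 550.0],
])
# Hypothetical annual salaries (INR)
y = np.array([300000.0, 420000.0, 250000.0, 520000.0, 390000.0])

# Prepend an intercept column and solve least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(profile):
    """Predict annual salary for one [GPA, English, Logic] profile."""
    return float(coef[0] + np.dot(coef[1:], profile))

print(round(predict([8.0, 500.0, 540.0])))
```

A real entry would, of course, use many more features (college tier, major, test scores), proper train/test splits, and a stronger model than plain linear regression.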

Venue Information

Venue: Bally's Atlantic City Hotel
Date: Saturday, November 14
Time*: 8:30 AM - 1:00 PM
Register: You will need to register with ICDM to participate in this workshop. Please register by clicking here.

* Please note: Other workshops will end by 12:30 PM. We plan on extending ours by half an hour to have an awesome session!

ASSESS 2014 White Paper

This white paper is an outcome of the ASSESS workshop held at KDD 2014. It primarily discusses why assessments are important, what the state of the art is, and what goals we should pursue as a community. It is a brief exposition and serves as a starting point for a discussion to set the agenda for the next decade. The full paper can be accessed here.

Introduction

Assessing educational achievement and providing feedback to learners is crucial to emerging coursework and labor-market systems. Automating assessment and feedback on open-ended tasks (e.g., short-answer and essay questions) would open both learning and recruitment to a much larger set of people who currently miss out due to resource constraints. Although automated and semi-automated assessment and feedback remain a nascent field, there have been recent advances drawing on techniques from data mining, knowledge discovery, machine learning, and crowdsourced peer grading. Nevertheless, the technical requirements for accuracy and expressiveness across the many purposes of assessment (some high-stakes, some low-stakes; some formative, some summative) are not well defined.

In this workshop, we will bring together a diverse group of researchers and industry practitioners in data mining, machine learning, and psychology to (1) discuss the current state of the art in automated assessment, (2) identify a vision for future research, and (3) lay out the steps required to build good automated or peer-grading-based assessments. The organizers hope that a unified framework for thinking about assessment and feedback will emerge by the end of the workshop.

Call for papers

We invite submissions describing innovative research on all aspects of assessing educational achievement and providing feedback. Position papers and papers that provoke discussion are strongly encouraged, but should be grounded in fresh insights and/or new data. Topics of interest include, but are not limited to:

  • Automated and semi-automated assessments of open-responses
  • Assessment by crowdsourcing and peer-grading
  • Automatic feedback generation
  • Automatic problem (item) generation/design, calibration, modeling
  • Assessment design, test blue-print, rubric design, validity and reliability
  • Test integrity, equity, proctoring and high-stake testing
  • Non-cognitive assessments in personality, behavior, motivation, etc.
  • New areas: gamification, simulation based assessments
  • Insights derived by large scale assessment data towards learning, recruitment, performance prediction, etc.

Submitted papers should be original work, distinct from previously published papers. Submission to the workshop does not preclude future or parallel submission to other venues, and authors may publish the submitted work or its extensions elsewhere.
Accepted papers will be included in the proceedings of ICDM 2015 and given a speaking slot at the workshop; other papers will be invited to present their work as posters.

Submission Guidelines

Please strictly follow the submission guidelines as mandated by ICDM 2015.
Papers should be at most 10 pages in the IEEE 2-column format. Reviewing will be triple-blind.

More information on the guidelines can be found here - http://icdm2015.stonybrook.edu/content/submission


Paper submissions should be made here.


Important Dates

Paper submission deadline August 7, 2015, 23:59 Pacific Standard Time (extended from July 20, 2015, 23:59 US Eastern Standard Time)
Acceptance notification September 01, 2015
Final paper due September 20, 2015
Workshop November 14, 2015

Data Release

A data set (acquired by Aspiring Minds) will be released during the workshop. It concerns the assessment of a relevant open-response problem. Details to follow soon.


Sponsors

Papers

  • Identifying Students’ Mechanistic Explanations in Textual Responses to Science Questions with Association Rule Mining   [PDF]
    Yu Guo and Wanli Xing

  • Using and Improving Coding Guides For and By Automatic Coding of PISA Short Text Responses   [PDF]
    Fabian Zehner, Frank Goldhammer and Christine Sälzer

  • Including Content-Based Methods in Peer-Assessment of Open-Response Questions   [PDF]
    Oscar Luaces, Jorge Diez, Amparo Alonso-Betanzos, Alicia Troncoso and Antonio Bahamonde

  • Education, Learning and Information Theory  [PDF]
    Bryan Hooi, Hyun Ah Song, Evangelos Papalexakis, Rakesh Agrawal and Christos Faloutsos

  • Temporal Models for Predicting Student Dropout in Massive Open Online Courses   [PDF]
    Fei Mi and Dit-Yan Yeung

Spotlight

  • Impact of Automated Scoring and Feedback on Scientific Argumentation in Earth Science: Through the Log Data Analysis   [PDF]
    Mengxiao Zhu, Liyang Mao, Lydia Ou Liu and Amy Pallant

  • Predictive Decision Support System using Data Model Combination of Logistic Regression and Decision Tree Algorithms for Student Graduation Determination
    Ace Lagman

Program Schedule
The workshop has been extended by half an hour.

Time Event Description
8:30 - 8:40 AM  Introduction (10 minutes, organizers)
8:40 - 9:20 AM  Invited Talk 1: Computer science in assessments [PPT]
                Richard Baraniuk, Rice University
9:20 - 10:00 AM  Invited Talk 2: Psychology in assessments [PPT]
                Rich Roberts, Chief Scientist, Professional Examination Service
10:00 - 10:15 AM  Coffee Break
10:15 - 11:05 AM  Session 1
  • Using and Improving Coding Guides For and By Automatic Coding of PISA Short Text Responses [PDF]
    Fabian Zehner, Frank Goldhammer and Christine Sälzer
  • Temporal Models for Predicting Student Dropout in Massive Open Online Courses [PDF]
    Fei Mi and Dit-Yan Yeung
  • Including Content-Based Methods in Peer-Assessment of Open-Response Questions [PDF]
    Oscar Luaces, Jorge Diez, Amparo Alonso-Betanzos, Alicia Troncoso and Antonio Bahamonde
11:05 - 12:00 PM  Industry Panel
  • Dee Kanejiya, Cognii
  • Farzad Eskafi, SparcIt [PPT]
  • Varun Aggarwal, Aspiring Minds
12:00 - 12:40 PM  Session 2
  • Identifying Students’ Mechanistic Explanations in Textual Responses to Science Questions with Association Rule Mining [PDF]
    Yu Guo and Wanli Xing
  • Education, Learning and Information Theory [PDF]
    Bryan Hooi, Hyun Ah Song, Evangelos Papalexakis, Rakesh Agrawal and Christos Faloutsos
  • Impact of Automated Scoring and Feedback on Scientific Argumentation in Earth Science: Through the Log Data Analysis (spotlight presentation) [PDF]
    Mengxiao Zhu, Liyang Mao, Lydia Ou Liu and Amy Pallant
12:40 - 12:50 PM  Aspiring Minds' predictive modeling contest announcement
12:50 - 1:00 PM  Concluding session, conducted by the organizers to summarize and set out a basic formulation for a white paper

Speakers

Dr. Baraniuk is currently the Victor E. Cameron Professor of Electrical and Computer Engineering at Rice University. His research interests lie in the areas of signal processing and machine learning, especially techniques involving low-dimensional models.

In early 2012, Dr. Baraniuk launched OpenStax College, a non-profit initiative to develop high-quality free college textbooks for some of the world’s most at-risk students. Currently, he is developing advanced machine learning algorithms and a software platform for a personalized learning system called OpenStax Tutor, which integrates text, video, simulations, problems, feedback, hints, and tutoring, and optimizes each student’s learning experience based on their background, context, and learning goals. [PPT]

Richard D. Roberts is Vice President and Chief Scientist at the Center for Innovative Assessments at Professional Examination Service. He was previously the Managing Principal Research Scientist at ETS (Educational Testing Service).

His main areas of specialization are assessment and human individual differences, and he has conducted research on cognitive abilities, emotional intelligence, personality, health and well-being, motivation, aging, and human chronotype (morningness-eveningness). Dr. Roberts has published a dozen books (with MIT, Oxford, and APA Press) and over 150 peer-reviewed articles or book chapters on these topics, with a nearly equal number of presentations around the world. He has also received significant grants and contracts (from corporations, foundations, the military, and governments, totaling several million dollars), as well as several professional honors, including two ETS Presidential Awards and two PROSE awards from the Association of American Publishers. [PPT]

Panelists

Dee Kanejiya is the Founder and CEO of Cognii. Cognii is a leading provider of Artificial Intelligence based educational technology for interactive learning and assessment in K-12, Higher Ed and Corporate training. Cognii's Virtual Learning Assistant provides automatic assessment of open-response answers and engages learners in real-time natural language tutoring conversations to guide them towards concept mastery. Dee has over 15 years of experience in technology and business development in the artificial intelligence and natural language processing industries. Most recently, he was part of the core technology teams at Nuance and Vlingo that pioneered virtual personal assistant technology for smartphones. He received his PhD in Electrical Engineering from IIT Delhi and carried out post-doctoral research at Carnegie Mellon University and the Karlsruhe Institute of Technology, Germany.

Farzad H. Eskafi is an entrepreneur with 12+ years of experience in the education and SaaS industries. Currently, he is the co-founder and CEO of an NSF-funded company called SparcIt. SparcIt is an analytics company and a pioneer of automated psychometric creativity assessment. SparcIt's studies have shown that creative employees are the source of innovation within organizations. [PPT]


Organizers

Advisory Council

Program Committee

News

1 December - CoDS Data Challenge, 2015

2 November - White paper on ASSESS 2014 released.

14 October - Venue details updated.

2 September - Paper decisions announced.

21 July - Paper submission deadline extended to 7th August.

25 June - Less than a month until the submission deadline. Hurry! Click to submit

10 April - Call for papers announced. July 20 submission deadline.

30 March - Workshop accepted at ICDM 2015!

30 March - Successful workshop held at KDD 2014! See details here.