About

The EZ-MMLA toolkit was developed by the Learning, Innovation and Technology Lab at the Harvard Graduate School of Education. It uses open-source algorithms created by others to collect multimodal data from video and audio feeds. We did not create these algorithms; the source code for each is linked on the corresponding tool's page, and we thank the creators of these models for sharing their work. This website makes it easy to collect multimodal datasets, both for researchers and for anyone learning how to analyze multimodal data.

If you use this toolkit in your project, please cite it using the ACM reference or BibTeX entry below:

Who We Are

  • Bertrand Schneider | Project Lead
  • Javaria Hassan | Developer
  • Coby Sheehan | Developer
  • Spencer Tiberi | Developer
  • Jovin Leong | Developer
  • Addison Zhang | Developer
  • Nila Annadurai | Developer
  • Nina Chen | Developer
  • Li Sun | Developer
  • Bach Nguyen | Developer
  • Jose Garcia-Gonzalez | Developer

ACM Reference

Javaria Hassan, Jovin Leong, and Bertrand Schneider. 2021. Multimodal Data Collection Made Easy: The EZ-MMLA Toolkit: A Data Collection Website That Provides Educators and Researchers with Easy Access to Multimodal Data Streams. In LAK21: 11th International Learning Analytics and Knowledge Conference (LAK21). Association for Computing Machinery, New York, NY, USA, 579–585. https://doi.org/10.1145/3448139.3448201

BibTeX

  @inproceedings{10.1145/3448139.3448201,
    author = {Hassan, Javaria and Leong, Jovin and Schneider, Bertrand},
    title = {Multimodal Data Collection Made Easy: The EZ-MMLA Toolkit: A Data Collection Website That Provides Educators and Researchers with Easy Access to Multimodal Data Streams.},
    year = {2021},
    isbn = {9781450389358},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3448139.3448201},
    doi = {10.1145/3448139.3448201},
    booktitle = {LAK21: 11th International Learning Analytics and Knowledge Conference},
    pages = {579–585},
    numpages = {7},
    keywords = {Computer Visions, Data Collection Toolkit, Multimodal Analytics},
    location = {Irvine, CA, USA},
    series = {LAK21}
  }