To reach communities that cannot easily access watershed learning experiences, Stroud launched its Mobile Watershed Education Lab, kitted out and ready to come to learners where they are. The brightly decorated trailer features images and resources from Macroinvertebrates.org.
In addition to reaching underserved schools and communities, the mobile lab expands public engagement at parks and festivals. It's outfitted with a full set of instructional materials, such as microscopes and water-quality monitoring equipment, to create hands-on experiences that build awareness of and connections to local waterways.
Read more at https://stroudcenter.org/press/watershed-on-wheels-makes-a-splash-world-water-day/
by Alice Fang
After working on Macroinvertebrates.org for two summers and four semesters, it feels crazy that I'll be leaving the project. For my very last blog post, I'll recap some of the (final) work I did this semester.
One major artifact I worked on was an online poster for the CitSciVirtual 2021 Conference! With Marti and Camilla, we submitted a poster about Macroinvertebrates.org as “A Digital Tool for Supporting Identification Activities During Water Quality Biomonitoring Trainings.” You can access the poster here.
As part of management work for the mobile app, I condensed and revised the descriptions for each insect order, making the content more novice-friendly and less technical. I also dug around iNaturalist for Creative Commons-licensed images of every insect order, finding hero images for each order's overview page. I'll be leaving behind my master-list spreadsheet of every specimen's common and scientific names (and more), and hopefully it will be useful for future bug designers as well!
Pollution tolerance has always been a tricky part of the database. How reliable is it at the order level? What's the best language to use: sensitive/insensitive to pollution, or tolerant/intolerant of pollution? It's still not the most resolved design, but I proposed a layout where pollution tolerance sits at the bottom of the overview page: reading the paragraph description leads you into the pollution tolerance information, giving you more context on why or how the insect might be sensitive or insensitive. (Also, some header tweaking; good headers are super useful in providing context!)
Family-level pollution tolerance is more complicated: what do these pollution tolerance values even mean? I've always wanted to use a data visualization to represent the ranges of sensitivity, and maybe this is a component that can be iterated on in the future :-) The following example was quickly mocked up in Figma for demonstration purposes and is not the best data viz, but I think some representation of the numerical ranges that lets someone visually compare region to region would be useful! (Again, more informative headers: "Regional Pollution Tolerances")
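To make the idea concrete, here is a minimal sketch of comparing one family's tolerance ranges across regions. All region names and numbers are hypothetical placeholders, not values from the actual database; the only assumption carried over is that tolerance scores run on a 0 (very sensitive) to 10 (very tolerant) scale.

```python
# Hypothetical (lo, hi) pollution tolerance ranges for one family,
# keyed by region. These are made-up demo values, not real data.
REGIONS = {
    "Northeast": (2.0, 4.5),
    "Midwest": (3.0, 6.0),
    "Southeast": (1.5, 5.0),
}

def render_range_bar(lo, hi, scale_max=10, width=40):
    """Render a (lo, hi) tolerance range as a fixed-width ASCII bar."""
    start = round(lo / scale_max * width)
    end = round(hi / scale_max * width)
    return "." * start + "#" * (end - start) + "." * (width - end)

def tolerance_chart(regions):
    """Stack one bar per region so the ranges can be compared at a glance."""
    lines = []
    for name, (lo, hi) in regions.items():
        lines.append(f"{name:<10} |{render_range_bar(lo, hi)}| {lo}-{hi}")
    return "\n".join(lines)

print(tolerance_chart(REGIONS))
```

Even this text-only version shows the point of the mockup: aligning the ranges on a shared scale makes regional differences visible without interpreting the raw numbers.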
Another major piece of work I started was figuring out how to incorporate the few non-insects used in water monitoring and training into the structure of the mobile app. The challenge is that the non-insects aren't grouped as neatly under one class like the insects are (under Class Insecta); instead, they represent several different classes. Neither the site nor the app currently supports class-level categorization, so I had to decide how much of the existing architecture could be shifted to show these separations. In the end, to simplify the content, I removed the nested family page and only showed the non-insect order-level page; the database has very little non-insect content in the first place anyway.
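The decision above can be sketched as a small content-model rule. The field names, taxa, and `pages_for` helper below are illustrative only, not the app's actual schema: insect orders keep their nested family pages, while non-insect taxa collapse to a single order-level page.

```python
# Illustrative content model: each taxon records its class and families.
# Taxa and fields are examples, not the real app database.
taxa = [
    {"name": "Ephemeroptera", "class": "Insecta",
     "families": ["Baetidae", "Heptageniidae"]},
    {"name": "Amphipoda", "class": "Malacostraca", "families": []},
]

def pages_for(taxon):
    """Return the list of app pages generated for a taxon."""
    pages = [f"order:{taxon['name']}"]
    # Non-insects skip the nested family layer entirely.
    if taxon["class"] == "Insecta":
        pages += [f"family:{f}" for f in taxon["families"]]
    return pages
```

The design choice this encodes is that the app never needs a true class-level navigation layer: class membership only gates whether family pages are generated.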
The following are some design recommendations for work that I did in December and February.
Working on Macroinvertebrates.org has been such fulfilling and rewarding work, and I'm honored that I got to contribute so much to an open-source educational tool. I've learned a lot these past 3(?) years, tried a lot of things, and realized I'm a lot better at spreadsheets than I previously knew. This project has been such a formative part of my undergrad experience, and I'll carry it with me as I start a fellowship at the New York Times this June. I'll definitely miss Marti and Chris and the rest of the (very small) team!
by Dominique Aruede, CMU Cognitive Psychology and Human-Computer Interaction
Although I've had other roles this semester, I've been focused on the development of the Quiz section of the app. I had a separate assignment through my independent study to iterate on the quiz prototype. The following details my process:
The app contains an interactive subsection we named "Quiz." Quiz contains helpful tools for reviewing insect information (taxonomy, fun facts, etc.), like flashcard decks and short, image-based multiple-choice question sets. The flashcard decks are the first component of the prototype to reach a user-testable version in development. We wanted to figure out which categories of information presented on the back of each flashcard would best facilitate learning. To clarify, I did not test for level of learning and transfer itself; rather, I tested for level of understanding and comfort with two different degrees of content specification. How scaffolded and robust does the flashcard information have to be for users to feel comfortable relying on this tool (Quiz) as support for practicing macroinvertebrate ID activities?
I ideated on two possible flashcard designs and conducted a series of AB tests plus online surveys with six users. Version A used only text copy from the desktop site describing order-level features of the macroinvertebrate of interest; the feature names were clickable and revealed scrollable, zoomed-in images of the feature across many different families within the order to show variety. Version B combined this interactive copy with an additional annotated illustration highlighting the same features in the context of the entire animal (the annotations are also clickable, leading to the same format of visual examples); images below. I asked: what flashcard elements do users prefer when learning to ID?
I also augmented the AB test with think-aloud procedures to make the data a bit more robust. I incorporated prompts and exit interviews into the study to answer the questions: How do users imagine using this in their lives? What other features might they want? What could make the experience more seamless for a user?
Mockup Versions and Final Prototype Iterations Developed
Below are animated examples of both flows users were taken through during testing. Users were not shown the full scope of the quiz, only the front and back of the flashcards in Version A and Version B. Version A is on the left and Version B is on the right.
I recruited for this test through email and also reached out over social media, depending on what was appropriate for the type of user. We sampled across a range of users: novice types (that is, middle or high school students), amateur types, educator/trainer types, and volunteer/designer types. Not everyone I reached out to responded, so my final sample consisted of two amateur types, one educator/trainer type, one novice, and two designer types (n=6).
I began each AB test by presenting two flows to the participant. First they would start on the front side of a flashcard in Version A, then view the back and direct me on where to click next, whether they wanted to see the next card, and so on. We then repeated this for Version B. Throughout, we embedded think-aloud prompts to guide participants to comment on any possible confusion or areas of opportunity. Finally, participants were asked to complete a survey.
Findings with Design Implications
Both of the amateur type participants and the educator/trainer type called in to the interview over Zoom. The novice type called in over FaceTime, and the two designer types participated in person. During each interview I took notes in a pre-made data collection sheet on the comments and reactions of individual participants. No transcripts were generated for these interviews, but the interviews conducted over Zoom were recorded with video and audio. I then went back and consulted the videos to fill in information I missed while conducting the interview. The next step was to transfer every individual data point to a synthesis workspace in Miro and organize first by participant, labeling each data point with design cues that serve as good identifiers for sorting later (i.e., "suggestion," "preference," "Version A," "Version B," etc.). Finally, I built an affinity diagram, grouping by concern, summarizing each pain point/comment, and developing suggestions or design implications for future designers on this project to refer to. I identified three main concern areas. Below are the findings:
Flashcard MVP for the 'Review Orders' Learning Goal
Taking all of this into account, I developed an interactive prototype in Figma, incorporating the bare-minimum edits suggested through the synthesis (i.e., fleshing out Version B with the proper annotations for the illustrations, and fixing some small UI things like the help button).
I also prototyped an alternative way to display the images, using a bigger aspect ratio and drawing from the design of the lightbox feature in the Field Guide, a suggestion first proposed by my colleague, Estelle Jiang. I also prototyped the flow that's created when the bookmark button is pressed on a card: once the whole deck is completed, a new deck containing only the bookmarked cards is presented for the user to review once more, or as many times as they like. Once the user indicates that they've mastered them, they will see an end screen with options. One note: this version of the flashcards contains a 'back' button near the bottom of each card, but that is strictly for prototyping purposes, because Figma doesn't distinguish between left and right swipes; it just registers a general swipe. In the real beta, left swipe means 'Card Mastered,' right swipe means 'Undo Last Swipe' or 'Go Back,' and the bookmark button means 'Study Again,' so there will be no need for a physical back button. Below is a demo of everything I've described.
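The deck-cycling behavior described above can be sketched as a tiny state machine: 'Card Mastered' drops a card from rotation, 'Study Again' queues it, and queued cards come back as a fresh review deck once the current deck is exhausted. The class and method names below are illustrative, not from the actual app code, and the undo path is omitted for brevity.

```python
from collections import deque

class FlashcardDeck:
    """Minimal sketch of the flashcard deck behavior (illustrative only)."""

    def __init__(self, cards):
        self.cards = deque(cards)
        self.bookmarked = []

    def swipe_left(self):
        """'Card Mastered': drop the current card from the rotation."""
        self.cards.popleft()

    def bookmark(self):
        """'Study Again': move the current card into the review queue."""
        self.bookmarked.append(self.cards.popleft())

    def next_review_deck(self):
        """Once the deck is empty, bookmarked cards form a new deck."""
        if not self.cards and self.bookmarked:
            review, self.bookmarked = self.bookmarked, []
            return FlashcardDeck(review)
        return None  # either cards remain, or nothing was bookmarked
```

For example, mastering two cards and bookmarking one yields a one-card review deck at the end, and that review deck can itself be cycled as many times as the user likes.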
While creating this MVP, I realized that certain important content is missing for a usable beta. I detailed what's missing and what assets will need to be generated in a spreadsheet available to all project team members. I recommend that new designers on the team take a look, because I left comments in it about already-curated content and plans that might be useful for designing the rest of the Quiz section, including other learning goals. You can find supporting links to everything covered in this process blog below.
HCII AB Usability Testing of Macroinvertebrates.org Native App and Figma Prototype
Authors: Dominique Aruede (Carnegie Mellon University), Marti Louw (Carnegie Mellon University)
The opportunity this project presents is to support practicing and learning identification (ID) of macroinvertebrates. The app is a companion to the macroinvertebrates.org website and works in the field without an internet connection on mobile devices for a diverse set of users. The app has an interactive subsection of the prototype which we named "Quiz." Quiz contains helpful tools for reviewing insect information (taxonomy, fun facts, etc.), like flashcard decks and short, image-based multiple-choice question sets. We ideated on two possible flashcard designs and conducted a series of AB tests plus online surveys with six users. We asked: what flashcard elements do users prefer when learning to ID? We discovered that people unanimously preferred the version that coupled an illustration with the description of each insect's order-level characteristics, as opposed to the version that had only a textual description with closeup views of characteristics.
Meeting of the Minds Live Symposium Poster Link
Macroinvertebrates.org: A Digital Tool for Supporting Identification Activities During Water Quality Biomonitoring Trainings
Authors: Alice Fang (Carnegie Mellon University), Marti Louw (Carnegie Mellon University), Camellia Sanford-Dolly (Rockman et al), Andrea Kautz (Carnegie Museum of Natural History), Dr. John Morse (Clemson University)
Abstract: Macroinvertebrates.org is a visual teaching and learning resource to support identification activities in water quality biomonitoring. The online tool was developed with diverse stakeholders (including trainers, professional scientists, educators, and volunteers) through a codesign process. In this poster, we report on design research and evaluation data collected from trainers that characterizes how they integrated the site into their training workshops, and features they found useful.
Survey, interview, and observation data revealed that Macroinvertebrates.org provided a compelling visual reference tool for trainers, allowing users to easily zoom in on key diagnostic characteristics needed for identification. Trainers reported that the site made it easier for them to train volunteers to Order and Family, and for volunteers to see relevant features and increase ID accuracy. Specifically, the site extended the utility of trainings by serving as a resource for volunteers to practice and review the ID process before and after trainings. In this poster, we report on the perspectives of trainers, gathered through the design research and formative evaluation process.
Using Log Visualization to Interpret Online Interactions During Self-Quizzing Practice for Taxonomic Identification by Chelsea Cui, Jordan Hill, Marti Louw, Jessica Roberts.
We were excited to present CMU alumna and former REU student Chelsea Cui's study at the 2021 annual meeting of the American Educational Research Association (AERA). Chelsea analyzed log data from participants in a 10-day study in which they used our quiz feature to practice aquatic macroinvertebrate ID. We analyzed interactions with the quiz tool along with accuracy on pre- and post-tests to determine how the quizzing platform was being used by learners at different experience levels. Through the custom visualization platform, we were able to glean insights on how to improve our quizzing platform for future use.
See the poster here: https://aera21-aera.ipostersessions.com/Default.aspx?s=D0-9D-A0-1B-C0-49-87-5B-8A-3F-56-A7-02-C5-ED-66#
by Estelle Jiang
Before sharing the work the team and I got done, I want to quickly recap the main functionalities of the application. Centered around a range of exploration use cases, the app condenses the following aspects of learning and teaching freshwater insect identification:
1. Field Guide
2. Identification (ID) Key
This blog post elaborates on the work and tasks the design team finished over the past semester. Building on the summer work, where we completed a big round of user testing of the field guide design and interaction, the design team mainly focused on iterating on the field guide through a second round of user testing, and also switched gears toward designing for macroinvertebrate assessment: a quiz functionality that lets users review the learning content from the field guide and quiz themselves on their understanding.
01. Field guide iterations and finalization
1. Brainstormed and explored the design interaction for the field guide:
Some small features and interactions, such as global navigation and the zoomable page interaction, were not fully considered over the summer. Therefore, Alice and I started exploring different interaction and design possibilities before going into testing. Here are some of my explorations that informed our final field guide page design:
A. Global Navigation
B. Zoomable Image Flipping
C. Onboarding Instruction and Design
2. Provided guidance and drafted out the design system for the field guide and entire app.
Along the way, we decided to start consolidating the design guidelines and system for the team and product, which can be useful for further development and team collaboration. I started by finding good industry practices for generating and designing the system, and provided guidance and feedback while Alice consolidated the overall visuals/layout and turned them into Figma components to speed up the design process.
3. Facilitated the second round of user testing with product evaluation and conducted synthesis workshops on Miro
Rather than gathering insights about the app's flow, logic, and concept, the major goals of the second round of testing were to identify specific usability issues with the new overall navigation and zoomable page designs, and to further evaluate overall engagement and usability. Based on the eight test results, I facilitated synthesis workshops on Miro with the team to generate iteration insights and help the team finalize the field guide MVP.
For the main insights, you can refer to Alice's post: [Mobile App Pt. IV: Refining the Field Guide]
02. Quiz / game to assess the learning goals
1. Explored the market product interactions.
Before designing the quiz, we first explored some predecessors with quiz features, such as Quizlet, Duolingo, and Lumosity. This helped us generate several possible quiz formats as well, such as matching games, card-flipping games, and flashcard reviews.
2. Considered what we want to assess and the learning goals of the application before going deeper into the user experience and interaction design.
We took a pause and realized that knowing the learning goals and purpose of the quiz was a higher priority than brainstorming design solutions for it. Without understanding what we want learners to learn, we cannot provide a suitable design to meet their needs. I summarized some potential learning goals before meeting with the current trainers and experts:
3. Mapped out the design formats and interactions and finalized the flow and logic for each learning goal we defined.
A. Initial flow explorations (without knowing the learning goals)
B. New flow explorations that better inform the design with rationales.
C. New quiz flow by following and considering the design goals
D. Designed for varied use cases and learning cases - acknowledged limitations and considered tradeoffs
—Critiqued and iterated the design with engineers and trainers/experts to narrow down the scope.
Review the feedback we gathered in Dominique's post: Macro App Tasks This Semester
03. Wearing different hats as a designer and entering the development stage
While our design team kept working on the ID key and quiz design and conducted further testing, we got in touch with Chris Bartley, an experienced engineer, and involved him in our design process along the way. We are still figuring out a better and more efficient way to collaborate; here are some of the attempts we've made so far:
An interdisciplinary team