To reach communities unable to easily access watershed learning experiences, Stroud launched its Mobile Watershed Education Lab, kitted out and ready to come to learners where they are. The brightly festooned trailer truck features images and resources from Macroinvertebrates.org. In addition to reaching underserved schools and communities, the mobile lab expands public engagement at parks and festivals. It's outfitted with a full set of instructional materials, such as microscopes and equipment for monitoring water quality, to create hands-on experiences that build awareness of and connections to local waterways. Read more at https://stroudcenter.org/press/watershed-on-wheels-makes-a-splash-world-water-day/
by Alice Fang

After working on Macroinvertebrates.org for two summers and four semesters, it feels crazy that I'll be leaving the project. As my very last blog post, I'll be recapping some of the (final) work that I did this semester.

CitSciVirtual Poster

One major artifact I worked on was an online poster for the CitSciVirtual 2021 Conference! With Marti and Camilla, we submitted a poster about Macroinvertebrates.org as "A Digital Tool for Supporting Identification Activities During Water Quality Biomonitoring Trainings." You can access the poster here.

Content Revision

As part of management work for the mobile app, I condensed and revised the descriptions for each insect order, making the content more novice-friendly and less technical. I also dug around iNaturalist for Creative Commons-licensed images of every insect order, finding hero images for the overview page of each order. I'll be leaving behind my master-list spreadsheet of every specimen's common and scientific names (and more), and hopefully it will be useful for future bug designers as well!

PTV

Pollution tolerance has always been a tricky part of the database—how reliable is it at the order level? What's the best language to use: sensitive/insensitive to pollution, or tolerant/intolerant of pollution? It's still not the most resolved design, but I proposed a layout where pollution tolerance sits at the bottom of the overview page—if you read the paragraph description, it leads you into the information about pollution tolerance, giving you more context on why or how the insect is sensitive or insensitive. (Also, some header tweaking—good headers are super useful in providing context!) Family-level pollution tolerance is more complicated—what do these pollution tolerance values even mean?
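One way to make those family-level numbers comparable is to reduce each family to its minimum-maximum tolerance range across regions, which is the quantity a range visualization would plot. Below is a minimal sketch of that idea; the family names are real taxa but every numeric value here is an invented placeholder, not actual PTV data.

```python
# Sketch: comparing family-level pollution tolerance values (PTVs) across
# regions. All numbers below are illustrative placeholders, not real data.

REGIONAL_PTVS = {
    "Heptageniidae": {"Northeast": 4.0, "Southeast": 2.8, "Midwest": 4.0},
    "Hydropsychidae": {"Northeast": 5.0, "Southeast": 6.0, "Midwest": 4.0},
}

def ptv_range(family):
    """Return the (min, max) tolerance values reported across regions."""
    values = REGIONAL_PTVS[family].values()
    return min(values), max(values)

def text_bar(family, scale_max=10.0, width=20):
    """Render the regional range as a simple text bar on a 0-10 scale."""
    lo, hi = ptv_range(family)
    start = round(lo / scale_max * width)
    end = max(start + 1, round(hi / scale_max * width))
    return "." * start + "#" * (end - start) + "." * (width - end)

for family in REGIONAL_PTVS:
    lo, hi = ptv_range(family)
    print(f"{family:16} {text_bar(family)}  {lo:.1f}-{hi:.1f}")
```

Even a text bar like this makes it visible at a glance when regions disagree about a family's sensitivity, which is the comparison the mockup below aims for.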
I've always wanted to use a data visualization to represent the ranges of sensitivity, and maybe this is a component that can be iterated on in the future :-) The following example was quickly mocked up in Figma for demonstration purposes, and is not the best data viz—but some representation of the numerical ranges that allows someone to visually compare from region to region would be useful, I think! (Again, more informative headers! "Regional Pollution Tolerances")

Non-Insects?

Another major bit of work I started was figuring out how to incorporate the few non-insects, which are used in water monitoring and training, into the structure of the mobile app. The challenge here is that the non-insects aren't grouped as nicely under one Class like the insects are (under Class Insecta); instead, they are representatives of several different Classes. Neither the site nor the app currently supports Class-level categorizations, so I had to decide how much of the existing architecture could be shifted to show these separations. In the end, to simplify the content, I removed the nested 'family' page and only showed the non-insect order-level page—the database also has very little content for non-insects in the first place. The following are some design recommendations for work that I did in December and February.

[Images: Suborders: Odonata; Suborders: Trichoptera; Version 1; Version 2]

Closing Out
Working on Macroinvertebrates.org has been such fulfilling and rewarding work, and I'm honored that I got to contribute so much to an open-source educational tool. I've learned a lot these past 3(?) years, and tried a lot of things, and realized I'm a lot better at spreadsheets than I previously knew. This project has been such a formative part of my undergrad experience, which I'll carry with me as I start a fellowship at the New York Times this June. I'll definitely miss Marti and Chris and the rest of the (very small) team!

by Dominique Aruede, CMU Cognitive Psychology and Human-Computer Interaction

Although I've had other roles this semester, I've been focused on the development of the Quiz section of the app. I had a separate assignment through my independent study to iterate on the quiz prototype. The following details my process:

Design Challenge

The app contains an interactive subsection which we named "Quiz." Quiz contains helpful tools for reviewing insect information (taxonomy, fun facts, etc.), like flashcard decks and short, image-based multiple-choice question sets. The flashcard decks are the first component of the prototype to reach a user-testable version in development. We wanted to figure out what categories of information presented on the back side of each flashcard would best facilitate learning. To clarify, I did not test for level of learning and transfer itself; rather, I tested for level of understanding and comfort with two different degrees of content specification. How scaffolded and robust does the flashcard information have to be for users to feel comfortable relying on this tool (Quiz) as support for practice with macroinvertebrate ID activities? I ideated on two possible flashcard designs and conducted a series of AB tests plus online surveys with six users.
Version A implemented solely a text-based copy from the desktop site describing order-level features of the macroinvertebrate of interest; the feature names were clickable and revealed scrollable, zoomed-in images of the feature on many different families within the order to show variety. Version B implemented both this interactive copy and an additional annotated illustration, which highlighted the same features backed by the context of the entire animal (the annotations are also clickable, leading to the same formatting of visual examples)—images below. I asked the question: what flashcard elements do users prefer when learning to ID? I also adapted the AB test with some think-aloud procedures to make the data a bit more robust. I incorporated prompts and exit interviews into the study to answer the questions: how do users imagine using this in their lives? What other features might they want? What could make the experience more seamless for a user?

Mockup Versions and Final Prototype Iterations Developed

Below are moving examples of both flows users were taken through during testing. They were not shown the full scope of the quiz, only the front and back of the flashcards in Version A and Version B. Version A is on the left and Version B is on the right.

User Testing

I recruited for this test through email and also reached out over social media, depending on what was appropriate for the type of user. We sampled from a range of users: novice types (that is, students in middle or high school), amateur types, educator/trainer types, and volunteer/designer types. Not everyone I reached out to responded, so my final sample consisted of two amateur types, one educator/trainer type, one novice, and two designer types (n=6). I began each AB test by presenting two flows to the participant.
First they would start on the front side of a flashcard in Version A, then they would be able to view the back side and direct me on where to click next, whether they wanted to see the next card, and so on. Then we repeated this for Version B. During this process, we embedded think-aloud prompts to guide participants to comment on any possible confusion or areas of opportunity. Finally, participants were asked to complete a survey.

Findings with Design Implications

Both of the amateur-type participants and the educator/trainer type called in to the interview over Zoom. The novice type called in over FaceTime, and the two designer types participated in person. I took notes during each interview in a pre-made data collection sheet on the comments and reactions of individual participants. There were no transcripts generated for these interviews, but the interviews conducted over Zoom were all recorded with video and audio. I then went back and consulted the recordings to fill in information I missed while conducting the interviews. The next step was to transfer every individual data point to a synthesis workspace in Miro and organize first by participant, labeling each data point with design cues that are good identifiers for sorting later (i.e., "suggestion," "preference," "Version A," "Version B," etc.). Finally, I built an affinity diagram, grouping by concern, summarizing each pain point/comment, and developing suggestions or design implications for future designers on this project to refer to. I identified three main concern areas. Below are the findings:
Additional Content
3. Usability: Clickable Words
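The tagging-and-grouping pass described in the synthesis above, where each Miro data point is labeled with design cues and then sorted, can be sketched as a small grouping routine. The data points below are invented examples for illustration; only the tag names ("suggestion," "preference," "Version A," "Version B") come from the post.

```python
# Sketch of the affinity-diagram sorting pass: each raw data point carries
# free-form tags, and grouping by tag approximates the first clustering
# step on the Miro board. The notes themselves are invented examples.
from collections import defaultdict

data_points = [
    {"participant": "P1", "note": "wants more fun facts on the back", "tags": {"suggestion", "Version B"}},
    {"participant": "P2", "note": "prefers the annotated illustration", "tags": {"preference", "Version B"}},
    {"participant": "P3", "note": "missed the clickable feature names", "tags": {"usability", "Version A"}},
]

def group_by_tag(points):
    """Collect every note under each tag it was labeled with."""
    groups = defaultdict(list)
    for point in points:
        for tag in point["tags"]:
            groups[tag].append(point["note"])
    return dict(groups)

groups = group_by_tag(data_points)
print(groups["Version B"])  # both Version B notes land in one cluster
```

A second pass over these clusters (grouping by concern and summarizing) is what produced the concern areas above.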
Flashcard MVP for the 'Review Orders' Learning Goal

Taking all of this into account, I developed an interactive prototype in Figma, incorporating the bare-minimum edits suggested through the synthesis (aka fleshing out Version B with the proper annotations for the illustrations, and fixing some small UI things like the help button). I also prototyped an alternative way to display the images, using a bigger aspect ratio and drawing from the design of the lightbox feature in the Field Guide, a suggestion first proposed by my colleague, Estelle Jiang. I also prototyped the flow created when the bookmark button is pressed on a card: a new deck with only the bookmarked cards will be presented upon completion of the whole deck, for the user to review once more or as many times as they like. Once the user indicates that they've mastered them, they will see an end screen with options. One note is that this version of the flashcards contains a 'back' button near the bottom of each flashcard, but that is strictly for prototyping purposes, because Figma doesn't recognize the difference between left and right swipes; it just registers a general swipe. In the real beta, a left swipe means 'Card Mastered,' a right swipe means 'Undo Last Swipe' or 'Go Back,' and of course the bookmark button means 'Study Again.' So there will be no need for a physical back button. Below is the demo of all that I've described.

While creating this MVP, I realized that certain important content is missing for creating a usable beta. I detailed what's missing and what assets will need to be generated in a spreadsheet available to all project team members. I recommend that new designers on the team take a look, because I left comments in it about already-curated content and plans that might be useful for designing the rest of the Quiz section, including other learning goals. You can find supporting links to everything covered in this process blog below.
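The swipe and bookmark semantics described in that post (left swipe marks a card mastered, right swipe undoes the last action, bookmark queues a card for a follow-up deck) can be sketched as a small state machine. This is a minimal illustration of the interaction model, not the app's actual code; the class and method names are assumptions.

```python
# Sketch of the flashcard-deck flow: left swipe = mastered, bookmark =
# study again, right swipe = undo; bookmarked cards come back as a new
# deck once the main deck is finished. Names are illustrative only.

class FlashcardDeck:
    def __init__(self, cards):
        self.queue = list(cards)   # cards still to review
        self.mastered = []
        self.bookmarked = []
        self.history = []          # supports the right-swipe undo

    def swipe_left(self):
        """Card mastered: remove the current card from the queue."""
        card = self.queue.pop(0)
        self.mastered.append(card)
        self.history.append(card)

    def bookmark(self):
        """Study again: flag the current card for the review deck."""
        card = self.queue.pop(0)
        self.bookmarked.append(card)
        self.history.append(card)

    def swipe_right(self):
        """Undo the last action, putting that card back at the front."""
        if self.history:
            card = self.history.pop()
            if card in self.mastered:
                self.mastered.remove(card)
            if card in self.bookmarked:
                self.bookmarked.remove(card)
            self.queue.insert(0, card)

    def review_deck(self):
        """Once the main deck is done, the bookmarked cards return."""
        return FlashcardDeck(self.bookmarked) if not self.queue else None

deck = FlashcardDeck(["Ephemeroptera", "Plecoptera", "Trichoptera"])
deck.swipe_left()    # mastered
deck.bookmark()      # study again
deck.swipe_left()    # mastered
review = deck.review_deck()
print(review.queue)  # ['Plecoptera']
```

The point of the history list is exactly the 'Undo Last Swipe' gesture: since a real swipe is destructive, the beta needs somewhere to recover the last card from, which the Figma prototype fakes with a physical back button.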
Helpful Links
HCII AB Usability Testing of Macroinvertebrates.org Native App and Figma Prototype
Authors: Dominique Aruede (Carnegie Mellon University), Marti Louw (Carnegie Mellon University)

Abstract: The opportunity this project presents is to support practicing and learning identification (ID) of macroinvertebrates. The app is a companion to the macroinvertebrates.org website and works in-field without an internet connection on mobile devices for a diverse set of users. The app has an interactive subsection of the prototype which we named "Quiz." Quiz contains helpful tools for reviewing insect information (taxonomy, fun facts, etc.), like flashcard decks and short, image-based multiple-choice question sets. We ideated on two possible flashcard designs and conducted a series of AB tests plus online surveys with six users. We ask the question: what flashcard elements do users prefer when learning to ID? We discovered that people unanimously preferred the version that coupled an illustration with the description of order-level characteristics of each insect, as opposed to the version that only had a textual description with closeup views of characteristics.

Meeting of the Minds Live Symposium Poster Link

Macroinvertebrates.org: A Digital Tool for Supporting Identification Activities During Water Quality Biomonitoring Trainings

Authors: Alice Fang (Carnegie Mellon University), Marti Louw (Carnegie Mellon University), Camellia Sanford-Dolly (Rockman et al), Andrea Kautz (Carnegie Museum of Natural History), Dr. John Morse (Clemson University)

Abstract: Macroinvertebrates.org is a visual teaching and learning resource to support identification activities in water quality biomonitoring. The online tool was developed with diverse stakeholders (including trainers, professional scientists, educators, and volunteers) through a codesign process. In this poster, we report on design research and evaluation data collected from trainers that characterizes how they integrated the site into their training workshops, and features they found useful.
Survey, interview, and observation data revealed that Macroinvertebrates.org provided a compelling visual reference tool for trainers, allowing users to easily zoom in on key diagnostic characteristics needed for identification. Trainers reported that the site made it easier for them to train volunteers to Order and Family, and for volunteers to see relevant features and increase ID accuracy. Specifically, the site extended the utility of trainings by serving as a resource for volunteers to practice and review the ID process before and after trainings. In this poster, we report on the perspectives of trainers, gathered through the design research and formative evaluation process. https://connect.citizenscience.org/posts/macroinvertebratesorg-a-digital-tool-for-supporting-identification-activities-during-water-quality-biomonitoring-trainings

Using Log Visualization to Interpret Online Interactions During Self-Quizzing Practice for Taxonomic Identification

by Chelsea Cui, Jordan Hill, Marti Louw, Jessica Roberts

We were excited to present CMU alumna and former REU student Chelsea Cui's study at the 2021 annual meeting of the American Educational Research Association (AERA). Chelsea analyzed log data from participants in a 10-day study in which they used our quiz feature to practice aquatic macroinvertebrate ID. We analyzed interactions with the quiz tool along with accuracy in pre- and post-tests to determine how the quizzing platform was being utilized by learners at different experience levels. Through the custom visualization platform we were able to glean insights on how to improve our quizzing platform for future use. See the poster here: https://aera21-aera.ipostersessions.com/Default.aspx?s=D0-9D-A0-1B-C0-49-87-5B-8A-3F-56-A7-02-C5-ED-66#

Mobile App: More details for field guide revisions, quiz feature design, and team collaboration
1/28/2021
by Estelle Jiang

Before sharing the work the team and I got done, I want to quickly recap the main functionality of the application. After a range of use cases centered around exploration, the app condenses learning and teaching freshwater insect identification into the following aspects:

1. Field Guide
2. Identification (ID) Key
3. Quiz

This is a blog post that further elaborates on the work and tasks the design team finished over the past semester. Moving on from the summer work, where we finished a big round of user testing of the field guide design and interaction, the design team mainly focused on iterating on the field guide through a second round of user testing, and also switched gears toward designing for macroinvertebrate assessment: a quiz functionality that allows users to review the learning content from the field guide and quiz themselves on their understanding.

01. Field guide iterations and finalization

1. Brainstormed and explored the design interaction for the field guide: Some small features and interactions, such as the global navigation and the zoomable page interaction, were not fully considered over the summer. Therefore, Alice and I started exploring different interaction and design possibilities before going to testing. Here are some of my explorations that informed our final field guide page design: A. Global Navigation B. Zoomable Image Flipping C. Onboarding Instruction and Design

2. Provided guidance and drafted the design system for the field guide and entire app: Along the way, we decided to start consolidating the design guidelines and system for the team and product, which can be useful for further development and team collaboration. I started by finding good industry practices for generating and designing the system, and provided guidance and feedback while Alice consolidated the overall visual/layout and turned them into components in Figma to speed up the design process.
3. Facilitated the second round of user testing with product evaluation and conducted synthesis workshops on Miro: Instead of gathering insights about the application's flow, logic, and concept, the major goals of the second round of testing were to identify specific usability issues with the new overall navigation and zoomable page designs, and to evaluate overall engagement and usability. Based on the 8 test results, I facilitated synthesis workshops on Miro with the team to generate iteration insights and help the team finalize the field guide MVP. For the main insights, you can refer to Alice's post: [Mobile App Pt. IV: Refining the Field Guide]

02. Quiz / game to assess the learning goals

1. Explored market product interactions: Before designing the quiz, we first explored some predecessors that have quiz features, such as Quizlet, Duolingo, and Lumosity. This also helped us generate several possible formats for the quiz, such as matching games, card-flipping games, and flashcard reviews.

2. Considered what we want to assess and the learning goals of the application before going deeper into the user experience and interaction design: We took a pause and realized that knowing the learning goals and the purpose of making the quiz takes priority over brainstorming design solutions for it. Without understanding what we want learners to learn, we cannot provide a suitable design to meet their needs. I summarized some potential learning goals before meeting with the current trainers and experts:
3. Mapped out the design formats and interactions, and finalized the flow and logic for each learning goal we defined: A. Initial flow explorations (without knowing the learning goals) B. New flow explorations that better inform the design, with rationales C. New quiz flow following and considering the design goals D. Designed for varied use cases and learning cases, acknowledging limitations and considering tradeoffs. We critiqued and iterated on the design with engineers and trainers/experts to narrow down the scope. Review the feedback we gathered in Dominique's post: Macro App Tasks This Semester

03. Being a designer by wearing different hats and entering the development stage

While our design team kept working on the ID key and quiz design and conducted further testing, we got in touch with Chris Bartley, an experienced engineer, and brought him into our design process along the way. We are still figuring out a better, more efficient way to collaborate; here are some attempts we have made so far:
By Dominique Aruede, CMU Cognitive Psychology

When I joined, the quiz section had not been developed at all, in favor of refining and refreshing the ID key and field guide, which are more directly interpreted from the original website. Consequently, it was important to gather human-centered data again in order to assess the efficacy of the current design and determine whether we could branch out to the quiz or should keep working on the current designs.

First Task: Audubon Field Guide Walkthrough & Modeling

I spent a little time exploring a physical field guide and noting how it maps to real-world use and facilitation of citizen science. This step was particularly useful for getting my bearings on what the purpose of the Macroinvertebrates.org project is, how insect identification fits in, and how it works.

Second Task: User Testing

We conducted several user tests with insect ID experts and novices alike, with a focus on the novice user experience of high schoolers and other young students. The purpose of these usability tests was to gauge the design direction of our second iteration and to gather more user input to launch into the next iteration. We reused the protocol from the first round of user testing and changed the questions to capture answers to our new inquiries:
The results revealed that the field guide was indeed informative, but it neglected some utilitarian features that would be helpful for novice users. This helped inform the learning goals and the first iterations of the quiz design. Below is an affinity diagram of the synthesized insights.
Third Task: Quiz Design

The earliest version of the quiz was a rough sketch I drew up in Figma, but it did not have any learning goal or research claim behind it besides identifying an image as belonging to an order. The sketches are below. After further visualization, we wrote out the learning goals we thought would be best to target in quiz mode, based on the insights gleaned from the previous affinity diagram. The learning goals are summarized here:
After we gathered our quiz references, including Quizlet and Duolingo, we used our learning goals to consolidate designs for one flow of the quiz section. Users select their quiz type first, then their learning goal. An alternative flow that we have yet to explore involves swapping those two options. We then incorporated three quiz types: a flashcard review, a multiple-choice quiz, and a matching quiz. We drew this up in Figma and discovered a few issues/limitations:
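The two-step selection flow described above (quiz type first, then learning goal, with the alternative flow simply swapping the two menus) can be sketched as nested menus. The quiz types are taken from the post; the learning goal names are paraphrased from those discussed elsewhere in this blog and should be read as placeholders.

```python
# Sketch of the quiz-section selection flow: the user picks from a first
# menu, then a second menu nested under it. Swapping the arguments gives
# the alternative flow. Goal names are placeholders, not the app's data.
QUIZ_TYPES = ["flashcard review", "multiple choice", "matching"]
LEARNING_GOALS = ["between-order ID", "common features", "common mistakes"]

def build_menu(first, second):
    """Nest the second menu under each option of the first."""
    return {option: list(second) for option in first}

current_flow = build_menu(QUIZ_TYPES, LEARNING_GOALS)      # type -> goal
alternative_flow = build_menu(LEARNING_GOALS, QUIZ_TYPES)  # goal -> type
print(current_flow["matching"])
```

Framing the flows this way makes the design question concrete: both orderings expose the same type-goal pairs, so the choice between them is purely about which decision users should make first.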
Afterwards, we consulted our collaborator, an expert in aquatic macroinvertebrate education, for her opinion on the content and direction. From her insights, we decided to scrap learning PTV as a learning goal, due to the inaccuracies of generalizing pollution tolerance at the order level.
The next steps we plan to take include:
- developing all the content necessary for accurate quizzing,
- fleshing out the flashcard review,
- the between-order quiz learning goal for all quiz types,
- the common features learning goal for all quiz types,
- the common mistakes learning goal for all quiz types,
- exploring a new learning goal, "learning life history and fun facts about insect orders," as a deck in the flashcard review, and
- testing the usability and flow of our current design.

By Alice Fang, CMU Design 🐞📱

This semester, I've also been working on the design of the ID Key. As the Field Guide became more refined, we knew we had to focus on the other aspects of the app, lest the design become incoherent. With previous key designs, we didn't really explore the edge cases, focusing on one prototypical 'question' and hint structure; I soon learned this question layout would not work with most of the interactive key.

Previous Design

Questions we were grappling with over the summer:
Low-Fi Flow

While refining the first wireflow for the Field Guide, I began thinking about the interaction flow of the ID Key in its most basic components, using the visual style of the Field Guide. The simplest flow for the ID Key is a start page, then a question page where the user has to make a choice, with an optional hint, eventually resulting in identifying an Order, at which point the user can jump to that Order in the Field Guide.

Issues

I began to mock up a flow with all of the questions and paths, in order to design the end pages for each decision; at this point, I discovered some discrepancies with the website's ID Key:
Button Explorations

As with the rest of the app, to take advantage of the uniqueness and affordances of Macroinvertebrates.org's photography, the ID Key takes an image-forward approach. For the Key, this means taking the images from the cards in the site's ID Key and bringing them into the actual questions in the mobile app. In doing so, what do the hints look like now? Are they just text definitions?
In the process of mocking up different questions, I realized there are three general types of question pages, which would need to inform the type of button they use. The three types of questions are: Yes or No for one trait, variable answers for one trait, or a choice between two traits. How can I design a button that (a) looks clickable and (b) can still show a range of photographs? I didn't want to use one 'prototypical' image to represent each trait, because part of the beauty of the collection is being able to see the range of differences for a single trait. Wing pads look different across different Orders, as do tails. Another button design I had to work out was representing the ~absence~ of a trait. The original key design had one image gallery with choices underneath it; however, in choosing between Yes and No, it felt like there was some disconnect between seeing the trait and then clicking 'No.'

Current Flow
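The question types described above can be sketched as nodes in a small decision tree, with each branch consuming one user choice until an Order is reached. This is a simplified illustration of the structure, not the real key: the questions, branches, and the dead-end placeholder are all invented for the sketch.

```python
# Sketch of the ID Key structure: each node asks one question (yes/no on a
# trait, or variable answers on a trait) and leaves name an Order. The
# questions and branches here are simplified placeholders, not the real key.

KEY = {
    "question": "Does it have wing pads?",      # yes/no on one trait
    "answers": {
        "yes": {
            "question": "How many tails?",      # variable answers, one trait
            "answers": {
                "two": "Plecoptera (stoneflies)",
                "three": "Ephemeroptera (mayflies)",
            },
        },
        "no": "keep keying on other traits (placeholder branch)",
    },
}

def identify(node, choices):
    """Walk the key, consuming one choice per question page."""
    for choice in choices:
        node = node["answers"][choice]
        if isinstance(node, str):  # reached a leaf: an Order (or dead end)
            return node
    return node

print(identify(KEY, ["yes", "three"]))  # Ephemeroptera (mayflies)
```

Seen this way, the three question-page types are just different button layouts over the same node structure, and a leaf is where the 'jump to that Order in the Field Guide' link lives.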