[Beta] Visual AI Checks
Note: Visual AI Checks is currently in Beta.
Visual AI Checks are an integral part of Guided Journeys. A check can be established for each section within a Guided Journey, ensuring that the criteria or skills specific to that section are systematically applied and validated. The goal of a check is to provide a visual verification mechanism that confirms an expected result or step has been achieved.
The Visual AI Check feature offers a way to configure, manage, and track checks within a Guided Journey, enhancing both self-paced and instructor-led experiences. Checks can be set up, edited, and deleted, and check indicators can be viewed alongside participant progress tracking.
CloudShare’s Visual AI Check algorithm leverages computer vision to detect predefined visual elements and compare them against the expected results set by the experience manager. This enables automated verification of participant progress, providing real-time insights into success or failure within Guided Journeys.
Creating Visual AI Checks
Note: For recommended tips and tricks on creating Visual AI Checks, see the Best Practices section later in this article.
1. Visual AI Checks are integrated into the Guided Journey. To use this feature, ensure that the Guided Journey is properly configured. For detailed instructions, refer to the Guided Journeys article.
2. Once you have a Guided Journey, Visual AI Checks can be created within each section by selecting the Check tab (creating a Visual AI Check for a section is optional).
3. Upload an image from your computer or add it using drag and drop. The supported file type is .png (RGBA), as it preserves image quality without lossy compression, ensuring optimal performance with the algorithm.
Note: The uploaded image file should be no larger than 5 MB.
4. After uploading an image, select the visual elements that describe the expected end result for participants. At least one element must be selected to define the check criteria. For example, if the experience manager adds an image to Section 3, they can select a particular part of it to define what participants' screens should show when they are in that section.
5. Use the Failure Options to define how the system responds when a participant fails a check. These settings let experience managers determine the level of feedback and guidance provided, helping participants understand their mistakes and improve their performance. By configuring failure options, the learning experience can be customized, ensuring that participants receive the appropriate support when they encounter challenges.
- Section hint – This option assists participants in reaching the correct result for the section's check; the hint appears only after a failed check. The hint text can be customized to guide participants toward achieving the correct result.
- Show Visual AI Hint – When a check is set up for a specific section and participants make a mistake or fail, the Show Visual AI Hint checkbox can be enabled to provide visual feedback. This feature highlights both correct and incorrect parts within the verified section, helping participants understand what they did right and where they went wrong.
Note: The Visual AI Hint is applied only to failed checks; it does not appear for checks that have passed or are still in progress.
6. Click Save to save your check configuration. You will then be forwarded to a screen with the check image. All of the selected elements are also displayed here.
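Before uploading, you can verify locally that a candidate image meets the upload constraints above (a .png file no larger than 5 MB). The following stdlib-only sketch is purely illustrative; it is not part of any CloudShare tooling, and the function name is hypothetical:

```python
import os

MAX_SIZE_BYTES = 5 * 1024 * 1024  # 5 MB upload limit

def validate_check_image(path):
    """Return a list of problems with a candidate check image (empty = OK)."""
    problems = []
    if not path.lower().endswith(".png"):
        problems.append("file must be a .png")
    if os.path.getsize(path) > MAX_SIZE_BYTES:
        problems.append("file exceeds the 5 MB limit")
    # Verify the PNG signature (first 8 bytes) to catch renamed files
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            problems.append("not a valid PNG file")
    return problems
```

A pre-check like this can save a round trip to the Editor when preparing many check images at once.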
Best Practices
Follow these best practices when setting up Visual AI Checks.
1. Understand the algorithm. While the algorithm is effective for structured visual comparisons, its accuracy and reliability are subject to the following limitations:
- Screen Resolution Variability. To ensure the accuracy of the check algorithm, the Editor and Viewer should have the same screen resolution (display settings). Variations in resolution could impact the algorithm's effectiveness. Therefore, it is recommended to use the relevant Virtual Machine (VM) - accessible via the Environment Details page or experience manager environment - to select the check image. This setup closely aligns with the Viewer’s environment, reducing the risk of resolution discrepancies.
- Display Zoom Discrepancies. If the operating system’s display zoom is not set to 100% for both the Editor and Viewer, scaling inconsistencies could interfere with the algorithm’s ability to recognize visual elements accurately. It is recommended to use the relevant VM (accessible via the Environment Details page or experience manager environment) to select the check image. This approach closely aligns with the Viewer’s environment, preventing any display zoom inconsistencies.
- Partial or Cropped Screenshots. Capturing check images that do not encompass the full screen could introduce inconsistencies, as the algorithm relies on precise visual matching. Screenshots should always be taken in full-screen mode from the same VM used by the Viewer.
2. Selected checks should be informative, unique, and specific. Ensure that the areas you select in your check images contain diverse visual elements and distinct characteristics. This improves the accuracy and effectiveness of your checks.
The following example demonstrates the recommended way to select elements for a Visual AI Check that satisfy the check criteria.
In the following example, however, the selection covers the entire row and contains many elements that are not specific enough.
The example below is informative, because the elements have meaning and significance within the image.
The following example, however, is not informative because it has blank areas of the screen, which do not have meaning or significance within the image.
3. Add relevant and specific instructions for your participants. The check relies on a specific image captured from a designated VM, so the experience manager must guide participants to the correct location to perform the check. Provide clear navigation instructions, ensuring that participants are on the same VM and product screen that the experience manager used when the check was created.
4. Test your environment. The experience manager should test everything first within the experience manager environment, to prevent errors during the class and to confirm that everything works as expected.
Limitations
- There can only be one check per section.
- Visual AI Checks are not applicable for multi-step classes (the Editor is presently not supported for multi-step classes).
- There is no limit to the number of check attempts.
- There is no check dependency between sections (from one section to another).
- A check is associated with one or more specific visual elements that are likely located on a dedicated machine, so participants must be on the correct machine to execute it successfully.
Editing the Check Configuration While the Experience is Live
While your experience is live (meaning that the experience access time has started or participants are in the environment, even if they joined before the access time), any local edits that you save in the Editor will update the participants' content in real time. When making these edits, you will be prompted with the following message, allowing you to decide how to handle the progress and check results of your participants.
You should select one of the following options:
- Keep sections’ progress and check results. This option allows you to retain the progress and check results for the sections that have already been completed, while still applying any changes made to the Guided Journey. This is useful if you want to preserve your previous work without starting over.
- Reset progress and check for all sections. This option resets both the progress and check results for all sections in the Guided Journey. It provides a fresh start, applying changes from the beginning without retaining any prior progress or results.
- Reset all sections’ progress and check starting from. This option provides the ability to reset all sections' progress and check results, but with the added feature of selecting a starting point. You can choose to begin the reset process from a specific section rather than starting from the first one, offering more control over which sections are affected by the reset.
After making your decision, you can add a custom message to notify your participants about the changes, and include instructions of your choice:
Clicking Yes will apply all of your changes and confirm your previous decision regarding resetting or not resetting progress and checks.
Participant Progress in the Viewer
Once checks are created, participants can track their status in the Viewer.
- At the top of the Viewer screen, a tracker displays the section numbers. Below each number, the section's check is displayed as passed (green checkmark) or failed (red checkmark); a gray checkmark indicates that the section has a check that has not yet been run. The result of any particular check does not affect the participants' progress.
- Participants can click the Check button at the bottom of the screen to run the check. If a participant's result matches the desired result set by the experience manager, the check for that section appears as Passed at the top of the screen, and the participant can then move to the next step.
If a participant is unable to achieve the correct check (meaning there is no match between the visual results displayed on their screen and the desired results set by the experience manager), the check will fail, and the participant will receive an optional message prompting them to try again. This message provides a hint to help the participant successfully complete the check.
Participants can skip a section at any time, giving them the option to bypass the check if it fails.
Participant Status in the Instructor Console
The Instructor Console view for checks is similar to what is displayed in the Viewer. Check statuses from the Viewer are updated in the Instructor Console in real time.
Real-time analytics can also be viewed within Experience Analytics, under the Progress tab. This includes average check rates, and a check results chart with passed, failed, and not started status indicators for each numbered section. Note that these indicators represent a combination of all participant environments.
The Check Results chart has a structure similar to the Progress Results chart, but each presents a different aspect of participant activity.
At the bottom of the Progress tab is the Actual vs. Estimated Time chart. This chart displays the average time participants spend per section versus the estimated time. The colors in the graph indicate whether the time is above, equal to, or below the time configured for the section. This helps the experience manager make better use of class time and fine-tune it going forward.
For example, here you can determine whether participants are spending too much time or not enough time on a particular section, or whether they are on pace according to the experience manager’s criteria.
Note: Estimated time is set up within the section configuration setup in the Editor.
Reporting the Score to the LMS
CloudShare supports the Learning Tools Interoperability (LTI) specification, a cross-platform protocol that enables learning platforms (like your LMS) to securely communicate and share data with external platforms (like CloudShare).
When a Guided Journey in CloudShare contains Visual AI Checks, participants' grades are reported to the LMS based on their check results.
The reported score is calculated as the number of successful checks in the Guided Journey divided by the total number of checks in the Guided Journey.
For example, suppose there are five checks and a participant succeeds on only one of them. The participant's score reported to the LMS would then be 1/5 (or 20%).
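The calculation can be sketched as follows; the function name and inputs are illustrative and not part of the CloudShare or LTI API:

```python
def lms_score(check_results):
    """Compute the LMS-reported score as passed checks / total checks.

    check_results: list of booleans, one per Visual AI Check in the
    Guided Journey (True = passed, False = failed or not yet run).
    """
    if not check_results:
        # No checks: per the article, the score falls back to progress-based grading
        return None
    return sum(check_results) / len(check_results)

# Example from the article: five checks, one passed -> 0.2 (20%)
score = lms_score([True, False, False, False, False])
```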
The score is sent whenever a new section is explored or there are new check results (either pass or fail).
When a Guided Journey has no checks, the score is based on the participants' progress. For more details, see the Guided Journeys article.