Visual AI Checks
Note: Visual AI Checks is currently in Beta.
Visual AI Checks are an integral part of Guided Journeys. A check can be established for each section within the guided journey, ensuring that criteria specific to that section are systematically applied and evaluated. The goal of a check is to provide a visual verification mechanism for each section.
The Visual AI Check feature offers a comprehensive way to configure, manage, and track checks within a Guided Journey - offering value for both self-paced and experience manager-led classes. Checks can be set up, edited, and deleted, and check indicators can be viewed alongside participant progress tracking.
CloudShare’s sophisticated check algorithm uses computer vision to verify if the participant’s class results match the experience manager's visual element criteria. The algorithm identifies predefined visual elements during the verification process. Valuable insights can then be gained on participants’ success or failure throughout their Guided Journey.
The informative image area for a computer vision algorithm refers to the specific region or portion of an image that contains the most relevant or essential information required for the algorithm to accurately perform its task. This area should be carefully selected by the experience manager - to maximize the algorithm's ability to detect, classify, or analyze the content effectively while minimizing any noise or irrelevant data.
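The importance of a unique, informative area can be illustrated with a toy exact-match search. This is only a conceptual stand-in - CloudShare's actual computer-vision algorithm is not public - but it shows why a selected region that appears in more than one place on the screen cannot pinpoint the expected result:

```python
def find_region(screen, template):
    """Return all (row, col) positions where `template` appears in `screen`.

    Toy exact-match search illustrating the "unique area" best practice:
    if the selected area matches in more than one place, the check cannot
    unambiguously locate the expected visual element. This is NOT
    CloudShare's real matching algorithm, just an illustration.
    """
    th, tw = len(template), len(template[0])
    matches = []
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                matches.append((r, c))
    return matches

screen = [
    "....A.",
    ".AB.AB",
    "......",
]
# "AB" appears twice, so this region is not unique enough for a reliable check
print(find_region(screen, ["AB"]))  # [(1, 1), (1, 4)]
```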
Best Practices
The following best practices should be exercised when working with Visual AI Checks.
- Screen resolution matching: To ensure the accuracy of the check algorithm, the Editor and Viewer should have the same screen resolution (display settings). Variations in resolution could impact the algorithm's effectiveness. Therefore, it is recommended to use the relevant Virtual Machine (VM) - accessible via the Environment Details page or experience manager environment - to select the check image. This setup closely aligns with the Viewer’s environment, reducing the risk of resolution discrepancies.
- Operating system display zoom: Make sure the display zoom setting on your operating system is set to 100% for both the Editor and Viewer, in order to ensure optimal check performance. It is recommended to use the relevant VM (accessible via the Environment Details page or experience manager environment) to select the check image. This approach closely aligns with the Viewer’s environment, preventing any display zoom inconsistencies.
- Take a full-screen screenshot. The screenshot should be captured in full-screen mode, encompassing the entire display. It is recommended to take the screenshot on the same VM used by the Viewer to ensure consistency.
- Check area selection should be informative and unique. The area selection for your check images should be informative, unique, specific, and relevant. You should choose informative and unique elements within the image to increase the chance of your checks’ success. For example, you could highlight an area with “Normal” priority (as in the image below), but if that priority appears in other places on the screen, then highlighting that area is not specific or informative enough.
Another example is blank areas (as illustrated below), which do not have meaning or significance within the image.
It is recommended to capture and upload a full-screen screenshot; however, when you select your "visual check areas", your check criteria should be specific and distinct, focusing on a particular area rather than the entire screen.
For instance, try to avoid selecting a broad, undefined image area like this:
- Add relevant and specific instructions to your participants. This check relies on a specific image captured from a designated VM. Therefore, the experience manager must guide participants to navigate to the correct location to perform the check. Clear navigation instructions should be provided, ensuring that participants are in the same VM and product area/screen as the experience manager when the check was created.
- Test your environment. The experience manager should test everything first within the experience manager environment, to prevent errors from occurring during the class, and to make sure that everything will work as it should.
Limitations
- There can only be one check per section.
- Currently, there can only be one section hint per section.
- Visual AI Checks are not applicable for multi-step classes.
- There is no limit to the number of check attempts.
- There is no check dependency between sections (from one section to another).
- A check is associated with a specific visual element that is likely located on a dedicated machine, so participants must be on the correct machine to execute it successfully. Therefore, experience managers must guide participants to the exact location within a specific section to run the check and achieve successful results.
Creating Visual AI Checks
1. Visual AI Checks are integrated into the Guided Journey. To use this feature, ensure the Guided Journey is properly configured. For detailed instructions, refer to the Guided Journey article.
2. Once the Guided Journey is set up, a Visual AI Check can be created within each section by selecting the Check tab.
Note: Creating Visual AI Checks for a section is optional.
3. You can upload images from your computer or add them using drag and drop. For uploads, the supported file type is .png (RGBA), as it provides the highest resolution, ensuring optimal performance with the algorithm.
4. After uploading an image, the experience manager should also select any relevant visual elements that describe what the end result should be for participants. At least one area must be selected in order to define the check criteria. For example, if the experience manager is in Section 3 and adds an image to that section, they can select a particular area to define where participants should be when they are in that section.
Tip! The check algorithm works with “informative areas” - areas containing a minimum amount of distinguishing features in the visual element. When choosing areas, select a unique area that does not appear elsewhere in the image. For example, areas without color are not considered informative, and the algorithm cannot learn from them. Additionally, the experience manager and participant screens should have the same resolution. Set up an environment via CloudShare and take the screenshot there, to ensure identical settings between the experience manager and participants.
Note: A check does not have to be created for each section. A single check can run after several sections, verifying the cumulative result of those sections.
Note: There is no limitation on the number of elements that can be added for each check. At least one element needs to be set in order to have the check. Do not select the entire screen; instead, select the elements within that screen that make it unique.
5. Click Save to save your check configuration. You will then be forwarded to a screen with the check image. All of the selected elements are also displayed here.
Optional: At the bottom, there is an optional Section Hint designed to assist participants in reaching the check for this section, which will only appear if they fail.
Note: If a check is edited while a class is in session (with environments already associated with this class and some participants having successfully completed the check), the configuration will be updated in both the Editor and Viewer, and their checks will be reset to the new configurations. As a result, some participants will need to run the check again.
Editing the Check Configuration While the Class is Live
While your class is live (meaning that the class access time has started or participants are in the environment, even if they joined before the access time), any local edits that you save in the Editor will update the participants' content in real time. When making these edits, you will be prompted with the following message, allowing you to decide how to handle the progress and check results of your participants:
You should select one of the following options.
- Keep sections’ progress and check results. This option allows you to retain the progress and check results for the sections that have already been completed, while still applying any changes made to the Guided Journey. This is useful if you want to preserve your previous work without starting over.
- Reset progress and check for all sections. This option resets both the progress and check results for all sections in the Guided Journey. It provides a fresh start, applying changes from the beginning without retaining any prior progress or results.
- Reset all sections’ progress and check starting from. This option provides the ability to reset all sections' progress and check results, but with the added feature of selecting a starting point. You can choose to begin the reset process from a specific section rather than starting from the first one, offering more control over which sections are affected by the reset.
After making your decision, you can add a custom message to notify your participants about the changes, and include instructions of your choice:
Clicking Yes will apply all of your changes and confirm your previous decision regarding resetting or not resetting progress and checks.
Tracking Check Status in the Viewer
Once checks are created, participants can track their status in the Viewer.
1. At the top of the Viewer screen, a tracker displays the section numbers. Below each number, the check is displayed as passed (green checkmark) or failed (red checkmark). A gray checkmark indicates that the section has a check that has not yet been run.
Note: The result of any particular check does not affect the participants’ progress.
2. Participants can click the Check button at the bottom of the screen. If a participant’s results match the desired results set by the experience manager, the check for that section appears as Passed at the top of the screen, and the participant can then move to the next step.
Note: If a participant is unable to achieve the correct check (meaning there is no match between the visual results displayed on their screen and the desired results set by the experience manager), the check will fail, and the participant will receive an optional message prompting them to try again. This message provides a hint to help the participant successfully complete the check.
Note: Participants can skip a section at any time, giving them the option to bypass the check if it fails.
Tracking Check Status in the Experience Manager Console
The Experience Manager Console view for checks is similar to what is displayed in the Viewer. The check status from the Viewer is updated accordingly in real time, in the Experience Manager Console.
Real-time analytics can also be viewed within Class Analytics, under the Progress tab. This includes average check rates, and a check results chart with passed, failed, and not started status indicators for each numbered section. Note that these indicators represent a combination of all participant environments.
Note: The Check Results chart has a similar structure to the Progress Results chart, but each presents a different aspect of the class.
At the bottom of the Progress tab is the Actual vs. Estimated Time chart. This chart displays the average time versus the estimated time that participants are spending per section. The colors in the graph indicate to the experience manager when the time is above, equal to, or below the time configured for the section. This chart therefore helps the experience manager better utilize their class time, and learn how to fine-tune it going forward.
For example, here you can determine whether participants are spending too much time or not enough time on a particular section, or whether they are on pace according to the experience manager’s criteria.
Note: Estimated time is set up within the section configuration setup in the Editor.
The statuses in the Actual vs. Estimated Time chart are as follows:
1. Above the estimated time - indicated in red.
2. Approximately equal to the estimated time - indicated in orange.
3. Below the estimated time - indicated in yellow.
The area on the left offers additional insights into the average time vs. estimated time per section.
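The status logic above can be sketched roughly as follows. Note that the tolerance band for "approximately equal" is an assumption for illustration only - CloudShare's actual threshold is not documented here:

```python
def time_status(actual_minutes: float, estimated_minutes: float,
                tolerance: float = 0.1) -> str:
    """Classify a section's average time against its estimate.

    The 10% tolerance band is an illustrative assumption, not a
    documented CloudShare threshold.
    """
    if actual_minutes > estimated_minutes * (1 + tolerance):
        return "red"     # above the estimated time
    if actual_minutes < estimated_minutes * (1 - tolerance):
        return "yellow"  # below the estimated time
    return "orange"      # approximately equal to the estimated time

print(time_status(15, 10))  # red
print(time_status(10, 10))  # orange
print(time_status(5, 10))   # yellow
```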
LMS Integration - Score Reporting
CloudShare supports the Learning Tools Interoperability (LTI) specification, a cross-platform protocol that enables learning platforms (like your LMS) to securely communicate and share data with external platforms (like CloudShare).
When a Guided Journey in CloudShare contains Visual AI Checks, participants’ grades can be reported to an LMS based on their check results.
A result is calculated as the number of successful checks divided by the total number of checks within the Guided Journey.
So for example, let’s say there are five checks, and a participant succeeded on only one of the checks. The participant’s reported score to the LMS would then be 1/5 (or 20%).
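The calculation in this example can be sketched as follows - a minimal illustration of the formula above, not CloudShare’s actual reporting code:

```python
def lti_score(passed_checks: int, total_checks: int) -> float:
    """Fraction of Visual AI Checks passed, as reported to the LMS.

    Illustrative sketch of the documented formula:
    score = successful checks / total checks in the Guided Journey.
    """
    if total_checks == 0:
        raise ValueError("Guided Journey has no checks")
    return passed_checks / total_checks

# Example from the article: five checks, one passed
print(lti_score(1, 5))  # 0.2 (i.e., 20%)
```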
The score is reported when a participant explores a new section, or when there is a new check result (either pass or fail).
Note: When there is a Guided Journey with no check, the score will be based on the participants’ progress. For more details on this, see the Guided Journey article.