Site Scanner - Product Documentation
The Evinced Site Scanner helps you discover accessibility issues across your entire website, track changes over time, and monitor your overall compliance level in a fraction of the time.
Creating a Property
Enter the URL of the website you would like to scan for accessibility issues. You will be prompted with the form below upon initially logging in and can return to it by clicking the “Add Property” button on the main scanner dashboard.
Once you have entered the URL, click “Next” and then “Start Crawling” to begin the process. The default crawl setting is a maximum of 1,500 pages. Please contact email@example.com if more pages are needed. Once the crawling process has completed, click “Scan” to begin scanning.
Page Mapping Options
There are two options for populating URLs or pages that will then be scanned for accessibility issues.
Crawl mode will automatically traverse the provided website to find as many pages as possible. List mode lets you provide a static list of URLs that will be scanned. Toggling between modes is available at the top right corner of the “Advanced settings” page.
This is a list of URLs that serve as entry points for the automatic crawler. Generally only a single seed URL is needed, however if there are multiple areas of your website that are not directly connected then multiple seed URLs are a great option.
Defining the scope can help make sure that the important areas of your website are included in the scan and areas that shouldn’t be scanned are excluded.
Include or exclude URLs discovered by the crawler based on their domain. For example, if you wanted to scan www.evinced.com but also include developer.evinced.com, you could add a simple include domain rule to make sure that developer.evinced.com is included in the scan.
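A minimal sketch of how a domain include rule of this kind behaves. The rule set and function below are illustrative only, not Evinced's actual implementation:

```python
from urllib.parse import urlparse

# Hypothetical scope: the property domain plus one extra include rule,
# mirroring the www.evinced.com / developer.evinced.com example above.
INCLUDED_DOMAINS = {"www.evinced.com", "developer.evinced.com"}

def in_scope(url: str) -> bool:
    """Return True if the URL's host matches an included domain."""
    return urlparse(url).hostname in INCLUDED_DOMAINS

print(in_scope("https://developer.evinced.com/docs"))  # True
print(in_scope("https://blog.evinced.com/post"))       # False
```

Matching on the full hostname (rather than a substring) is what keeps unrelated subdomains such as blog.evinced.com out of the scan.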
Include or exclude pages by language. For language rules, Evinced takes the value of the lang attribute on the <html> tag and checks it against the regex provided in this field. A language rule is required, so if you would like to include all languages, simply use the .* regular expression in the field as shown.
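A small sketch of the rule described above. The documentation does not specify whether matching is anchored, so this sketch assumes the regex must match the whole lang attribute value:

```python
import re

def language_included(lang_attr: str, rule_regex: str) -> bool:
    # Assumption: the <html> lang attribute must fully match the regex
    # supplied in the language rule field.
    return re.fullmatch(rule_regex, lang_attr) is not None

print(language_included("en-US", ".*"))   # True: .* includes every language
print(language_included("fr", "en.*"))    # False: only English variants match
```

A pattern like en.* would match en, en-US, and en-GB alike, which is often more robust than an exact string.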
To exclude a language, select the “Exclude Language” option and specify the language. In this example, en-US will only match that exact string. When using exact strings, check the format used by the website being scanned, as formats can differ between sites.
Regex rules let you include or exclude domains as well as URL paths, parameters, etc. Every URL found while crawling will be matched against the regex and, if it matches, will be included in or excluded from the scan. For example, if you would like to include the app.evinced.com/stores/ and app.evinced.com/about/ directories but exclude app.evinced.com/products/, we would add the exclude regex rule below.
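The exclude behavior can be sketched as follows. The rule list and function names are hypothetical, but the pattern mirrors the app.evinced.com/products/ example:

```python
import re

# Hypothetical exclude rule mirroring the example: drop /products/ pages.
EXCLUDE_RULES = [re.compile(r"app\.evinced\.com/products/")]

def keep_url(url: str) -> bool:
    """A crawled URL survives if no exclude regex matches it."""
    return not any(rule.search(url) for rule in EXCLUDE_RULES)

print(keep_url("https://app.evinced.com/stores/42"))    # True
print(keep_url("https://app.evinced.com/products/42"))  # False
```

Note that dots are escaped in the pattern; an unescaped . would match any character and could exclude more URLs than intended.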
Grouping rules allow you to sample pages that may be built on the same underlying structure.
Mapping URLs with query parameters to unique pages
URLs that differ only in query parameters, letter case, or fragments (#) may map to the same page or to different pages. If URLs such as www.evinced.com/products?id=1 and www.evinced.com/products?id=2 present two different product pages, we may want to sample these pages, as they are likely built on the same underlying structure.
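One way to picture this mapping: URLs that differ only in query string or fragment can be collapsed to a shared key. This is an illustrative sketch, not the scanner's actual grouping logic, and the function name is hypothetical:

```python
from urllib.parse import urlsplit, urlunsplit

def page_key(url: str, ignore_query: bool = True, ignore_fragment: bool = True) -> str:
    """Collapse URLs that differ only in query string or #fragment."""
    parts = urlsplit(url)
    query = "" if ignore_query else parts.query
    fragment = "" if ignore_fragment else parts.fragment
    # Lowercase the host so case differences do not split the group.
    return urlunsplit((parts.scheme, parts.netloc.lower(), parts.path, query, fragment))

# Both product URLs collapse to the same key, so they can be grouped.
print(page_key("https://www.evinced.com/products?id=1"))
print(page_key("https://www.evinced.com/products?id=2"))
```

Whether two such URLs should be treated as one page or two depends on the site, which is why this behavior is configurable.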
Sample similar pages to reduce the time/size of the scan
An e-commerce site with hundreds of thousands of product pages, all built around the same base components, may not need to scan each and every one for accessibility issues. Scanning a subset of these pages yields significant efficiency gains while still providing accessibility coverage.
So if we know there are thousands of products under the app.evinced.com/products/ directory, we may only need to scan a handful for accessibility issues because they are based on the same underlying page structure. After clicking the “Add rule” option, we can fill out the form as shown to limit the number of pages to 20. This means that once the crawler has discovered 20 of these pages, it will stop and move on to other parts of the website. Then just click “Save rule”.
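The sampling behavior described above can be sketched like this. The pattern, limit, and function are assumptions for illustration, mirroring the 20-page products example:

```python
import re

# Hypothetical grouping rule: cap pages under /products/ at 20.
GROUP_PATTERN = re.compile(r"app\.evinced\.com/products/")
GROUP_LIMIT = 20

def select_for_scan(crawled_urls):
    """Keep every URL, except that pages matching the group pattern
    stop being collected once the group limit is reached."""
    kept, group_count = [], 0
    for url in crawled_urls:
        if GROUP_PATTERN.search(url):
            if group_count >= GROUP_LIMIT:
                continue  # group is full; skip further product pages
            group_count += 1
        kept.append(url)
    return kept

urls = [f"https://app.evinced.com/products/{i}" for i in range(1000)]
print(len(select_for_scan(urls)))  # 20
```

Pages outside the group are unaffected, so the rest of the site is still crawled in full.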
Scanning Behind Login
The Evinced Site Scanner has the ability to scan areas of any website that require login. Simply provide the page, locators, and login credentials in the advanced settings to start finding accessibility issues.
To begin, click on the property to which you would like to add a login, then click the “Settings” button in the top right hand corner of the page.
On the settings page, scroll down until you find the Login section.
From the dropdown, select “Form Login” and a number of new options will populate.
Login URL: The URL of the page that contains the login form.
User Name: A valid user name needed to log in to the application.
User Name Selector: CSS selector for the user name text entry field web element.
Password: A valid password needed to log in to the application.
Password Selector: CSS selector for the password text entry field web element.
Login Button Selector: CSS selector for the “Login” or “Submit” button web element.
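Taken together, the fields might be filled in as follows. Every value here is a made-up example (the selectors, URL, and credentials are not from a real site), shown only to clarify what each field expects:

```python
# Hypothetical example values for the Form Login fields.
form_login = {
    "login_url": "https://app.example.com/login",          # page with the login form
    "user_name": "scanner-user@example.com",               # valid account user name
    "user_name_selector": "#email",                        # CSS selector for the user name field
    "password": "********",                                # valid account password
    "password_selector": "#password",                      # CSS selector for the password field
    "login_button_selector": "button[type='submit']",      # CSS selector for the login button
}
print(sorted(form_login))
```

The selectors are the part that varies most between sites; the next section shows how to find them.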
How to find a CSS Selector
A CSS selector is a strategy for locating elements within a web page. To find a CSS selector, navigate to the page that contains the element you need to locate, then right-click (control-click on Mac) the element and select “Inspect”. In this case, we have right-clicked on the “Email” field of our Evinced Product Hub website.
This will automatically bring up the browser developer tools with the element in question highlighted. Right-click (control-click) the highlighted element in the Elements tab and select Copy > Selector. The selector is now in your clipboard and ready to be used.
Save and Crawl
Once the fields are complete click the “Save and Crawl” button in the bottom right corner of the page.
Once the crawl is complete, simply click the “Scan” button. Scanning time can vary based on the number and complexity of pages. To re-scan a property, simply click the “Scan” button again and a new set of results will automatically be created.
To export results, simply click on the “Download CSV” button in the top right hand corner of the scan results page.
This report contains results organized by page. Data includes Property Name, Scan ID, Time, URL, and Issue Count.
This report contains results organized by issue. Data includes Property ID, Property Name, Scan ID, Issue ID, Time, Cross Scans Issue ID, First Seen Time, Issue Type ID, Issue Type Name, Severity ID, Severity Name, WCAG Tag Names, Issue Type Summary, Issue Type Description, Issue Type Knowledge Base URL, Element Template Signature ID, Selector, HTML’s DOM Snippet, and Page URL.
Please feel free to reach out to firstname.lastname@example.org with any questions.