Integrity and Scrutiny software support - FAQs
Before emailing your question, please take a quick look at the FAQs below to see whether your question is answered.
These FAQs accompany the manual for Integrity and Scrutiny.
If your problem isn't answered here, please use this form.
Can I put in a username and password to crawl pages that require authentication?
This can have disastrous results, because some website systems have a web interface with controls (including 'delete' buttons) that look to web crawlers like links.
I have now included this feature in Scrutiny, with the necessary advice, warnings and disclaimers. Click 'Advanced settings' on the settings screen.
What does the "page titles are unique" option do?
Choosing this option is a quicker and more accurate way to crawl your site, but it only works if each of your pages has a different title.
After checking each internal link, the app then has to fetch the contents of the page, read through it and pull out the links from that page. That's how it crawls the site. It'll encounter a link like "index.html" lots of times (perhaps on every page), so before fetching the contents it has to decide whether it's done that page already. It compares the new link with the list of those it's already done.
Integrity used to use the url to determine this. However, it's often the case that the same page is referred to by a number of different urls - eg peacockmedia.co.uk and peacockmedia.co.uk/index.html are the same page, but a web crawler can't know that. Some content management systems can refer to the same page by quite a few different urls. That means that the app could do much more work than it needs to, and over-report the number of links and pages.
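As a loose illustration of the difference, here's a minimal Python sketch (not the app's real code; note that with the title option, the comparison can only happen once a page's content has been fetched):

    # Two ways of deciding whether a page has been seen before (simplified sketch).
    seen_urls = set()
    seen_titles = set()

    def seen_before_by_url(url):
        """URL-based: / and /index.html look like two different pages."""
        if url in seen_urls:
            return True
        seen_urls.add(url)
        return False

    def seen_before_by_title(title):
        """Title-based: both urls carry the same title, so the page is only done once.
        (The title is only known once the page's content has been fetched.)"""
        if title in seen_titles:
            return True
        seen_titles.add(title)
        return False

    print(seen_before_by_url("http://peacockmedia.co.uk/"))            # False - new
    print(seen_before_by_url("http://peacockmedia.co.uk/index.html"))  # False - wrongly treated as new
    print(seen_before_by_title("Home"))                                # False - new
    print(seen_before_by_title("Home"))                                # True - recognised as a duplicate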
Should I set "ignore querystrings"?
The querystring is information within the url of a page. It follows a '?' - for example www.mysite.co.uk/index.html?thisis=thequerystring. If you don't use querystrings on your site, then it won't matter whether you set this option. If your page is the same with or without the querystring (for example, if it contains a session id) then check 'ignore querystrings'. If the querystring determines which page appears (for example, if it contains the page id) then you shouldn't ignore querystrings, because Integrity or Scrutiny won't crawl your site properly.
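If you're unsure what 'ignoring' amounts to, here's a minimal Python sketch (an illustration, not the app's code) of reducing a url to its querystring-free form before comparing it with urls already crawled:

    from urllib.parse import urlsplit, urlunsplit

    def strip_querystring(url):
        """Drop the query (and fragment) so urls differing only by querystring compare equal."""
        scheme, netloc, path, query, fragment = urlsplit(url)
        return urlunsplit((scheme, netloc, path, "", ""))

    print(strip_querystring("http://www.mysite.co.uk/index.html?thisis=thequerystring"))
    # -> http://www.mysite.co.uk/index.html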
What does altering the number of threads do?
Using more threads may crawl your site faster, but it will use more of your computer's resources and your internet bandwidth. More importantly, more threads will bombard your server with more requests at a time, which some may not be able to handle (see below).
Using fewer threads will let you carry on using your computer with minimal disruption while the crawl is going on, and will make less of a demand on your web server.
The default is twelve, the minimum is one and the maximum is 40. (Before v4, the maximum was 30 and the default was seven.) I've found that using more than 40 has little effect.
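For illustration, the threads setting behaves roughly like the worker count of a thread pool. A minimal Python sketch, using made-up urls (this is not the app's real code):

    from concurrent.futures import ThreadPoolExecutor
    from urllib.error import HTTPError
    from urllib.request import urlopen

    def check(url):
        """Fetch a status code for one link."""
        try:
            with urlopen(url, timeout=30) as response:
                return response.status
        except HTTPError as err:
            return err.code  # 404 and friends arrive as exceptions here

    urls = ["http://www.mysite.com/", "http://www.mysite.com/about.html"]  # made-up urls

    # The 'threads' slider is essentially the max_workers value here:
    with ThreadPoolExecutor(max_workers=12) as pool:
        for url, status in zip(urls, pool.map(check, urls)):
            print(status, url)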
Pages time out / the web server stops responding
This isn't uncommon. Some servers will respond to many simultaneous requests, but some will have trouble coping, or may deliberately stop responding if being bombarded from the same IP. There are a couple of things you can do. First of all, the 'threads' slider sets the number of requests that Scrutiny/Integrity can make at once. If you move this to the extreme left, then Scrutiny/Integrity will send one request at a time, and process the result before sending the next. This alone may work. If not, then there's a box beside that slider which allows you to set a delay (in seconds). You can set this to what you like, but a fraction of a second may be enough.
If your server is simply being slow to respond, you can increase the timeout.
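In terms of the sketch in the previous answer, moving the slider to one thread and adding a delay is roughly equivalent to this (again illustrative Python, not the app's code; the delay and timeout values are just examples):

    import time
    from urllib.error import HTTPError
    from urllib.request import urlopen

    def check_gently(urls, delay=0.25, timeout=60):
        """One request at a time (threads=1), pausing between requests (the delay box),
        with a longer timeout for a slow server."""
        for url in urls:
            try:
                with urlopen(url, timeout=timeout) as response:
                    print(response.status, url)
            except HTTPError as err:
                print(err.code, url)
            time.sleep(delay)  # even a fraction of a second can be enough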
A link to a social networking site (eg YouTube, Facebook) is reported as a bad link or an error in Scrutiny, but the link works fine in my browser
In your browser, log out of the site in question, then visit the link. You'll be seeing the same page that Scrutiny sees because, by default, it doesn't attempt to authenticate.
If you see a page that says something like 'you need to be logged in to see this content' then this is the answer. It's debatable whether a site should return a 404 if the page is asking you to log in, but that should be taken up with the site in question.
You have several options. You could switch on authentication in Scrutiny (you may not need to give Scrutiny the username and password; it may be enough to be logged in using Safari). You could set up a rule so that Scrutiny does not check these links, or you could change your profile on the social networking site so that the content is visible to everyone.
Are there limitations when crawling a very large site?
If your site is a larger site then the memory use and demand on the processor will increase as the lists of pages crawled and links checked get longer.
Version 4 is much more efficient, but if the site is large enough (hundreds of thousands of links) then the app will eventually run out of memory and obviously can't continue.
I have several suggestions:
- You can crawl the site in parts, if you can break it down into sections, using the blacklist and whitelist (eg, to crawl everything under /engineering, you start at mysite.com/engineering and type /engineering into the 'only follow links containing' box)
- Make sure Integrity isn't going into a loop or crawling the same page multiple times because of a session id or date in a querystring - you can ignore querystrings in the settings, but make sure that content you want to crawl isn't controlled by information in the querystring (eg a page id)
- See if you're crawling unnecessary pages, such as a messageboard. To Integrity and Scrutiny, a well-used messageboard can look like tens of thousands of unique pages and it will try to list and check all of those pages. Again, you can exclude these pages by blacklisting part of the url or querystring or ignoring querystrings.
- 'Page titles are unique' can also help you to avoid duplicating or crawling unnecessary pages, and will be quicker, but this only works if every page really does have a unique title.
I use Google advertising on my pages and don't want hits on these ads from my IP address
A link that's listed in the form "www.mysite.com/../page.html" is reported as an error but when I click it in the browser it works perfectly well
Sometimes a link is written in the html as '../mypage.html'. This means that the page is to be found in the directory above, which is fine as long as the link is deep in the site. If it appears on a top-level page in that form, then it's technically incorrect because no-one should have access to the directory above your domain. Browsers tend to tolerate this and assume the link is supposed to point to the root of your site. At present, Integrity and Scrutiny do not make this assumption and report the error.
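You can see the browser behaviour with a standards-compliant resolver such as the one in Python's standard library, which (per RFC 3986) silently discards the '..' that would climb above the root:

    from urllib.parse import urljoin

    # The leading '../' has nowhere to go from the top level, so it's dropped:
    print(urljoin("http://www.mysite.com/", "../page.html"))
    # -> http://www.mysite.com/page.html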
A link that uses non-ascii or unicode characters is reported as an error but when I click it in the browser it works perfectly well
Although some browsers do handle this (by percent-encoding the characters), as far as I'm aware it is still outside web standards (RFC 1738). I expect that in time it will be used more commonly and supported by everything which needs to parse a url, becoming an 'unofficial standard' before eventually being allowed as an official standard. But in the meantime, I don't think it's advisable to use non-ascii characters in urls.
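If you do need such urls to pass strict checkers, percent-encoding puts them back inside the standard. For example, using Python's standard library (the path below is made up):

    from urllib.parse import quote

    # Non-ascii characters become %-escaped UTF-8 bytes, which is valid in a url:
    print(quote("/menü/café.html"))
    # -> /men%C3%BC/caf%C3%A9.html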
I need to make Integrity or Scrutiny appear to be a 'real' browser
You can change the user-agent string to make it appear to the server to be a browser (known as 'spoofing').
Go to Preferences and paste your chosen user-agent string into the box.
There is an incredibly comprehensive list of browser user-agent strings on this page: http://www.zytrax.com/tech/web/browser_ids.htm
If you would like to find the user-agent string of the browser you're using now, just hit this link:
What's my user-agent string?
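For comparison, this is all that spoofing amounts to at the HTTP level - the crawler simply sends a browser-like User-Agent header with each request. A Python sketch, using an example Safari-style string (illustrative, not a recommendation):

    from urllib.request import Request, urlopen

    req = Request(
        "http://example.com/",  # made-up url
        headers={"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) "
                               "AppleWebKit/605.1.15 (KHTML, like Gecko) "
                               "Version/16.0 Safari/605.1.15"},
    )
    with urlopen(req) as response:
        print(response.status)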
What's the difference between 'checking' and 'following'?
In a nutshell, checking means just asking the server for the status of that page without actually visiting the page. Following means visiting that page and scraping all the links off it.
Checking a link is sending a request and receiving a status code (200, 404, whatever). Integrity and Scrutiny will check all of the links it finds on your starting page. If you've checked 'Check this page only' then it stops there.
But otherwise, it'll take each of those links it's found on your first page and 'follow' them. That means requesting and loading the content of the page, then going through the content finding the links on that page. It adds all the links it finds to its list and then goes through those, checking them and, if appropriate, following them in turn. Note that it won't 'follow' external links, because it would then be crawling someone else's site - it just needs to 'check' external links.
You can ask Integrity or Scrutiny not to check certain links, or to only follow or not follow certain links. You do this by typing part of a url into the relevant box. For example, if you want to only check the section of your site below /engineering you would type '/engineering' (without quotes) into the 'Only follow urls containing...' box. (You will also need to start your crawl at a url containing that term.)
You don't need to know about pattern matching such as regex or wildcards, just type a part of the url.
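To make the distinction concrete, here is a much-simplified sketch of the crawl loop in Python - not the apps' real code - showing 'check' (status only), 'follow' (fetch and scrape), and a substring rule like the 'Only follow urls containing...' box:

    from html.parser import HTMLParser
    from urllib.error import HTTPError
    from urllib.parse import urljoin, urlsplit
    from urllib.request import Request, urlopen

    class LinkCollector(HTMLParser):
        """Pulls the href of every <a> tag out of a page."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check(url):
        """'Checking': ask the server for a status code without loading the page."""
        try:
            with urlopen(Request(url, method="HEAD"), timeout=30) as response:
                return response.status
        except HTTPError as err:
            return err.code  # 404 and friends arrive as exceptions here

    def follow(url):
        """'Following': load the page's content and scrape the links off it."""
        with urlopen(url, timeout=30) as response:
            html = response.read().decode("utf-8", errors="replace")
        collector = LinkCollector()
        collector.feed(html)
        return [urljoin(url, link) for link in collector.links]

    def crawl(start, only_follow=""):
        site = urlsplit(start).netloc
        to_do, done = [start], set()
        while to_do:
            url = to_do.pop()
            if url in done:
                continue
            done.add(url)
            print(check(url), url)  # every link found gets 'checked'...
            if urlsplit(url).netloc == site and only_follow in url:
                to_do.extend(follow(url))  # ...but only matching internal pages get 'followed'

    crawl("http://www.mysite.com/engineering/", only_follow="/engineering")  # made-up site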
What do the red and orange colours mean in the list?
To check a link, Integrity sends a request and receives a status code back from your server (200, 404, whatever).
The 'status' column tells you the code that the server returns to Integrity when it checks each link. 200 means that the link is good; 300-range codes mean there's something not quite right (usually a redirection) but the link still works; 400-range codes mean that the link is bad and the page can't be accessed; and 500-range codes mean some kind of error with the server. So the higher the number, the worse the problem, and Integrity colours these (by default) white, orange and red.
There's a full list of all the possible status codes here: http://en.wikipedia.org/wiki/List_of_HTTP_status_codes - but Integrity helpfully gives you a description of the status as well as the code number.
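In code terms, the default colouring rule is roughly this (a sketch of the behaviour described above, not the app's actual implementation):

    def colour_for_status(code):
        if code >= 400:
            return "red"     # 4xx bad link / 5xx server error
        if code >= 300:
            return "orange"  # something not quite right, usually a redirection
        return "white"       # 2xx - the link is good

    for code in (200, 301, 404, 500):
        print(code, colour_for_status(code))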
What is the difference between the 'by link', 'by page' and flat views?
If you're not sure of the difference between a page and a link, there's an infographic here
The 'by link' view is a list of links - remember that each link can occur more than once (for example your 'home' link will probably appear on every one of your pages) - so in the 'by link' view, each link will be listed once, and you can open it up to see a list of pages that the link appears on (and if it's broken you'll probably have to fix it on each of those pages).
'by page' means that you see the same information the other way around, ie you'll see a list of your pages, expandable to show the links that appear on that page.
The 'flat view' is provided for a number of reasons: the 'by link' list is expanded so that each occurrence of each link has a row in the table. If you're exporting your list to an Excel spreadsheet then this view will be much more suitable.
You may find that when you're fixing your links you might prefer one view over the other.
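The two views are the same data organised in opposite directions. A small Python sketch with made-up data shows how one can be derived from the other:

    from collections import defaultdict

    # 'by page': each page mapped to the links appearing on it (made-up data)
    by_page = {
        "index.html": ["home.html", "about.html"],
        "about.html": ["home.html", "contact.html"],
    }

    # 'by link': each link mapped to the pages it appears on
    by_link = defaultdict(list)
    for page, links in by_page.items():
        for link in links:
            by_link[link].append(page)

    print(dict(by_link))
    # {'home.html': ['index.html', 'about.html'], 'about.html': ['index.html'],
    #  'contact.html': ['about.html']}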
When I export to csv the file is corrupt in Excel
Because the 'by link' view shows a list of pages in one column when exported (and this list will show in one cell when opened in Excel) it may break Excel's 256-character limit.
Switch to the 'flat view' before exporting, which expands the 'by link' view so that each occurrence of each link is shown on a separate row. This should solve the problem. If it produces a very large file (and Excel has another limit of about 65,000 rows) you may prefer to switch to 'bad links only' before exporting.
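The reason the flat view exports cleanly is that each row holds exactly one occurrence of one link, so no cell ever has to contain a long list of pages. A Python sketch with made-up data:

    import csv

    occurrences = [
        ("home.html", 200, "index.html"),
        ("home.html", 200, "about.html"),
        ("missing.html", 404, "about.html"),
    ]

    with open("links.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["link", "status", "appears on page"])
        writer.writerows(occurrences)  # one row per occurrence of each link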
Integrity or Scrutiny crashes or hangs during a crawl
I'm happy to investigate, but I'll need to know about your Mac, your version of OS X and the url that you're starting from. Please use this form.
If your site is a larger site (hundreds of thousands of pages or several hundred thousand links) then the chances are that you're hitting a limit. See the limitations section above for more details and suggestions.
What does 'Location of Validator' mean? (Scrutiny feature)
By default, this screen uses W3C's HTML validation service. This is a donation-funded service.
It's possible to download, install and run the validator on your Mac for free. I can't support you with this, but if you are successful, you can enter the url of your instance of the validator in the appropriate box in Preferences.
As an alternative to the installation instructions above I can recommend the free 'Validator S.A.C' from Chuck Houpt. It's easy to download and run. You will need to start the validator as a web service but this is relatively easy too and the instructions are on the same page as the download. You will then need to enter http://localhost/w3c-validator as the location of the validator.
Scrutiny's HTML Validation screen times out
Scrutiny currently starts validation as soon as the first page is crawled, but it respects W3C's wish for automated requests to be no less than a second apart. This is why validation may still be running after your crawl has finished.
The public service is not always available and has limits, reporting 'The Request Timed Out' after a certain number of checks. Consider running your own instance of the validator (see the previous question).
What does the archive feature do?
When Integrity crawls the site, it has to pull in the html code for each page in order to find the links. With the archive mode switched on, it simply saves that html as a file in a location that you specify at the end of the crawl.
You can go back and refer to those files or use them as a backup, but the app doesn't alter them in any way (eg making the links relative), so they're not particularly user-friendly if you want to view them.
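In effect, archiving amounts to something like this sketch (illustrative Python, not the app's code) - the html that was fetched anyway is written out verbatim:

    import os
    from urllib.parse import urlsplit
    from urllib.request import urlopen

    def archive_page(url, folder="archive"):
        with urlopen(url, timeout=30) as response:
            html = response.read()
        # Derive a flat filename from the url path (made-up naming scheme):
        name = urlsplit(url).path.strip("/").replace("/", "_") or "index.html"
        os.makedirs(folder, exist_ok=True)
        with open(os.path.join(folder, name), "wb") as f:
            f.write(html)  # saved exactly as received - links are not rewritten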
What does 'Check for robots.txt and robots meta tag' do? (Scrutiny feature)
The robots.txt file and the robots meta tag allow you to indicate to web robots such as the Google robot and Scrutiny that you wish them to ignore certain pages. The preference in Scrutiny is off by default and can be switched on in Preferences. All links are followed and checked regardless of this setting, but if a page is marked as 'noindex' in the robots meta tag or disallowed in the robots.txt file, it will not be included in the sitemap, SEO or validation checks. robots.txt must have a lowercase filename, be placed in the root directory of your website and be constructed as described at http://www.robotstxt.org/robotstxt.html
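For illustration, Python's standard library applies the same robots.txt rule, so you can test what a robot is allowed to fetch (the site and path here are hypothetical):

    from urllib.robotparser import RobotFileParser

    robots = RobotFileParser("http://www.mysite.com/robots.txt")
    robots.read()  # fetch and parse the robots.txt file

    if robots.can_fetch("*", "http://www.mysite.com/private/page.html"):
        print("allowed - would be included in sitemap, SEO and validation checks")
    else:
        print("disallowed - still followed and checked, but excluded from those checks")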
Can I give you some money?
Yes - Integrity works for free with no restrictions, but donations are very much appreciated and enable me to spend more time developing, especially if you use it a lot or are a business user. Donate here