Why can’t I find URLs from the Audit report on my website?
Your campaign's crawl source dictates which pages our bot finds during the audit.
There are three types of crawl sources:
- Website;
- Sitemaps;
- URLs from file.
If you choose one of the Sitemaps options or the URLs from file option, our bot will crawl a specific set of pages taken either from your sitemap or from the file you upload.
If Website is your crawl source, we crawl your site starting from the homepage and follow the links we find in your pages' code.
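In other words, the bot discovers URLs the way a simple breadth-first crawler would: fetch a page, extract its links, resolve them to absolute URLs, and queue any internal ones it has not seen yet. The minimal Python sketch below only illustrates that idea; it is not the actual Site Audit bot, and the homepage URL is a placeholder.

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    # Collects the href value of every <a> tag found in a page's code.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(homepage, limit=100):
    # Breadth-first crawl: start at the homepage and follow internal links only.
    domain = urlparse(homepage).netloc
    queue, seen = [homepage], {homepage}
    while queue and len(seen) < limit:
        url = queue.pop(0)
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip pages that cannot be fetched
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)  # relative hrefs resolve against the current page
            if urlparse(absolute).netloc == domain and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen

# Example: list every internal URL reachable from the homepage
# for page in crawl("https://www.example.com/"):
#     print(page)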
If you are not sure how our bot could have found a certain link, you can check the URL's incoming internal links in the individual page report under Crawled Pages.
If you can't find a specific URL in your page's code, there are three possible reasons:
1. Our bot received a different version of the page during the crawl. In most cases, this means the page has changed since then and the URL was removed from the code.
2. You are looking for an absolute URL, while the links in your code are relative (the snippet after this list shows how a relative link resolves to its absolute form):
<a href="http://www.example.com/xyz.html"> - absolute link
<a href="/xyz.html"> - relative link
3. The link you see in our report is in decoded format, but in your page's code it is percent-encoded. This usually happens when a URL contains uncommon symbols, such as " double quotes (the snippet after this list also shows this conversion):
<a href = "/"xyz".html"> - decoded URL
<a href = "/%2F%22xyz%22.html"> - encoded URL
Frequently asked questions
- What Issues Can Site Audit Identify?
- How many pages can I crawl in a Site Audit?
- How long does it take to crawl a website? It seems like my audit is stuck.
- How do I audit a subdomain?
- Can I manage the automatic Site Audit re-run schedule?
- Can I set up a custom re-crawl schedule?
- How is the Site Health Score calculated in the Site Audit tool?
- How Does Site Audit Select Pages to Analyze for Core Web Vitals?
- How do you collect data to measure Core Web Vitals in Site Audit?
- Why is there a difference between GSC and Semrush Core Web Vitals data?
- Why are only a few pages of my website being crawled?
- Why are working pages shown as broken?
- Why can’t I find URLs from the Audit report on my website?
- Why does Semrush say I have duplicate content?
- Why does Semrush say I have an incorrect certificate?
- What are unoptimized anchors and how does Site Audit identify them?
- What do the Structured Data Markup Items in Site Audit Mean?
- Can I stop a current Site Audit crawl?
- How to Disable JS Rendering and Inspect a Page
Manual
- Configuring Site Audit
- Troubleshooting Site Audit
- Site Audit Overview Report
- Site Audit Thematic Reports
- Reviewing Your Site Audit Issues
- Site Audit Crawled Pages Report
- Site Audit Statistics
- Compare Crawls and Progress
- Exporting Site Audit Results
- How to Optimize Your Site Audit's Crawling Speed
- How to Integrate Site Audit with Zapier
- JS Impact Report