Why does Semrush say I have duplicate content?
If the Site Audit bot identifies multiple pages whose content is at least 85% identical, it will flag them as duplicate content.
HTTP & HTTPS / WWW & non-WWW
In most cases, domains have duplicate content due to HTTP/HTTPS issues. According to W3C standards, whenever you have two versions of a URL (one on HTTP and the other on HTTPS), they are considered two separate documents.
The same goes for when a site serves both a www version of a page and a non-www version of the same page: search bots see these as two separate documents.
So when SemrushBot sees these two separate documents, it identifies them as duplicates, because that is how GoogleBot would see them.
To avoid this issue, add canonical tags on the duplicate pages pointing to the page you want treated as the canonical (indexed) version.
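For example, if https://www.example.com/page is the version you want indexed (example.com is a placeholder domain here), each duplicate variant of the page could carry a tag like this in its `<head>`:

```html
<!-- Tells search engines that the www HTTPS URL is the preferred version of this page -->
<link rel="canonical" href="https://www.example.com/page" />
```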
You should also set up a 301 redirect from each HTTP page to its HTTPS counterpart so that users and search engine bots only ever see your HTTPS version.
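How you configure the redirect depends on your server. As a minimal sketch, assuming an Apache server with mod_rewrite enabled, rules like the following in an .htaccess file permanently redirect all HTTP traffic to HTTPS:

```apache
# Permanently (301) redirect every HTTP request to its HTTPS equivalent
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]
```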
Pages with little content
Site Audit may also flag two pages that share the same header and footer content but have so little body content (one or two sentences per page) that the bot sees the pages as at least 85% similar, and therefore duplicates. In this case, you would need to expand the content on those pages so that bots can identify them as unique.