4 votes
Googlebot doesn't see links in Angular SPA because routerLink is rendered with an href starting with a hash
That's a common JavaScript / SEO problem. It can happen with all JavaScript frameworks.
To make your website compatible with Google (and benefit from SEO), you might use the Server Side Rendering (SSR)...
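To see why hash-based hrefs are a problem for crawling, note that Googlebot generally ignores the URL fragment, so `href="#/about"` points at the same document as the current page. A tiny illustrative check (function name is mine, not from the answer):

```typescript
// Decide whether an anchor's href gives a crawler a distinct URL to fetch.
// Illustration only: real crawl behavior is more nuanced.
function isCrawlableHref(href: string): boolean {
  // Hash-only routes ("#/about") and empty hrefs resolve to the
  // current document, so a crawler gets nothing new to fetch.
  if (href === "" || href.startsWith("#")) return false;
  return true;
}

console.log(isCrawlableHref("/about"));  // path-style route: crawlable
console.log(isCrawlableHref("#/about")); // hash route: not crawlable
```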
3 votes · Accepted
Routing in the most SEO friendly way with Angular?
There is a more SEO-friendly way. Instead of calling a function that performs the redirect, you can link directly in the HTML:
<a [routerLink]="'/'">Home</a>
3 votes
In a client-side-rendered app, some pages are successfully crawled by Googlebot while others fail to fully render. How can I get all pages to render?
It's unlikely to be a latency issue. That said, I haven't seen how fast it works or whether Lighthouse was able to fully render the page.
Given that, yes, a prerenderer would be the simplest solution here.
...
3 votes · Accepted
For a single page app, What is the proper way to offer large sitemap.xml files to webcrawlers?
One solution would be to use CloudFront to tie various services together under a single domain. CloudFront is Amazon's content delivery network (CDN). You point your DNS to CloudFront and it fetches ...
2 votes
How should I implement links in Angular so that Google can follow them?
<a
class="example-class"
[routerLink]="getRouteByName('example-route')">
Click Me
</a>
Will then look like this in your page source:
<a
_ngcontent-c[...
2 votes
How should I implement links in Angular so that Google can follow them?
If href is populated on DOM ready and its value is correct (e.g. when you copy it into the address bar, it shows the correct page), then you're good.
You can also check whether Google crawls your site by ...
2 votes
How to include URLs generated from the WordPress API in my XML sitemap?
Just adding https://my-site.example/blog/
to your https://my-site.example/sitemap.xml will work, and Google will pick up the URLs.
Another way:
Put a https://my-site.example/blog/sitemap.xml file/url in ...
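For the second approach, a sitemap index file can reference the blog sitemap alongside the main one. A sketch following the sitemaps.org index format (URLs taken from the example domain above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://my-site.example/sitemap.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://my-site.example/blog/sitemap.xml</loc>
  </sitemap>
</sitemapindex>
```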
2 votes
Googlebot doesn't see links in Angular SPA because routerLink is rendered with an href starting with a hash
Check whether you have { useHash: true } in your Router configuration. If you do, just remove it; that should solve the problem.
Before:
RouterModule.forRoot(routes, { useHash: true });
Now:
...
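As a sketch of the change (assuming a `routes` array defined elsewhere), the fix is simply dropping the option:

```typescript
// Before: hash-based routing; routerLink renders href="#/..."
// which Googlebot does not follow.
RouterModule.forRoot(routes, { useHash: true });

// After: default path-based routing; routerLink renders a real
// href="/..." that crawlers can follow. Note that the server must
// then serve index.html for those deep paths.
RouterModule.forRoot(routes);
```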
2 votes
Google Analytics reporting all traffic as organic
You didn't do enough debugging. A switch in attribution like that is only possible with complete session breakage. It's common when you either have checkout on a different TLD (top-level domain), so ...
1 vote
Publication wrong date in google search result
It’s funny, but it seems like you’ve already answered your own question in the comments! You mentioned checking the structured data, which is indeed the most likely cause of the problem. Great job—it’...
1 vote · Accepted
In a client-side-rendered app, some pages are successfully crawled by Googlebot while others fail to fully render. How can I get all pages to render?
I have found the issue.
In Google Search Console you can examine a URL and do a "Live Test". When live testing the broken URLs and the working URLs I noticed a difference in the "More ...
1 vote
Crawled page screenshot for Angular site is blank in the Google Search Console
Through this issue, I found a way to solve it.
It looks like angular.io may no longer be indexed.
Angular can be indexed by Googlebot without using server-side rendering.
How to debug:
add ...
1 vote · Accepted
Should I exclude the assets folder from search engine crawlers in angular
It is usually a good idea to allow the entire assets folder to be crawled.
Images
If you don't allow your images to be crawled, they won't be indexed in image search. Disallow images from crawling ...
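A robots.txt along these lines keeps the assets folder crawlable while still blocking a sensitive area (paths and sitemap URL are hypothetical):

```
User-agent: *
Allow: /assets/
Disallow: /admin/

Sitemap: https://my-site.example/sitemap.xml
```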
1 vote
Should I exclude the assets folder from search engine crawlers in angular
What type of files are in your assets folder?
If you have images, JS, CSS, etc., it's fine for Googlebot to crawl them. If Googlebot needs JS to render your page, then those files will need to be crawled ...
1 vote
For a single page app, What is the proper way to offer large sitemap.xml files to webcrawlers?
It is important to define the purpose of the XML sitemap. The XML sitemap is a list of URLs that are important to you and that you want every crawling bot to reach, so that they can be crawled ...
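Since the sitemap is just a list of URLs, generating one for a single-page app can be a small build step. A minimal sketch (function name and URLs are mine; a real generator would also XML-escape URLs and may add lastmod entries):

```typescript
// Build a sitemap.xml string from a list of absolute URLs.
function buildSitemap(urls: string[]): string {
  const entries = urls
    .map((u) => `  <url><loc>${u}</loc></url>`)
    .join("\n");
  return [
    `<?xml version="1.0" encoding="UTF-8"?>`,
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">`,
    entries,
    `</urlset>`,
  ].join("\n");
}

const xml = buildSitemap([
  "https://my-site.example/",
  "https://my-site.example/blog/",
]);
console.log(xml);
```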