My photo blog
The whole environment is based on webpack 4 and Pug templates; the input data are JSON files.
sources
├── data
│   ├── index.json
│   └── jeden-dzien-w-berlinie.json
└── images
    └── jeden-dzien-w-berlinie
        ├── 1200
        │   ├── IMG_0432.jpg
        │   └── ...
        ├── 576
        │   ├── IMG_0432.jpg
        │   └── ...
        ├── 768
        │   ├── IMG_0432.jpg
        │   └── ...
        └── 992
            ├── IMG_0432.jpg
            └── ...
Optimization
Of course, this 100/100 score holds only without the Google AdSense code, and only the main page reaches 100/100 ;)
Clone the repo and install dependencies
git clone
cd node-sharp-images
npm i
How to run
Dev
npm run dev
or
yarn dev
Prod
npm run prod
or
yarn prod
It is also possible to generate a sitemap based on the HTML files
npm run sitemap
or
yarn sitemap
Photo optimization
The page consists mostly of the pictures themselves, so I load the photos lazily using 'IntersectionObserver'. In addition, each picture is served in several sizes depending on the width of the window.
To generate that many variants I use my own script, sharp-images, which takes each original and generates folders with appropriately sized copies.
<picture>
<source data-srcset="./images/576/img.jpg" media="(max-width: 576px)" class="fade-in" srcset="./images/576/img.jpg">
<source data-srcset="./images/768/img.jpg" media="(max-width: 768px)" class="fade-in" srcset="./images/768/img.jpg">
<source data-srcset="./images/992/img.jpg" media="(max-width: 992px)" class="fade-in" srcset="./images/992/img.jpg">
<source data-srcset="./images/1200/img.jpg" media="(max-width: 1200px)" class="fade-in" srcset="./images/1200/img.jpg">
<img data-src="./images/1200/img.jpg" class="fade-in" src="./images/1200/img.jpg">
<noscript><img src="./images/1200/img.jpg"></noscript>
</picture>
Of course, this solution is SEO-friendly: the photos are still indexed by Google.
An essential addition is the <noscript><img src="./images/1200/img.jpg"></noscript> fallback, which keeps the images visible when JavaScript is disabled.
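The lazy loading described above can be sketched roughly like this (an assumed implementation, not necessarily the blog's exact code): an observer that promotes data-src / data-srcset to src / srcset once a picture approaches the viewport.

```javascript
// Sketch of IntersectionObserver-based lazy loading (an assumption about the
// approach, not the blog's actual code).

// Pure helper: given an element's data-* values, return the attributes to set.
function promotedAttributes(dataset) {
  const attrs = {};
  if (dataset.src) attrs.src = dataset.src;       // from data-src
  if (dataset.srcset) attrs.srcset = dataset.srcset; // from data-srcset
  return attrs;
}

function lazyLoad(pictureSelector) {
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach((entry) => {
      if (!entry.isIntersecting) return;
      // Promote data-* attributes on the <img> and every <source>.
      entry.target.querySelectorAll('img, source').forEach((el) => {
        const attrs = promotedAttributes(el.dataset);
        Object.entries(attrs).forEach(([k, v]) => el.setAttribute(k, v));
      });
      obs.unobserve(entry.target); // each picture only needs loading once
    });
  }, { rootMargin: '200px' }); // start loading slightly before scroll-in

  document.querySelectorAll(pictureSelector).forEach((p) => observer.observe(p));
}

// In the browser: lazyLoad('picture');
```

The rootMargin of 200px is a tunable choice: it trades a little extra bandwidth for images that are already loaded by the time they scroll into view.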
After optimizing the images, PageSpeed Insights scores the page 100/100.
Production version
Visit online: http://www.grzegorztomicki.pl
