The HTTP Archive is a non-profit organization dedicated to tracking and analyzing the evolution of the web. This organization periodically crawls the top websites to collect detailed information about fetched resources, web platform APIs, and page execution traces. The collected data is then used to identify trends and patterns in web development and user experience. The HTTP Archive provides access to this data through reports, a public dataset in Google BigQuery, and other resources. This data can be used for offline analysis or to gain insights into the state of the web.
The website is categorized under Web Development and Internet & Computer Networking.
The website httparchive.org is built with 8 technologies.
UI frameworks
Bootstrap
Bootstrap is a free and open-source CSS framework directed at responsive, mobile-first front-end web development. It contains CSS- and JavaScript-based design templates for typography, forms, buttons, navigation, and other interface components.
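As a minimal sketch of how a page uses Bootstrap, the stylesheet is loaded (here from a CDN) and components are applied via utility classes; the version and CDN URL below are illustrative, not necessarily what httparchive.org ships:

```html
<!-- Load the Bootstrap stylesheet (version/CDN are illustrative) -->
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css" rel="stylesheet">

<!-- Reuse one of Bootstrap's prebuilt button components -->
<button type="button" class="btn btn-primary">View reports</button>
```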
Analytics
Google Analytics
Google Analytics is a free web analytics service that tracks and reports website traffic.
Version: GA4
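A GA4 property is typically installed with Google's gtag.js snippet; the measurement ID below is a placeholder, not the site's actual ID:

```html
<!-- Google tag (gtag.js); G-XXXXXXXXXX is a placeholder measurement ID -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXXXX"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'G-XXXXXXXXXX');
</script>
```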
IaaS
Performance
Google Cloud Trace
Google Cloud Trace is a distributed tracing system that collects latency data from applications and displays it in the Google Cloud Console.
Tag managers
Google Tag Manager
Google Tag Manager is a tag management system (TMS) that allows you to quickly and easily update measurement codes and related code fragments, collectively known as tags, on your website or mobile app.
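A GTM container is usually added with Google's standard loader snippet placed in the page `<head>`; `GTM-XXXXXXX` below is a placeholder container ID:

```html
<!-- Google Tag Manager loader; GTM-XXXXXXX is a placeholder container ID -->
<script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','GTM-XXXXXXX');</script>
```

Once the container loads, individual tags are configured in the GTM console rather than edited directly in the page source.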
Security
HSTS
HTTP Strict Transport Security (HSTS) informs browsers that the site should only be accessed using HTTPS.
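HSTS is enabled by sending a single response header over an HTTPS connection; the directive values below are illustrative and not necessarily the site's actual policy:

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```

`max-age` tells the browser how long (in seconds) to force HTTPS, `includeSubDomains` extends the policy to subdomains, and `preload` signals eligibility for browsers' built-in preload lists.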
Miscellaneous
Open Graph
Open Graph is a protocol used to integrate any web page into the social graph.
RUM (Real User Monitoring)
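A page joins the social graph by declaring the four required Open Graph properties (`og:title`, `og:type`, `og:url`, `og:image`) as `<meta>` tags in its `<head>`; the content values below are illustrative:

```html
<!-- Required Open Graph properties; values are illustrative -->
<meta property="og:title" content="HTTP Archive">
<meta property="og:type" content="website">
<meta property="og:url" content="https://httparchive.org/">
<meta property="og:image" content="https://httparchive.org/assets/preview.png">
```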
