
How CloudFront Delivers Content to Users


Once you configure CloudFront to deliver your content (a minimal configuration sketch appears after the list below), here's what happens when users request your objects:
  1. A user accesses your website or application and requests one or more objects, such as an image file and an HTML file.
  2. CloudFront determines which edge location can best serve the user's request, typically the nearest CloudFront edge location in terms of latency, and routes the request to that edge location.
  3. At the edge location, CloudFront checks its cache for the requested files. If the files are in the cache, CloudFront returns them to the user; you can observe this with the X-Cache response header, as shown after the list. If the files are not in the cache, CloudFront does the following:
    1. CloudFront compares the request with the specifications in your distribution and forwards it to the applicable origin server for that file type: for example, to your Amazon S3 bucket for image files and to your HTTP server for HTML files.
    2. The origin servers send the files back to the CloudFront edge location.
    3. As soon as the first byte arrives from the origin, CloudFront begins to forward the files to the user. CloudFront also adds the files to the cache in the edge location for the next time someone requests those files.
  4. After an object has been in an edge cache for 24 hours, or for the duration specified in your file headers (see the Cache-Control sketch after this list), CloudFront does the following:
    1. CloudFront forwards the next request for the object to your origin to determine whether the edge location has the latest version.
    2. If the version in the edge location is the latest, CloudFront delivers it to your user.
    3. If the version in the edge location is not the latest, your origin sends the latest version to CloudFront, which delivers the object to your user and stores the latest version in the cache at that edge location.
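For context on the configuration step, here is a minimal sketch of creating a distribution with an S3 origin using boto3 (Python). It assumes AWS credentials are already set up; the bucket name, origin ID, and the legacy ForwardedValues cache settings below are illustrative choices rather than the only option (newer setups typically attach a cache policy instead):

import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        # CallerReference must be unique for each create request
        "CallerReference": str(time.time()),
        "Comment": "Example: S3-backed distribution",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "my-s3-origin",  # illustrative origin ID
                "DomainName": "example-bucket.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "my-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Legacy cache settings; MinTTL is required alongside them.
            # DefaultTTL of 86400 seconds matches the 24-hour behavior
            # described in step 4 above.
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "MinTTL": 0,
            "DefaultTTL": 86400,
            "MaxTTL": 31536000,
        },
    },
)

print(response["Distribution"]["DomainName"])

CloudFront responds with the distribution's domain name (something like d111111abcdef8.cloudfront.net); requests to that domain are what trigger the steps above.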
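You can watch steps 2 and 3 happen by inspecting the X-Cache response header that CloudFront adds: the first request for an object is typically reported as a miss (fetched from the origin), and a repeat request as a hit (served from the edge cache). The distribution domain and object path below are hypothetical:

import urllib.request

url = "https://d111111abcdef8.cloudfront.net/images/logo.png"

for attempt in ("first", "second"):
    with urllib.request.urlopen(url) as resp:
        # Typically "Miss from cloudfront" on the first request,
        # then "Hit from cloudfront" on the repeat
        print(f"{attempt} request:", resp.headers.get("X-Cache"))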
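For step 4, the "duration specified in your file headers" comes from standard HTTP caching headers such as Cache-Control, which the origin returns along with the object. Here is a minimal sketch, assuming an S3 origin and a hypothetical bucket and key, that caches an object at the edge for one hour instead of the 24-hour default:

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key; CloudFront honors the max-age returned
# by the origin instead of its 24-hour default
with open("logo.png", "rb") as body:
    s3.put_object(
        Bucket="example-bucket",
        Key="images/logo.png",
        Body=body,
        ContentType="image/png",
        CacheControl="max-age=3600",  # cache at the edge for 1 hour
    )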
