Elastic Social Manual / Version 2401
When sizing the deployment of an Elastic Social enabled application, you should take into account that adding user generated content to pages increases the page delivery time, depending on the caching strategy. The response time, however, remains constant regardless of the number of users and the amount of user generated content they create.
The statements above have been verified in a test deployment on Amazon EC2, which provides a comparable and reproducible environment. The setup consisted, among other servers, of three m1.xlarge instances running the CoreMedia CAE Live web application in Apache Tomcat 7, one load balancer, and three m1.xlarge instances running MongoDB as a replica set. Up to 10 million users and 10 million comments were imported into the Elastic Social database. The load balancer was configured to distribute load evenly between the CAE instances. An article page was used to measure response time and throughput. Two scenarios were tested: one with user feedback disabled and one with 10 comments on the article page.
Adding user generated content to pages increases the page delivery time depending on the caching strategy:
static: an HTTP proxy that caches all pages for a fixed time (one minute, for instance) or a timed dependency CAE cache key eliminates any extra costs
dynamic: delivering user generated content directly from the database roughly doubles the number of CAEs required
mixed: using the dynamic strategy for all requests with a session and the static strategy for everything else reduces the number of extra CAEs required: with 10% dynamic requests, 20% more CAEs are required; with 20% dynamic requests, it is 40%, and so on
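The mixed-strategy rule above reduces to simple arithmetic: dynamic (session-bound) requests bypass the cache and roughly double CAE cost, so the overhead grows with twice the dynamic share. A minimal sketch in Python (the function name and the integer-percent interface are illustrative, not part of the product):

```python
def required_caes(base_caes: int, dynamic_pct: int) -> int:
    """Estimate the CAE count for the mixed caching strategy.

    Each percent of dynamic requests adds roughly two percent of CAE
    capacity, so the overhead factor is 2 * dynamic_pct.
    Integer arithmetic keeps the estimate exact.
    """
    extra_pct = 2 * dynamic_pct
    return base_caes * (100 + extra_pct) // 100

# 10% dynamic requests -> 20% more CAEs; 20% -> 40% more.
print(required_caes(10, 10))  # 12
print(required_caes(10, 20))  # 14
```

With 0% dynamic requests the function degenerates to the static case, where no extra CAEs are needed.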
During various tests, the following best practices emerged:
The amount of RAM dedicated to a single MongoDB process (mongod) should exceed the working set size of the data.
Fast HDDs or SSDs are mandatory if writing becomes a bottleneck.
When using sharding, the MongoDB routing processes (mongos) should be deployed on the same machine as the CoreMedia CAE, eliminating one network hop and reducing the latency of database queries.
The MongoDB routing processes (mongos) and configuration servers (mongod) consume very few resources.
For MongoDB and Apache Solr, the limiting factor is typically not CPU but memory and I/O.
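The first best practice above, that RAM should exceed the working set, can be turned into a quick back-of-the-envelope check. The sketch below estimates the working set from the per-entry RAM figures measured in Table 3.1; the function names and dictionary layout are illustrative:

```python
# Per-entry RAM consumption in bytes, as measured in Table 3.1.
BYTES_PER_ENTRY = {"users": 2500, "comments": 4000, "ratings": 2500, "likes": 3500}

def working_set_bytes(counts: dict) -> int:
    """Estimate the MongoDB working set for the given entry counts."""
    return sum(BYTES_PER_ENTRY[kind] * n for kind, n in counts.items())

def fits_in_ram(counts: dict, ram_bytes: int) -> bool:
    """True if the RAM dedicated to mongod exceeds the working set."""
    return working_set_bytes(counts) < ram_bytes

# 10 million users and 10 million comments, as in the EC2 test:
ws = working_set_bytes({"users": 10_000_000, "comments": 10_000_000})
print(ws / 1e9)  # 65.0 (GB)
```

Since the measured figures are a conservative lower limit, real deployments should plan additional headroom on top of this estimate.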
The following numbers were measured on a developer machine and can be used as a conservative lower limit when estimating performance and space requirements:
| Category | MongoDB RAM [bytes] | MongoDB disk space [bytes] | MongoDB throughput [1/h] |
|---|---|---|---|
| Users | 2500 | 2500 | 1800000 |
| Comments | 4000 | 4000 | 900000 |
| Ratings | 2500 | 2500 | 1800000 |
| Likes | 3500 | 3500 | 1200000 |
Table 3.1. Measured performance