Posted in scalability, typo, javascript, dhtml, datetime
Tue, 06 Feb 2007 00:28:00 GMT
Some considerations when displaying dates and times on a website include showing delta times, customizing for the user's timezone and caching. Often it's nice to show a delta time like "10 minutes ago" or "5 days ago" to give readers a frame of reference instead of an absolute date. When the date is far enough in the past that an absolute date becomes preferable, customizing it to the user's timezone is useful. And if your site grows large enough that caching becomes necessary, finding a way to display customized deltas and timezone information within a cacheable static page becomes the ideal solution.
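One way to pull that off is to do the customization in the browser: the cached HTML carries absolute UTC timestamps and a bit of JavaScript rewrites them per reader. A minimal sketch, where the span-with-data-utc convention and the function names are just assumptions for illustration:

    // Minimal sketch: the cached HTML carries absolute UTC timestamps and a
    // small script rewrites them in the browser, so the page itself never
    // changes per user. The <span data-utc="..."> convention (epoch seconds)
    // is just an assumption for illustration.
    function formatDelta(utcSeconds) {
      var age = Math.floor(new Date().getTime() / 1000) - utcSeconds;
      if (age < 60) return "just now";
      if (age < 3600) return Math.floor(age / 60) + " minutes ago";
      if (age < 86400) return Math.floor(age / 3600) + " hours ago";
      if (age < 86400 * 30) return Math.floor(age / 86400) + " days ago";
      // Far enough in the past: fall back to an absolute date rendered in
      // the reader's local timezone.
      return new Date(utcSeconds * 1000).toLocaleDateString();
    }

    // Rewrite every tagged timestamp once the (cacheable) page has loaded.
    window.onload = function () {
      var spans = document.getElementsByTagName("span");
      for (var i = 0; i < spans.length; i++) {
        var utc = spans[i].getAttribute("data-utc");
        if (utc) spans[i].innerHTML = formatDelta(parseInt(utc, 10));
      }
    };

Since the deltas and timezone conversion happen client-side, the same static page can be served from cache to every visitor.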
Read more...
2 comments
Posted in scalability
Tue, 24 Oct 2006 16:56:00 GMT
Recently eWeek ran an article on eHarmony's storage scaling choices, discussing how they went with proprietary solutions from 3PAR and ONStor. I was hoping to learn something interesting about their deployment architecture, but the most interesting things I learned were that eHarmony has 8+ million users and 9+ million photos, and their choice of proprietary vendors. Some interesting quotes from Mark Douglas, eHarmony's VP of Technology:
Read more...
no comments
Posted in scalability, mysql
Fri, 06 Oct 2006 03:07:00 GMT
I just ran across the MySQL Web 2.0 page, which lists a number of their users, including the following:
The most interesting thing on that page, however, is the links to various presentations given by those sites on how they architected for scale with MySQL, some of them scaling to hundreds of MySQL servers.
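A pattern that comes up again and again in that kind of talk is horizontal partitioning (sharding), where each user's data lives on one of many MySQL servers chosen deterministically from the user id. A minimal sketch, with hypothetical host names and a deliberately simple modulo scheme:

    // The host names and the simple modulo scheme here are purely
    // illustrative, not taken from any particular presentation.
    var SHARDS = [
      "mysql01.example.com",
      "mysql02.example.com",
      "mysql03.example.com",
      "mysql04.example.com"
    ];

    // Every user deterministically maps to one server, so all reads and
    // writes for that user hit a single database, and capacity grows by
    // adding servers (and re-balancing data onto them).
    function shardForUser(userId) {
      return SHARDS[userId % SHARDS.length];
    }

    // e.g. shardForUser(8675309) -> "mysql02.example.com"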
Read more...
2 comments
Posted in scalability
Wed, 23 Aug 2006 05:49:00 GMT
I just ran across the Hadoop DFS, which is an open source alternative to distributed file systems such as GoogleFS and OneFS. GoogleFS and OneFS are both proprietary, so it's nice to finally have a FOSS option. MySpace uses OneFS. From the Hadoop Wiki:
Hadoop's Distributed File System is designed to reliably store very large files across machines in a large cluster. It is inspired by the Google File System. Hadoop DFS stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. Blocks belonging to a file are replicated for fault tolerance. The block size and replication factor are configurable per file. Files in HDFS are "write once" and have strictly one writer at any time.
Until now, I had only been aware of MogileFS among FOSS solutions; however, MogileFS is designed for smaller files such as images, while the others are designed for very large files. It will be interesting to see how much traction Hadoop DFS gets, since it could be very useful and a good FOSS complement to MogileFS. Hadoop is part of the Apache Lucene project.
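To make the block and replication description quoted above a bit more concrete, here is a small sketch of how a file maps onto blocks. This is purely illustrative and is not Hadoop's actual API (HDFS itself is written in Java):

    // Conceptual sketch only: a file becomes a sequence of fixed-size blocks
    // (except possibly the last), each replicated on several nodes, with
    // block size and replication factor configurable per file.
    function planBlocks(fileSize, blockSize, replication, dataNodes) {
      var blocks = [];
      for (var offset = 0; offset < fileSize; offset += blockSize) {
        var length = Math.min(blockSize, fileSize - offset);
        var replicas = [];
        for (var r = 0; r < replication; r++) {
          // Naive round-robin placement; real HDFS placement is smarter.
          replicas.push(dataNodes[(blocks.length + r) % dataNodes.length]);
        }
        blocks.push({ offset: offset, length: length, replicas: replicas });
      }
      return blocks;
    }

    // e.g. a 150 MB file with 64 MB blocks and replication 3 across 5 nodes
    // yields 3 blocks (64 MB, 64 MB, 22 MB), each stored on 3 nodes.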
no comments