The figure, inspired by [MOO99], summarizes some predictions about disks and storage subsystems.
A few comments are in order; note in particular that the suggested order of events is not to be taken literally.
» The first changes suggested concern improvements in the bandwidth of the system interface (with FC projected at 2x2 Gb/s in 2002), improvements in bandwidth between the disk units themselves and the disk subsystems (using FC-AL or SCSI at 360 MB/s in 2003), as well as the projected increase in disk capacity (to beyond 100 GB) and rotational speed (moving first to 10,000 rpm and then on to 15,000 rpm). Increasing disk rotational speed reduces the average rotational latency, which is the time for half a revolution of the disk.
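The half-revolution rule above can be put into numbers. A minimal sketch (the IOPS-free arithmetic here is standard; the specific rpm values are those discussed in the text):

```python
# Average rotational latency is the time for half a revolution:
# latency = 0.5 revolution / (rpm / 60) revolutions-per-second.
def avg_rotational_latency_ms(rpm: int) -> float:
    """Return average rotational latency in milliseconds."""
    return 0.5 / (rpm / 60.0) * 1000.0

for rpm in (7200, 10_000, 15_000):
    print(f"{rpm:>6} rpm -> {avg_rotational_latency_ms(rpm):.1f} ms")
# 10,000 rpm gives 3.0 ms; 15,000 rpm gives 2.0 ms
```

This is why the move from 10,000 to 15,000 rpm matters: it shaves a full millisecond off every random access, before any seek-time improvement.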
» Using high-level interfaces such as SCSI or FC-AL implies the use of a 32-bit microprocessor and a few megabytes of memory in the disk units; the provision of memory allows the disks to provide a cache in a totally transparent manner.
» Wider use of data compression is anticipated. This allows economies in several aspects of disk usage—not only is effective data capacity increased, but effective bandwidth is also increased and latency reduced in proportion to the compression ratio attained—at the cost of some processing to perform the compression and decompression. Given the very favorable price-performance improvements expected for microprocessors, this extra computational burden should be easily supported, whether in the server proper or in processors embedded in the disk subsystem.
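The effective-bandwidth argument can be illustrated with a short sketch. The payload and the 360 MB/s figure are taken as examples only; the achieved ratio depends entirely on how compressible the data is:

```python
import zlib

# Compress a sample (highly redundant) payload and derive the
# "effective" capacity and bandwidth gains the text describes.
payload = b"transaction record 0042; status=OK; " * 1000
compressed = zlib.compress(payload, level=6)
ratio = len(payload) / len(compressed)

raw_bandwidth_mb_s = 360  # e.g. the SCSI/FC-AL figure cited above
effective_bandwidth = raw_bandwidth_mb_s * ratio

print(f"compression ratio:   {ratio:.1f}:1")
print(f"effective bandwidth: {effective_bandwidth:.0f} MB/s")
```

Real workloads (already-compressed media, encrypted data) can yield ratios near 1:1, which is why compression helps capacity and bandwidth only for compressible data.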
» Physical sharing of storage between servers is currently possible at the subsystem level, and is the basis for SAN.
» As the intelligence in the subsystems grows, it is possible to give them responsibility for higher-level activities, such as implementing backup and restore: the system initiates the operation as needed, and the peripheral then does the work autonomously.
» Sharing storage at the logical level is a significantly more complex problem than physical sharing, since it means that two different systems (with different processor architectures, operating systems, and data managers) are sharing logical data. Because of the wide diversity of systems, and because in many cases the stored data is managed by a DBMS, it is reasonable to suppose that logical sharing is not at all widespread in heterogeneous systems; it is much simpler to base server cooperation on client/server-style interaction between applications.
» With disk technology improvements, it becomes possible for a disk manufacturer to offer multi-disk subsystems operating as a RAID system and/or as a SAN subsystem; that is, disk manufacturers could become players in the storage world. This would likely hurt established storage vendors, much as system vendors were affected when Intel moved from being a mere chip manufacturer to a specifier and supplier of complete systems.
» Data availability requirements and risk avoidance are likely to make remote backup increasingly common, and perhaps commoditized. This could drive a market for companies offering backup storage, with network exchanges taking the place of physical exchanges of magnetic cartridges.
» We can then envision an increase in Fibre Channel bandwidth, and the arrival of 500 GB capacity disks.
» When a system's overall performance is dependent on the performance of its I/O, the overall performance is (as we have noted previously) liable to drop as improved disk technology becomes available, since as capacity increases the number of physical disks needed is reduced, automatically reducing the amount of parallelism available to the I/O system. Having multiple independent read/write channels on a single disk could claw back some of this concurrency, by more or less doubling parallelism at the disk level.
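The loss of parallelism described in the last point is easy to quantify. A minimal sketch, where the 1 TB dataset size and the per-disk random-I/O rate of 150 operations/second are illustrative assumptions, not figures from the text:

```python
import math

# As per-disk capacity grows, the number of spindles needed to hold a
# fixed dataset falls, and with it the aggregate random-I/O parallelism.
def spindles_and_iops(dataset_gb: int, disk_gb: int,
                      iops_per_disk: int = 150) -> tuple[int, int]:
    """Return (spindle count, aggregate random IOPS) for the dataset."""
    n = math.ceil(dataset_gb / disk_gb)
    return n, n * iops_per_disk

for disk_gb in (36, 73, 146, 500):
    n, iops = spindles_and_iops(1000, disk_gb)
    print(f"{disk_gb:>3} GB disks: {n:>2} spindles, ~{iops} IOPS")
```

Moving the same 1 TB from 36 GB disks to 500 GB disks cuts the spindle count (and hence aggregate random-access throughput) by an order of magnitude, which is the effect the text warns about.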
The last point on the graph is a disk built from DRAM, sometimes called a RAM Disk or a Solid State Disk. For many years, some observers have repeatedly predicted that the cost curves of DRAM and magnetic storage would cross, and consequently that DRAM would replace disks. Beyond the projected cost benefits, the driving force behind this movement is the simple fact that DRAM is orders of magnitude faster than magnetic storage (perhaps 40 microseconds in place of 10 milliseconds). However, cache technology works for disk accesses as well, and a disk subsystem marrying a traditional magnetic storage device to a large DRAM cache can approach the performance of a pure DRAM disk without having to overcome the key issues of cost (since the cost curves have still not crossed and show no immediate signs of doing so) and volatility.
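The caching argument above follows from the standard average-access-time formula. A minimal sketch using the 40-microsecond and 10-millisecond figures from the text (the hit rates are illustrative assumptions):

```python
# A magnetic disk fronted by a large DRAM cache approaches pure-DRAM
# performance once the hit rate is high enough:
#   t_avg = h * t_dram + (1 - h) * t_disk
T_DRAM_US = 40.0       # DRAM access time, microseconds (from the text)
T_DISK_US = 10_000.0   # magnetic disk access time, 10 ms (from the text)

def avg_access_us(hit_rate: float) -> float:
    """Average access time of a DRAM-cached disk, in microseconds."""
    return hit_rate * T_DRAM_US + (1.0 - hit_rate) * T_DISK_US

for h in (0.90, 0.99, 0.999):
    print(f"hit rate {h:.1%}: {avg_access_us(h):.0f} us average")
```

At a 99.9% hit rate the cached subsystem averages about 50 microseconds, within striking distance of the pure DRAM disk, which is why the cost and volatility problems of an all-DRAM design need never be solved for most workloads.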
The volatility issue is real; DRAM is volatile—cut the power for more than a rather short interval and all the information is lost. To avoid this, one can provide a battery to keep power on at all times, or look for a better technology. The semiconductor industry is well aware of this and has been developing a number of promising technologies which marry silicon and magnetic effects. Motorola's MRAM [GEP03] seems to be a leader, although it is directed more at embedded memory than at commodity memory chips; IBM's TMJ-RAM (Tunneling Magnetic Junction—Random Access Memory) is also worthy of mention. These technologies promise very attractive benefits: MRAM can be roughly characterized as having the density of DRAM, the access times of a mid-range SRAM, non-volatility, and the ability to be rewritten an effectively unlimited number of times. Even if costs do not fall dramatically, memory chips built from these technologies could benefit applications requiring very high bandwidths, or specialized roles such as journaling transactions—that is, recording the journals which track transaction-based changes in a non-volatile memory.
Source of information: Server Architectures, Elsevier, 2005.