<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Lucid Technology</title>
	<atom:link href="https://www.lucidti.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.lucidti.com</link>
	<description>Elastic Network Data Storage</description>
	<lastBuildDate>Mon, 27 Oct 2014 04:12:26 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>LucidFlash Accelerates NAS And Slashes Costs</title>
		<link>https://www.lucidti.com/lucidflash-accelerates-nas-and-slashes-cost/</link>
				<pubDate>Mon, 27 Oct 2014 03:55:04 +0000</pubDate>
		<dc:creator><![CDATA[vadim]]></dc:creator>
				<category><![CDATA[Latest News]]></category>
		<category><![CDATA[flash]]></category>
		<category><![CDATA[flash nas]]></category>
		<category><![CDATA[flash raid]]></category>
		<category><![CDATA[flash storage]]></category>
		<category><![CDATA[nas]]></category>
		<category><![CDATA[solid state nas]]></category>
		<category><![CDATA[solid state storage]]></category>
		<category><![CDATA[ssd]]></category>

		<guid isPermaLink="false">http://test.lucidti.com/?p=1084</guid>
				<description><![CDATA[Lucid Technology, Inc. introduces LucidFlash -  an all-flash storage array to enable cost effective application performance acceleration. <a href="https://www.lucidti.com/lucidflash-accelerates-nas-and-slashes-cost/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
								<content:encoded><![CDATA[<h1>LucidFlash Accelerates NAS And Slashes Costs</h1>
<p>&nbsp;</p>
<p>Largo, FL – October 27, 2014 – Lucid Technology, Inc., a leading provider of NAS/SAN unified storage solutions, introduced an all-flash storage array to enable cost effective application performance acceleration.</p>
<p><a title="LucidFlash SS3120 All Flash Solid State NAS" href="http://www.lucidti.com/hardware-nas-zfs-raid-storage-arrays/lucidflash-ss3120-all-flash-solid-state-nas/">LucidFlash</a>, a 100% solid state expandable NAS designed to accelerate and simplify storage infrastructure, delivers enterprise grade performance for latency-sensitive applications responsible for driving revenue and productivity.</p>
<p>Powered by LucidNAS storage software, LucidFlash is a unified all-flash array that supports both block and file protocols including iSCSI, NFS, and CIFS, ensuring future-proof flexibility in meeting changing business and technology requirements.</p>
<p>It offers simple management, allowing IT departments to focus on delivering business value instead of managing storage, and delivers consistent sub-millisecond response times to accelerate business critical applications on a day-in, day-out basis.</p>
<p>Packed in a tiny 1U footprint, LucidFlash NAS is optimized for the native capabilities of flash and yields significant performance improvements over legacy storage, especially in handling write traffic and data protection, accelerating I/O-intensive workloads and delivering high IOPS at sub-millisecond latency.</p>
<p>To maximize the longevity of the installed flash memory and extend its endurance, LucidFlash utilizes system tools to achieve even wear throughout the drive, improving performance and reducing premature media fatigue resulting from high concentration of P/E cycles. Intelligent wear-leveling ensures that the lifespan of flash can be extended to its maximum endurance limit.</p>
<p>“In order to keep up with evolving business needs, organizations are increasingly turning to all-flash storage for their primary storage needs,” said Vadim Carter, Lucid Technology’s President. “LucidFlash combines affordability and performance to make all-flash storage a viable option for a variety of applications and for companies of all sizes. It is a full-featured, unified storage solution engineered for applications that demand low latency and fast storage performance. It delivers a compelling balance of good economics, rapid time to deployment, and flexible scalability, allowing our customers to start small and scale capacity as their needs grow.&#8221;</p>
<p>Delivering the benefits of solid-state technology at a price point the mainstream can afford, LucidFlash offers high-end features that large enterprises have been spending millions on for years:</p>
<p>&nbsp;</p>
<ul>
<li>Thin provisioning to deliver increased capacity utilization and more efficient storage architecture, enabling on-demand allocation.</li>
<li>Inline deduplication that achieves 10-to-1 data reduction to drive down the cost of storage. LucidFlash deduplication is synchronous, safe, scales efficiently to any data set size, and places no restrictions on the amount of data to deduplicate.</li>
<li>End-to-end data integrity, keeping data consistent and eliminating silent data corruption with copy-on-write and checksums. LucidFlash uses a 256-bit checksum stored separately from the data it relates to, and unlike a simple disk block checksum, it detects phantom writes, misdirected reads and writes, DMA parity errors, driver bugs, and accidental overwrites, as well as traditional &#8220;bit rot&#8221;. Once corruption is detected, LucidFlash automatically repairs the damage before the data is passed off to the process that requested it (a minimal sketch of this idea follows the list).</li>
<li>Unlimited snapshots to protect data from accidental deletion, corruption, and modification with minimal time and capacity requirements. LucidFlash snapshots are very small and efficient, as only the deltas from the previous snapshot are stored. LucidFlash offers functionality to create a snapshot of the file system contents, transfer it to another storage array, and extract the snapshot to recreate the file system. By continually creating, replicating, and restoring snapshots, one can provide synchronization between one or more machines.</li>
<li>Software RAID that offers single-parity (RAID-Z) or double-parity (RAID-Z2) protection like hardware RAID, but without the “write hole” vulnerability, thanks to the copy-on-write architecture. The additional RAID-Z3 level offers triple-parity protection, and a software mirroring option is also available.</li>
</ul>
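<p>As a rough illustration of the end-to-end integrity idea described in the list above, the following sketch keeps a checksum apart from the block it protects, detects silent corruption on read, and repairs it from a redundant copy. This is illustrative Python only; the class and function names are hypothetical and it is not LucidFlash or ZFS source code.</p>
<pre>
# Illustrative sketch only: a checksum stored separately from the data block
# catches silent corruption and triggers repair from a redundant copy.
# Hypothetical names; not LucidFlash/ZFS code.
import hashlib

def checksum(block):
    # 256-bit checksum, kept apart from the block it describes
    return hashlib.sha256(block).digest()

class CheckedStore:
    def __init__(self):
        self.blocks = {}   # block id -> data
        self.mirror = {}   # redundant copy (stands in for a mirror or RAID-Z group)
        self.sums = {}     # block id -> checksum, stored separately from the data

    def write(self, bid, data):
        self.blocks[bid] = data
        self.mirror[bid] = data
        self.sums[bid] = checksum(data)

    def read(self, bid):
        data = self.blocks[bid]
        if checksum(data) != self.sums[bid]:   # silent corruption detected
            data = self.mirror[bid]            # fetch the known-good copy
            assert checksum(data) == self.sums[bid]
            self.blocks[bid] = data            # heal the damaged copy
        return data

store = CheckedStore()
store.write(1, b"payroll records")
store.blocks[1] = b"payroll reco rds"          # simulate bit rot
print(store.read(1))                           # b'payroll records', repaired
</pre>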
<p>LucidFlash brings a new level of affordability to high-performance, low-latency all-flash block and file shared storage, aiming to improve core operations that depend on applications where responsiveness drives the bottom line. It implements several targeted, flash-specific software optimizations to deliver improved performance, superior data protection, and flash endurance.</p>
<p>“Faced with rapidly evolving infrastructure requirements, companies are now looking for greater simplicity and efficiency from their storage than ever before,” added Vadim Carter, Lucid Technology’s President. “LucidFlash is well positioned to deliver numerous benefits of solid-state technology across diverse markets and a multitude of applications.”</p>
<p>For more information on the LucidFlash NAS Storage Array, please visit:</p>
<p><a href="http://www.lucidti.com/hardware-nas-zfs-raid-storage-arrays/lucidflash-ss3120-all-flash-solid-state-nas/">http://www.lucidti.com/hardware-nas-zfs-raid-storage-arrays/lucidflash-ss3120-all-flash-solid-state-nas/</a></p>
<p>&nbsp;</p>
<p><b>ABOUT LUCID TECHNOLOGY, INC.</b></p>
<p>Located in Largo, Florida, Lucid Technology, Inc. is an innovative technology leader providing network storage solutions to OEMs, Value Added Resellers and System Integrators. The company&#8217;s products address business-critical needs for flexibility, affordability, high availability, and high performance. For more information, visit <a href="http://www.lucidti.com/">http://www.lucidti.com</a>  or email <a href="mailto:sales@lucidti.com">sales@lucidti.com</a></p>
<p>&nbsp;</p>
<p>PRESS CONTACTS</p>
<p>Marketing Dept.</p>
<p>PR for Lucid Technology, Inc.</p>
<p><a href="mailto:pr@lucidti.com">pr@lucidti.com</a></p>
<p>727-487-2430 x311</p>
<p>&nbsp;</p>
 ]]></content:encoded>
										</item>
		<item>
		<title>LumaForge and Lucid Technology Deliver LumaZAN Collaborative Workflow Platform</title>
		<link>https://www.lucidti.com/lumaforge-and-lucid-technology-deliver-lumazan-collaborative-workflow-platform/</link>
				<pubDate>Tue, 24 Jun 2014 14:24:59 +0000</pubDate>
		<dc:creator><![CDATA[vadim]]></dc:creator>
				<category><![CDATA[Latest News]]></category>
		<category><![CDATA[post production storage]]></category>
		<category><![CDATA[video]]></category>
		<category><![CDATA[video workflow]]></category>

		<guid isPermaLink="false">http://test.lucidti.com/?p=894</guid>
				<description><![CDATA[LumaForge and Lucid Technology Combine Forces to Deliver LumaZAN - Collaborative Workflow Platform for Media and Entertainment Industry.  <a href="https://www.lucidti.com/lumaforge-and-lucid-technology-deliver-lumazan-collaborative-workflow-platform/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
								<content:encoded><![CDATA[<h2>LumaForge and Lucid Technology Combine Forces to Deliver LumaZAN Collaborative Workflow Platform for Media and Entertainment Industry.</h2>
<p><i>Combining Cost-Effective Open IT Solutions with Easy to Manage LucidNAS Shared Storage System, LumaForge Introduces the LumaZAN Collaborative Workflow Platform.</i></p>
<p>&nbsp;</p>
<p><b>LOS ANGELES </b><b>–</b><b> June 23, 2014</b></p>
<p>Leveraging the incredible price/performance value of commodity PC servers and workstations with the enterprise strength of ZFS file sharing software, LumaForge and Lucid Technology combine forces to provide M&amp;E clients with the most cost-effective shared storage solution available on the market today. The LumaZAN collaborative workflow platform delivers rock solid, dependable shared storage across a fast, easy to manage data network that unites artist workstations with back-end servers in an elegant and affordable total workflow solution.</p>
<p><b>Neil Smith</b>, CEO of LumaForge, observes: “The demand for reliable and cost-effective shared storage for digital content workflows is only going to grow. Hollywood and the M&amp;E industry are rapidly moving to a 4K acquisition and finishing pipeline. As resolution, frame rates, and the sheer volume of digital content increase, the demand for cost-effective shared storage solutions is going to increase exponentially. We’ve spent many years working closely with our customers, mastering the ‘knack of workflow optimization’ in order to deliver maximum performance with maximum ROI. The result of that extensive fieldwork is the LumaZAN collaborative workflow platform. LumaZAN provides Open IT solutions that combine hardware, software, and middleware in ways that are incredibly well suited and cost-effective for digital content pipelines.”</p>
<p>At the core of the LumaZAN solution platform is the LucidNAS shared storage file system from Lucid Technology. LucidNAS storage software is built on a UNIX foundation that includes ZFS, the most advanced enterprise file system available today. ZFS brings a wealth of benefits to the LucidNAS storage solution, including native I/O pooling, end-to-end data integrity, and unlimited snapshot capabilities. And to further increase the software utility, Lucid Technology adds an intuitive and comprehensive data management front-end for organizing and administering all aspects of the unified storage solution.</p>
<p><b>Vadim Carter</b>, President of Lucid Technology explains the importance of ZFS as part of the LucidNAS file sharing architecture. “Traditional fiber channel SANs (Storage Area Networks) are expensive to install and difficult to maintain for small to medium M&amp;E companies. Our unique LucidNAS GUI interface allows customers to install, set-up and manage their LucidNAS storage pools with the minimum amount of hassle. ZFS is a modern 21<sup>st</sup> century storage architecture that provides high performance scalability with solid data integrity that just cannot be found in any alternative approach. The LucidNAS unique Hybrid Storage Pool architecture automatically caches data on RAM or SSDs to provide optimal performance and exceptional efficiency, while ensuring that data remains safely stored on reliable and high capacity hard-disk drive storage pools.”</p>
<p>By leveraging the Hybrid Storage Pool architecture of LucidNAS, the LumaZAN collaborative workflow platform allows Post-production facilities, VFX houses and studios to benefit from the price/performance value of commodity hardware and the functionality of COTS (Commercial Off The Shelf) applications from companies like Autodesk, Adobe, Apple, Avid, Blackmagic Design and The Foundry. Many of these powerful desktop tools are being re-architected to take advantage of OpenCL and high powered GPUs. LumaForge is working closely with the software companies and AMD to ensure that the LumaZAN collaboration platform incorporates the latest advances in Open IT and GPU technology.</p>
<p>“Open IT standards like ZFS, Unix and OpenCL are critical to sustaining the technological innovation taking place in the M&amp;E market,” says Smith. “The days of single-vendor proprietary lock-in are over. Customers increasingly want freedom of choice over their strategic hardware and software platforms. The LumaZAN collaboration platform means that customers can choose the hardware, software and networking combination that best meets their budgetary and performance requirements. The added advantage of the LumaZAN platform is that we never hold your data hostage. At any time, your valuable data and video files can easily be moved to any other ZFS platform at the drop of a hat.”</p>
<p>The LumaZAN collaborative workflow platform will premiere at the 2014 Creative Storage Conference in Culver City on Tuesday, June 24<sup>th</sup>, with an in-depth follow-up seminar on Saturday, June 28<sup>th</sup>, on The Lot in West Hollywood.</p>
<p><b>For more information on the 2014 Creative Storage Conference visit </b><a href="http://www.creativestorage.org/index.htm">http://www.creativestorage.org/index.htm</a></p>
<p><b>For more information on the </b><b>‘LumaZAN Collaborative Workflow Seminar</b><b>’ visit Eventbrite at </b><a href="https://www.eventbrite.com/e/lumazan-collaborative-workflow-seminar-saturday-june-28th-tickets-11972342599">https://www.eventbrite.com/e/lumazan-collaborative-workflow-seminar-saturday-june-28th-tickets-11972342599</a></p>
<p><b>For more information about Lucid Technology and the powerful LucidNAS file sharing system, visit </b><a href="http://www.lucidti.com/">http://www.lucidti.com</a></p>
<p><b>For more information about LumaForge and its products and services, visit </b><a href="http://www.lumaforge.com">http://www.lumaforge.com</a></p>
<p>&nbsp;</p>
 ]]></content:encoded>
										</item>
		<item>
		<title>Lucid Technology partners with StorageRep</title>
		<link>https://www.lucidti.com/lucid-technology-partners-with-storagerep/</link>
				<pubDate>Wed, 02 Apr 2014 03:25:27 +0000</pubDate>
		<dc:creator><![CDATA[vadim]]></dc:creator>
				<category><![CDATA[Latest News]]></category>

		<guid isPermaLink="false">http://test.lucidti.com/?p=889</guid>
				<description><![CDATA[Lucid Technology Partners with StorageRep to Offer Unified NAS Storage Solutions for the Video Market &#160; LucidNAS is Ideal for Shared-Storage Video Applications &#160; Largo, FL &#8211; April 2, 2014 – Lucid Technology, Inc., a leading provider of NAS/SAN unified &#8230; <a href="https://www.lucidti.com/lucid-technology-partners-with-storagerep/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
								<content:encoded><![CDATA[<h2>Lucid Technology Partners with StorageRep to Offer Unified NAS Storage Solutions for the Video Market</h2>
<p>&nbsp;</p>
<p align="center"><i>LucidNAS is Ideal for Shared-Storage Video Applications</i></p>
<p>&nbsp;</p>
<p>Largo, FL &#8211; April 2, 2014 – Lucid Technology, Inc., a leading provider of NAS/SAN unified storage solutions, announced the appointment of StorageRep LLC as a worldwide manufacturing  representative for the video market. StorageRep will actively promote all LucidNAS products to resellers and integrators serving the corporate video, film, television, and post-production industries.</p>
<p>&nbsp;</p>
<p>&#8220;We have identified the video market as a key growth opportunity for our products,&#8221; says Vadim Carter, Lucid Technology’s President. &#8220;Video professionals are viewing NAS-based storage networks as an increasingly popular solution because they are easy to deploy and cost-effective. We are excited to be working with StorageRep and are confident they have the expertise, relationships, and experience to help us succeed in this market.&#8221;</p>
<p>&nbsp;</p>
<p>Lucid’s flagship product, the LucidNAS Unified Storage Appliance, allows customers to easily make the transition to a lower-cost NAS/SAN shared storage environment using any commodity 64-bit Intel or AMD server. LucidNAS provides video workgroups with fast, highly scalable, ultra-reliable, and cost-effective storage. The Lucid product also removes the complexity of filesystem and volume management with automated policies that dramatically reduce storage administration.</p>
<p>&nbsp;</p>
<p>&#8220;Lucid’s product perfectly complements our other product offerings in the video market. With a definite advantage in speed and scalability &#8211; key requirements in 2K and 4K video &#8211; the Lucid partnership will enhance our customers&#8217; high performance digital video solutions for film, TV and post production,&#8221; says Jerry Palace, Co-founder of StorageRep LLC.</p>
<p>&nbsp;</p>
<p>The video market represents data-intensive applications including computer graphics, streaming, editing, and content creation. The LucidNAS is well-suited for Adobe Creative Suite, Avid Pro Tools, Apple Final Cut Pro 7, Apple Final Cut Pro X, and others. Multiple editors can access a hybrid storage pool that combines DRAM, SSDs, and spinning HDDs in order to speed up workflow completion time and overall project management.</p>
<p>&nbsp;</p>
<p>LucidNAS Unified Storage Appliance provides customers with significant cost reductions and increased productivity by offering affordable, high performance access to networked storage. By utilizing existing Ethernet infrastructure, NAS-based solutions simplify storage deployment, management, and support for editors and IT personnel. Furthermore, many organizations can leverage their existing networks instead of having to purchase new network equipment.</p>
<p>&nbsp;</p>
<p><b>Lucid Technology, Inc. </b>is an innovative technology leader providing network storage solutions to OEMs, Value Added Resellers and System Integrators. The company&#8217;s product addresses business-critical needs for flexibility, affordability, high availability, and high performance. Lucid’s flagship LucidNAS™ product is a new type of unified storage appliance that delivers easy-to-use, easy-to-administer scalable storage software in a moderate-cost, plug-and-play solution. Every bit as powerful as a traditional server, LucidNAS reduces complexity and cost by being task-specific and platform-agnostic, equally capable of serving Windows, Mac OS, and Linux clients. For more information, visit <a href="http://www.lucidti.com/">http://www.lucidti.com</a> or email <a href="mailto:sales@lucidti.com">sales@lucidti.com</a></p>
<p>&nbsp;</p>
<p><b>StorageRep, LLC</b> was founded in 2006 on the philosophy of providing our customers with a business model that simplifies the way they purchase, sell, and support shared storage. StorageRep provides customers one source for access to industry-standard, cutting-edge, and high-level storage technologies. We provide pre- and post-sales engineering support, as well as offering fully tested storage solutions for SMB and mid-size businesses. Vertical markets that we service include the media and entertainment industry, where market realities continue to reshape the industry. Technology platforms are evolving quickly to meet the demand of more eyes watching video on the internet. Shared storage is the cornerstone of media collaboration, providing the performance, reliability, and capacity to sustain real-time workflows in the most demanding environments.</p>
<p>&nbsp;</p>
<p>StorageRep, LLC has over seventy-five years of combined engineering, marketing, and sales experience in the storage industry. For more information, visit <a href="http://www.storagerep.com/">http://www.storagerep.com</a> or email info@storagerep.com</p>
 ]]></content:encoded>
										</item>
		<item>
		<title>RAID 5 // Level 5 RAID // RAID Level 5</title>
		<link>https://www.lucidti.com/raid-5-level-5-raid-raid-level-5/</link>
				<pubDate>Fri, 14 Mar 2014 04:32:40 +0000</pubDate>
		<dc:creator><![CDATA[vadim]]></dc:creator>
				<category><![CDATA[lucidU]]></category>
		<category><![CDATA[distributed parity]]></category>
		<category><![CDATA[ECC]]></category>
		<category><![CDATA[parity]]></category>
		<category><![CDATA[RAID]]></category>
		<category><![CDATA[raid 5]]></category>
		<category><![CDATA[RAID technology]]></category>
		<category><![CDATA[redundant]]></category>
		<category><![CDATA[storage]]></category>

		<guid isPermaLink="false">http://test.lucidti.com/?p=884</guid>
				<description><![CDATA[RAID 5 / Level 5 RAID / RAID Level 5 The use of dedicated ECC / parity drives in RAID levels 1 through 4 limits each of these architectures to a single write transaction at a time, and thus they &#8230; <a href="https://www.lucidti.com/raid-5-level-5-raid-raid-level-5/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
								<content:encoded><![CDATA[<h1>RAID 5 / Level 5 RAID / RAID Level 5</h1>
<p>The use of dedicated ECC / parity drives in RAID levels <a title="RAID 1 // Level 1 RAID // RAID Level 1" href="http://www.lucidti.com/raid-1-level-1-raid-raid-level-1/">1</a> through <a title="RAID 4 // Level 4 RAID // RAID Level 4" href="http://www.lucidti.com/raid-4-level-4-raid-raid-level-4/">4</a> limits each of these<br />
architectures to a single write transaction at a time, and thus they are poor choices for<br />
multitasking or transaction processing systems. RAID 5 attempts to eliminate this<br />
problem. In RAID 5, an entire transfer block is placed on a single drive, but there are no<br />
dedicated data or parity drives. Rather, the ECC blocks are distributed, as shown in<br />
the diagram below, so that each drive in the array contains a combination of<br />
data and parity blocks.</p>
<div id="attachment_885" style="width: 484px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-885" class="size-full wp-image-885 " title="RAID 5 / RAID Level 5" alt="RAID 5 / RAID Level 5" src="http://www.lucidti.com/wp-content/uploads/2014/03/RAID-5.png" width="474" height="413" srcset="https://www.lucidti.com/wp-content/uploads/2014/03/RAID-5.png 474w, https://www.lucidti.com/wp-content/uploads/2014/03/RAID-5-300x261.png 300w" sizes="(max-width: 474px) 100vw, 474px" /><p id="caption-attachment-885" class="wp-caption-text">RAID 5 Diagram</p></div>
<p>Data recovery and seek times are the same as for <a title="RAID 4 // Level 4 RAID // RAID Level 4" href="http://www.lucidti.com/raid-4-level-4-raid-raid-level-4/">RAID 4</a>, except that now we<br />
can perform multiple write transactions in parallel. Since each write touches only one data drive<br />
and one parity block, a saturated I/O load can sustain up to half as many simultaneous write<br />
transactions as there are drives in the array.</p>
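<p>A minimal sketch of the distributed-parity idea, assuming a simple round-robin rotation of the parity block (real arrays may rotate differently and operate on whole sectors). The names below are illustrative only.</p>
<pre>
# Illustrative RAID 5 sketch: parity rotates across the drives, and any single
# lost block (data or parity) is the XOR of the surviving blocks in its stripe.
from functools import reduce

NDRIVES = 4  # three data blocks plus one parity block per stripe

def xor(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def parity_drive(stripe):
    # round-robin placement of the parity block, stripe by stripe
    return (NDRIVES - 1 - stripe) % NDRIVES

def build_stripe(stripe, data_blocks):
    # lay out NDRIVES-1 data blocks and their parity across the drives
    p, it = parity_drive(stripe), iter(data_blocks)
    return [xor(data_blocks) if d == p else next(it) for d in range(NDRIVES)]

def rebuild(drives, failed):
    # reconstruct a missing block from the survivors
    return xor([blk for d, blk in enumerate(drives) if d != failed])

stripe0 = build_stripe(0, [b"AAAA", b"BBBB", b"CCCC"])
assert rebuild(stripe0, failed=1) == b"BBBB"     # lost data block recovered
assert rebuild(stripe0, failed=3) == stripe0[3]  # lost parity block recomputed
</pre>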
<h2>RAID 5 Benefits</h2>
<p>The primary advantage of RAID 5 is its ability to perform both reads and writes in<br />
parallel. For &#8216;n&#8217; drives in the array, virtual transfer rates in a saturated I/O environment<br />
approach &#8216;n&#8217; times a single drive for all reads and &#8216;n/2&#8217; times a single drive for writes. In a<br />
combined read/write environment, the virtual transfer rate will be at least &#8216;n/2&#8217; times a<br />
single drive and will approach &#8216;n&#8217; times a single drive as the ratio of reads to writes<br />
increases.</p>
 ]]></content:encoded>
										</item>
		<item>
		<title>RAID 4 // Level 4 RAID // RAID Level 4</title>
		<link>https://www.lucidti.com/raid-4-level-4-raid-raid-level-4/</link>
				<pubDate>Sat, 01 Mar 2014 05:43:32 +0000</pubDate>
		<dc:creator><![CDATA[vadim]]></dc:creator>
				<category><![CDATA[lucidU]]></category>
		<category><![CDATA[parity]]></category>
		<category><![CDATA[RAID]]></category>
		<category><![CDATA[RAID 4]]></category>
		<category><![CDATA[RAID Level 4]]></category>
		<category><![CDATA[RAID technology]]></category>

		<guid isPermaLink="false">http://test.lucidti.com/?p=861</guid>
				<description><![CDATA[RAID 4 / Level 4 RAID / RAID Level 4 We said that the two primary disadvantages of the RAID 3 architecture were the large (and/or inconsistent) transfer block sizes and the inability to perform multiple simultaneous I/O transactions. Both &#8230; <a href="https://www.lucidti.com/raid-4-level-4-raid-raid-level-4/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
								<content:encoded><![CDATA[<h1>RAID 4 / Level 4 RAID / RAID Level 4</h1>
<p>We said that the two primary disadvantages of the <a title="RAID 3 // Level 3 RAID // RAID Level 3" href="http://www.lucidti.com/raid-3-level-3-raid-raid-level-3/">RAID 3</a> architecture were the large (and/or inconsistent) transfer block sizes and the inability to perform multiple simultaneous I/O transactions. Both of these problems result from the fact that a single transfer block of data is interleaved across all of the data drives.</p>
<p>In RAID 4, we place the entire first transfer block on the first data drive, the second transfer block on the second data drive, and so forth. The diagram below shows this scheme for 4 drives. There is still only one ECC / parity drive, and it is computed as the exclusive-or (XOR) of all of the data drives:</p>
<div id="attachment_862" style="width: 484px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-862" class="size-full wp-image-862" alt="RAID 4 / RAID Level 4" src="http://www.lucidti.com/wp-content/uploads/2014/03/RAID-4.png" width="474" height="413" srcset="https://www.lucidti.com/wp-content/uploads/2014/03/RAID-4.png 474w, https://www.lucidti.com/wp-content/uploads/2014/03/RAID-4-300x261.png 300w" sizes="(max-width: 474px) 100vw, 474px" /><p id="caption-attachment-862" class="wp-caption-text">RAID 4 Diagram</p></div>
<p>In the event of unreadable data, the lost sector/drive is reconstituted by computing the exclusive-or of the remaining drives.</p>
<h2>RAID 4 Read Operations</h2>
<p>A read transaction involves only a single data drive and the timing is identical to a single drive configuration. Also, since only one drive is involved, a multitasking operating system could issue an independent read transaction against each data drive. This means that in a saturated I/O environment, with &#8216;n&#8217; data drives, &#8216;n&#8217; times as many seeks will complete during a given interval. So a RAID Level 4 array with 6 data drives, each with a mean access time of 12ms will have a virtual access time that approaches 2ms as system loading increases. This effect has been seen for years in benchmarking of operating systems such as Unix, where parallel operations on multiple drives are supported. However, achieving it in practice has required two conditions. First, you have to be able to segregate the data into distinct subsets for inclusion on different drives, and secondly, the system administrator has to have a good enough understanding of how and when the data is used so that an effective segregation is achieved. A RAID 4 makes the allocation to distinct drives automatic, since each and every file is automatically distributed.</p>
<h2>RAID 4 Write Operations</h2>
<p>Contrary to initial inclinations, a write transaction does not involve all of the drives. It requires reads and writes of the data drive involved and the parity disk. Because the existing ECC block already contains information about the other data drives, the following mechanism can be used:</p>
<p>a. Read the existing data and parity;<br />
b. Compute the new parity block as: new parity = existing parity XOR existing data XOR new data;<br />
c. Write the new data and parity.</p>
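<p>A minimal sketch of the read-modify-write sequence in steps a&#8211;c above, using byte strings in place of disk blocks. This is illustrative only, not controller or driver code.</p>
<pre>
# RAID 4 small-write sketch: update one data block and the parity block
# without touching the other data drives.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def small_write(existing_data, existing_parity, new_data):
    # a. read existing data and parity (passed in here)
    # b. new parity = existing parity XOR existing data XOR new data
    new_parity = xor(xor(existing_parity, existing_data), new_data)
    # c. write the new data and parity
    return new_data, new_parity

# sanity check against full-stripe parity over three data drives
d = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor(xor(d[0], d[1]), d[2])
d[1], parity = small_write(d[1], parity, b"XXXX")
assert parity == xor(xor(d[0], d[1]), d[2])  # matches a full recompute
</pre>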
<p>As a result, write operations have slightly longer seek times for both the read and write portions than a single drive. However, more important is the fact that the ECC drive is involved in every write transaction and thus only one write transaction can be performed at a time. The parallelism possible in the read transactions is impossible in write transactions.</p>
<p>Transfer rates for a single read/write transaction are the same as for a single drive. In a multitasking transaction processing system, the virtual transfer rate will approach the number of data drives times the single drive transfer rate as the ratio of reads to writes increases.</p>
<h2>RAID 4 Benefits and Drawbacks</h2>
<p>The primary advantage of RAID 4 is the ability to process multiple simultaneous reads. This<br />
makes it extremely efficient for transaction or multitasking systems where the ratio of reads to<br />
writes is very high. The disadvantage is that it can only process one write transaction at a time; the single parity drive becomes a bottleneck in RAID Level 4.</p>
 ]]></content:encoded>
										</item>
		<item>
		<title>RAID 3 // Level 3 RAID // RAID Level 3</title>
		<link>https://www.lucidti.com/raid-3-level-3-raid-raid-level-3/</link>
				<pubDate>Fri, 21 Feb 2014 02:15:40 +0000</pubDate>
		<dc:creator><![CDATA[vadim]]></dc:creator>
				<category><![CDATA[lucidU]]></category>
		<category><![CDATA[ECC]]></category>
		<category><![CDATA[parity]]></category>
		<category><![CDATA[RAID 3]]></category>
		<category><![CDATA[RAID Level 3]]></category>
		<category><![CDATA[RAID technology]]></category>

		<guid isPermaLink="false">http://test.lucidti.com/?p=842</guid>
				<description><![CDATA[RAID 3 / Level 3 RAID / RAID Level 3 A Level 3 RAID architecture assumes that each disk drive in the array can detect and report errors. Therefore, the RAID architecture needs only be concerned with maintaining the redundant &#8230; <a href="https://www.lucidti.com/raid-3-level-3-raid-raid-level-3/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
								<content:encoded><![CDATA[<h1>RAID 3 / Level 3 RAID / RAID Level 3</h1>
<p>A Level 3 RAID architecture assumes that each disk drive in the array can detect and report errors.<br />
Therefore, the RAID architecture needs only be concerned with maintaining the redundant data<br />
necessary to correct the error. In RAID 3, we have two or more data disks and one ECC/parity disk. Data is interleaved across all of the data drives, so that the first byte is on the first drive, the second byte is on the second drive, and so on:</p>
<div id="attachment_843" style="width: 484px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-843" class="size-full wp-image-843" alt="RAID 3 / RAID Level 3 Diagram" src="http://www.lucidti.com/wp-content/uploads/2014/02/RAID-3.png" width="474" height="413" srcset="https://www.lucidti.com/wp-content/uploads/2014/02/RAID-3.png 474w, https://www.lucidti.com/wp-content/uploads/2014/02/RAID-3-300x261.png 300w" sizes="(max-width: 474px) 100vw, 474px" /><p id="caption-attachment-843" class="wp-caption-text">RAID 3 Diagram</p></div>
<p>If we have &#8216;n&#8217; data drives, the &#8216;n&#8217;+ 1st byte is back on the first drive, etc. until we have a block of data n-times the single drive sector size interleaved across the &#8216;n&#8217; data drives. A single dedicated parity drive is used. Each logical sector of the ECC drive contains the bit-wise exclusive-or (XOR) of the corresponding sector from each data drive. This is the same scheme as is used in parity memory.</p>
<p>When a data sector is unreadable, or when an entire drive fails, the data for that<br />
drive can be reconstructed by computing the exclusive-or (XOR) of all of the remaining drives<br />
(including the ECC drive).</p>
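<p>The sketch below illustrates the byte interleaving and XOR reconstruction just described, with Python byte strings standing in for drives. It is a toy model under those assumptions, not actual array firmware.</p>
<pre>
# RAID 3 sketch: bytes are interleaved round-robin across the data drives,
# a dedicated drive holds the XOR parity, and any one lost drive is the XOR
# of everything that remains.
from functools import reduce

def split(data, n):
    # drive i receives bytes i, i+n, i+2n, ...
    return [data[i::n] for i in range(n)]

def xor(drives):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drives))

data = b"ABCDEFGHIJKL"
drives = split(data, 3)        # b"ADGJ", b"BEHK", b"CFIL"
ecc = xor(drives)              # dedicated parity drive

lost = drives[1]               # pretend the second data drive fails
recovered = xor([drives[0], drives[2], ecc])
assert recovered == lost       # reconstructed from the survivors
</pre>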
<p>Data reads require that all of the data disk drives seek to the proper location and<br />
thus the effective seek time is governed by the slowest drive in the array. Write transactions require a read transaction, a seek by all drives, including the ECC drive, and a write to all of the drives.</p>
<h2>RAID 3 Benefits</h2>
<p>The primary advantage of a RAID 3 is that it has the same high transfer rates that<br />
we discussed for <a title="RAID 2 // Level 2 RAID // RAID Level 2" href="http://www.lucidti.com/raid-2-level-2-raid-raid-level-2/">RAID 2</a>. It is well suited for most applications requiring sustained high-speed transfers.</p>
<h2>RAID 3 Drawbacks</h2>
<p>Achieving the high transfer rate results in two main disadvantages:<br />
Every data drive is involved in every read or write transaction. Therefore, a RAID 3 can<br />
process only one I/O transaction at a time. Secondly, the logical sector size of the RAID<br />
3 gets larger every time we add another data disk. Even if the individual drives have a<br />
small sector size, say 256 bytes, a RAID 3 with six data drives has a logical sector size of<br />
1536 bytes. This means that in a transaction processing system, we often have to read<br />
a lot of data in order to get the small amount of data that we are really interested in. It<br />
also creates integration problems for operating systems that maintain their disk caches,<br />
since a sector size like the 1536 above is usually not accommodated.</p>
<h2>RAID 3 Applications</h2>
<p>RAID Level 3 works well for applications that require fast sequential access to a single large file,<br />
such as image or video processing systems, where the data has been collated and organized by pre-processing so that it can be used directly by the processor. It performs poorly where the I/O transactions are for small amounts of data, or where requests for multiple files are interspersed. Thus, it is a poor alternative for any transaction or multitasking system, such as database, file servers, or general purpose workstations.</p>
 ]]></content:encoded>
										</item>
		<item>
		<title>RAID 2 // Level 2 RAID // RAID Level 2</title>
		<link>https://www.lucidti.com/raid-2-level-2-raid-raid-level-2/</link>
				<pubDate>Tue, 11 Feb 2014 05:47:23 +0000</pubDate>
		<dc:creator><![CDATA[vadim]]></dc:creator>
				<category><![CDATA[lucidU]]></category>
		<category><![CDATA[ECC]]></category>
		<category><![CDATA[RAID 2]]></category>
		<category><![CDATA[RAID Level 2]]></category>
		<category><![CDATA[RAID technology]]></category>

		<guid isPermaLink="false">http://test.lucidti.com/?p=825</guid>
				<description><![CDATA[Level 2 RAID / RAID 2 / RAID Level 2 RAID 2 tries to get around the 50% disk overhead in the RAID 1. Four decades ago, R.W. Hamming in his &#8220;Error Detecting and Correcting Codes&#8221; research paper showed that &#8230; <a href="https://www.lucidti.com/raid-2-level-2-raid-raid-level-2/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
								<content:encoded><![CDATA[<h1>Level 2 RAID / RAID 2 / RAID Level 2</h1>
<p>RAID 2 tries to get around the 50% disk overhead in the <a title="RAID 1 // Level 1 RAID // RAID Level 1" href="http://www.lucidti.com/raid-1-level-1-raid-raid-level-1/">RAID 1</a>. In 1950,<br />
R.W. Hamming, in his &#8220;Error Detecting and Error Correcting Codes&#8221; research paper, showed<br />
that if data could be organized so that an error was only likely to affect one bit<br />
in a group, then the error could be detected and corrected with significantly<br />
lower overhead.</p>
<div id="attachment_829" style="width: 512px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-829" class="size-full wp-image-829" alt="RAID 2 Diagram" src="http://www.lucidti.com/wp-content/uploads/2014/02/RAID-2.png" width="502" height="413" srcset="https://www.lucidti.com/wp-content/uploads/2014/02/RAID-2.png 502w, https://www.lucidti.com/wp-content/uploads/2014/02/RAID-2-300x246.png 300w" sizes="(max-width: 502px) 100vw, 502px" /><p id="caption-attachment-829" class="wp-caption-text">RAID 2 / RAID Level 2 Diagram</p></div>
<p>RAID 2 takes advantage of the Hamming codes to reduce disk overhead. The<br />
first disk drive contains the first bit in each data group, the second disk<br />
contains the second bit, and so forth. Thus, if each data group is eight<br />
bits, we have eight data drives. We then add one additional drive for each<br />
bit in the Hamming code or ECC (error correcting code). For example, if data<br />
were grouped into bytes, we would need 11 drives &#8211; 8 for data and 3 for ECC.<br />
Depending upon the number of data drives and the number of required ECC drives,<br />
the overhead would range from 27% (11 drives total) to 50% (4 drives total).</p>
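<p>As a small, self-contained illustration of the single-error-correcting principle behind RAID 2, the sketch below encodes a 4-bit group with three Hamming check bits (the classic Hamming(7,4) code) and shows how the check bits both locate and repair a single flipped bit. It is meant only to show the mechanism, not to model any particular drive count discussed above.</p>
<pre>
# Hamming(7,4) demo: 4 data bits, 3 check bits, single-error correction.
def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4                    # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                    # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4                    # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]  # codeword positions 1..7

def correct(code):
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4      # names the bad position, 0 means clean
    if syndrome:
        c[syndrome - 1] ^= 1             # flip the single faulty bit back
    return c

word = encode(1, 0, 1, 1)
word[5] ^= 1                             # one bit flips (position 6)
assert correct(word) == encode(1, 0, 1, 1)
</pre>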
<p>For a read, all of the data disks must seek before the read starts. For a write,<br />
all of the data drives must seek, the data is read, all of the drives (including<br />
ECC) must seek again, and then the data is written. Thus seek times are going to<br />
be very slow relative to a single drive. However, once the seek has completed,<br />
data transfers are very high. With 8 data drives, the drives will all transmit<br />
data in parallel and the transfer rate of the virtual drive will be 8 times that<br />
of the individual physical drives.</p>
<p>However, keep in mind that the Berkeley papers were written for the mini,<br />
mainframe, and supercomputer environments. Thus RAID 2 overlooks the realities of<br />
the microcomputer environment. The ECC bits in the Hamming code serve two purposes.<br />
They are used to correct the faulty bit, but they are also used to identify<br />
which bit contains the error. In the microcomputer environment, we already know<br />
which bit (drive) has an error due to the internal checksums on the disk and standard<br />
drive and controller error flags. Thus the Hamming codes are too robust for our needs,<br />
and we pay a penalty for carrying redundant error isolation data.</p>
<p>The Berkeley papers suggest that the data drives could contain 10% more data<br />
by eliminating the internal checksums and allowing the Hamming code to isolate<br />
errors. Such a plan would be expensive though. It would require non-standard low-level<br />
formatting and read/write logic on the drives and would also require that the ECC be<br />
verified for every block of every read. Is all this trouble and expense worthwhile for a<br />
10% increase? If we could simply let the drives manage the error detection and found a<br />
way to convert 2 of the ECC drives in the 8/3 RAID 2 to data drives, we would get a 25%<br />
increase without modifying any off-the-shelf drives.</p>
<p>The following RAID articles describing various RAID levels will show that <a title="RAID 3 // Level 3 RAID // RAID Level 3" href="http://www.lucidti.com/raid-3-level-3-raid-raid-level-3/">RAID 3</a><br />
through RAID 5 allow us to convert all of the ECC drives, except one, to data. Therefore,<br />
RAID 2 cannot be considered a viable alternative and should not be considered for any<br />
commercial implementation.</p>
 ]]></content:encoded>
										</item>
		<item>
		<title>RAID 1 // Level 1 RAID // RAID Level 1</title>
		<link>https://www.lucidti.com/raid-1-level-1-raid-raid-level-1/</link>
				<pubDate>Fri, 31 Jan 2014 20:50:23 +0000</pubDate>
		<dc:creator><![CDATA[vadim]]></dc:creator>
				<category><![CDATA[lucidU]]></category>
		<category><![CDATA[duplexing]]></category>
		<category><![CDATA[mirror]]></category>
		<category><![CDATA[mirroring]]></category>
		<category><![CDATA[RAID]]></category>
		<category><![CDATA[RAID 1]]></category>
		<category><![CDATA[RAID Level 1]]></category>
		<category><![CDATA[RAID technology]]></category>

		<guid isPermaLink="false">http://test.lucidti.com/?p=802</guid>
				<description><![CDATA[Level 1 RAID / RAID 1 / RAID Level 1 A level 1 RAID is often called &#8220;mirrored disks&#8221; , “duplexed disks”, or &#8220;shadowed disks&#8221;. For each disk in the system, a duplicate disk is maintained with an exact copy &#8230; <a href="https://www.lucidti.com/raid-1-level-1-raid-raid-level-1/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
								<content:encoded><![CDATA[<h1>Level 1 RAID / RAID 1 / RAID Level 1</h1>
<p>A level 1 RAID is often called &#8220;mirrored disks&#8221;, &#8220;duplexed disks&#8221;, or &#8220;shadowed disks&#8221;.<br />
For each disk in the system, a duplicate disk is maintained with an exact copy of the information.<br />
Data redundancy is obvious as every byte is duplicated.</p>
<div id="attachment_805" style="width: 375px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-805" class="size-full wp-image-805 " title="RAID 1" alt="RAID 1 / RAID Level 1 / Level 1 RAID" src="http://www.lucidti.com/wp-content/uploads/2014/01/RAID-1.png" width="365" height="413" srcset="https://www.lucidti.com/wp-content/uploads/2014/01/RAID-1.png 365w, https://www.lucidti.com/wp-content/uploads/2014/01/RAID-1-265x300.png 265w" sizes="(max-width: 365px) 100vw, 365px" /><p id="caption-attachment-805" class="wp-caption-text">RAID 1 / RAID Level 1 Diagram</p></div>
<p>Computing the performance impact is more difficult. With an optimized<br />
filesystem driver or controller, RAID 1 reads can be faster than a<br />
single drive. If we allow both drives containing the duplicate data to<br />
begin seeking together and then use the one that completes the seek<br />
first, our average access time will be better than for a single drive.</p>
<p>RAID 1 writes always require writing to two drives and we end up<br />
suffering a penalty, relative to a single drive, waiting for both drives<br />
to complete the write. However, writes are almost always preceded by a<br />
read (at the faster rate), and the average decrease in seek time for a<br />
read is exactly offset by the average increase in seek time for a write.<br />
So the single read and two writes associated with the mirror in RAID<br />
Level 1 takes the same time as the single read/write in a single drive<br />
case. Coupled with the improvement in the read only case, the overall<br />
result is that the optimized RAID 1 has slightly lower average access<br />
times than a single drive.</p>
<p>In a multitasking system, we can take a different approach. Since we<br />
have two complete sets of data by RAID 1 definition, we can satisfy two<br />
read requests simultaneously by sending one to each drive. In a situation<br />
where the system is saturated with read requests, twice as many requests<br />
will be processed in a given time period and the apparent seek time will<br />
be half that of a single drive. However, it is obvious that this speedup<br />
falls apart once a write request is received. In RAID Level 1, the write<br />
must be made to both drives and the parallel operations are interrupted.<br />
Whether this method will yield better results than the simpler approach<br />
in the paragraph above depends upon the ratio of reads to writes and<br />
the size of the transfers. Careful analysis of the actual target<br />
application should be made before it is adopted.</p>
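<p>A toy sketch of the multitasking approach just described, assuming a driver that alternates read requests between the two mirrors while sending every write to both. The class below is hypothetical Python, not any vendor&#8217;s driver.</p>
<pre>
# RAID 1 read dispatch sketch: alternate reads across the mirrors so two
# requests can proceed in parallel; writes always go to both copies.
class Mirror:
    def __init__(self):
        self.copies = [{}, {}]   # two identical drives
        self._next = 0           # which mirror serves the next read

    def write(self, block, data):
        for copy in self.copies: # a write interrupts read parallelism:
            copy[block] = data   # both drives must be updated

    def read(self, block):
        drive = self._next       # round-robin between the two mirrors
        self._next = 1 - self._next
        return self.copies[drive][block]

m = Mirror()
m.write(7, b"hello")
print(m.read(7), m.read(7))      # first read from drive 0, second from drive 1
</pre>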
<h2>RAID 1 Benefits</h2>
<p>The primary advantage of RAID 1 over other types of RAID architectures<br />
is its simplicity. It does not provide any significant performance<br />
improvement, but it does provide easily integrated data redundancy.<br />
RAID 1 can be implemented by a dual channel controller or a minimal<br />
device driver using one or two controllers without any changes to<br />
the operating system, or it can be implemented at the filesystem level.</p>
<h2>RAID 1 Drawbacks</h2>
<p>RAID 1 has two disadvantages. The most serious is cost. RAID<br />
implementations have three cost components: special software drivers,<br />
custom controllers, and disk overhead, the cost of storing the redundant<br />
data. In RAID 1, the cost of software drivers and controllers is low or<br />
the same as a single drive, but this is more than offset by the 50% disk<br />
overhead. The second problem area is packaging. To achieve the same<br />
amount of usable storage space, a RAID 1 requires either twice as many<br />
drives or larger drives than a conventional system. Either approach<br />
requires significantly more power and usually requires more physical<br />
space to mount.</p>
 ]]></content:encoded>
										</item>
		<item>
		<title>What is a RAID System?</title>
		<link>https://www.lucidti.com/what-is-a-raid-system/</link>
				<pubDate>Thu, 30 Jan 2014 05:26:13 +0000</pubDate>
		<dc:creator><![CDATA[vadim]]></dc:creator>
				<category><![CDATA[lucidU]]></category>
		<category><![CDATA[architecture]]></category>
		<category><![CDATA[array]]></category>
		<category><![CDATA[level]]></category>
		<category><![CDATA[RAID]]></category>
		<category><![CDATA[redundant]]></category>
		<category><![CDATA[system]]></category>

		<guid isPermaLink="false">http://test.lucidti.com/?p=789</guid>
				<description><![CDATA[What is a RAID System? The Berkeley papers do not provide a concise definition of the term RAID. Instead, they propose RAID schemes as an inexpensive method for obtaining significant increases in I/O bandwidth, and then provide an implied definition &#8230; <a href="https://www.lucidti.com/what-is-a-raid-system/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
								<content:encoded><![CDATA[<h1>What is a RAID System?</h1>
<p>The Berkeley papers do not provide a concise definition of the term RAID.<br />
Instead, they propose RAID schemes as an inexpensive method for obtaining<br />
significant increases in I/O bandwidth, and then provide an implied definition of RAID by<br />
an example of the architectures. Unfortunately, not all RAID architectures improve<br />
bandwidth. Therefore, we propose the following definition which incorporates the<br />
models presented here and that also embodies the manner in which RAID is used in<br />
the industry:<br />
<div class='et-box et-shadow'>
					<div class='et-box-content'><p align="center"><b>A Redundant Array of Inexpensive Disks (RAID) is any disk subsystem</b>
<b>architecture that combines two or more standard physical disk drives into a</b>
<b>single logical drive in order to achieve data redundancy.</b></p></div></div><br />
<br />
We are deliberately leaving some terms, such as &#8220;standard physical disk drive&#8221;, vague,<br />
since the meaning of such a term changes from year to year. Note that the definition<br />
says nothing about improved performance. The only criterion for a disk array to be a<br />
RAID is that it provides data redundancy. Performance can even degrade compared to a<br />
single drive case.<br />
<br />
In practice, many of the reasons that we would use a RAID architecture also require<br />
improved performance, and the primary motivation for most of the differences among<br />
the RAID architectures is improved performance in one area or another. But, if we<br />
simply build an array in order to improve performance through parallel disk activity<br />
and do not provide any data redundancy in the architecture, we do not have a true RAID<br />
architecture.<br />
</p>
<h3>Real World Applicability</h3>
<p>In the posts that follow, we will formally define and discuss the relative strengths<br />
and weaknesses of five basic types of RAID architectures followed by expanded or “nested”<br />
RAID schemes. It should be remembered that these implementations are described as they<br />
are defined in the Berkeley papers and that they are not inclusive of all of the viable array<br />
alternatives. Also, particular vendors may or may not have implemented their products in<br />
strict accordance with the definitions. Therefore, simply because a vendor lists<br />
their product as being a RAID 5 implementation, it should not automatically be expected<br />
to exhibit all of the advantages or disadvantages of the RAID 5 definition. Rather, what it<br />
means is that potential customers have heard about the RAID definitions and they want<br />
to know which type of RAID the vendor has implemented. When the vendor says that it<br />
has a RAID 5 implementation, what it really means is that RAID 5 is the RAID definition<br />
that best describes the vendor product, not that it exactly describes the product.<br />
<br />
With the above caution in mind, potential RAID purchasers should use this information<br />
to understand the key implementation differences between architectures and their<br />
relative strengths and weaknesses. Armed with this understanding, actual vendor<br />
offerings may be evaluated against the user requirements to determine applicability.</p>
 ]]></content:encoded>
										</item>
		<item>
		<title>Data Storage &#8211; Historical Background</title>
		<link>https://www.lucidti.com/data-storage-raid-historical-background/</link>
				<pubDate>Thu, 23 Jan 2014 05:23:01 +0000</pubDate>
		<dc:creator><![CDATA[vadim]]></dc:creator>
				<category><![CDATA[lucidU]]></category>
		<category><![CDATA[history]]></category>
		<category><![CDATA[RAID]]></category>
		<category><![CDATA[storage]]></category>

		<guid isPermaLink="false">http://test.lucidti.com/?p=779</guid>
				<description><![CDATA[Data Storage &#8211; Historical Background Humble Computing Beginnings In 1976, the microcomputer was just beginning to make personal computers a reality for a handful of individuals, the first &#8220;hackers&#8221;.  The premier systems of the day were built around an S-100 or &#8230; <a href="https://www.lucidti.com/data-storage-raid-historical-background/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
								<content:encoded><![CDATA[<div>
<h1>Data Storage &#8211; Historical Background</h1>
<h3>Humble Computing Beginnings</h3>
</div>
<p>In 1976, the microcomputer was just beginning to make personal computers a reality for a handful of individuals, the first &#8220;hackers&#8221;. The premier systems of the day were built around an S-100 or STD bus, and had Intel 8080 or Motorola 6800 processors running with clock frequencies below 5 MHz. The primary interactive I/O devices were converted teletypes or alphanumeric terminals from mini and mainframe computers. The primary form of secondary storage was converted audio cassette recorders. A fully populated memory board could have as little as 4K of Random Access Memory and the processors could only directly address 64K of RAM. And, even if they could use more memory, the cost and power requirements of the existing static RAM made a system with more than 16K of RAM unusual. Each hacker or vendor had a unique control program, usually embedded in a hardware front panel, and every application was responsible for all of its own low-level I/O.</p>
<p>In short order, system capabilities and subsystems were standardized and expanded in capability. Interactive I/O was extended to graphics, embedded keyboards, and pointing devices. Secondary storage expanded to include floppy disk drives and then hard disks. And processor capabilities have increased many fold.  The standardization of peripheral and bus interfaces and operating systems led to a commonality that allowed the development of highly sophisticated applications and the use by large numbers of individuals.</p>
<p>In the area of disk subsystems, the major technical changes have dealt with increased capacities, single component reliability, data bandwidth, and decreased access times. None of these changes has altered the fundamental architecture of the disk subsystem. Two recent trends in computers have required the consideration of some fundamental changes in this architecture.</p>
<h3>Networks and Virtualization &#8211; Two Drivers for Fast, Reliable, and Resilient Storage</h3>
<p>Until the mid-1990s, the main characteristic of most systems was their use by individuals. A system failure was isolated to a single system and the individual or function that it supported. The impact was isolated and could be remedied by correcting the fault and rebuilding from the latest backups of the data system. In the worst case, work since the last backup had to be redone. Now, the vast majority of networks are incorporating file servers. The loss of the file server complicates matters considerably. First, the loss of the server incapacitates all of the personal systems using the data from that server, not just a single individual. As more and more companies move databases from their mini and mainframe systems onto network-based transaction systems, the loss of the file server can mean a shutdown of corporate functions until the system (and its databases) are restored. Also, since database updates are being received from multiple sources, not just the local machine, it may be difficult or impossible to roll forward from the last backup. Most formal database systems will have an independent transaction log, but it is naive to assume that the developers of custom applications (particularly single machine applications that have been adapted for networks) have incorporated these sorts of sophisticated protection mechanisms. And even if the databases are secure, what about other files, such as a compendium of outgoing correspondence and undelivered and/or saved email? This is further exacerbated by server virtualization – a situation where multiple virtual servers are running on the same physical server while sharing storage. Obviously, what is needed is some way to provide data protection and/or recovery even if the disk subsystem fails.</p>
<p>The general computer user and the popular computer magazines tend to generalize the capabilities of a computer system based upon the type of processor and the clock speed. We talk about an &#8220;Intel system&#8221; or an &#8220;AMD system&#8221; or &#8220;a Quad Core Xeon machine&#8221;. To some extent, these generalizations are valid with regard to memory and instruction architectures and capabilities. But, increasingly, we find test results showing that the performance differences between an Intel-based file server and an AMD-based file server may be considerably less than the differences between two Xeon-based file servers. In examining this apparent paradox, we find that the raw computing power of the processors has little to do with the overall system performance. The processor capabilities have so far exceeded the capabilities of the other subsystems that processor throughput is a negligible component in most cases. Ninety nine percent of the time, the processor is waiting for the disk subsystem or the network adapter to complete some task. The conclusion is obvious. In future systems, high performance will be obtained by integrating a set of balanced subsystems. In the area of disk subsystems, we need higher throughput and faster access times.</p>
<h3>Is RAID the answer?</h3>
<p>The Redundant Array of Inexpensive Disks, or RAID, provides the ability to deliver both increased reliability and increased performance at moderate cost and using proven existing technology. In the posts that will follow, we’ll attempt to address and explore the following issues:</p>
<p>&nbsp;</p>
<div>
<ul>
<li><em>What is a RAID architecture and what do we mean by a RAID level?</em></li>
<li><em>What do customers need in disk subsystems?</em></li>
<li><em>What constitutes a reliable disk subsystem?</em></li>
<li><em>How much throughput is needed in a file server? Since we want to build balanced systems, when does increasing throughput take the system out of balance?</em></li>
<li><em>Do application server requirements differ from file server requirements? If so, how?</em></li>
<li><em>What specialized applications benefit from RAID type architectures?</em></li>
<li><em>How does the Operating System affect the requirements?</em></li>
<li><em>Do OEMs and system integrators have requirements that are different from the end user?</em></li>
<li><em>How reliable are disk arrays?</em></li>
<li><em>Is it better to implement a disk array using specialized hardware, or should software be used to implement an array using standard “off the shelf” hardware?</em></li>
<li><em>Where should disk (and particularly array) technology progress in the future?</em></li>
</ul>
</div>
 ]]></content:encoded>
										</item>
	</channel>
</rss>
