Tuesday, August 16, 2011

NAS, DAS, or SAN? Choosing the Right Storage Technology

Data is unquestionably the lifeblood of today's digital organization. Storage solutions remain a top priority in IT budgets precisely because the integrity, availability and protection of data are vital to business productivity and success. But the role of information storage goes far beyond day-to-day operations. Enterprises are also operating in an era of increased uncertainty, and IT personnel find themselves assessing and planning for more potential risks than ever before, ranging from acts of terrorism to network security threats. A backup and disaster recovery plan is essential, and information storage solutions provide the basis for its execution.

Businesses are also subject to a new wave of regulatory compliance legislation that directly affects the process of storing, managing and archiving data. This is especially true for the financial services and healthcare industries, which handle highly sensitive information and bear extra responsibility for maintaining data integrity and privacy.

Although the need for storage is evident, it is not always clear which solution is right for your organization. There are a variety of options available, the most prevalent being direct-attached storage (DAS), network-attached storage (NAS) and storage area networks (SAN). Choosing the right storage solution can be as personal and individual a decision as buying a home. There is no one right answer for everyone. Instead, it is important to focus on the specific needs and long-term business goals of your organization. Several key criteria to consider include:
• Capacity - the amount and type of data (file level or block level) that needs to be stored and shared
• Performance - I/O and throughput requirements
• Scalability - long-term data growth
• Availability and reliability - how mission-critical your applications are
• Data protection - backup and recovery requirements
• IT staff and resources available
• Budget concerns
While one type of storage is usually sufficient for smaller companies, large enterprises often run a mixed storage environment, deploying different technologies for specific departments, workgroups and remote offices. In this article, we provide an overview of DAS, NAS and SAN to help you determine which solution, or combination of solutions, will best help you achieve your business goals.


DAS: Ideal for Local Data Sharing Requirements

Direct-attached storage, or DAS, is the most basic level of storage, in which storage devices are either part of the host computer, as with internal drives, or directly connected to a single server, as with RAID arrays or tape libraries. Network workstations must therefore go through the server in order to reach the storage device. This is in contrast to networked storage such as NAS and SAN, which is connected to workstations and servers over a network. As the first widely adopted storage model, DAS still accounts for the majority of installed storage systems in today's IT infrastructures. Although networked storage is growing at a faster rate, DAS remains a viable option because it is simple to deploy and carries a lower initial cost than networked storage.

When considering DAS, it is important to know your data availability requirements. For clients on the network to reach the storage device in the DAS model, they must be able to reach the server it is attached to. If the server is down or experiencing problems, users' ability to store and access data is directly affected. In addition to storing and retrieving files, the server also bears the load of processing applications such as e-mail and databases. Network bottlenecks and slowdowns in data availability can occur as server bandwidth is consumed by these applications, especially when a large amount of data is being shared between workstations.

DAS is ideal for localized file sharing in environments with a single server or a few servers - for example, small businesses or departments and workgroups that do not need to share information over long distances or across an enterprise. Small companies traditionally utilize DAS for file serving and e-mail, while larger enterprises may leverage DAS in a mixed storage environment that likely includes NAS and SAN. DAS also offers ease of management and administration in this scenario, since it can be managed using the network operating system of the attached server. However, management complexity can escalate quickly with the addition of new servers, since storage for each server must be administered separately.

From an economic perspective, the initial investment in direct-attached storage is lower than for networked storage. This is a great benefit for IT managers faced with shrinking budgets, who can quickly add storage capacity without the planning, expense and greater complexity involved with networked storage. DAS can also serve as an interim solution for organizations planning to migrate to networked storage in the future. For organizations that anticipate rapid data growth, however, it is important to keep in mind that DAS is limited in its scalability. From both a cost-efficiency and an administration perspective, networked storage models are much better suited to high scalability requirements.

Organizations that do eventually transition to networked storage can protect their investment in legacy DAS. One option is to place it on the network via bridge devices, which allows current storage resources to be used in a networked infrastructure without incurring the immediate costs of networked storage. Once the transition is made, DAS can still be used locally to store less critical data.

NAS: File-Level Data Sharing Across the Enterprise

Networked storage was developed to address the challenges inherent in a server-based infrastructure such as direct-attached storage. Network-attached storage, or NAS, is a special-purpose device, comprising both hard disks and management software, that is 100% dedicated to serving files over a network. As discussed earlier, in the DAS model the server has the dual role of file sharing and application serving, which can cause network slowdowns. NAS relieves the server of storage and file-serving responsibilities and, because it operates independently of the application servers, provides much greater flexibility in data access.

NAS is an ideal choice for organizations looking for a simple and cost-effective way to achieve fast, file-level data access for multiple clients. Implementers of NAS benefit from performance and productivity gains. First popularized as an entry-level or midrange solution, NAS still has its largest installed base in the small to medium-sized business sector. Yet the hallmarks of NAS - simplicity and value - are equally applicable to the enterprise market. Smaller companies find NAS to be a plug-and-play solution that is easy to install, deploy and manage, with or without IT staff on hand. Thanks to advances in disk drive technology, they also benefit from a lower cost of entry.

In recent years, NAS has developed more sophisticated functionality, leading to its growing adoption in enterprise departments and workgroups. It is not uncommon for NAS to go head to head with storage area networks in the purchasing decision, or become part of a NAS/SAN convergence scheme. High reliability features such as RAID and hot swappable drives and components are standard even in lower end NAS systems, while midrange offerings provide enterprise data protection features such as replication and mirroring for business continuance. NAS also makes sense for enterprises looking to consolidate their direct-attached storage resources for better utilization. Since resources cannot be shared beyond a single server in DAS, systems may be using as little as half of their full capacity. With NAS, the utilization rate is high since storage is shared across multiple servers.
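To make the utilization point concrete, here is a minimal back-of-the-envelope sketch in Python; the per-server capacities and usage figures are made-up examples, not measurements:

# Hypothetical per-server figures (GB) -- illustrative only.
# With DAS, free space on one server cannot absorb growth on another.
das_servers = {
    "mail":  {"capacity": 500, "used": 450},
    "files": {"capacity": 500, "used": 150},
    "db":    {"capacity": 500, "used": 200},
}

total_capacity = sum(s["capacity"] for s in das_servers.values())
total_used = sum(s["used"] for s in das_servers.values())

for name, s in das_servers.items():
    print(f"{name}: {s['used'] / s['capacity']:.0%} of local capacity used")

print(f"environment as a whole: {total_used / total_capacity:.0%} used")

# The mail server is nearly full (90%) even though the environment as a
# whole sits at roughly 53% -- stranded capacity that a shared NAS pool
# of the same total size would have made available to the busy server.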

The perception of value in enterprise IT infrastructures has also shifted over the years. A business and ROI case must be made to justify technology investments. Considering the downsizing of IT budgets in recent years, this is no easy task. NAS is an attractive investment that provides tremendous value, considering that the main alternatives are adding new servers, which is an expensive proposition, or expanding the capacity of existing servers, a long and arduous process that is usually more trouble than it's worth. NAS systems can provide many terabytes of storage in high density form factors, making efficient use of data center space. As the volume of digital information continues to grow, organizations with high scalability requirements will find it much more cost-effective to expand upon NAS than DAS. Multiple NAS systems can also be centrally managed, conserving time and resources.

Another important consideration for a medium-sized business or large enterprise is heterogeneous data sharing. With DAS, each server runs its own operating platform, so there is no common storage in an environment that may include a mix of Windows, Mac and Linux workstations. NAS systems can integrate into any environment and serve files across all operating platforms. On the network, a NAS system appears as a native file server to each of its different clients, so files are saved to and retrieved from the NAS system in their native file formats. NAS is also based on industry-standard network protocols such as TCP/IP, NFS and CIFS.

SANs: High Availability for Block-Level Data Transfer

A storage area network, or SAN, is a dedicated, high-performance storage network that transfers data between servers and storage devices, separate from the local area network. With their high degree of sophistication, management complexity and cost, SANs have traditionally been implemented for mission-critical applications in the enterprise space. In a SAN infrastructure, storage devices such as RAID arrays and tape libraries are connected to servers, typically using Fibre Channel. Fibre Channel is a highly reliable, gigabit interconnect technology that enables simultaneous communication among workstations, mainframes, servers, data storage systems and other peripherals. Without the distance and bandwidth limitations of SCSI, Fibre Channel is well suited to moving large volumes of data across long distances quickly and reliably.

In contrast to DAS and NAS, which share data at the file level, the strength of a SAN lies in its ability to move large blocks of data. This is especially important for bandwidth-intensive applications such as database, imaging and transaction processing. The distributed architecture of a SAN also enables it to offer higher levels of performance and availability than any other storage medium today. By dynamically balancing loads across the network, SANs provide fast data transfer while reducing I/O latency and server workload. The benefit is that large numbers of users can simultaneously access data without creating bottlenecks on the local area network and servers.

SANs are the best way to ensure predictable performance and 24x7 data availability and reliability. The importance of this is obvious for companies that conduct business on the web and require high-volume transaction processing. Another example would be contractors bound to service-level agreements (SLAs) who must maintain certain performance levels when delivering IT services. SANs build in a wide variety of failover and fault-tolerance features to ensure maximum uptime. They also offer excellent scalability for large enterprises that anticipate significant growth in information storage requirements. And unlike direct-attached storage, excess capacity in a SAN can be pooled, resulting in very high utilization of resources. There has been much debate about choosing SAN or NAS in the purchasing decision, but the truth is that the two technologies can be quite complementary. Today, SANs are increasingly implemented in conjunction with NAS. With SAN/NAS convergence, companies can consolidate block-level and file-level data on common arrays.

Even with all the benefits of SANs, several factors have slowed their adoption, including cost, management complexity and a lack of standardization. The backbone of a SAN is its management software: a large investment is required to design, develop and deploy a SAN, which has limited its market to the enterprise space, and a majority of the cost can be attributed to the software needed to manage such a wide scope of devices. Additionally, a lack of standardization has resulted in interoperability concerns, where products from different hardware and software vendors may not work together as needed. Potential SAN customers are rightfully concerned about investment protection, and many may choose to wait until standards become better defined.

Conclusion

With such a variety of information storage technologies available, what is the best way to determine which one is right for your organization? DAS, NAS and SAN all offer tremendous benefits, but each is best suited for a particular environment. Consider the nature of your data and applications. How critical and processing-intensive are they? What are your minimum acceptable levels of performance and availability? Is your information sharing environment localized, or must data be distributed across the enterprise? IT professionals must make a comprehensive assessment of current requirements while also keeping long-term business goals in mind.

Like all industries, storage networking is in a constant state of change. It is easy to fall into the trap of choosing whichever emerging or disruptive storage technology is in vogue at the time. But the best chance for success comes with choosing a solution that is appropriately priced and provides long-term investment protection for your organization. Digital assets will only continue to grow, so make sure your storage infrastructure allows cost-effective expansion and scalability. It is also important to implement technologies based on open industry standards, which will minimize interoperability concerns as you expand your network.

Sunday, August 14, 2011

IP CCTV transmission methods

There are essentially three ways of transmitting video streams over the network from the source to the destination: broadcast, unicast and multicast.

Broadcast
Broadcast is defined as one-to-all communication between the source and the destinations. In IP video surveillance, the source is usually the IP camera and the destination is the monitoring station or the recording server. Broadcasting would mean that the IP camera sends the video stream not only to all monitoring stations and recording servers but to every IP device on the network, even though only a few specific destinations actually requested the stream. This method of transmission is therefore rarely used in IP video surveillance applications; it is seen more often in the TV broadcasting industry, where signals are switched at the destination level.

Unicast
Unicast is defined as one-to-one communication between the source and the destination. Unicast transmissions are usually carried over TCP or UDP and require a direct connection between the source and the destination. In this scenario, the IP camera (source) needs to be able to accept many concurrent connections when many destinations want to view or record the same video at the same time.
In terms of video streaming, a unicast source streams as many copies of the video feed as there are destinations requesting it. In figure 1 below, three copies of the same video stream are sent over the network, one copy for each of the three destinations requesting the stream. If each video stream is 4 Mbps, this transmission produces 12 Mbps (3 x 4 Mbps) of data on multiple network segments.

As a result, many destinations connected in unicast to a single video source can generate very high network traffic. For example, in a large system with 200 destinations requesting the same 4 Mbps stream, we would end up with 800 Mbps (200 x 4 Mbps) of data travelling over the network, which is realistically unmanageable. Although this method of transmission is widely used over the Internet, where most routers are not multicast-enabled, within a corporate LAN unicast transmission is not necessarily best practice, as it can quickly inflate the bandwidth needed for viewing and recording camera streams.
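As a quick sanity check of the arithmetic above, the following Python sketch simply multiplies the per-stream bit rate by the number of unicast viewers (the 4 Mbps figure and viewer counts are the example values from the text):

# In unicast, the camera sends one full copy of the stream per destination,
# so the load on the camera's network segment grows linearly with viewers.
STREAM_MBPS = 4  # example stream size from the text

def unicast_load_mbps(viewers: int) -> float:
    return STREAM_MBPS * viewers

print(unicast_load_mbps(3))    # 12 Mbps, as in figure 1
print(unicast_load_mbps(200))  # 800 Mbps for a 200-viewer deployment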

Multicast
In multicast transmission, there is no direct connection between the source and the destinations. A destination connects to the video stream of the IP camera by joining a multicast group, which in simple terms means connecting to the multicast IP address of the video stream. The IP camera sends only a single copy of the video stream, addressed to that multicast group, and each destination simply subscribes to the stream available on the network, with no additional overhead on the source. In other words, the destinations share the same video stream. In figure 2 below, the same three destinations requesting the video stream have the same impact on the network as a single destination requesting the stream in unicast: there is no more than 4 Mbps of data travelling on each segment of the network. Even with 200 destinations requesting that video stream, the same amount of data would be travelling on the network.
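For illustration, here is a minimal Python sketch of a destination joining a multicast group with the standard socket API. The group address and port are hypothetical; in a real deployment the viewer would learn them from the camera or the video management software (for example via RTSP/SDP):

import socket
import struct

GROUP = "239.1.2.3"   # hypothetical multicast group announced by the camera
PORT = 5004           # hypothetical UDP port carrying the stream

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP asks the network (via IGMP) to deliver the group's
# traffic to this host; the camera is unaware that another viewer joined.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(65535)  # read one UDP datagram of the shared stream
print(f"received {len(data)} bytes from {addr}")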

It is evident at this point that using multicast transmissions in an IP video surveillance application can save a lot of bandwidth, especially in large scale deployments where the number of destinations can grow very quickly.


Bandwidth optimisation for IP CCTV
When it comes to IP video surveillance, it is important to manage how video streams are transmitted over the network so as not to overload the available bandwidth. Even though IT infrastructures are built to handle many kinds of data, the applications generating traffic over the IP network still need to make efficient use of the network resources in place. To this end, IP video surveillance solution providers offer different functionalities and mechanisms to optimise bandwidth and network resources, such as:
• Multicasting
• Multistreaming
• Video compression

Even though the capacity and speed of networks are constantly increasing while their associated costs decline, this is not a reason to ignore the investment and effort needed to optimise bandwidth management. The amount of data travelling on the network is also still on the rise, so investments in bandwidth optimisation can contribute to a reduction in total cost of ownership, particularly through efficiency gains and better use of existing resources.

For example, in video surveillance, more and more end-users are requesting cameras with higher picture quality and resolution, often opting for high-definition and megapixel cameras. These types of cameras require much more bandwidth than standard definition cameras. Also, more and more people inside as well as outside an organization’s walls are requesting access to video streams over the network. In the case where a large number of users are simultaneously trying to access a specific video stream, efficient use of network resources can be crucial in avoiding overloaded capacity and entire network crashes.
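As a rough illustration of the multistreaming mechanism listed earlier, the sketch below compares one camera that can only offer its full HD stream with the same camera offering an additional lighter live-view stream; the bit rates are assumed round numbers, not vendor specifications, and unicast delivery is assumed:

# Assumed, illustrative bit rates (Mbps) -- not vendor figures.
RECORDING_STREAM = 8.0   # full-resolution HD stream kept by the recorder
LIVE_VIEW_STREAM = 1.0   # reduced-resolution stream for live operators
OPERATORS = 5            # people watching live at the same time

# Single-stream camera: recorder and every operator pull the full stream.
single_stream_load = RECORDING_STREAM * (1 + OPERATORS)

# Multistreaming camera: recorder takes the full stream, operators the light one.
multistream_load = RECORDING_STREAM + LIVE_VIEW_STREAM * OPERATORS

print(f"single stream:   {single_stream_load:.0f} Mbps")  # 48 Mbps
print(f"multistreaming:  {multistream_load:.0f} Mbps")    # 13 Mbps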
It is equally important to realize that optimizing the bandwidth on the network does not necessarily go hand in hand with large capital investments, but is more a matter of putting the right solutions in place and leveraging the unique and powerful capabilities of these solutions.

Saturday, August 13, 2011

Which Image Quality is Better

When thinking about maximizing image quality, resolution is usually the first thing that comes to mind. However, resolution is not the only factor that impacts quality. The amount of bandwidth available and used can have a dramatic impact on image quality. In this report, we examine bandwidth and the effect that it has on quality across numerous cameras.
Which Image Quality is Better?
To better understand image quality, let's start by examining two samples of the same scene side by side:
[Image: the same scene captured by Camera A and Camera B, shown side by side]
Consider two questions:
1. Which camera has higher resolution? A or B?
2. Which camera is better? A or B?
It is pretty obvious that the image from Camera B is better, so this should be a simple case.
The reality is that both images are from the same camera at the same resolution and frame rate (720p/30). The only change made was raising the constant bit rate (CBR) target from 512 Kb/s to 8 Mb/s.
Factors Impacting Quality:
Even with the same resolution, two common settings impact quality: 
1. Bit Rate: Most cameras can have their bit rate adjusted to specific levels (e.g., 512 Kb/s, 2 Mb/s, 8 Mb/s, etc.) 
2. Quantization Level: Most cameras can have the level of compression adjusted (often called a quality or compression setting with options from 1-10 or 0-100)
Typically, these two settings are mutually exclusive: you control one and the camera adjusts the other. If you lock in the bit rate, the camera automatically adjusts the quantization level so the stream does not exceed the bandwidth set. Conversely, if you set the quantization level, the camera automatically varies the bandwidth consumed to keep the quality / compression at the same level.
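A simple back-of-the-envelope calculation shows why the two 720p/30 samples above look so different: a constant bit rate caps the average bits available per frame, and the camera raises quantization to stay under that cap. The Python sketch below averages across frames and ignores GOP structure, so it is only a rough indicator:

WIDTH, HEIGHT, FPS = 1280, 720, 30  # 720p/30, as used in the comparison above

for bitrate_kbps in (512, 1000, 2000, 4000, 8000):
    bits_per_frame = bitrate_kbps * 1000 / FPS
    bits_per_pixel = bits_per_frame / (WIDTH * HEIGHT)
    print(f"{bitrate_kbps:>5} Kb/s -> {bits_per_frame / 1000:6.1f} Kb/frame, "
          f"{bits_per_pixel:.3f} bits/pixel")

# At 512 Kb/s the encoder has only ~17 Kb per frame (~0.019 bits/pixel),
# versus ~267 Kb per frame at 8 Mb/s, so quantization -- and the visible
# compression artifacts -- must increase sharply at the low setting.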
Our Test Process
We wanted to better understand how changes in these two factors impact video quality. To do so, we did a series of tests with three HD cameras: the Axis P1344, the Sony CH140 and the Bosch NBN-921.
For the bandwidth tests, we tested each camera at the following levels:
  • 512 Kb/s
  • 1 Mb/s
  • 2 Mb/s
  • 4 Mb/s
  • 8 Mb/s
We did this across a series of scenes to see how quality would vary in different conditions:
  • Daytime Indoors (300 lux)
  • Nighttime Indoors (0.5 lux)
  • Daytime Intersection
Finally, we ran a similar series of tests varying the quality level with the camera in VBR mode (the Axis P1344 at levels 0, 30, 60 and 100) to better understand changes in quality and bandwidth consumption.