
Saturday, October 15, 2022

Difference of Core i3, Core i5 & Core i7

During security software installations we give the customer a set of prerequisites, or we estimate what type of system hardware is required and obtain costing from the vendor accordingly. If you are a system integrator, your design team must understand the processors.

Intel Core i3 Processor

The Core i3 is the entry-level processor of this new series of Intel processors. While it may not be the fastest one of the bunch, it can get the job done, at least for most applications.

Mind you, if you need high speed, I suggest one of the other processors that I will unveil later in this post. Here are some of the Core i3 features.

·        Uses 4 threads. Yes, it uses hyper-threading technology, which is the latest craze due to its improved efficiency over earlier processors on the market (see the sketch after this list).

·        This processor consists of 2-4 cores, depending on which one you get your hands on.

·        Contains a 3-4 MB cache.

·        Uses less heat and energy than earlier processors, which is always a good thing in this day and age.
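
To see the difference between physical cores and hyper-threaded logical processors on your own machine, here is a minimal Python sketch. It assumes the third-party psutil package is installed (os.cpu_count() alone only reports logical processors), so treat it as illustrative rather than definitive.

# Sketch: compare physical cores with logical (hyper-threaded) processors.
# Assumes the third-party "psutil" package is installed (pip install psutil).
import os
import psutil

logical = os.cpu_count()                    # logical processors (threads)
physical = psutil.cpu_count(logical=False)  # physical cores

print(f"Physical cores     : {physical}")
print(f"Logical processors : {logical}")

if physical and logical and logical > physical:
    print(f"Hyper-threading appears to be enabled ({logical // physical} threads per core).")
else:
    print("Hyper-threading appears to be disabled or unsupported.")

On a typical Core i3 with 2 cores and hyper-threading enabled, this would report 2 physical cores and 4 logical processors.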

Intel Core i5 Processor

·        This is the mid-range processor of the bunch, recommended for those who demand a little more speed but will not be running heavily resource-intensive applications.

·        As with the Core i3 processor, this comes with 2-4 cores; the main difference is that it has a higher clock speed than the Core i3.

·        This is also a heat and energy efficient processor, but it does seem to be better at this particular job than the Core i3 processor.

·        The number of threads is no different from the Core i3, at 2-4 threads, and it also uses hyper-threading technology for a boost in performance.

·        The cache of the Core i5 is bigger than the Core i3's, at 3-8 MB.

·        The Core i5 is where Turbo Boost becomes available: when there is power and thermal headroom (for example, because some cores are idle), the processor can raise the clock speed of the busy cores above the base frequency, as shown in the sketch below.
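
To watch base versus turbo frequency on a running system, here is a minimal sketch using psutil.cpu_freq(). Whether turbo clocks actually show up, and the exact values reported, depend on the operating system and hardware, so this is illustrative only.

# Sketch: observe current vs. minimum/maximum CPU frequency (MHz).
# Assumes the third-party "psutil" package is installed; values are
# platform-dependent and turbo may not be visible on every OS.
import time
import psutil

for _ in range(5):
    freq = psutil.cpu_freq()  # namedtuple with current, min and max fields
    if freq is None:
        print("CPU frequency information is not available on this platform.")
        break
    print(f"current={freq.current:7.1f} MHz  min={freq.min:7.1f} MHz  max={freq.max:7.1f} MHz")
    time.sleep(1)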

Intel Core i7 Processor

·        This is for users who demand power; yes, it does provide more power, and if Tim Allen got one of these, this would be the beast he gets his hands on. Great for gamers and other resource-intensive users.

·        The cache on this one is 4-8 MB.

·        This processor comes with 8 threads, definitely enough to get the job done quickly, maybe even at the speed of light if you're lucky. And yes, it also utilizes hyper-threading technology.

·        You will have four cores to take advantage of with this particular series.

·        And just like the other ones in this Intel series of processors, it is more energy efficient and produces less heat.

The table below reviews the high-level specifications of 10th Gen Intel Core i7 processors as of late 2020.

 

                                Cores/ Hyperthreading    Base Frequency     Maximum Turbo Frequency    Cache
Core i7 Laptops (10th Gen)      4-8 / Yes                1.00-2.70 GHz      3.80-5.10 GHz              8-16 MB
Core i7 Desktops (10th Gen)     8 / Yes                  2.00-3.80 GHz      4.50-5.10 GHz              16 MB

The Intel Core i9 is often called Intel's processor line for "CPU enthusiasts," the early-adopters who always demand the industry’s latest and greatest. A Core i9-powered desktop or i9-powered laptop is great for users whose work requires extremely advanced computing capabilities (editing 4K video, for example). It’s also popular with high-end gamers who play live-action, multi-player, VR-based titles that can benefit from a CPU with hyper-fast cycle times and high core-thread counts.

The Core i9 debuted in 2017 along with a new socket-motherboard combination to support it. As this FAQ was written, the i9 had evolved to deliver up to 10 cores and 20 threads (desktop version). It costs more than the other members of the Intel Core family, but for certain applications, games and other workloads, the difference could be meaningful.

The table below lists the top-level specifications of Intel Core i9 processors (10th gen) as of late 2020.

 

                                Cores/ Hyperthreading    Base Frequency     Maximum Turbo Frequency    Cache
Core i9 Laptops (10th Gen)      8 / Yes                  2.40 GHz           5.30 GHz                   16 MB
Core i9 Desktops (10th Gen)     10 / Yes                 1.90-3.70 GHz      4.60-5.30 GHz              20 MB

Here are some broad statements addressing the comparative cost of Intel Core i9-enabled systems versus models with lesser processors, along with the kinds of users (and use cases) that are most likely to benefit from an advanced Core i9 PC:

  • Core i9 PC – Cost category
    • The Core i9 is the premium-priced “enthusiast” line of Intel Core CPUs
  • Core i9 PC – Typical users
    • Processor early-adopters
    • Users of extremely demanding software
    • Gamers who always want the latest/greatest
    • Workstation users, server operators, etc.
  • Core i9 PC – Use cases
    • Everything the lesser Intel processors can do plus core-intensive activities such as editing huge video files, rendering complex engineering designs, acting as a server, and so on.

Wednesday, June 1, 2022

IPv6 and IPv4

Many engineers have called to learn about IPv6 and IPv4. IP (short for Internet Protocol) specifies the technical format of packets and the addressing scheme for computers to communicate over a network. Put another way, an IP (Internet Protocol) address is a numeric label assigned to computers and other devices that connect to a network using the Internet Protocol. This address allows these devices to send and receive data over the internet. Every device that is capable of connecting to the internet has a unique IP address.

There are currently two versions of the Internet Protocol (IP): IPv4 and a new version called IPv6. IPv6 is an evolutionary upgrade to the Internet Protocol. IPv6 will coexist with the older IPv4 for some time.

What is IPv4 (Internet Protocol Version 4)?

IPv4 (Internet Protocol Version 4) is the fourth revision of the Internet Protocol (IP), used to identify devices on a network through an addressing system. The Internet Protocol is designed for use in interconnected systems of packet-switched computer communication networks. The IPv4 header is 20 to 60 bytes in length.

IPv4 is the most widely deployed Internet protocol used to connect devices to the Internet. IPv4 uses a 32-bit address scheme allowing for a total of 2^32 addresses (just over 4 billion addresses).  With the growth of the Internet it is expected that the number of unused IPv4 addresses will eventually run out because every device -- including computers, smartphones and game consoles -- that connects to the Internet requires an address.
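
The "just over 4 billion" figure follows directly from the 32-bit address width; a one-line Python check, purely for illustration:

# Sketch: total IPv4 address space implied by a 32-bit address.
print(f"{2 ** 32:,} possible IPv4 addresses")  # 4,294,967,296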

A new Internet addressing system, Internet Protocol version 6 (IPv6), is being deployed to fulfill the need for more Internet addresses. The IPv6 header is a fixed 40 bytes in length.

IPv6 (Internet Protocol Version 6) is also called IPng (Internet Protocol next generation). It is the newest version of the Internet Protocol (IP), reviewed in the IETF standards committees to replace the current version, IPv4 (Internet Protocol Version 4).

IPv6 is the successor to Internet Protocol Version 4 (IPv4). It was designed as an evolutionary upgrade to the Internet Protocol and will, in fact, coexist with the older IPv4 for some time. IPv6 is designed to allow the Internet to grow steadily, both in terms of the number of hosts connected and the total amount of data traffic transmitted.

IPv6 is often referred to as the "next generation" Internet standard and has been under development now since the mid-1990s. IPv6 was born out of concern that the demand for IP addresses would exceed the available supply.

The Benefits of IPv6

While increasing the pool of addresses is one of the most often talked-about benefits of IPv6, there are other important technological changes in IPv6 that will improve the IP protocol:

·        No more NAT (Network Address Translation)

·        Auto-configuration

·        No more private address collisions

·        Better multicast routing

·        Simpler header format

·        Simplified, more efficient routing

·        True quality of service (QoS), also called "flow labeling"

·        Built-in authentication and privacy support

·        Flexible options and extensions

·        Easier administration (say good-bye to DHCP)

The Difference Between IPv4 and IPv6 Addresses

An IP address is a binary number but is usually written as text for human readers. For example, a 32-bit numeric address (IPv4) is written in decimal as four numbers separated by periods. Each number can be zero to 255. For example, 1.160.10.240 could be an IP address.

IPv6 addresses are 128-bit addresses written in hexadecimal and separated by colons. An example IPv6 address could be written like this: 3ffe:1900:4545:3:200:f8ff:fe21:67cf.
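
Both notations can be parsed and compared with Python's standard-library ipaddress module; the following minimal sketch uses the example addresses from above and prints each address width in bits:

# Sketch: parse the example IPv4 and IPv6 addresses with the standard library.
import ipaddress

v4 = ipaddress.ip_address("1.160.10.240")
v6 = ipaddress.ip_address("3ffe:1900:4545:3:200:f8ff:fe21:67cf")

print(v4, "is IP version", v4.version, "with", v4.max_prefixlen, "address bits")
print(v6, "is IP version", v6.version, "with", v6.max_prefixlen, "address bits")

# Viewing each address as a raw integer makes the difference in width obvious.
print(f"IPv4 as an integer: {int(v4)}")
print(f"IPv6 as an integer: {int(v6)}")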

Did You Know...? IPv6 in the News: (April, 2017) MIT announced it would sell half of its 16 million valuable IPv4 addresses and use the proceeds of the sale to finance its own IPv6 network upgrades.

Sunday, August 16, 2015

Backup and Archiving

Backup and archiving are always mentioned together, as both of these technologies support primary data storage. However, that is where the commonalities end. Yet in the enterprise IT world it is often assumed that an archive is analogous to a backup.
Simply put, backup and archive are not the same, and here is why, explained in simple terms:

Data Backup

Data backup is intended to recover individual lost or corrupt files, or corrupt operating system instances. The backed-up data contains both active and inactive information and encompasses all of your production data. This backup set is used for recovery when the original copy of the data is lost or becomes inaccessible for any reason. It is critical to remember that a backup is a copy of production data; the actual data still resides on the production storage systems.
Backups have historically been optimized for large-scale recoveries. They are written in large blocks to dedicated hardware such as tape libraries or deduplicated disk backup appliances.
Typically, these backups are scheduled, often every 24 hours and sometimes more frequently, even hourly, or continuously with continuous data protection solutions. The data captured by a backup is stored on tape, on disk, or off site, for example on a cloud platform. Restoration from backup can be a complex and lengthy process depending on the volume of data to be restored.
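
As a toy illustration of the "scheduled copy of production data" idea, here is a minimal Python sketch that copies a source directory into a timestamped backup folder. The paths are hypothetical placeholders, and a real backup product would add scheduling, retention, verification and cataloguing on top of this.

# Sketch: copy a production directory into a timestamped backup folder.
# The paths below are hypothetical placeholders; real backup software adds
# scheduling, retention, verification and cataloguing on top of this idea.
import shutil
from datetime import datetime
from pathlib import Path

source = Path("/data/production")   # hypothetical production data location
backup_root = Path("/backups")      # hypothetical backup target

timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
destination = backup_root / f"production-{timestamp}"

shutil.copytree(source, destination)
print(f"Backup written to {destination}")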

Data archiving

Data archiving, on the other hand, holds data meant for long-term retention, typically for compliance purposes in regulated industries such as the finance and legal sectors. Archives are designed with very different access profiles. These systems typically store individual data objects such as files, databases or email messages, and usually also capture metadata associated with each item. The result is that an archive can provide immediate granular access to stored information, so accessing an individual file or email is typically very easy in an archive system.
Generally, archiving solutions retain and index all copies and versions of a document, file, or email, making them rapidly retrievable by end users rather than IT admins.
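
The "store each object together with its metadata" idea can be sketched in a few lines of Python: the snippet below walks a folder and writes a simple JSON index of per-file metadata that could later be searched for granular retrieval. It is illustrative only; real archiving products also index content, versions and retention policies.

# Sketch: build a simple metadata index for files in an archive folder.
# Illustrative only; the archive path is a hypothetical placeholder.
import json
from datetime import datetime, timezone
from pathlib import Path

archive_root = Path("/archive")  # hypothetical archive location
index = []

for path in archive_root.rglob("*"):
    if path.is_file():
        stat = path.stat()
        index.append({
            "path": str(path),
            "size_bytes": stat.st_size,
            "modified": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
        })

with open(archive_root / "index.json", "w") as fh:
    json.dump(index, fh, indent=2)

print(f"Indexed {len(index)} files")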

So, please do not treat your backups as archives or vice versa, as they serve different purposes.

Tuesday, August 16, 2011

NAS, DAS, or SAN? Choosing the Right Storage Technology

Data is unquestionably the lifeblood of today's digital organization. Storage solutions remain a top priority in IT budgets precisely because the integrity, availability and protection of data are vital to business productivity and success. But the role of information storage far exceeds day to day functions. Enterprises are also operating in an era of increased uncertainty. IT personnel find themselves assessing and planning for more potential risks than ever before, ranging from acts of terrorism to network security threats. A backup and disaster recovery plan is essential, and information storage solutions provide the basis for its execution.

Businesses are also subject to a new wave of regulatory compliance legislation that directly affects the process of storing, managing and archiving data. This is especially true for the financial services and healthcare industries, which handle highly sensitive information and bear extra responsibility for maintaining data integrity and privacy.

Although the need for storage is evident, it is not always clear which solution is right for your organization. There are a variety of options available, the most prevalent being direct-attached storage (DAS), network-attached storage (NAS) and storage area networks (SAN). Choosing the right storage solution can be as personal and individual a decision as buying a home. There is no one right answer for everyone. Instead, it is important to focus on the specific needs and long-term business goals of your organization. Several key criteria to consider include:
• Capacity - the amount and type of data (file level or block level) that needs to be stored and shared
• Performance - I/O and throughput requirements
• Scalability - Long-term data growth
• Availability and Reliability - how mission-critical are your applications?
• Data protection - Backup and recovery requirements
• IT staff and resources available
• Budget concerns
While one type of storage media is usually sufficient for smaller companies, large enterprises will often have a mixed storage environment, implementing different mediums for specific departments, workgroups and remote offices. In this paper, we will provide an overview of DAS, NAS and SAN to help you determine which solution, or combination of solutions, will best help you achieve your business goals.


DAS: Ideal for Local Data Sharing Requirements

Direct-attached storage, or DAS, is the most basic level of storage, in which storage devices are part of the host computer, as with drives, or directly connected to a single server, as with RAID arrays or tape libraries. Network workstations must therefore access the server in order to connect to the storage device. This is in contrast to networked storage such as NAS and SAN, which are connected to workstations and servers over a network. As the first widely popular storage model, DAS products still comprise a large majority of the installed base of storage systems in today's IT infrastructures. Although the implementation of networked storage is growing at a faster rate than that of direct-attached storage, it is still a viable option by virtue of being simple to deploy and having a lower initial cost when compared to networked storage. When considering DAS, it is important to know what your data availability requirements are. In order for clients on the network to access the storage device in the DAS model, they must be able to access the server it is connected to. If the server is down or experiencing problems, it will have a direct impact on users' ability to store and access data. In addition to storing and retrieving files, the server also bears the load of processing applications such as e-mail and databases. Network bottlenecks and slowdowns in data availability may occur as server bandwidth is consumed by applications, especially if there is a lot of data being shared from workstation to workstation.

DAS is ideal for localized file sharing in environments with a single server or a few servers - for example, small businesses or departments and workgroups that do not need to share information over long distances or across an enterprise. Small companies traditionally utilize DAS for file serving and e-mail, while larger enterprises may leverage DAS in a mixed storage environment that likely includes NAS and SAN. DAS also offers ease of management and administration in this scenario, since it can be managed using the network operating system of the attached server. However, management complexity can escalate quickly with the addition of new servers, since storage for each server must be administered separately.

From an economical perspective, the initial investment in direct-attached storage is cheaper. This is a great benefit for IT managers faced with shrinking budgets, who can quickly add storage capacity without the planning, expense, and greater complexity involved with networked storage. DAS can also serve as an interim solution for those planning to migrate to networked storage in the future. For organizations that anticipate rapid data growth, it is important to keep in mind that DAS is limited in its scalability. From both a cost efficiency and administration perspective, networked storage models are much more suited to high scalability requirements.

Organizations that do eventually transition to networked storage can protect their investment in legacy DAS. One option is to place it on the network via bridge devices, which allows current storage resources to be used in a networked infrastructure without incurring the immediate costs of networked storage. Once the transition is made, DAS can still be used locally to store less critical data.

NAS: File-Level Data Sharing Across the Enterprise

Networked storage was developed to address the challenges inherent in a server-based infrastructure such as direct-attached storage. Network-attached storage, or NAS, is a special purpose device, comprised of both hard disks and management software, which is 100% dedicated to serving files over a network. As discussed earlier, a server has the dual functions of file sharing and application serving in the DAS model, potentially causing network slowdowns. NAS relieves the server of storage and file serving responsibilities, and provides a lot more flexibility in data access by virtue of being independent.

NAS is an ideal choice for organizations looking for a simple and cost-effective way to achieve fast data access for multiple clients at the file level. Implementers of NAS benefit from performance and productivity gains. First popularized as an entry-level or midrange solution, NAS still has its largest install base in the small to medium sized business sector. Yet the hallmarks of NAS - simplicity and value - are equally applicable for the enterprise market. Smaller companies find NAS to be a plug and play solution that is easy to install, deploy and manage, with or without IT staff at hand. Thanks to advances in disk drive technology, they also benefit from a lower cost of entry.

In recent years, NAS has developed more sophisticated functionality, leading to its growing adoption in enterprise departments and workgroups. It is not uncommon for NAS to go head to head with storage area networks in the purchasing decision, or become part of a NAS/SAN convergence scheme. High reliability features such as RAID and hot swappable drives and components are standard even in lower end NAS systems, while midrange offerings provide enterprise data protection features such as replication and mirroring for business continuance. NAS also makes sense for enterprises looking to consolidate their direct-attached storage resources for better utilization. Since resources cannot be shared beyond a single server in DAS, systems may be using as little as half of their full capacity. With NAS, the utilization rate is high since storage is shared across multiple servers.

The perception of value in enterprise IT infrastructures has also shifted over the years. A business and ROI case must be made to justify technology investments. Considering the downsizing of IT budgets in recent years, this is no easy task. NAS is an attractive investment that provides tremendous value, considering that the main alternatives are adding new servers, which is an expensive proposition, or expanding the capacity of existing servers, a long and arduous process that is usually more trouble than it's worth. NAS systems can provide many terabytes of storage in high density form factors, making efficient use of data center space. As the volume of digital information continues to grow, organizations with high scalability requirements will find it much more cost-effective to expand upon NAS than DAS. Multiple NAS systems can also be centrally managed, conserving time and resources.

Another important consideration for a medium-sized business or large enterprise is heterogeneous data sharing. With DAS, each server is running its own operating platform, so there is no common storage in an environment that may include a mix of Windows, Mac and Linux workstations. NAS systems can integrate into any environment and serve files across all operating platforms. On the network, a NAS system appears like a native file server to each of its different clients. That means that files are saved on the NAS system, as well as retrieved from the NAS system, in their native file formats. NAS is also based on industry-standard network protocols such as TCP/IP, NFS and CIFS.

SANs: High Availability for Block-Level Data Transfer

A storage area network, or SAN, is a dedicated, high performance storage network that transfers data between servers and storage devices, separate from the local area network. With their high degree of sophistication, management complexity and cost, SANs are traditionally implemented for mission-critical applications in the enterprise space. In a SAN infrastructure, storage devices such as NAS, DAS, RAID arrays or tape libraries are connected to servers using Fibre Channel. Fibre Channel is a highly reliable, gigabit interconnect technology that enables simultaneous communication among workstations, mainframes, servers, data storage systems and other peripherals. Without the distance and bandwidth limitations of SCSI, Fibre Channel is ideal for moving large volumes of data across long distances quickly and reliably.

In contrast to DAS or NAS, which are optimized for data sharing at the file level, the strength of SANs lies in their ability to move large blocks of data. This is especially important for bandwidth-intensive applications such as database, imaging and transaction processing. The distributed architecture of a SAN also enables it to offer higher levels of performance and availability than any other storage medium today. By dynamically balancing loads across the network, SANs provide fast data transfer while reducing I/O latency and server workload. The benefit is that large numbers of users can simultaneously access data without creating bottlenecks on the local area network and servers.

SANs are the best way to ensure predictable performance and 24x7 data availability and reliability. The importance of this is obvious for companies that conduct business on the web and require high volume transaction processing. Another example would be contractors that are bound to service-level agreements (SLAs) and must maintain certain performance levels when delivering IT services. SANs have built in a wide variety of failover and fault tolerance features to ensure maximum uptime. They also offer excellent scalability for large enterprises that anticipate significant growth in information storage requirements. And unlike direct-attached storage, excess capacity in SANs can be pooled, resulting in a very high utilization of resources. There has been much debate in recent times about choosing SAN or NAS in the purchasing decision, but the truth is that the two technologies can prove quite complementary. Today, SANs are increasingly implemented in conjunction with NAS. With SAN/NAS convergence, companies can consolidate block-level and file-level data on common arrays.

Even with all the benefits of SANs, several factors have slowed their adoption, including cost, management complexity and a lack of standardization. The backbone of a SAN is management software. A large investment is required to design, develop and deploy a SAN, which has limited its market to the enterprise space. A majority of the costs can be attributed to software, considering the complexity that is required to manage such a wide scope of devices. Additionally, a lack of standardization has resulted in interoperability concerns, where products from different hardware and software vendors may not work together as needed. Potential SAN customers are rightfully concerned about investment protection and many may choose to wait until standards become defined.

Conclusion

With such a variety of information storage technologies available, what is the best way to determine which one is right for your organization? DAS, NAS and SAN all offer tremendous benefits, but each is best suited for a particular environment. Consider the nature of your data and applications. How critical and processing-intensive are they? What are your minimum acceptable levels of performance and availability? Is your information sharing environment localized, or must data be distributed across the enterprise? IT professionals must make a comprehensive assessment of current requirements while also keeping long-term business goals in mind.

Like all industries, storage networking is in a constant state of change. It's easy to fall into the trap of choosing the emerging or disruptive storage technology at the time. But the best chance for success comes with choosing a solution that is cost-correct and provides long term investment protection for your organization. Digital assets will only continue to grow in the future. Make sure your storage infrastructure is conducive to cost-effective expansion and scalability. It is also important to implement technologies that are based on open industry standards, which will minimize interoperability concerns as you expand your network.