Source: docs.media.bitpipe.com/io_11x/io_110050/item_728721... · 2013. 7. 10.
E-Guide

BIG DATA: IMPLICATIONS ON STORAGE

Contents

Dealing with big data: The storage implications
Storage for big data

Big data is a source of confusion for many storage professionals, and the lack of a standard definition of this popular buzzword is partly responsible. This expert eGuide explains big data in storage terms, examining the ways it stresses traditional IT capabilities and architecture. Find out how Hadoop fits into the big data equation, and which storage features and functions are needed to address current technology gaps and emerging requirements.


DEALING WITH BIG DATA: THE STORAGE IMPLICATIONS

Whether it’s defining “big data,” understanding Hadoop or assessing the impact of large data stores, storage pros need a clear understanding of the big data trend.

It seems impossible to get away from the term “big data” nowadays. The challenge is that the industry lacks a standard definition for what big data is. Enterprise Strategy Group (ESG) defines big data as “data sets that exceed the boundaries and sizes of normal processing capabilities, forcing you to take a non-traditional approach.” We apply the term “big data” to any data set that breaks the boundaries and conventional capabilities of IT designed to support day-to-day operations.

These boundaries can be encountered on multiple fronts:

The transaction volume can be so high that traditional data storage systems hit bottlenecks and can't complete operations in a timely manner. They simply don't have enough processing horsepower to handle the volume of I/O requests, and sometimes they don't have enough spindles in the environment to handle them all. This often leads users to put less data on each disk drive and "short stroke" them -- partially filling each drive to increase the ratio of spindles per GB of data and provide more disk drives to handle I/O. It can also lead users to deploy lots of storage systems side by side and leave them well below their full capacity potential because of the performance bottlenecks. Or both. Either way, it's an expensive proposition, because it means buying lots of disk drives that will be mostly empty.

The size of the data (individual records, files or objects) can make it so that traditional systems don’t have sufficient throughput to deliver data in a timely manner. They simply don’t have enough bandwidth to handle the transactions. We see organizations using short stroking to increase system bandwidth and add spindles in this case as well, which, again, leads to poor utilization and increased expense.

The overall volume of content is so high that it exceeds the capacity threshold of traditional storage systems. They simply don't have enough capacity to deal with the volume of data. This leads to storage sprawl -- tens or hundreds of storage silos, with tens or hundreds of points of management, typically with poor utilization and consuming an excessive amount of floor space, power and cooling.
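To see why these bottlenecks get expensive, it helps to run the arithmetic. The sketch below uses illustrative figures (the drive size and per-drive IOPS are assumptions, not from this guide) to compare how many drives a workload needs when sized for capacity alone versus for I/O alone; the gap between the two numbers is the "mostly empty" spindle count described above.

```python
import math

# Illustrative sizing arithmetic: drives needed for capacity vs. for IOPS.
# drive_tb and drive_iops are assumed figures for a generic spinning disk.

def drives_needed(capacity_tb, workload_iops, drive_tb=2.0, drive_iops=180):
    """Return (for_capacity, for_iops): drive counts sized each way."""
    for_capacity = math.ceil(capacity_tb / drive_tb)
    for_iops = math.ceil(workload_iops / drive_iops)
    return for_capacity, for_iops

# A 100 TB data set that must sustain 45,000 IOPS:
cap, iops = drives_needed(100, 45_000)
# Capacity alone needs 50 drives, but the I/O load needs 250 --
# so each drive ends up only ~20% full ("short stroking").
utilization = cap / max(cap, iops)
```

With these assumed numbers, the environment buys five times the drives its data actually needs, which is exactly the utilization penalty the text describes.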

It gets very intimidating when these things pile on top of each other -- there's nothing that says users won't experience a huge number of I/O requests for a ton of data consisting of extremely large files.

SUPPORTING STORAGE ARCHITECTURES

We're seeing an evolution in storage architectures to help deal with the increasing volume of data associated with big data. Each has slightly different, but overlapping, characteristics.

On the I/O-intensive, high-transaction-volume end, ESG sees broad adoption of architectures that can scale up by adding spindles. That's the traditional approach, and systems like EMC VMAX, Hitachi Data Systems VSP and IBM DS8000 do well here.

On the large-data-size front, bleeding-edge industries that have been dealing with big data for years were early adopters of scale-out storage systems designed with enough bandwidth to handle large file sizes. We're talking about systems from DataDirect Networks, Hewlett-Packard Ibrix, Isilon (now EMC Isilon) and Panasas, to name a few. Traditionally, scale-up implied eventual limits; scale-out has far less stringent limits and much more flexibility to add capacity or processing power. As big data sizes become more of a mainstream problem, some of these systems are finding more mainstream adoption. These mainstream environments can be a mix of I/O- and throughput-intensive performance demands, so both scale-up and scale-out are often needed to keep up.

Finally, on the content volume front, we’re seeing more adoption of scale-out, object-based storage archive systems to make it easier to scale to billions of data objects within a single, easily managed system. The advantage of these systems is that they enable robust metadata for easier content management and tracking, and are designed to make use of dense, low-cost disk drives (Dell DX 6000 series is a good example here).

WHAT ABOUT HADOOP?

No column on big data would be complete without a discussion of Hadoop. The ability to accelerate an analytics cycle (cutting it from weeks to hours or minutes) without exorbitant costs is driving enterprises to look at Hadoop, an open source technology that's often run on commodity servers with inexpensive direct-attached storage (DAS).

Hadoop is used to process very large amounts of data and consists of two parts: MapReduce and the Hadoop Distributed File System (HDFS). Put (very) simply, MapReduce handles the job of managing compute tasks, while HDFS automatically manages where data is stored on the compute cluster. When a compute job is initiated, MapReduce takes the job and splits it into subtasks that can be run in parallel. It queries HDFS to see where the data required to complete each subtask lives, and then sends the subtasks out to run on the compute nodes where the data is stored. In essence, it's sending the compute tasks to the data. The results of each subtask are sent back to the MapReduce master, which collates and delivers the final results.
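Put in code, the pattern just described can be sketched as follows. This is a toy, in-process imitation of the MapReduce idea (a word count), not the Hadoop API: each "map" subtask processes one data block independently, and a final "reduce" step collates the partial results, the way the MapReduce master does.

```python
# Toy MapReduce-style word count: map over each block, then reduce the partials.
from collections import Counter
from functools import reduce

def map_block(block: str) -> Counter:
    """The 'map' subtask: count words in one data block, where it lives."""
    return Counter(block.split())

def reduce_counts(partials) -> Counter:
    """The 'reduce' step: the master collates the partial results."""
    return reduce(lambda a, b: a + b, partials, Counter())

# Data already split into blocks across the cluster (HDFS's job):
blocks = ["big data big storage", "storage for big data"]
totals = reduce_counts(map_block(b) for b in blocks)
# totals["big"] == 3, totals["storage"] == 2
```

In real Hadoop the map calls run on separate nodes next to their blocks; the point of the sketch is only the split/collate structure.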

Now compare that with a traditional system, which would need a big, expensive server with a lot of horsepower attached to a big, expensive storage array to complete the task. It would read all the required data, run the analysis and write the results in a fairly serial manner, which, at these volumes of data, takes a lot longer than the Hadoop-based MapReduce job would.

The differences can be summed up in a simple analogy. Say 20 people are in a grocery store and they're all processed through the same cash register line. If each person buys $200 worth of groceries and takes two minutes to have their purchases scanned and totaled, $4,000 is collected in 40 minutes by the star cashier hired to keep up. Here's the Hadoop version of the scenario: Ten register lines are staffed by low-cost, part-time high school students who take 50% more time to finish each transaction (three minutes). It now takes six minutes to ring up the same 20 people, but you still get $4,000 when they hand in their cash drawers. From a business standpoint, what's the impact of reducing a job from 40 minutes to six? How many more jobs can be run in the 34 minutes you just gained? How much more insight can you get, and how much quicker can you react to market trends? This is the equivalent of business-side colleagues not having to wait long for the results of analytical queries.
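The arithmetic behind the analogy is simple enough to verify directly:

```python
import math

# One fast cashier: 20 shoppers x 2 minutes each, processed serially.
serial_minutes = 20 * 2                                    # 40 minutes

# Ten slower lines: 20 shoppers spread over 10 lines, 3 minutes per sale.
shoppers, lines, per_txn = 20, 10, 3
parallel_minutes = math.ceil(shoppers / lines) * per_txn   # 6 minutes

# Same $4,000 in the drawers either way; the job just finishes sooner.
time_saved = serial_minutes - parallel_minutes             # 34 minutes
```

Each individual "worker" is 50% slower, yet the whole job finishes almost seven times faster -- the essence of the MapReduce trade-off.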

Hadoop isn't perfect. Clustered file systems are complex, and while much of this complexity is hidden from the HDFS admin, it can take time to get a Hadoop cluster up and running efficiently. Additionally, within HDFS, the data map (the NameNode) that keeps track of where all the data lives (the metadata) is a single point of failure in the current release of Apache Hadoop -- something that's at the top of the list to be addressed in the next major release. Data protection is up to the admin to control; a data replication setting determines how many times a data file is copied in the cluster. The default setting is 3, which leads to a capacity overhead of 3x the required usable capacity. And that only protects the local cluster; backup and remote disaster recovery (DR) need to be handled outside the current versions of Hadoop. Nor is there a large body of trained Hadoop professionals on the market; while firms like Cloudera, EMC and MapR are doing a good job on the education front, it'll take time to build a trained workforce. This last point shouldn't be taken lightly. Recent studies show that projects planning to leverage contractors/consultants should budget as much as $250,000 per developer per year.
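The replication overhead mentioned above is worth making concrete. With HDFS's default replication factor of 3, raw cluster capacity must be a multiple of the usable data size (the data-set size below is an illustrative number):

```python
def raw_capacity_tb(usable_tb: float, replication: int = 3) -> float:
    """Raw disk needed in the cluster for a given usable data set,
    given HDFS's data-replication setting (default 3)."""
    return usable_tb * replication

# 200 TB of analytics data at the default setting needs 600 TB of raw disk,
# before any headroom for intermediate/shuffle space or growth.
raw = raw_capacity_tb(200)
```

Dropping the replication setting reduces the overhead but also the number of node failures the cluster can tolerate, which is why the default is rarely lowered in production.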

BIG DATA, BIGGER TRUTH

This laundry list of shortcomings, combined with the potential commercial analytics market opportunity, is driving big storage companies like EMC, IBM and NetApp to look at the big data opportunity. Each has introduced (or will, you can count on it) storage systems designed for Hadoop environments that help users cover the manageability, scalability and data protection angles HDFS lacks. Most offer a replacement for the HDFS storage layer with open interfaces (such as NFS and CIFS), while others provide their own version of a MapReduce framework that performs better than the open source distribution. Some offer features that fill in the open source HDFS gaps, like the ability to share data with other apps via standard NFS and CIFS interfaces or, much better, data protection and DR capabilities.

NetApp is taking a radically different approach from most vendors. It's embracing the open Hadoop standard and the use of data nodes with DAS. Instead of using its own file system with a wrapper for Hadoop, it's turbo-charging the DAS with SAS-connected JBOD based on the low end of the Engenio platform. And for the NameNode it's using an NFS-attached FAS box to provide quick recovery from a NameNode failure. It's a "best of both worlds" hybrid approach to the problem.

Whether the market will pay a premium for better availability and broader potential application leverage remains to be seen; it's still early days.

Big data is a reality, and not all big data is created equal: different types of big data need different storage approaches. Even if you have a big data problem and are hitting the barriers that indicate you need to do something differently, the best way to talk to vendors about your requirements is to cut right through the fluff and not talk about big data at all. Instead, talk about the business problem and the use cases; these will ultimately narrow the spectrum to a specific set of workload characteristics, and the right storage approach will quickly become evident.


STORAGE FOR BIG DATA

Big data analytics will place new burdens on data storage systems. Here are some of the key features those systems will need to meet the challenges of big data.

"Big data" refers to data sets that are too large to be captured, handled, analyzed or stored in an appropriate timeframe using traditional infrastructures. Big is, of course, relative to the size of the organization and, more importantly, to the scope of the IT infrastructure in place. Big data also implies analysis, driven by the expectation that there's value in all the information businesses are accumulating -- if there were just a way to pull that value out.

Perhaps it follows from the notion that storage capacity is cheap, but in the effort to cull business intelligence from the mountains of new data created every day, organizations are saving more of it. They’re also saving data that’s already been analyzed, which could potentially be used for trending against future data collections.

WHY BIG, WHY NOW?


Aside from the ability to keep more data than ever before, we have access to more types of data. These data sources include Internet transactions, social networking activity, automated sensors, mobile devices and scientific instrumentation, among others. In addition to static data points, transactions can create a certain "velocity" to this data growth. As an example, the extraordinary growth of social media is generating new transactions and records. But the availability of ever-expanding data sets doesn't guarantee success in the search for business value.


DATA IS NOW A FACTOR OF PRODUCTION

Data has become a full-fledged factor of production, like capital, labor and raw materials, and it's not just a requirement for organizations with obscure applications in special industries. Companies in all sectors are combining and comparing more data sets in an effort to lower costs, improve quality, increase productivity and create new products. For example, analyzing data supplied directly from products in the field can help improve designs. Or a company may be able to get a jump on competitors through a deeper analysis of its customers' behavior compared with that of a growing number of available market characteristics.

STORAGE MUST EVOLVE

Big data has outgrown its own infrastructure and is driving the development of storage, networking and compute systems designed to handle these specific new challenges. Software requirements ultimately drive hardware functionality and, in this case, big data analytics are shaping the development of data storage infrastructures.

This could mean an opportunity for storage and IT infrastructure companies. As data sets continue to grow with both structured and unstructured data, and analysis of that data gets more diverse, current storage system designs will be less able to meet the needs of big data. Storage vendors have begun to respond with block- and file-based systems designed to accommodate many of these requirements. Here's a list of some of the characteristics big data storage infrastructures need to incorporate to meet the challenges big data presents.

Capacity. “Big” often translates into petabytes of data, so big data storage systems certainly need to be able to scale. But they also need to scale easily, adding capacity in modules or arrays transparently to users, or at least without taking the system down. Scale-out storage is becoming a popular alternative for this use case. Scale-out’s clustered architecture features nodes of storage capacity with embedded processing power and connectivity that can grow seamlessly, avoiding the silos of storage that traditional systems can create.
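The key property of the scale-out model described above is that each node adds capacity and performance in the same step. A minimal model (the per-node figures are illustrative assumptions, not from any product) of how a cluster grows both axes at once:

```python
# Minimal scale-out model: each node contributes capacity AND throughput.
# Per-node figures are illustrative assumptions, not vendor specifications.

NODE_CAPACITY_TB = 96
NODE_THROUGHPUT_MBS = 1_000

def cluster_totals(nodes: int) -> dict:
    """Aggregate capacity and bandwidth for a cluster of `nodes` nodes."""
    return {
        "capacity_tb": nodes * NODE_CAPACITY_TB,
        "throughput_mbs": nodes * NODE_THROUGHPUT_MBS,
    }

# Growing from 4 to 10 nodes scales both axes together, with no new silo:
before, after = cluster_totals(4), cluster_totals(10)
```

Contrast this with adding a shelf of disks behind a fixed pair of controllers, where capacity grows but the processing power and connectivity in front of it do not.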

Big data also means a large number of files. Managing the accumulation of metadata for file systems at this level can reduce scalability and impact performance, which can be a problem for traditional NAS systems. Object-based storage architectures, on the other hand, can allow big data storage systems to expand file counts into the billions without suffering the overhead problems that traditional file systems encounter. Object-based storage systems can also scale geographically, enabling large infrastructures to be spread across multiple locations.

Latency. Big data may also have a real-time component, especially in use cases involving web transactions or finance. For example, tailoring web advertising to each user's browsing history requires real-time analytics. Storage systems must be able to grow to the aforementioned proportions while maintaining performance, because latency can produce "stale data." Here, too, scale-out architectures help: the cluster of storage nodes increases in processing power and connectivity as it grows in capacity. Object-based storage systems can parallelize data streams, further improving throughput.

Many big data environments will need to provide high IOPS performance, such as those in high-performance computing (HPC) environments. Server virtualization will drive high IOPS requirements, just as it does in traditional IT environments. To meet these challenges, solid-state storage devices can be implemented in many different formats, from a simple server-based cache to all-flash-based scalable storage systems.
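To put the IOPS point in perspective, a rough sketch of why solid-state changes the sizing math. The per-device figures here are generic assumptions (~180 IOPS for a fast spinning disk, ~50,000 for a single SSD), not measurements:

```python
import math

# Assumed, generic figures: ~180 IOPS per 15K-rpm disk, ~50,000 per SSD.
def spindles_to_match(target_iops: int, disk_iops: int = 180) -> int:
    """Spinning disks required to match a given random-I/O target."""
    return math.ceil(target_iops / disk_iops)

# Matching one assumed 50,000-IOPS SSD takes roughly 278 spindles:
n = spindles_to_match(50_000)
```

This is why even a small flash tier or server-side cache can absorb an IOPS load that would otherwise force a large, poorly utilized disk purchase.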

Access. As companies get better at understanding the potential of big data analysis, the need to compare differing data sets will bring more people into the data sharing loop. In the quest to create business value, firms are looking at more ways to cross-reference different data objects from various platforms. Storage infrastructures that include global file systems can help address this issue, as they allow multiple users on multiple hosts to access files from many different back-end storage systems in multiple locations.

Security. Financial data, medical information and government intelligence carry their own security standards and requirements. While these may not differ from what IT managers must accommodate today, big data analytics may need to cross-reference data that hasn't been co-mingled in the past, which may create new security considerations.

Cost. "Big" can also mean expensive. And at the scale at which many organizations are operating their big data environments, cost containment will be an imperative. This means more efficiency "within the box," as well as less expensive components. Storage deduplication has already entered the primary storage market and, depending on the data types involved, could bring some value for big data storage systems. The ability to reduce capacity consumption on the back end, even by a few percentage points, can provide a significant return on investment as data sets grow. Thin provisioning, snapshots and clones may also provide some efficiencies, depending on the data types involved.
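A back-of-envelope sketch of the "few percentage points" claim, using an assumed data-set size, shows why small reduction ratios still matter at this scale:

```python
def capacity_saved_tb(data_tb: float, reduction_pct: float) -> float:
    """Back-end capacity (TB) avoided by a given data-reduction percentage."""
    return data_tb * reduction_pct / 100

# On an assumed 5 PB (5,000 TB) store, even a 5% reduction from dedup or
# thin provisioning avoids buying 250 TB of back-end disk:
saved = capacity_saved_tb(5_000, 5)
```

Multiply that saving by the replication or protection overhead of the environment and the return grows accordingly.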

Many big data storage systems will include an archive component, especially for organizations dealing with historical trending or long-term retention requirements. Tape is still the most economical storage medium from a capacity-per-dollar standpoint, and archive systems that support multiterabyte cartridges are becoming the de facto standard in many of these environments.

What may have the biggest impact on cost containment is the use of commodity hardware. It's clear that big data infrastructures won't be able to rely on the big iron enterprises have traditionally turned to. Many of the first and largest big data users have developed their own "white-box" systems that leverage a commodity-oriented, cost-saving strategy. But more storage products are now coming out in the form of software that can be installed on existing systems or common, off-the-shelf hardware. In addition, many of these companies are selling their software technologies as commodity appliances or partnering with hardware manufacturers to produce similar offerings.

Persistence. Many big data applications involve regulatory compliance that dictates data be saved for years or decades. Medical information is often saved for the life of the patient; financial information is typically saved for seven years. But big data users are also saving data longer because it's part of a historical record or used for time-based analysis. This requirement for longevity means storage manufacturers need to include ongoing integrity checks and other long-term reliability features, as well as address the need for data-in-place upgrades.

Flexibility. Because big data storage infrastructures usually get very large, care must be taken in their design so they can grow and evolve along with the analytics component of the mission. Wholesale data migration is essentially a thing of the past in the big data world, especially since data may live in multiple locations. A big data storage infrastructure is effectively fixed once you begin to fill it, so it must be able to accommodate different use cases and data scenarios as it evolves.

Application awareness. Some of the first big data implementations involved application-specific infrastructures, such as systems developed for government projects or the white-box systems invented by large Internet services companies. Application awareness is becoming more common in mainstream storage systems as a way to improve efficiency or performance, and it's a technology that should apply to big data environments.

Smaller users. As a business requirement, big data will trickle down to organizations much smaller than those some storage marketing departments may associate with big data analytics. It's not only for the "lunatic fringe" or oddball use cases anymore, so storage vendors playing in the big data space would do well to provide smaller configurations while focusing on the cost requirements.

Published by TechTarget.