Page 1: MI Modern Infrastructure - Bitpipedocs.media.bitpipe.com/io_12x/io_124806/item_1168275/MI... · 2015-06-23 · Modern Infrastructure Creating tomorrow’s data centers JUNE 2015,

Home

Editor’s Letter

The Hyperconvergence Effect

#Hashtag: Twitter on #OpenStack

Virtualization Under Siege

Survey Says: Disaster Recovery

Man Up for Microservices

Overheard @ MI Summit 2015

The Next Big Thing: Machine Learning for IT Dummies

In the Mix: Automation in Name Only

End User Advocate: Lock Down Those Desktops

Citrix Synergy and Modern Infrastructure Decisions Summit

#HASHTAG

Twitter on #OpenStack

SURVEY SAYS

Disaster Recovery

MI Modern Infrastructure
Creating tomorrow’s data centers

JUNE 2015, VOL. 4, NO. 6

OVERHEARD

@ MI Summit 2015

END USER ADVOCATE

Lock Down Those Desktops

EDITOR’S LETTER

En Vogue

INFRASTRUCTURE MANAGEMENT

Virtualization Under Siege

APPLICATION ARCHITECTURE

Man Up for Microservices

THE NEXT BIG THING

Machine Learning for IT Dummies

IN THE MIX

Automation in Name Only

The Hyperconvergence Effect

Compute and storage are better together than they are apart.


MODERN INFRASTRUCTURE • JUNE 2015 2

THIRTY-TWO YEARS after her first song topped Billboard’s U.S. dance charts, Madonna made pop music history with her 45th No. 1 song. With “Ghosttown,” Madonna surpasses former record holder, country music legend George Strait. You go, Material Girl. (And to think my mom dismissed her as a flash in the pan.)

It’s hard to predict which novelties will become mainstays, and which hot trends will turn into has-beens. And journalists aren’t very good at it. Back in 2006, when Google bought YouTube for $1.65 billion, a former colleague exclaimed, “I wouldn’t pay five bucks for YouTube!” But regardless of whether we get things right, evaluating trends for their relevance and staying power is a major part of the work that journalists do.

One of the current hot technologies that I’m on the fence about is hyperconvergence, the subject of my article “The Hyperconvergence Effect.” Is hyperconvergence a transformative approach to delivering IT, or just a nifty way of packaging compute and storage? On the one hand, hyperconvergence’s ability to take commodity compute, hard disk drives and flash, and then add some clustering software, fundamentally transforms the buying decision for on-premises hardware. Hyperconvergence is also great for virtualized environments, but is it a fit elsewhere, and how far can it actually scale? Early adopters are in love and sales are exploding. But how relevant is hyperconvergence in a world where we outsource most of our compute and storage needs to a cloud provider?

In the meantime, it’s clear that there are a lot of operational implications to going down the microservices route, writes contributing writer George Lawton: service discovery, network provisioning and release automation, to name a few. Moving to microservices isn’t without its challenges, but Lawton found early adopters who say doing so can deliver significant ROI.

Virtualization, on the other hand, has proved itself to be transformative over the years, even as it’s evolved. Right now, we are witnessing a shift away from hypervisor-based server virtualization to the container-based approach popularized by Docker, finds contributing editor Ed Scannell. But while that transition may have business implications for VMware and the IT shops that have invested in it, it’s hard to imagine a world without virtualization, if only for the management and automation opportunities it brings to the table.

ALEX BARRETT is Modern Infrastructure’s editor in chief.


DATA CENTER INFRASTRUCTURE

The Hyperconvergence Effect

Compute and storage are better together than they are apart. BY ALEX BARRETT


THINKING ABOUT BUYING a new storage area network? With the rising popularity of hyperconverged offerings, you might want to reconsider.

Traditionally, data center infrastructure has consisted of standalone servers running virtualization and scale-up storage arrays connected over a network, usually Fibre Channel or iSCSI. But a new generation of hyperconverged infrastructure is challenging that model, creating virtual storage area networks out of locally attached flash and hard disk drive storage. Alternatively dubbed virtual storage area network (SAN), server SAN and SAN-free storage, this new approach is causing many IT professionals to rethink their assumptions about how to approach their on-premises infrastructure needs—especially storage.

The hyperconverged trend dovetails with virtualization, where traditional storage is problematic, said David Friedlander, senior director of product marketing at Panzura, a cloud storage company. “Traditional SANs were bad for virtualization,” he said. SANs were designed for environments where you know what to expect from I/O patterns, but virtual environments are characterized by random I/O, and arrays are quickly overwhelmed. Flash has alleviated the problem, but increasingly, Friedlander said, “the storage industry is moving away from




monolithic storage arrays.”

Today’s hyperconverged offerings range from software-only products targeted at small and medium-sized businesses (SMBs) to enterprise-grade hardware appliances designed to take on mission-critical workloads, and everything in between. The players include early- and late-stage startups, tier-one server and storage OEMs, and name-brand enterprise software vendors.

And the market is booming. In 2014, analyst firm Wikibon predicted hyperconvergence sales of $487 million. Actual sales turned out to be in the $500 million to $600 million range. That’s virtually nothing compared with the overall enterprise storage systems market, which IDC put at $36.2 billion in 2014. But virtual SAN sales more than doubled last year. “And this year is when the big players are starting to get into it,” said Stu Miniman, Wikibon senior analyst.

There’s even a chance that if you fast-forward a couple of years, hyperconvergence could put a serious dent in external disk sales, said Arun Taneja, founder of the Taneja Group. If the market continues to grow like it has recently, back-of-the-envelope calculations suggest that hyperconvergence could actually consume 30% of the external storage array market within three years, he said.
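For illustration, that back-of-the-envelope math can be run in a few lines of Python. The starting figures come from the article; the annual growth multipliers are assumptions for illustration, not Taneja’s actual model:

```python
# Hypothetical back-of-the-envelope projection: hyperconverged sales vs.
# the external storage market. Starting figures come from the article
# ($500M-$600M hyperconverged sales and a $36.2B storage market in 2014);
# the annual growth multipliers below are illustrative assumptions only.

def project(start_millions, growth_per_year, years):
    """Compound a starting revenue figure (in $M) over a number of years."""
    value = start_millions
    for _ in range(years):
        value *= growth_per_year
    return value

hyperconverged_2014 = 550.0   # $M, midpoint of Wikibon's estimate
storage_market = 36_200.0     # $M, IDC's 2014 figure, held flat for simplicity

for growth in (2.0, 2.7):     # assumed annual multipliers
    sales = project(hyperconverged_2014, growth, 3)
    share = sales / storage_market
    print(f"{growth}x/yr for 3 years -> ${sales:,.0f}M ({share:.0%} of the market)")
```

Doubling every year for three years only gets to roughly 12% of the market; reaching the 30% figure Taneja floats would take growth closer to 2.7x per year, which shows how sensitive such projections are to the assumed rate.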

THE EARLY ADOPTERS

Virtual SAN adoption usually starts small, designed to improve IT capabilities at a remote location. Take Driscoll’s, a $3 billion-a-year berry distributor based in Watsonville, Calif. The company has 60 distribution centers across North America, and the servers need to connect back to ERP and supply chain systems running in the central data center. Having servers or storage fail at the distribution centers is not an option, said Soumitra Ghosh, vice president of infrastructure. “In a lot of ways, our business is a lot harder than Amazon’s because we’re dealing with a perishable commodity. If we go down, that means we cannot ship the berries, and they have to be thrown away.”

So all infrastructure at the remote site must be highly available—even if there’s no IT staff on site. To that end, Driscoll’s upgraded the server and storage at its distribution centers last year. It considered pre-packaged hyperconverged appliances, but because it already had some servers, it instead chose software from Maxta, running on a four-node cluster.

Ghosh is pleased with the performance and manageability of the Maxta stack, and the initial deployment may bear fruit back in the primary data center. “We have seen enough performance out of them that we would consider putting them into our data center,” he said, as the infrastructure behind a development environment.

HIGHLIGHTS

Hyperconverged offerings have many in IT rethinking whether to buy a new storage area network.

Hyperconverged infrastructure creates virtual SANs out of locally attached drives and flash.

The storage industry is moving away from monolithic storage arrays.

Hyperconverged appliances also find a home in small IT environments that don’t have the stomach for a full-fledged SAN. That’s what the City of West Chicago found a couple of years ago when it dipped its toe in VMware virtualization waters.

“We bought some servers, we bought a SAN, we bought the virtualization licenses,” to the tune of $60,000, recalled Peter Zaikowski, IT manager for the city, which virtualized six of its 25 servers. Everything worked fine, but then he realized he needed to invest another $100,000 to virtualize the remaining systems. “We’re a small city. Having to go back to the City Council and ask for that money wasn’t something I wanted to do.”

For one-fifth the cost, Zaikowski implemented a three-node cluster of hyperconverged appliances from Scale Computing. Later this year when the VMware licenses run out, he’ll migrate those VMs to the Scale cluster, and repurpose the SAN storage as a backup-to-disk target.

Even some SAN stalwarts choose the hyperconverged path. San Mateo County in California bought a Nutanix cluster for a VDI deployment, but has migrated over 500 VMs in its VMware farm to the environment, retiring a variety of server and SAN platforms along the way. “We really liked Fibre [Channel], but it’s just a lot more work, a lot more management complexity,” said Erik Larson, storage and virtualization architect for the county. Making matters worse, people with SAN skills are becoming increasingly hard to come by. “We just don’t have that many people comfortable adding an HBA [host bus adapter] or zoning a LUN [logical unit number],” he said. When the project concludes, the only servers still connected to a SAN will be legacy applications running on Unix and IBM System i, Larson predicted.

Storage, Compute and DR Too

SOME HYPERCONVERGENCE plays give more than just on-premises compute and storage capacity—they double as backup and disaster recovery solutions too.

A few years ago, construction company The Neenan Co. needed to upgrade its VMware ESX server farm and its aging Dell EqualLogic SAN. At the same time, George Dial, IT manager at the firm, knew that the firm’s disaster recovery (DR) posture wasn’t great. “If we suffered an outage, we could have a three- or four-day loss if things went wrong,” he said.

Dial solved all of those problems at once by purchasing a pair of SimpliVity OmniCubes for its office. It put another pair at a sister real-estate firm in Denver, and set the two clusters to replicate to one another.

Improved DR was the deciding factor. “I looked at buying new HP servers and another EqualLogic, and on paper it was actually less expensive, but [the system] wasn’t very smart,” Dial said. There was no deduplication of data, the performance was average, and it meant having to administer backups through a separate interface. In contrast, SimpliVity includes deduplication as a baseline feature, performance is great, and all administration—even backup and DR—happens through VMware vCenter.

“It was sort of a leap of faith,” Dial said. But even though SimpliVity was a very new company at the time, the system has proved itself, and “everybody is over the moon.”

HYPERCONVERGED HARDWARE BREW

One of the defining features of hyperconverged systems is their reliance on commodity parts, both CPUs and disk drives. CPUs, in particular, are so powerful that it makes sense to leverage them for both compute and storage processing, Taneja said.

“There’s so much surplus CPU today, and you don’t need more than 20% of it to run the storage layer,” he said. With that, you can use the remaining CPU to run VMs.

Using locally attached disk drives, meanwhile, eliminates the cost and latency overhead associated with accessing storage over a network. “Direct connections over SCSI or PCI are always going to be the fastest,” Taneja said. “If you think about it, the only reason we went to SAN in the first place was because DAS [direct-attached storage] was too limiting.”

But low-cost commodity disk drives alone don’t provide the I/O performance needed for most virtualized workloads. That’s where super-fast flash and solid-state drives (SSDs) come in.

In recent years, the price of flash has dropped dramatically. Plus, it’s no longer limited to the most demanding workloads. “It’s gotten to the point where [flash] is viable for more use cases,” said Yoram Novick, CEO at Maxta.

Today, virtually all hyperconverged players make heavy use of flash technology. In fact, one hyperconverged player, GridStore, forgoes spinning disk drives altogether and relies exclusively on flash for its storage capacity. Data-efficiency techniques such as deduplication and compression help it achieve more usable capacity.

SCALE-OUT FTW

Still, without scale-out clustering technology, all the benefits of low-cost storage, high-performance flash and direct connections would be lost. The capacity couldn’t be shared. Clustering multiple nodes together into a virtual SAN means that capacity can be shared by multiple servers, while replication across nodes ensures data availability.
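The capacity arithmetic behind that clustering model is simple but worth seeing: pooled capacity grows with node count, while the replication that provides availability divides it. A minimal sketch, with hypothetical node sizes and replication factor, not drawn from any particular vendor:

```python
# Sketch of usable capacity in a scale-out virtual SAN (hypothetical numbers).
# Local drives pool into one shared tier, but keeping multiple copies of
# every block for availability divides the raw total.

def usable_capacity_tb(nodes, raw_tb_per_node, copies):
    """Pooled raw capacity divided by the number of replicas kept."""
    return nodes * raw_tb_per_node / copies

# Four nodes with 10 TB of local disk each, two copies of every block:
print(usable_capacity_tb(4, 10, 2))  # 20.0 TB usable out of 40 TB raw

# Adding a fifth node grows the shared pool without touching the others:
print(usable_capacity_tb(5, 10, 2))  # 25.0 TB
```

With two copies of every block, half the raw pool is usable; a third replica would cut it to a third—the classic trade-off between capacity and availability.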

Indeed, the scale-out model has been having a moment, mainly in hyperscale compute environments. In the storage space, scale-out architectures figure prominently in several Network Attached Storage (NAS) designs, but the use of scale-out for block storage is a relatively new phenomenon.


“As an industry, we haven’t enjoyed scale-out on the SAN side as we have on the NAS side,” Taneja said. But with the exception of the bottom end of the market, where capacity needs can be met by a couple of disk drives, “eventually, everything in compute and storage will be scale-out,” he predicted.

Scale-out has tremendous appeal to IT shops, allowing them to start small, and then grow big—without much initial capital or trouble.

“The thing about scale-out is that it separates out the hardware maintenance from the system presented above it,” said Andrew Warfield, CTO at Coho Data, a scale-out storage startup.

If there’s a catch to hyperconvergence, it might be scalability. While many hyperconverged vendors advertise eye-popping scalability, those claims are hard to believe. For example, EMC’s VCE division announced its VxRack System in May, based on its ScaleIO acquisition. It says the system can scale to “many thousands of nodes.” Not only is such a claim difficult to verify, but there may be practical limitations to how big a cluster you really want to build. “For argument’s sake, let’s say a system scales to 120 nodes. A customer isn’t going to get near that,” Taneja said. “For business reasons, they’re going to build a 30- or 40-node cluster.”

Nor is hyperconvergence necessarily the best fit if what you’re looking for is extreme capacity. “If you need to store petabytes of data, you’re not going to use a hyperconverged solution—or at least, you shouldn’t,” said Jason Collier, CTO at Scale Computing. He believes that there will always be a need for capacity-centric storage, although it might not look like the NAS platforms that dominate the space today.

But hyperconverged players have started to address another knock on their systems: the need to scale storage and compute together symmetrically. That was true in the early days of hyperconvergence: if you needed more capacity, you had to buy a full node, even if you hadn’t maxed out your CPU. Hyperconverged vendors have gotten the message, and offer differently sized nodes. For instance, San Mateo County recently added storage-heavy NS-6000 units to its Nutanix cluster. “We didn’t need any more compute,” Larson said.

In the short term, the biggest problem potential hyperconverged customers might face is determining from whom to buy. Most, if not all, top IT vendors have offerings besides the startups previously listed, many based on VMware’s EVO:RAIL offering.

“We’re on volley number two,” Taneja said. “There are a lot of good products on the market, each with their own differentiator, and they all have a good shot at the market.”

And while the hyperconvergence market is by no means mature, “We’re not in the infant phase either—more of a school child. You can start to see their strengths and personalities.”

ALEX BARRETT is editor in chief of Modern Infrastructure. Email her at [email protected].


Craig Tracey

@craig_tracey

The state of #openstack distro packaging would be laughable if it weren’t so disturbing.

Rich Bayliss

@rbbayliss

People who like the complexity in #OpenStack ‘either have Stockholm Syndrome or are consultants’

Jason Farnsworth

@jason_farns

Adopting IaaS/PaaS at @AmericanExpress found that trying to fit it on existing environment caused more problems than it solved. #OpenStack

Rick Melick

@RickMelick2000

#OpenStack will continue to evolve to become the data center API, supporting not only cloud-native but traditional workloads as well.

Tim Crawford

@tcrawford

Interesting to watch the discussions about #OpenStack being “doomed.” Yes, there are issues, but opportunities if managed well. #cloud

Sebastien Goasguen

@sebgoa

Still hard to believe that almost ten years after #S3, #openstack still gets people excited... isn’t it supposed to be boring mainstream

natishalom

@natishalom

Moving from private to public cloud is a 5-10 years journey #openstack #getcloudify

Richard Appleby

@rmappleby

“You shouldn’t need a PhD to run a cloud” Acknowledgment it’s still too hard at the moment. HP #OpenStack Keynote—I agree!

#Hashtag Twitter on #OpenStack


INFRASTRUCTURE MANAGEMENT

Virtualization Under Siege

The combination of bare-metal servers and containers is giving traditional server virtualization a run for its money. BY ED SCANNELL

NO ONE IS ready to declare virtualization technology dead, but some say it’s beginning to smell funny.

All of the major virtualization players have started to feel the tremors of change stomping their way. The change agents doing most of the stomping are armed with a variety of container technologies and revitalized approaches to bare-metal computing. This has forced companies, including VMware, IBM, Microsoft and Red Hat, to make significant course corrections in order to remain relevant.

“Virtualization is not through innovating, but it’s at a point where that innovation isn’t moving anyone forward anymore,” said Carl Brooks, an analyst with 451 Research. “Bare metal is more interesting because you can orchestrate it the same way you orchestrate virtual machines, and the delta with capacity and resource consumption is orders of magnitude greater than what you get with virtualization.”

Users committed to traditional virtualization technologies who want to explore containers or bare metal have more than a few questions. Most swirl around how to transition to, or integrate with and manage, such a hybrid environment. It is a situation with too many questions and


not enough answers.

“When you talk about making containers work with older products like VMware or IBM’s, and bringing bare-metal servers into that mix, that’s a pretty complex stack you are talking about,” said one IT professional with a large manufacturing company. “How do I manage all that? And what is the net-net of what I gain or lose in switching from one to the other?”

The old-guard virtualization players hope to supply some of those answers with their next generation of products, which include the ability to protect users’ investments in existing virtualization platforms via a transitional environment for testing and development.

THE POSTER CHILD OF VIRTUALIZATION

Some industry observers point to VMware, the poster child for traditional virtualization, as the latest example of a virtualization company not committing to just one approach. The company has unveiled plans to deliver its own scaled-down version of Linux specifically crafted to manage containers, along with two open source projects intended to encourage corporate users to adopt cloud-native applications.

The Linux operating system, Project Photon, was inspired by the realization that VMware users increasingly used containers in concert with vSphere, as well as their greater reliance on open source for building their own applications. Project Photon makes it possible to run both containers and VMs natively from a single platform.

“We built this OS [Project Photon] right from the Linux kernel because we knew users are running containers on top of vSphere,” said Mike Adams, a VMware marketing director overseeing vSphere. “We thought this would be the most efficient way to go after that opportunity.”

Some analysts see VMware’s move, and that of other virtualization suppliers, as a necessary, if not well-timed, evolution.

“For the major legacy players in the virtualization space, especially VMware and Red Hat, we see a move away from legacy virtualization and toward emerging products—especially management products for hybrid computing,” said Andrew Smith, a software analyst with Technology Business Research. “The erosion of [the virtualization] business is real, and companies have to seek out new revenue growth streams.”

VMware’s first quarter earnings revealed it has reached an important tipping point between its traditional virtualization business and its forward-looking hybrid cloud and end-user computing offerings. About 45% of VMware’s billings were generated by vSphere and ESXi products, while 55% were produced by its hybrid cloud management, networking and storage products.

HIGHLIGHTS

Virtualization companies have adapted their products as containers and bare metal grow in popularity.

Users committed to virtualization who want to explore containers have more than a few questions.

IT can orchestrate bare metal the same way as VMs, with more capacity and resources.

BARE METAL’S PROFITABLE NICHE

Over the past year, IBM has aggressively pursued both bare-metal server and container strategies with its SoftLayer cloud platform and through a relationship with Docker, respectively. The company is in the process of porting all of its core software products to fully work with SoftLayer, which it has optimized to exploit the performance of bare-metal servers.

The company also has an alliance with OpenStack, which it sees as a key component in allowing corporate users to smoothly integrate traditional virtualized environments with containers and bare metal.

“We recommend virtualization users manage their environments with OpenStack, which then allows them to bring Docker in for containers,” said Angel Diaz, IBM’s vice president in charge of cloud architecture. “One of the reasons we are working with Docker is to help users take advantage of SoftLayer right to the bare metal, which completes the integration picture.”

Rackspace jumped into the market last year with OnMetal Cloud Servers. This API-focused infrastructure-as-a-service offering is aimed at organizations dealing with fast-growing infrastructure that want both the flexibility and feel of a cloud as well as raw performance.

“Users are billed by the second, so it’s treated like a cloud server from a utility-billing standpoint. And it is treated like a cloud server from a provisioning and automation standpoint,” said John Engates, CTO of Rackspace. “The only difference is there is no virtualization layer.”

One reason bare metal can carve out a profitable niche is the growing emphasis among corporate users on collecting and analyzing voluminous amounts of big data, a performance-intensive activity. And with the blossoming popularity of the Internet of Things, that won’t be slowing anytime soon.

“The speed you need for analytic queries lends itself to fast RISC or CISC processors, so you want to strip out the overhead that bogs down the CPU through the software abstraction layer,” Smith said. “So using bare metal to get a faster analytics engine is a good use case.”

Besides raw processing speed, bare metal brings other advantages over traditional virtualization technologies. It gives users a dedicated server that has quicker access to all available IT resources, greater predictability in how applications perform and, more recently, the proper degree of isolation among multiple applications needed to ensure security and diminish the “noisy neighbors” problem.

“Historically, with dedicated servers you had to commit to long periods of time or outright own them, but that is no longer true,” Rackspace’s Engates said. “Now you can rent them like a cloud server and offer the best of both worlds. More and more people like that.”

Containers have improved the ability of dedicated bare-metal servers to take advantage of all the available resources, just as virtualization did in the past, Engates said. The difference, however, is that bare-metal servers coupled


with containers more deftly accomplish the task in larger, complex corporate environments.

“The challenge for a large-scale, multi-core server is getting applications to soak up all the available resources. And if it couldn’t, you felt like you were wasting those resources,” Engates said. “But containers can take advantage of those resources in a much lighter weight manner. It can also provision faster and requires fewer resources to run.”

MONEY REMAINS IN VIRTUALIZATION

Supporters of traditional virtualization say bare-metal servers are focused almost solely on performance. They also point out that those using bare metal can expect to lose some technical flexibility when moving from established virtual environments. Plus, there’s the added investment of developing the proper skill set among existing IT staff.

Mixing Containers with Bare Metal

ONE COMPANY THAT appears to have successfully blended containers with bare-metal servers is Pantheon. The company operates over 250,000 websites, including custom content management sites built on Drupal and WordPress. (Pantheon says the Drupal and WordPress sites are isolated from each other in terms of security and resources.)

Pantheon decided against a VM-centric computing model because it believed a bare-metal/container model built on a single platform would be both simpler and more efficient.

“The bare-metal and container model has helped us be much more efficient in terms of operating our infrastructure compared to competitors who are using VM-based approaches,” said Zach Rosen, Pantheon’s CEO and co-founder. “We can then pour this efficiency back into our product, which is what I think the appeal is to developers for our platform.”

But many larger companies adventurous enough to attempt integrating bare-metal servers and containers with their virtualization infrastructure end up with an overly complex stack that is hard to manage. They often must isolate bare metal, for instance, to just one part of the company, with traditional virtualization stacks elsewhere so they can be managed separately.

“Mixing and matching different technologies is a major challenge, requiring dedicated people in all the management and implementation aspects of each of them—especially in larger companies,” said Steve Brasen, research director with Enterprise Management Associates.


“Virtualization is so pervasive now; you have a lot of choices,” said Dana Gardner, principal analyst with Interarbor Solutions. “If you go to a managed hoster and ask for bare-metal support, they might say, ‘Sorry, that’s not in our spec sheets.’ There will always be a need for speeds and feeds but [bare metal] is a [high-performance computing] play, a niche market.”

Proponents of bare metal punch back, saying their approach can be less costly. They propose that an organization consolidating a dozen virtual servers down to two physical bare-metal servers would see significant savings in hardware, as well as in the management tools needed to handle a dozen virtual servers.
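The arithmetic behind that consolidation claim is easy to sketch. The prices, license fees and host counts below are illustrative assumptions made up for the comparison, not figures from the article:

```python
# Back-of-the-envelope comparison of the two approaches. All dollar
# amounts and counts here are illustrative assumptions.

def virtualized_cost(vm_count, host_count, host_price,
                     hypervisor_license, mgmt_tool_per_vm):
    """Hypervisor hosts + per-host licensing + per-VM management tooling."""
    return (host_count * (host_price + hypervisor_license)
            + vm_count * mgmt_tool_per_vm)

def bare_metal_cost(server_count, server_price):
    """Just the hardware; no hypervisor license, no per-VM tooling."""
    return server_count * server_price

# A dozen VMs spread over three hypervisor hosts vs. two bare-metal boxes.
virt = virtualized_cost(vm_count=12, host_count=3, host_price=8000,
                        hypervisor_license=3500, mgmt_tool_per_vm=400)
bare = bare_metal_cost(server_count=2, server_price=12000)
print(f"virtualized: ${virt:,}  bare metal: ${bare:,}  delta: ${virt - bare:,}")
```

With these made-up numbers the virtualized estimate comes to $39,300 against $24,000 for bare metal; the point is the shape of the comparison, not the specific figures.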

With the consensus among analysts that half to two-thirds of all companies are virtualized, there is still money to be made in server virtualization. Not only that, opportunities to virtualize networks and storage systems are now just emerging. Those systems stand to gain the same benefits servers have over the past decade, supporters note.

“Users are looking for the same higher productivity and utilization with lower complexity for their storage and networking systems they got with virtualization for their server hardware,” Gardner said. “And those markets are just now taking off.”

ED SCANNELL is a senior executive editor at TechTarget. He can be reached at [email protected].


Survey Says: Disaster Recovery


What challenges do you want to address with your DR deployment/expansion projects?*

Add efficiency into current DR plan: 68%
Improve recovery time: 67%
Improve visibility into state of recovery plan: 31%
Account for new/expanding virtualization project: 19%
Account for data center consolidation project: 19%
Account for new data sites/offices: 14%
Outsource DR/long-term archive: 11%

*MULTIPLE SELECTION ALLOWED; SOURCE: TECHTARGET DISASTER RECOVERY, BUSINESS CONTINUITY SURVEY; BASED ON RESPONSES FROM 117 IT AND BUSINESS PROFESSIONALS.

What are the most important factors when evaluating DR management/monitoring products?*

Price: 92%
Integration, training and support capabilities of vendor: 75%
Compatibility with existing backup/storage infrastructure: 64%
Software interface’s ease of use: 62%

*MULTIPLE SELECTION ALLOWED; SOURCE: TECHTARGET DISASTER RECOVERY, BUSINESS CONTINUITY SURVEY; BASED ON RESPONSES FROM 47 IT AND BUSINESS PROFESSIONALS.

57%: Respondents who are considering remote copy/data replication for their data centers

SOURCE: TECHTARGET DISASTER RECOVERY, BUSINESS CONTINUITY SURVEY; BASED ON RESPONSES FROM 79 IT AND BUSINESS PROFESSIONALS.


APPLICATION ARCHITECTURE

Man Up for Microservices

As microservices emerge, enterprises face a whole new set of challenges to keep infrastructure running smoothly. BY GEORGE LAWTON

MICROSERVICES HAVE EMERGED as a software design pattern that breaks large applications into suites of loosely coupled components. These principles have been adopted by companies such as Nike and Netflix to enable faster deployment, better scalability and a more agile development process. But operations teams face a host of new management challenges to keep microservices infrastructure running smoothly, from service discovery to release automation.

“We knew that we wanted to optimize for scale without over-architecting too early,” said John Sheehan, CEO of Runscope, an API monitoring service. The company began with a few key services broken down by function (manage identity, store test data) to ship small and iterate. Runscope now has more than 50 internal services of varying sizes and averaged over 30 deploys per day in 2014.

“As individual services required more capacity, we were able to independently scale them without having to allocate resources to the entire cluster,” Sheehan said. “We are also able to independently deploy each service more quickly.”

To make this work, Runscope invested significantly in automation, deployment and realm management tools, and libraries/frameworks for app developers to build and consume services consistently. Without that investment, the overhead of managing those services may have outweighed the benefits.

“If you’re willing to invest in infrastructure, the benefits you get from small, reusable services can help you achieve significant ROI,” Sheehan said.

NEW OPERATIONAL CONCERNS

With any new technology, there is a tradeoff. Microservices introduce a network-level separation of concerns, which causes a whole new set of problems, including latency concerns and network unavailability, Sheehan said. Operations teams must look at more than how any given service performs, and understand the combined picture of services that make up app performance, which is why API monitoring and testing is so critical.

All of these little parts make up the application experience.

“If you don’t look at the performance of how all of these parts are interacting, you’ll miss out on a significant amount of operational data that will help you run better applications,” Sheehan said. “Because of the additional variables the network introduces, you have to pay attention to another class of problems that likely weren’t tracked as closely before.” Monitoring and testing of these pieces are essential to solve these problems.
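A monitoring pass over that combined picture can be sketched in a few lines. The service names and health-check URLs below are hypothetical, and a real monitor would record results over time rather than take a single snapshot:

```python
import time
import urllib.request

# Hypothetical internal health endpoints; a real deployment would pull
# these from its service catalog rather than hard-code them.
SERVICES = {
    "identity": "http://identity.example.internal/health",
    "test-data": "http://testdata.example.internal/health",
}

def check(url, timeout=2.0):
    """Hit one health endpoint; return (ok, elapsed_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = resp.status == 200
    except OSError:  # DNS failure, connection refused, timeout, ...
        ok = False
    return ok, time.monotonic() - start

def snapshot(services):
    """Per-service results plus the summed latency of the whole request path."""
    results = {name: check(url) for name, url in services.items()}
    total = sum(elapsed for _, elapsed in results.values())
    return results, total
```

The point of `snapshot` is the combined view: a request that fans out across several services pays the sum of their latencies, so per-service numbers alone understate what the user experiences.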

Microservices also allow every service to run its own technology stack. Operations teams must be aware of how to operationalize and maintain the different technology stacks that are the correct choice for each service.

RETHINK THE RELEASE PROCESS

A microservice approach decomposes monolithic applications into atomic microservice silos, and divides the software work effort across multiple, loosely coupled teams. “Teams following a microservice architecture approach must scale up software-release management processes to address service dependencies, network distribution and autonomous release schedules,” said Chris Haddad, platform evangelist at WSO2, a service-oriented architecture middleware provider.

HIGHLIGHTS

- Companies such as Nike and Netflix have adopted microservices to enable faster deployment.
- Microservices create a host of new management challenges to keep infrastructure running smoothly.
- If you’re willing to invest in infrastructure, the benefits from small services can achieve significant ROI.

End-user applications will often use multiple microservices (e.g., product catalog, user profile, and inventory), and microservices may interact with other microservices. Distributing microservices across multiple teams and across a distributed network topology introduces release challenges. “Successful teams will introduce service versioning, release testing, incremental upgrade, and release rollback into their software release process playbook,” Haddad said.

Operations personnel should establish processes to perform an incremental upgrade. This type of upgrade means deploying a new microservice version alongside the last version, and then incrementally dialing up traffic to the new version.

An incremental upgrade partitions effects to a user base subset, and it enables the team to perform a smoke test in the live production environment. When a newly deployed microservice fails or delivers a poor user experi-ence (based on A/B test analysis), teams should safely roll back the microservice release. Teams should incorporate a safe and sane rollback capability into their release man-agement process.
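The dial-up-and-roll-back flow can be sketched with a toy router. In production this role is played by a load balancer or service mesh; the version names, weights and placeholder smoke test below are illustrative:

```python
import random

class CanaryRouter:
    """Toy stand-in for a load balancer splitting traffic between versions."""

    def __init__(self, stable, canary, canary_weight=0.0):
        self.stable, self.canary = stable, canary
        self.canary_weight = canary_weight  # fraction of traffic to new version

    def route(self):
        """Choose which version serves one incoming request."""
        return self.canary if random.random() < self.canary_weight else self.stable

    def dial_up(self, step=0.25):
        """Incrementally shift more traffic onto the new version."""
        self.canary_weight = min(1.0, self.canary_weight + step)

    def rollback(self):
        """Send all traffic back to the last known-good version."""
        self.canary_weight = 0.0

router = CanaryRouter(stable="v1", canary="v2")
router.dial_up()          # a user-base subset now exercises v2
smoke_test_passed = True  # placeholder for real health checks / A/B analysis
if not smoke_test_passed:
    router.rollback()     # safe, sane rollback: weight returns to zero
```

Because the old version keeps running at full capacity throughout, rollback is a weight change rather than a redeploy, which is what makes it safe to perform in the live production environment.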

PREPARE FOR DYNAMIC NETWORK PROVISIONING

Microservices also can change the speed and composition of virtual networks and services that support them. Operations teams and developers must think about how to decompose network services into those that are more application centric (load balancing, caching, performance, monitoring, app security) and those that are not (DDoS protection, VPNs), said Lori MacVittie, principal at F5 Networks, an application infrastructure provider.

Those services that are more application centric become tied to the application, resulting in a per-application service model. In essence, every microservice can be provisioned with a set of application services that go along with it. That bundle of application and application services needs to be provisioned and configured during the release process. “Coordination between dev and ops has to ramp up to make sure that when the microservice is ready to be released, so are its services,” MacVittie said.

Scenarios Driving Microservices

ENTERPRISES HAVE MANY reasons to adopt microservices, said Roman Iuvshin, lead DevOps engineer at Codenvy, a cloud-based integrated development platform that uses microservices.

1. The service can be self-maintained by a small group of people with few or no couplings to other services. Microservice teams can deploy and develop on their own, and own the results.

2. The client application must perform at maximum speeds. Microservices allow Codenvy to perform different developer tasks on different clusters, even though it feels like accessing a single VM. The net impact is less thrashing and blocking, and a more seamless experience.

3. The skills of the technology teams that own different components vary. Codenvy has specialists in big data, distributed systems and Web development who are not always on the same team. These specialists can build microservices in a stack that is optimized for their interests, skill sets and the needs of the service itself.

Provisioning these microservices to the cloud also raises questions of predictability. When applications run into problems, most public providers are not upfront about sharing details of their network infrastructure with customers, MacVittie said.

Security also needs to be considered when planning operations infrastructure. Every malicious request that is stopped by a security service is one that is not causing undue stress on the application and, in turn, not harming application performance. This is probably the least often considered variable for application performance, and one that can be remedied by specifying the right security service.

IMPLEMENT SERVICE DISCOVERY

Loose coupling is a key principle of microservice architectures. Individual components need a separate service discovery infrastructure to find the IP address of the service to which they need to connect. The service discovery layer must be just as dynamic as the infrastructure in a cloud environment, where VMs are frequently created and destroyed.

“You can think of a service discovery tool as the connective webbing of your infrastructure,” said Kevin Fishner, director of customer success at HashiCorp, a data center management tool provider. Good service discovery tools include Consul, etcd and ZooKeeper. Automation tools to help configure dynamically provisioned services include Puppet, Chef, Ansible and Salt.

Organizations should also consider infrastructure creation and management tools such as Terraform, CloudFormation and Heat, Fishner said.
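What those discovery tools provide can be illustrated with a toy in-memory registry. The service names and addresses are made up, and real systems such as Consul or ZooKeeper add health checks, consistency and a DNS/HTTP interface on top of this basic idea:

```python
import random
from collections import defaultdict

class Registry:
    """Toy service registry: service name -> set of live (host, port) pairs."""

    def __init__(self):
        self._instances = defaultdict(set)

    def register(self, name, host, port):
        """Called by an instance as it comes up."""
        self._instances[name].add((host, port))

    def deregister(self, name, host, port):
        """Called (or triggered by a failed health check) as it goes away."""
        self._instances[name].discard((host, port))

    def resolve(self, name):
        """Client side: pick one live address for the service, or None."""
        live = list(self._instances.get(name, ()))
        return random.choice(live) if live else None

registry = Registry()
registry.register("inventory", "10.0.0.5", 8080)    # VM created
registry.register("inventory", "10.0.0.6", 8080)
registry.deregister("inventory", "10.0.0.5", 8080)  # VM destroyed
addr = registry.resolve("inventory")
```

The key property is that clients never hard-code addresses: as VMs are created and destroyed, `resolve` keeps returning whatever is currently alive.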

PLAN FOR SUCCESS

The path of transitioning to microservices is not necessarily easy, despite what some might say. “While the theory of microservices is about giving each team that owns the service the ability to independently deploy their service as long as they adhere to their contract, that rarely happens,” said Tyler Jewell, CEO of Codenvy (see “Scenarios Driving Microservices”).

Codenvy’s users interact with an application that is a single system that depends upon dozens of microservices. When they release functionality to the end user, that functionality depends upon a variety of microservices, many of which also need to evolve their capabilities. This requires that the collective set of capabilities (application and microservices) must deploy simultaneously. “Because of this, we do have to invest in planning and release coordination; it’s not quite as easy as a free-for-all of any deployment at any stage,” Jewell said.

GEORGE LAWTON has written over 3,000 technology news stories over the last 20 years. He lives in the San Francisco Bay area. You can reach him directly at [email protected] or follow him on Twitter: @glawton.


Overheard @ MI Summit 2015


“ I hate the games they play in the storage industry. My least favorite term is ‘effective capacity.’ The key is usable capacity.” SCOTT LOWE, founder, The 1610 Group

“ The key promise of cloud computing is reduced costs.” DAVID HOFF, CTO Cloud Sherpas

“ I was supposed to have a flying car by now.” PATRICK BENSON, principal sales engineer, Coho Data

“ If you have a lot of established infrastructure, you probably have a lot of infrastructure that isn’t changing.” DARWIN SANOY, CTO and DevOps Engineer, Qompat

“ Who cares about the server? The server’s part of an overall cluster.” ROBERT GREEN, principal of Enfinitum Consulting


OUR MACHINES, THANKS to all the data we put into them, are getting smarter. Judging by some of the media coverage of machine learning, you’d think that IT is coming alive with artificial intelligence—Terminator style. Should we be afraid to step alone into the data center late at night?

In all seriousness, machine learning is a key part of how big data is bringing operational intelligence into the heart of our organizations. But while machine learning is fascinating, it gets complex very quickly. We can’t all be data scientists, but we in IT need to learn about how our machines are learning.

DEMYSTIFYING MACHINE LEARNING

Machine learning isn’t really about our IT infrastructure suddenly growing independent artificial intelligence that surpasses human reasoning (in science fiction, this is known as the singularity). What we are seeing more and more are practical and achievable goals for machine learning, such as finding usable patterns in our data and then applying those patterns in ways to make predictions. Often these predictive models are used in operational processes to optimize some kind of ongoing decision-making, but they can also provide key insight and information to inform strategic decisions.

The basic premise of machine learning is to train an algorithm so that when it is given specific input data it will predict an output value within some probabilistic bounds. It’s important to keep in mind that machine learning today is inductive, not deductive. It leads to probabilistic correlations, not definitive conclusions.

The process of building these algorithms is often called predictive modeling. Once you have “learned” such a model on the data you have, you can sometimes examine it directly for insight into that original data, and/or apply the model to new data to predict something important to the business. Broadly, a model’s output can be a classification of something, a likely outcome, a hidden relationship or attribute, or even an estimate of value.

THE NEXT BIG THING

Machine Learning for IT Dummies

Believe it or not, machine learning is not just what happens when the Terminator protects John Connor. BY MIKE MATCHETT


Typically, we are trying to predict a value that is categorical like a label, color, membership or quality. Does our subject belong to a set of customers that we should try to retain, or that will buy something, or that will respond favorably to an offer? Our prediction can also be numerical if we are concerned with estimating quantities or value on a continuous scale. The output type helps determine the best learning method, and affects the measurements we might use to judge the quality of the modeling.
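A minimal sketch of that kind of categorical prediction uses a nearest-neighbor rule on made-up training data (emails described here by counts of links and misspellings); it is an illustration of the train-then-predict shape, not any particular production method:

```python
import math

# Training data with the true outcome already attached:
# ([links, misspellings], label). Values are made up for illustration.
TRAINING = [
    ([0.0, 1.0], "ham"),
    ([1.0, 0.0], "ham"),
    ([8.0, 7.0], "spam"),
    ([9.0, 9.0], "spam"),
]

def predict(features):
    """Predict the label of the closest training example (1-nearest-neighbor)."""
    _, label = min(TRAINING, key=lambda pair: math.dist(pair[0], features))
    return label

print(predict([7.5, 8.0]))  # a link-heavy, typo-ridden email -> "spam"
```

Even this toy shows the inductive character described above: the output is a label suggested by proximity to past examples, a probabilistic correlation rather than a definitive conclusion.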

WHO SUPERVISES?

Machine learning methods are usually divided into two groups: supervised and unsupervised. The difference isn’t whether or not algorithms are free to misbehave, but rather whether they learn from training data that has the true outcome available—previously determined and added to the data set to provide supervision—or instead just try to discover any natural patterns that are present within a given set of data. Most business use cases of predictive modeling exploit supervised methods on training data, and usually aim to predict if a given instance (e.g., an email, person, company, or transaction) belongs to an interesting category (e.g., spam, likely buyer, good for credit, gets follow-up offer).

Unsupervised methods can be useful to gain new insight, especially if we don’t know what exactly we are looking for before we start. Unsupervised learning can produce clustering and hierarchy charts that show inherent relationships in the data, and can also be used to discover which fields of data seem dependent or independent, or rules that describe, summarize or generalize the data. In turn, these insights can be used to help build better predictive models.
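An unsupervised pass can be sketched the same way: no labels are supplied, and a few rounds of k-means discover whatever natural groupings exist. The points, starting centers and choice of two clusters are made-up illustrations:

```python
import math

POINTS = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),   # one natural grouping
          (8.0, 8.2), (8.3, 7.9), (7.9, 8.1)]   # another

def kmeans(points, centers, rounds=10):
    """Lloyd's algorithm: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(rounds):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*pts)) if pts else c
                   for pts, c in zip(clusters, centers)]
    return centers, clusters

centers, clusters = kmeans(POINTS, centers=[(0.0, 0.0), (10.0, 10.0)])
# Each cluster now holds one of the two natural groupings in the data.
```

Nothing told the algorithm which points belong together; the structure it reports is exactly the kind of inherent relationship unsupervised methods surface.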

Obviously, this is just the jumping-off point into a lot of deeper data science. Building models is an iterative exercise, and can take a lot of data scrubbing and experimentation. There are some automated and “guided” modeling tools emerging that promise to reduce the need for data scientists, but we expect those to have the most payback in areas that are well understood and common across industries. For real differentiation, it’s likely that you’ll need to dig in yourself.

MIKE MATCHETT is a senior analyst and consultant at Taneja Group. Contact him via email at [email protected].



IN THE MIX

Automation in Name Only

Many organizations think they have meaningful automation in place. They don’t. BY BOB PLANKERS

THE MORE I work with IT automation tools the more I realize that very few organizations are actually doing meaningful automation. Sometimes there’s the illusion of automation, such as when folks use a front end like VMware vRealize Automation. A tool like that has extensive automation capabilities, and even has “automation” in its name. Look closely, though, and you’ll see the automation is being used for things like approval workflows and sending email to humans to get them to do work, like entering items in a configuration management database, or setting up replication. I thought automation was supposed to reduce the work humans do.

I’ve been thinking about this quite a bit lately, and I think it’s due to one big problem: IT people don’t know anything about programming the computers they work with.

There was a time, a long time ago, when you absolutely needed to understand the computers you used at a very deep level. Actually, it wasn’t all that deep—machines had many fewer layers of abstraction, much thinner operating systems, and applications were much closer to the hardware. Commercial software was scarce, and businesses tended to write their own applications. Sure, there were non-programmers in IT back then, too, but the ratio of programmers to others was much higher, mostly out of necessity.

Then personal computers and commercial software came along. With off-the-shelf software programmers weren’t a necessity anymore. IT staff stopped learning to code and started focusing on vendor certifications. It was enough to know how something worked, and much less important to know why it worked. And over time, the computer science types that could be found in old-school IT departments were replaced by business school graduates. Don’t get me wrong—I’m not saying the MBAs don’t serve a valuable purpose in IT. They do. But very few of them know how to program in a way that’s meaningful in a cloud setting. And now, as we announce our desires to have private clouds and automation, we look around and see that there’s nobody left in our own organization to do this kind of work, even at the most basic level.

How do we fix this? It seems that there are two options.

One option is that we don’t fix it, and just let professional services do all this integration for us. I’m skeptical, though. Consultants don’t have our organization’s best interests in mind. They want to do a job, call it done and move on. They’re not going to be there when something breaks. They’re not going to be there when it needs a security patch. And they don’t improve our organization’s understanding of the technology we rely on.

The other way to fix it is to resurrect the idea of the systems programmer, and hire some programmers of our own. Should they have everything we usually look for in an IT staff hire? Yes. But instead of the business degree perhaps we look to the computer sciences and software development fields. We need people who understand why computers do what they do, and can make them do things for us on our own terms, not a vendor’s.

We also need to support these programmers well. We need to hire more than one programmer, to provide collaboration, backup and internal support. We also need a promotion track for technical folks that doesn’t have to lead into management: Programmers should be able to gain seniority, earn promotions and work as team leaders without being forced to choose between resigning and a traditional management role.

I truly believe that IT will only realize its automation and cloud dreams if we re-embrace our programming roots, especially by making our organizations more technical again.

BOB PLANKERS is a virtualization and cloud architect at a major Midwestern university.



FOR YEARS WE’VE talked about the “locked-down desktop” as a major goal of desktop management—whether you’re using virtual or physical desktops. The locked-down desktop (also called the non-persistent desktop) means Windows desktops are fully secured and locked down. A user can’t make any changes (apart from simple things like setting desktop wallpapers and changing colors and fonts). Anything else they change is wiped away the next time they log on.

The benefits of locked-down desktops are huge. They lessen support costs because users can’t break things. They improve security because viruses and malware can’t raise havoc with the users’ admin rights. And, when all desktops are the same, software updating and patching becomes far simpler.

The biggest reason to lock down desktops is to restrict what we call user-installed apps, or UIAs. Quite simply, users can’t install “their” apps onto “their” desktops if the desktop is locked down. But while we’ve recognized the value of the tightly controlled desktop for decades, it’s been difficult to implement. The reason for this is simple: user rebellion. Users’ desktops are personal to them (even when it’s corporate-owned hardware), and most users object to IT locking them out of “their” desktops.

Several software vendors have tried to solve the UIA problem through all sorts of wizardry, from virtualization to application bubbles and layering. Unfortunately, these products have gained no significant traction, and the “UIA problem” is still a problem.

Or is it? I’ve worked with enterprise desktops for 20 years. What I’ve started to notice lately is that the UIA problem doesn’t seem like much of a problem anymore. Five years ago it was all anyone could talk about. But today? Not so much.

In 2015, most of the non-corporate apps that users want access to are not traditional apps at all. They’re websites and Web apps. So while in 1995 users would walk up to a locked-down desktop and get mad because they couldn’t install PointCast, in 2015, they say, “Hey,

END-USER ADVOCATE

Lock Down Those DesktopsIt’s 2015, and the locked-down desktop is finally a realistic possibility. BY BRIAN MADDEN

Home

Editor’s Letter

The Hyperconver-gence Effect

#Hashtag: Twitter on #OpenStack

Virtualization Under Siege

Survey Says: Disaster Recovery

Man Up for Microservices

Overheard @ MI Summit 2015

The Next Big Thing: Machine Learning for IT Dummies

In the Mix: Automation in Name Only

End User Advocate: Lock Down Those Desktops

Page 25: MI Modern Infrastructure - Bitpipedocs.media.bitpipe.com/io_12x/io_124806/item_1168275/MI... · 2015-06-23 · Modern Infrastructure Creating tomorrow’s data centers JUNE 2015,



does that locked-down desktop have a browser? Great! I’m fine.”

The second change is that every user has a smartphone now, and many have iPads. I can’t tell you how much time I spent on user complaints about not having iTunes on locked-down desktops in 2005. It’s not a problem now because users have access to their entire music libraries—not to mention most of the other apps they care about—in their pockets.

Think about your own collection of non-corporate apps. If you walked into a job in 2005 and they said, “Here’s your desktop. It’s locked down. You can change nothing,” you might have quit right there on the spot! But in 2015, your reaction would be more like, “Does it have a browser? Can I have my iPhone on my desk while I’m working? Meh. It’s fine then.”

So if you’ve avoided locking down desktops for the past 20 years, maybe now is the time to revisit the idea. The benefits are huge, and users’ objections are mostly a thing of the past.

BRIAN MADDEN is an opinionated, supertechnical, fiercely independent desktop virtualization and consumerization expert. Write to him at [email protected].



Modern Infrastructure is a SearchDataCenter.com e-publication.

Margie Semilof, Editorial Director

Alex Barrett, Editor in Chief

Adam Hughes, Managing Editor

Phil Sweeney, Managing Editor

Patrick Hammond, Associate Features Editor

Linda Koury, Director of Online Design

Joe Hebert, Production Editor

Rebecca Kitchens, Publisher, [email protected]

TechTarget, 275 Grove Street, Newton, MA 02466 www.techtarget.com

© 2015 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher. TechTarget reprints are available through The YGS Group.

About TechTarget: TechTarget publishes media for information technology professionals. More than 100 focused websites enable quick access to a deep store of news, advice and analysis about the technologies, products and processes crucial to your job. Our live and virtual events give you direct access to independent expert commentary and advice. At IT Knowledge Exchange, our social community, you can get advice and share solutions with peers and experts.

COVER PHOTOGRAPH AND PAGE 3: MAMANAMSAI/FOTOLIA

Follow @ModernInfra on Twitter!
