UNITED STATES PATENT AND TRADEMARK OFFICE
____________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________
Dot Hill Systems Corp.
Petitioner,
v.
Crossroads Systems, Inc.
Patent Owner.
____________
IPR2015-00822
U.S. Patent No. 6,425,035
____________
PETITION FOR INTER PARTES REVIEW
Table of Contents

EXHIBIT LIST
I. This Petition Presents the Same Grounds as IPR2014-01197
II. INTRODUCTION
III. MANDATORY NOTICES
    A. Real Party-In-Interest
    B. Related Matters
    C. Lead and Back-Up Counsel
    D. Service Information
IV. PAYMENT OF FEES
V. REQUIREMENTS FOR INTER PARTES REVIEW
    A. Grounds for Standing
    B. Identification of Challenge
        1. The Specific Art and Statutory Ground(s) on Which the Challenge Is Based
        2. How the Construed Claims Are Unpatentable Under the Statutory Grounds Identified in 37 C.F.R. § 42.204(b)(2) and Supporting Evidence Relied Upon to Support the Challenge
VI. THE '035 PATENT
    A. The Preferred Embodiment of the '035 Patent
    B. Reexamination of the '035 Patent and the Parent and Grandparent of the '035 Patent
VII. BROADEST REASONABLE CONSTRUCTION
VIII. GROUNDS OF UNPATENTABILITY
    A. Claims 1-14 Are Rendered Obvious under 35 U.S.C. § 103(a) by CRD-5500 User Manual in view of CRD-5500 Data Sheet and Smith
        1. Introduction of the CRD-5500 References
        2. Introduction of the Smith Reference
        3. The Combined System of CRD-5500 User Manual, CRD-5500 Data Sheet and Smith
        4. Correspondence Between Claims 1-14 and the Combined System of CRD-5500 User Manual, CRD-5500 Data Sheet and Smith
    B. Claims 1-4 and 7-14 Are Rendered Obvious by Kikuchi taken in Combination with Bergsten
        1. Introduction of the Kikuchi Reference
        2. Introduction of the Bergsten Reference
        3. The Combined System of Kikuchi and Bergsten
        4. Correspondence Between Claims 1-4 and 7-14 and the Combined System of Kikuchi and Bergsten
    C. Claims 5 and 6 Are Rendered Obvious by Kikuchi taken in Combination with Bergsten and Smith
    D. Claims 1-4 and 7-14 Are Rendered Obvious by Bergsten taken in Combination with Hirai
        1. Introduction of the Hirai Reference
        2. The Combined System of Bergsten and Hirai
        3. Correspondence Between Claims 1-4 and 7-14 and the Combined System of Bergsten and Hirai
    E. Claims 5 and 6 Are Rendered Obvious by Bergsten taken in Combination with Hirai and Smith
IX. EXPLANATION OF NON-REDUNDANCY
X. CONCLUSION
EXHIBIT LIST1

1 All of the following exhibits are found in Case No. IPR2014-01197, the proceeding Petitioner seeks to join. Petitioner has included all exhibits and kept their original numbering for the purposes of consistency and accurate internal cross-citations.

1001 U.S. Patent No. 6,425,035 (the "'035 Patent")
1002 Select Portions of File History of the '035 Patent
1003 CRD-5500 SCSI RAID Controller User's Manual ("CRD-5500 User Manual")
1004 CRD-5500 SCSI RAID Controller Data Sheet ("CRD-5500 Data Sheet")
1005 Smith et al., "Tachyon: A Gigabit Fibre Channel Protocol Chip," Hewlett-Packard Journal, October 1996 ("Smith")
1006 U.S. Patent No. 6,219,771 to Kikuchi et al. ("Kikuchi")
1007 U.S. Patent No. 6,073,209 to Bergsten ("Bergsten")
1008 JP Patent Application Publication No. Hei 5[1993]-181609 to Hirai ("Hirai")
1009 Infringement Contentions in Crossroads Systems, Inc. v. Oracle Corporation, W.D. Tex. Case No. 1-13-cv-00895, Crossroads Systems, Inc. v. Huawei Technologies Co. Ltd. et al., W.D. Tex. Case No. 1-13-cv-01025, and Crossroads Systems, Inc. v. NetApp, Inc., W.D. Tex. Case No. 1-14-cv-00149
1010 Declaration of Professor Chase, Professor of Computer Science at Duke University
1011 Cheating the I/O Bottleneck: Network Storage with Trapeze/Myrinet
1012 Interposed Request Routing for Scalable Network Storage
1013 Cut-Through Delivery in Trapeze: An Exercise in Low-Latency Messaging
1014 Structure and Performance of the Direct Access File System
1015 Implementing Cooperative Prefetching and Caching in a Globally-Managed Memory System
1016 Network I/O with Trapeze
1017 A Cost-Effective, High-Bandwidth Storage Architecture
1018 RAID-II: A High-Bandwidth Network File Server
1019 Payload Caching: High-Speed Data Forwarding for Network Intermediaries
1020 Petal: Distributed Virtual Disks
1021 File Server Scaling with Network-Attached Secure Disks
1022 Failure-Atomic File Access in an Interposed Network Storage System
1023 U.S. Patent No. 6,308,228 to Yocum et al. ("Yocum")
1024 Select Portions of File History of Reexamination Control No. 90/007,123 (U.S. Patent No. 5,941,972)
1025 Select Portions of the File History of Reexamination Control No. 90/007,124 (U.S. Patent No. 6,421,753)
1026 Plaintiff Crossroads Systems, Inc.'s Objections and Responses to Defendants' First Set of Common Interrogatories in Crossroads Systems, Inc. v. Oracle Corporation, W.D. Tex. Case No. 1-13-cv-00895, Crossroads Systems, Inc. v. Huawei Technologies Co. Ltd. et al., W.D. Tex. Case No. 1-13-cv-01025, and Crossroads Systems, Inc. v. NetApp, Inc., W.D. Tex. Case No. 1-14-cv-00149
1027 Storagepath Fibre Channel Drive System, SWS/Storagepath, available at web.archive.org/web/19970114010450/http://www.storagepath.com/fibre.htm, archived January 14, 1997
1028 Technology Brief, "Strategic Direction for Compaq Fibre Channel-Attached Storage," Compaq Computer Corporation, October 14, 1997
1029 Tantawy (ed.), "Fibre Channel" (Ch. 5) of High Performance Networks, Kluwer Academic Publishers, 1994
1030 Deel et al., "Moving Uncompressed Video Faster Than Real Time," Society of Motion Picture and Television Engineers, Inc., December 1996
1031 Emulex LightPulse Fibre Channel PCI Host Adapter, Emulex Corporation, available at web.archive.org/web/19980213052222/http://www.emulex.com/fc/lightpulse2.htm, archived February 13, 1998
1032 Select Portions of File History of Reexamination Control Nos. 90/007,125 and 90/007,317 (U.S. Patent No. 6,425,035)
1033 Local Area Networks Newsletter, Vol. 15, No. 2, Information Gatekeepers Inc., February 1997
1034 Litigation Complaint in Crossroads Systems, Inc. v. Oracle Corporation, W.D. Tex. Case No. 1-13-cv-00895
1035 Litigation Complaint in Crossroads Systems, Inc. v. Huawei Technologies Co. Ltd. et al., W.D. Tex. Case No. 1-13-cv-01025
1036 Litigation Complaint in Crossroads Systems, Inc. v. NetApp, Inc., W.D. Tex. Case No. 1-14-cv-00149
1037 Declaration of Monica S. Ullagaddi authenticating Ex. 1004, Ex. 1027 and Ex. 1031
1038 Litigation Complaint in Crossroads Systems, Inc. v. Dot Hill Systems Corp., W.D. Tex. Case No. 1:13-cv-00800
I. This Petition Presents the Same Grounds as IPR2014-01197
This inter partes review petition presents challenges that are identical to those on which trial was instituted in IPR2014-01197. Paper No. 13. This petition copies verbatim the challenges set forth in the petition in IPR2014-01197 and relies upon the same evidence, including the same expert declaration. This petition is accompanied by a motion for joinder.
II. INTRODUCTION
Petitioner Dot Hill Systems Corp. ("Petitioner") respectfully requests inter partes review of claims 1-14 of U.S. Patent No. 6,425,035 (the "'035 Patent," attached as Ex. 1001) in accordance with 35 U.S.C. §§ 311-319 and 37 C.F.R. § 42.100 et seq.

The '035 Patent is directed to a storage router that serves as a bridge between Fibre Channel host devices and storage devices. More specifically, the '035 Patent states that the storage router of the present invention "is a bridge device that enables the exchange of SCSI command set information between application clients on SCSI bus devices and the Fibre Channel links." (Ex. 1001 at 5:34-38) The '035 Patent explains that this method is accomplished with native low level block protocols ("NLLBP"), which enhances system performance because such an approach "does not involve the overhead of high level protocols and file systems required by network servers." (Id. at 5:1-5) The storage router "[also] applies access controls" such that "virtual local storage" can be established in remote SCSI storage devices for "[w]orkstations on the Fibre Channel link." (Id. at 5:38-41)
Systems corresponding closely to the '035 Patent's preferred embodiment were taught by prior art that was not before the Examiner or was not applied in a prior art rejection. The CRD-5500 SCSI RAID Controller by CMD Technology, Inc. was detailed in product manuals and data sheets released more than a year before the earliest priority date. Additionally, several other combinations of prior art predictably yield combined systems in which a storage controller bridges between a Fibre Channel ("FC") host device and a storage disk array and provides access controls and virtual local storage for host devices connected to FC transport links. For instance, a skilled artisan would have readily combined the teaching of access controls in U.S. Patent No. 6,219,771 to Kikuchi with the virtualized storage controllers taught in U.S. Patent No. 6,073,209 to Bergsten. (See Ex. 1010 at ¶¶ 87-98) The access control techniques taught in JP Patent Application Publication No. Hei 5[1993]-181609 to Hirai would likewise have been readily and predictably combined with the storage controllers of the Bergsten system. (See Ex. 1010 at ¶¶ 144-51)
III. MANDATORY NOTICES
Pursuant to 37 C.F.R. § 42.8(a)(1), Petitioner provides the following mandatory disclosures.
A. Real Party-In-Interest
The real party-in-interest is Dot Hill Systems Corporation.
B. Related Matters
As of the filing date of this petition, U.S. Patent No. 6,425,035 (the "'035 Patent") is subject to inter partes review in Case Nos. IPR2014-01197, filed July 23, 2014; IPR2015-00777, filed February 19, 2015; and IPR2014-01226, filed July 31, 2014.

Pursuant to 37 C.F.R. § 42.8(b)(2), Petitioner states that the '035 Patent is asserted in a co-pending litigation matter captioned Crossroads Systems Inc. v. Dot Hill Systems Corp., W.D. Tex. Case No. 1:13-cv-00800 (Ex. 1038). All other related and co-pending litigation matters are set forth in Exhibit 1026 and Exhibits 1034-36.
C. Lead and Back-Up Counsel
Pursuant to 37 C.F.R. § 42.8(b)(3), Petitioner provides the following designation of counsel: lead counsel is Orion Armon (Reg. No. 65,421) and back-up counsel is J. Adam Suppes. Petitioner requests authorization to file a motion for J. Adam Suppes, an experienced patent litigator and counsel for Petitioner, to appear pro hac vice.
D. Service Information
Pursuant to 37 C.F.R. § 42.8(b)(4), papers concerning this matter should be served on the following:

Address: Orion Armon and J. Adam Suppes
Cooley LLP, ATTN: Patent Group
1299 Pennsylvania Ave., NW, Suite 700
Washington, DC 20004

Email: [email protected], [email protected]
Telephone: (720) 566-4119
Fax: (720) 566-4099
IV. PAYMENT OF FEES
This Petition requests review of claims 1-14 of the '035 Patent and is accompanied by a payment of $23,000. 37 C.F.R. § 42.15. No excess claims fees are required. This Petition meets the fee requirements of 35 U.S.C. § 312(a)(1).
V. REQUIREMENTS FOR INTER PARTES REVIEW
As set forth below and pursuant to 37 C.F.R. § 42.104, each requirement for inter partes review of the '035 Patent is satisfied.
A. Grounds for Standing
Petitioner certifies that it is not estopped or barred from requesting inter partes review of the '035 Patent because this petition is accompanied by a motion for joinder. The one-year time bar of 35 U.S.C. § 315(b) does not apply to a request for joinder. 35 U.S.C. § 315(b) (final sentence) ("[t]he time limitation set forth in the preceding sentence shall not apply to a request for joinder under subsection (c)"); 37 C.F.R. § 42.122(b).
B. Identification of Challenge
Pursuant to 37 C.F.R. § 42.104(b) and (b)(1), Petitioner requests inter partes review of claims 1-14 of the '035 Patent, and further requests that the Patent Trial and Appeal Board ("PTAB") invalidate the same.
1. The Specific Art and Statutory Ground(s) on Which the Challenge Is Based
Pursuant to 37 C.F.R. § 42.204(b)(2), inter partes review of the '035 Patent is requested in view of the following grounds:

(a) Claims 1-14 are rendered obvious under 35 U.S.C. § 103(a) by the combination of the CRD-5500 SCSI RAID Controller User's Manual ("CRD-5500 User Manual," Ex. 1003), CRD-5500 SCSI RAID Controller Data Sheet ("CRD-5500 Data Sheet," Ex. 1004), and Smith et al., "Tachyon: A Gigabit Fibre Channel Protocol Chip," Hewlett-Packard Journal, October 1996 ("Smith," Ex. 1005);

(b) Claims 1-4 and 7-14 are rendered obvious under 35 U.S.C. § 103(a) by U.S. Patent No. 6,219,771 to Kikuchi et al. ("Kikuchi," Ex. 1006) in view of U.S. Patent No. 6,073,209 to Bergsten ("Bergsten," Ex. 1007);

(c) Claims 5 and 6 are rendered obvious under 35 U.S.C. § 103(a) by Kikuchi in view of Bergsten and Smith;

(d) Claims 1-4 and 7-14 are rendered obvious under 35 U.S.C. § 103(a) by Bergsten in view of JP Patent Application Publication No. Hei 5[1993]-181609 to Hirai ("Hirai," Ex. 1008); and

(e) Claims 5 and 6 are rendered obvious under 35 U.S.C. § 103(a) by Bergsten in view of Hirai and Smith.
2. How the Construed Claims Are Unpatentable Under the Statutory Grounds Identified in 37 C.F.R. § 42.204(b)(2) and Supporting Evidence Relied Upon to Support the Challenge
Pursuant to 37 C.F.R. § 42.204(b)(4), an explanation of how claims 1-14 of the '035 Patent are unpatentable, including the identification of where each claim element is found in the prior art, is provided in Section VIII below. Pursuant to 37 C.F.R. § 42.204(b)(5), the exhibit numbers of the supporting evidence relied upon to support the challenges and the relevance of the evidence to the challenges raised, including identifying specific portions that support the challenges, are provided in Section VIII.
VI. THE '035 PATENT

A. The Preferred Embodiment of the '035 Patent

The '035 Patent specification states that the storage router of the present invention "is a bridge device that enables the exchange of SCSI command set information between application clients on SCSI bus devices and the Fibre Channel links." (Ex. 1001 at 5:34-38) According to this preferred embodiment, storage network 50 includes a Fibre Channel high speed serial interconnect 52 (id. at 3:67-4:2) and a storage router 56 that enables a large number of workstations 58 to be interconnected on a common storage transport and to access common storage devices 60, 62 and 64 through native low level, block protocols (id. at 4:2-6).
Storage router 56 also includes enhanced functionality to implement security controls and routing such that each workstation 58 can have access to a specific subset of the overall data stored in storage devices 60, 62 and 64, which has the appearance and characteristics of local storage and is referred to as "virtual local storage." (Id. at 4:7-11) Storage router 56 performs access control and routing such that each workstation 58 has controlled access to only the specified partition of storage device 62, which forms virtual local storage for the workstation 58. (Id. at 4:28-31)

To accomplish this function, storage router 56 can include routing tables and security controls that define storage allocation for each workstation 58. (Id. at 4:62-64) This provides the advantage that collective backups and other collective administrative functions may be performed more easily. (Id. at 5:67-6:1) Further, "[b]ecause storage access involves native low level, block protocols" and does not involve "the overhead of high level protocols and file systems required by network servers," this approach does not impede or slow system performance. (Id. at 5:1-5)
B. Reexamination of the '035 Patent and the Parent and Grandparent of the '035 Patent
The '035 Patent was challenged in an ex parte reexamination (see Ex. 1032) in which the Patent Owner distinguished over Spring and Oeda. (See Ex. 1032 at pp. 76-116, Patent Owner's Response dated July 22, 2005 at pp. 7-41) More particularly, the Patent Owner argued that Spring and Oeda "either do not provide remote access to storage devices or, for embodiments of those systems that may be able to provide remote access to storage devices, require the use of higher level network protocols (and therefore cannot allow access to the remote storage devices using NLLBPs)" and, further, that Spring and Oeda fail to disclose mapping and access controls. (Id. at p. 91, Patent Owner's Response dated July 22, 2005 at p. 16)
In response to this argument, the Examiner issued a Notice of Intent to Issue a Reexamination Certificate ("NIRC") which provided the following reasons for confirmation:

The prior art disclosed by the patent owner and cited by the Examiner fail to teach or suggest, alone or in combination, all the limitations of the independent claims (claims 1, 7 and 11), particularly the map/mapping feature which is a one-to-one correspondence, as given in a simple table, the map physically resident on a router, whereby the router forms the connection between two separate entities over different transport mediums, such that neither entity determines where data is to be sent, but rather, the router solely dictates where the data will be sent; also the NLLBP feature referring to a fundamental low level protocol defined by a specification/standard that is well known to one of ordinary skill in the art, where the NLLBP is used at the router for communications with both the first and second transport medium. The SCSI protocol/standard is considered a NLLBP. TCP/IP, e.g., used in Ethernet communications, however, is not considered to be a NLLBP. (Id. at p. 13, NIRC at p. 3)
As such, the Examiner agreed that Spring's Ethernet-to-SCSI system did not satisfy the NLLBP limitation because the Ethernet side of the bridge used TCP/IP. (Id.) The Examiner also found that Spring's Ethernet-to-SCSI bridge did not teach a map/mapping feature that is a one-to-one correspondence given in a simple table. (Id.) Furthermore, in the related reexaminations of grandparent U.S. Patent No. 5,941,972 and parent U.S. Patent No. 6,421,753, the Patent Owner argued that Spring's Ethernet-to-SCSI system does not allow access using NLLBP. (See Ex. 1024 at pp. 2066-67, Patent Owner's Response dated July 22, 2005 at pp. 21-22; see Ex. 1025 at pp. 498-99, Patent Owner's Response dated July 22, 2005 at pp. 19-20) The Examiner presented nearly identical comments in the NIRCs issued in the reexaminations of the grandparent and parent patents.
VII. BROADEST REASONABLE CONSTRUCTION
Petitioner bases this petition upon the U.S. Patent and Trademark Office's ("USPTO") broadest reasonable interpretation standard applied in PTAB proceedings. All claim terms not specifically addressed in this section have been accorded their broadest reasonable interpretation in light of the '035 Patent, including their plain and ordinary meaning. Petitioner's position regarding the scope of the claims under their broadest reasonable interpretation is not to be taken as stating any position regarding the appropriate scope to be given the claims in a court or other adjudicative body under the different claim interpretation standards that may apply to such proceedings. In particular, Petitioner notes that the standard for claim construction used in district courts differs from the standard applied before the USPTO. Any claim construction offered by Petitioner in this petition is directed to the USPTO standard, and Petitioner does not acquiesce or admit to the constructions reflected herein for any purpose outside of this proceeding.
"Native low-level block protocol" is described in the '035 Patent as being distinct from higher-level protocols that require translation to NLLBP. (Ex. 1001 at 1:15-28; 3:14-25 and 5:1-5) Examples of NLLBPs in the '035 Patent include SCSI-2 commands and SCSI-3 Fibre Channel Protocol (FCP) commands. (See, e.g., Ex. 1001 at 6:39-58) The '035 Patent distinguishes prior art systems that provided access through network protocols that the "[network] server must translate into low level requests to the storage device." (Id. at 1:51-54)
During the reexamination of the grandparent patent, the Patent Owner argued that a NLLBP is "a set of rules or standards that enables the exchange of information without the overhead of high-level protocols and file systems typically required by network servers," citing the Markman Order of the U.S. District Court for the Western District of Texas in Crossroads v. Chaparral Network Storage, Inc., Civil Action No. A-00-CA-217-SS and Crossroads Systems (Texas), Inc. v. Pathlight Technology, Inc., Civil Action No. A-00CA-248-JN. (Ex. 1025 at p. 500, Patent Owner's Response at p. 21) Consistent with this, the Examiner found that "[t]he SCSI protocol/standard is considered a NLLBP. TCP/IP, e.g., used in Ethernet communications, however, is not considered to be a NLLBP." (Id. at p. 14, NIRC at p. 3)
For the foregoing reasons, the broadest reasonable interpretation of NLLBP
includes a protocol, such as the SCSI protocol for SCSI commands, that enables
the exchange of information without the overhead of high-level protocols and file
systems typically required by network servers.
VIII. GROUNDS OF UNPATENTABILITY
The explanations set forth below summarize the grounds of unpatentability.
Each reference is introduced in turn and those introductions are followed by an
explanation of the combined system or method and the supporting rationale.
Thereafter, the correspondence between the combined system or method and each
claim element is explained. Pinpoint citations are provided to the declaration of
Professor Chase (Ex. 1010), which describes in further detail the combined system,
supporting rationale, and the correspondence to the claimed subject matter.
A. Claims 1-14 Are Rendered Obvious under 35 U.S.C. § 103(a) by CRD-5500 User Manual in view of CRD-5500 Data Sheet and Smith
1. Introduction of the CRD-5500 References
The CRD-5500 SCSI RAID Controller User's Manual ("CRD-5500 User Manual," Ex. 1003) and CRD-5500 SCSI RAID Controller Data Sheet ("CRD-5500 Data Sheet," Ex. 1004) were published on November 21, 1996 and December 26, 1996, respectively, over a year before the earliest priority date of the '035 Patent (December 31, 1997). Therefore, the CRD-5500 User Manual and CRD-5500 Data Sheet are prior art to the '035 Patent under 35 U.S.C. § 102(b). The CRD-5500 User Manual was before the Examiner but was not discussed by the Examiner in any office action or referenced in any rejection. The Patent Owner initially presented the CRD-5500 User Manual in the list of references submitted in relation to the ex parte reexamination of parent patent U.S. Patent No. 6,421,753. (See Ex. 1025 at p. 649, List of References Cited by Applicant dated May 24, 2005) The CRD-5500 Data Sheet has never been before an Examiner.
The CRD-5500 User Manual may be presumed authentic under Fed. R. Evid. 901(b)(4), given that it was submitted by the Patent Owner as prior art, and is self-authenticating under Fed. R. Evid. 902(7), given that it bears trade inscriptions demonstrating that the document is a publication by CMD Technology, Inc. released on a date certain. The CRD-5500 Data Sheet is authenticated by the declaration of Monica Ullagaddi, Exhibit 1037.
The CRD-5500 User Manual describes a RAID controller which couples
one or more host devices to virtual local storage on a RAID storage disk array. (Ex.
1003 at 1-1) Devices are connected to the CRD-5500 controller through a number
of I/O module slots configured to receive both host device interface modules and
storage device interface modules. (Id. at 2-1)
Figure 1-1 of the CRD-5500 User Manual illustrates how the controller's
RAID set configuration utility can be used to configure virtual or logical storage
regions by assigning individual disk drives to RAID sets and partitioning the RAID
sets into logical storage regions called redundancy groups. (Ex. 1003 at 1-2) Each
redundancy group may have a particular purpose and, as such, a particular
configuration including, in some examples, striped partitions, data mirroring, or a
combination thereof. (Id.; see also id. at 1-5 and 1-7)
The CRD-5500 controller's "Host LUN [Logical Unit Number] Mapping" feature makes it possible to map RAID sets or redundancy groups (a RAID set or portion/partition thereof) differently to each host. (Id. at 1-1; see also id. at 1-10; see also id. at 4-5) As illustrated in the Host LUN Mapping utility disclosed in the CRD-5500 User Manual, a particular host device (identified as "Channel 0") is allotted access to one or more RAID redundancy groups (e.g., redundancy groups 0, 1, 5, and 6 through 31). The host device is provided an address for accessing each RAID redundancy group through a "Host LUN" (logical unit number, an addressing mechanism). (See, e.g., id. at 4-5; 4-10; and 6-10) An administrator can allocate a particular disk as a redundancy group, such that a host LUN maps to a single physical disk or partition thereof. (See, e.g., id. at 2-3, 2-4, 3-3, 3-4) Accordingly, the Host LUN Mapping utility of the CRD-5500 controller provides virtual local storage to a host device by presenting access to one or more RAID redundancy groups using LUN-based addressing. (Id. at 4-5) Further, the Host LUN Mapping utility allows the CRD-5500 controller to restrict a particular host's access to a given memory region on the RAID array by withholding addresses (i.e., Host LUNs) for particular RAID redundancy groups from that host (e.g., redundancy groups 2 through 4 have been excluded from the list of redundancy groups for which Host LUNs have been assigned to the host illustrated). (See id.; see also id. at 1-1, "You make the same redundancy group show up on different LUNs to different hosts, or make a redundancy group visible to one host but not to another."; id. at 1-11, "[T]he CRD-5500 defines each RAID set or partition of a RAID set as a redundancy group. These redundancy groups may be mapped to host LUNs, either in a direct one-to-one relationship or in a manner defined by the user.")
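For illustration only, the per-host mapping described above can be sketched as a simple lookup table. All names and values below are hypothetical assumptions for explanatory purposes and are not drawn from the CRD-5500 User Manual itself; the actual feature is implemented in controller firmware, not software of this kind:

```python
# Illustrative sketch of per-host LUN mapping: each host channel has its own
# LUN-to-redundancy-group table, so the same redundancy group can appear at
# different LUNs for different hosts, and a group omitted from a host's table
# is simply invisible (inaccessible) to that host.

# host channel -> {host LUN: redundancy group}  (hypothetical assignments)
host_lun_map = {
    0: {0: 0, 1: 1, 2: 5},   # Channel 0 sees groups 0, 1, 5; groups 2-4 withheld
    1: {0: 2, 1: 5},         # Channel 1 sees group 5 at a different LUN
}

def resolve(host_channel: int, lun: int):
    """Return the redundancy group backing (host, LUN), or None if unmapped."""
    return host_lun_map.get(host_channel, {}).get(lun)
```

On this sketch, a request from Channel 0 addressed to LUN 2 reaches redundancy group 5, while the same host's request to an unassigned LUN resolves to nothing, which is the access-restriction effect described above.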
The CRD-5500 Data Sheet notes that the modular design of the CRD-5500
controller supports interfacing with host and/or storage devices via a high speed
serial connection such as a FC transport medium:
Unlike other RAID controllers, CMD's advanced Viper RAID
architecture and ASICs were designed to support tomorrow's
high speed serial interfaces, such as Fiberchannel (FCAL). (Ex.
1004 at p. 1 (emphasis added))
2. Introduction of the Smith Reference
Smith et al., "Tachyon: A Gigabit Fibre Channel Protocol Chip," Hewlett-Packard Journal, October 1996 ("Smith") was published in October of 1996, over a year before the earliest priority date of the '035 Patent (December 31, 1997). Smith is therefore prior art under 35 U.S.C. § 102(b).

Smith describes the off-the-shelf Tachyon controller which is used in the preferred embodiment of the '035 Patent. (Ex. 1001 at 6:30) The Tachyon chip is designed to serve as, among other things, a target adapter between FC host devices and, for example, a SCSI-based storage router by de-encapsulating SCSI commands and responses received at the storage router for internal processing and by encapsulating the SCSI commands and responses prior to sending over a FC link connected to a FC device. (Ex. 1005 at 4) Indeed,
[t]he second major design goal was that Tachyon should support SCSI encapsulation over Fibre Channel (known as FCP). From the beginning of the project, Tachyon designers created SCSI assists to support SCSI initiator transactions. . . . Early in the design, Tachyon only supported SCSI initiator functionality with its SCSI hardware assists. It became evident from customer feedback, however, that Tachyon must support SCSI target functionality as well, so SCSI target functionality was added to Tachyon hardware assists. (Id.)
3. The Combined System of CRD-5500 User Manual, CRD-5500 Data Sheet and Smith
It would have been obvious to one of ordinary skill in the art to combine the CRD-5500 User Manual, the CRD-5500 Data Sheet, and Smith to enhance the communication and storage options of a host device on a FC transport medium, benefit from the Host LUN Mapping feature of the CRD-5500 controller, and avail the host computing device of ubiquitous mass storage applications (e.g., RAID). (Ex. 1010 at ¶¶ 39-44) This combination is specifically suggested in the CRD-5500 Data Sheet, which explains that "CMD's advanced Viper RAID architecture and ASICs were designed to support tomorrow's high speed serial interfaces, such as Fiberchannel." (Ex. 1004 at p. 1) Fibre Channel's high bandwidth and capability to extend the distances between hosts and the storage controller each provided a strong motivation to adopt the CRD-5500 Data Sheet's suggestion to enhance the CRD-5500 controller with FC connectivity for host and/or storage device modules designed with Tachyon chips. (See generally Ex. 1004 at pp. 1-2)
In the combined system, the Tachyon chip is incorporated into FC-enabled host device interface modules installed in I/O slots of the CRD-5500 controller. (See, e.g., Ex. 1010 at ¶ 47) Professor Chase explains that the Tachyon chip encapsulates and de-encapsulates SCSI commands transported over FC media to enable intercommunication between FC host devices and SCSI-based storage device arrays on FC transport media. (See, e.g., id. at ¶¶ 36, 38, 43, 45; see also Ex. 1004 at pp. 1-2) The CRD-5500 controller, in the combined system, is configured to provide virtual local storage to up to four FC host device interface modules (each interfacing with a host computing device) through the Host LUN Mapping feature. (See, e.g., id. at ¶¶ 37, 43-44, 46) A figure representing the combined system is shown at right. (See, e.g., Ex. 1003 at Fig. 1-2)
In operation, the CRD-5500 controller coordinates the following process for managing storage commands from a host device. (Ex. 1010 ¶ 44) A FC protocol (FCP) message containing a SCSI storage access command (e.g., a read or write request) is transmitted to the CRD-5500 controller by a host device. (Id.) At the host device interface module, the Tachyon chip de-encapsulates the FCP message to access the SCSI command. (Id.) The host device's identity can be derived from the incoming message (e.g., via the FCP header or SCSI header) and/or from the channel of the host module slot receiving the communication, if that channel is recognized. The Tachyon chip passes the host device identity, as well as the SCSI payload, to the CRD-5500 controller processor, where the host device information is cross-referenced with the Host LUN Mapping maintained by the CRD-5500 controller to identify a redundancy group of the RAID array corresponding to the host device's storage address. (Id. ¶¶ 44-45) The CRD-5500 controller routes the SCSI command to the corresponding disk drive in the RAID storage disk array via a FC storage device interface module. The FC storage device interface module encapsulates the SCSI command in a FCP wrapper and issues the command to the SCSI-based storage device array. (Id. ¶¶ 44-45)
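The command-handling flow described above can be sketched as follows. This is a hedged, illustrative model only: all names (`HOST_LUN_MAP`, `de_encapsulate`, `route_command`, the channel and group labels) are invented for this sketch and appear in no exhibit; the dictionaries merely stand in for the Tachyon chip's FCP handling and the CRD-5500's Host LUN Mapping.

```python
# Host LUN Mapping: host channel -> {host LUN: RAID redundancy group}
# (hypothetical table; channel and group names are illustrative)
HOST_LUN_MAP = {
    "host_channel_0": {0: "redundancy_group_0", 1: "redundancy_group_1"},
    "host_channel_1": {0: "redundancy_group_5"},
}

def de_encapsulate(fcp_message):
    # Tachyon-chip step: strip the simulated FCP wrapper to expose the
    # SCSI command and the LUN it addresses.
    return fcp_message["scsi_command"], fcp_message["lun"]

def encapsulate_fcp(scsi_command, target_group):
    # Storage-side module step: re-wrap the SCSI command for the FC link.
    return {"scsi_command": scsi_command, "target": target_group}

def route_command(fcp_message, host_channel):
    """De-encapsulate an FCP message, consult the Host LUN Mapping, and
    route the SCSI command to the mapped redundancy group."""
    scsi_command, lun = de_encapsulate(fcp_message)
    visible = HOST_LUN_MAP.get(host_channel, {})
    if lun not in visible:
        raise PermissionError("no Host LUN assigned for this host")
    return encapsulate_fcp(scsi_command, visible[lun])
```

A command arriving on a channel with no assigned Host LUN is refused, mirroring the visibility-based access control described in the petition.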
4. Correspondence Between Claims 1-14 and the Combined System of CRD-5500 User Manual, CRD-5500 Data Sheet and Smith
The discussion below demonstrates the correspondence between the '035 Patent claim terms and the structure and/or operation of the combined system of CRD-5500 User Manual, CRD-5500 Data Sheet and Smith. Arabic letter element identifiers have been inserted into the claim language in a manner consistent with that used in Exhibit 1010, the declaration of Professor Chase.
1. a) A storage router for providing virtual local storage on remote storage devices, comprising:
The CRD-5500 controller functions as a RAID storage router. (Ex. 1010 ¶ 46; see generally id. ¶ 55) The Host LUN Mapping utility allows an administrator to assign RAID redundancy groups to host LUN addresses in order to provide the host devices with virtual local addressing capability to the remote storage partitions such that, from the viewpoint of the host device, the remote storage device appears as local storage. (See Ex. 1003 at 1-1, 4-5; see also Ex. 1010 ¶ 44)
Especially in view of Patent Owner's assertion in litigation that this limitation covers systems that connect to storage via a host serial network transport medium, making the storage remote from the hosts, in which "[t]he storage appears to the hosts to be local," such limitation is provided by the combined system under the USPTO's broadest reasonable interpretation standard that is applied in PTAB proceedings. (Ex. 1009 at p. 9; see also Ex. 1010 ¶¶ 44, 46, and 55)
b) a buffer providing memory work space for the storage router;
The CRD-5500 controller includes an onboard cache with up to 512 megabytes of memory. (See Ex. 1003 at 1-4; see also Ex. 1010 ¶ 45)
c) a first controller operable to connect to and interface with a first transport medium; d) a second controller operable to connect to and interface with a second transport medium; and
In the combined system, the FC-enabled host device interface module installed in the CRD-5500 controller is coupled to FC links, which are serial transport media, on the host device side; thus, the combined system includes the "first controller." (See e.g., Ex. 1010 ¶ 46) Further, the combined system includes a "second controller" through a FC-enabled storage device interface module configured with a Tachyon chip, which is also connected to a FC link on the storage device side. (Id. ¶ 47; see also Ex. 1005 at Fig. 8)
In light of Patent Owner's assertion in litigation that these limitations cover systems that include a host controller that is "operable to connect to and interface with a Fibre Channel transport medium" and a second controller "operable to connect to and interface with a Fibre Channel transport medium," such limitations are provided by the combined system under the USPTO's broadest reasonable interpretation standard that is applied in PTAB proceedings. (Ex. 1009 at p. 9 (emphasis added); see also Ex. 1010 ¶¶ 46-47)
e) a supervisor unit coupled to the first controller, the second controller and the buffer, f) the supervisor unit operable to map between devices connected to the first transport medium and the storage devices, g) to implement access controls for storage space on the storage devices and h) to process data in the buffer to interface between the first controller and the second controller to allow access from devices connected to the first transport medium to the storage devices using native low level, block protocols.
The CRD-5500 references in combination with Smith also meet the limitations of the claimed "supervisor unit" recited in '035 Patent claim 1. More particularly, the CRD-5500 User Manual describes a central processing unit (CPU) that is coupled to a host device interface module, a storage interface module, and a buffer memory (e.g., the onboard cache discussed supra with respect to the claimed "buffer"). (See Ex. 1010 ¶ 48) In the combined system, the CPU maps between the host devices connected on one side of the CRD-5500 controller and storage devices connected on the other side of the CRD-5500 controller. More particularly, the CPU is programmed to maintain a Host LUN Mapping to assign each host channel (representing a host device) to one or more RAID redundancy groups (representing storage space on one or more physical disks in the RAID storage device array), in which a separate host LUN address is allocated for each RAID redundancy group. (Id. ¶ 49) An administrator can allocate a single disk or a partition thereof as a redundancy group, such that the redundancy group represents storage space on a physical disk in the RAID array and the host LUN maps to the single disk or partition thereof. (Id. ¶ 31)
As discussed in detail in the declaration of Dr. Chase, the combined system controls access to the storage devices by making particular RAID redundancy groups visible to only particular host devices through the Host LUN Mapping. (See e.g., id. ¶ 50) The CRD-5500 controller of the combined system issues commands sent by the host device to only those RAID redundancy groups with respect to which the host device has an assigned Host LUN. (Id.)
The Host LUN Mapping utility functions, in the combined system, to allow FCP-based communications between host devices and associated, non-restricted redundancy groups. In his declaration, Dr. Chase explains how communications that are compliant with FCP meet the limitation of "native low level, block protocol" (NLLBP) as claimed in '035 Patent claim 1. (Id. ¶¶ 51-52)
Further in view of Patent Owner's assertion in litigation that these limitations cover systems which "assign LUNs to subsets of storage," "create a correspondence between a host computer and LUN . . . through LUN mapping," and receive "SCSI commands (native low level block storage commands) transported over Fibre Channel" which "do not include the overhead of high level protocols or file systems," such limitations are provided by the combined system under the USPTO's broadest reasonable interpretation standard that is applied in PTAB proceedings. (Ex. 1009 at p. 9; see also Ex. 1010 ¶¶ 48-52)
2. The storage router of claim 1, wherein the supervisor unit maintains an allocation of subsets of storage space to associated devices connected to the first transport medium, wherein each subset is only accessible by the associated device connected to the first transport medium.
The same RAID redundancy group can be mapped to a first LUN address for a first host device and a second LUN address for a second host device. (Ex. 1003 at 4-5; see also Ex. 1010 ¶ 53) As discussed supra, the Host LUN Mapping utility allows the CRD-5500 controller to restrict a particular host device's access to a given memory region on the RAID array by preventing addresses (i.e., Host LUNs) for particular RAID redundancy groups from being visible to that host device (e.g., redundancy groups 2 through 4 have been excluded from the list of redundancy groups for which Host LUNs have been assigned to the host illustrated in the figure on p. 4-5 of the CRD-5500 User Manual). (See e.g., Ex. 1003 at 1-1, 1-11, and 4-5; see also Ex. 1010 ¶ 53)
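The per-host visibility behavior described above can be illustrated with a small sketch. All names here are hypothetical stand-ins, not drawn from any exhibit: the point is only that the same redundancy group may be mapped to different LUN addresses for different hosts, and a group with no assigned Host LUN is simply not addressable by that host.

```python
# Hypothetical Host LUN Mapping: host -> {LUN: redundancy group}
host_lun_map = {
    "host_a": {0: "group_1"},   # group_1 appears to host_a at LUN 0
    "host_b": {3: "group_1"},   # the same group_1 appears to host_b at LUN 3
    # groups 2 through 4 have no entry for host_a, so host_a cannot see them
}

def can_access(host, lun):
    # A host may address only LUNs present in its own mapping.
    return lun in host_lun_map.get(host, {})
```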
In view of Patent Owner's assertion in litigation that this limitation covers systems which "maintain an allocation of subsets of storage to hosts by mapping one or more LUNs to one or more hosts," such limitation is provided by the combined system under the USPTO's broadest reasonable interpretation standard that is applied in PTAB proceedings. (Ex. 1009 at p. 10; see also Ex. 1010 ¶ 53)
3. The storage router of claim 1, wherein the devices connected to the first transport medium comprise workstations.
Smith describes how hosts connecting to FC networks and RAID storage arrays may be workstations. (See Ex. 1005 at 1, "Fibre Channel supporters and developers include HP, Sun Microsystems, SGI, and IBM for workstations." See also Ex. 1010 ¶ 54)
4. The storage router of claim 1, wherein the storage devices comprise hard disk drives.
The CRD-5500 controller may be used to assign redundancy groups that include storage space from one or more physical hard drives of the RAID storage device array to a host device. (Ex. 1010 ¶ 55)
5. The storage router of claim 1, wherein the first controller comprises: a) a first protocol unit operable to connect to the first transport medium; b) a first-in-first-out queue coupled to the first protocol unit; and c) a direct memory access (DMA) interface coupled to the first-in-first-out queue and to the buffer.
6. The storage router of claim 1, wherein the second controller comprises: a) a second protocol unit operable to connect to the second transport medium; b) an internal buffer coupled to the second protocol unit; and c) a direct memory access (DMA) interface coupled to the internal buffer and to the buffer of the storage router.
Smith describes a "Frame Manager" that transmits and receives FCP communications (e.g., FCP frames and primitives) and implements the FCP specification(s). (See Ex. 1005 at 9; see also Ex. 1010 ¶ 56) Thus, the combined system, and in particular, Smith, teaches the "first protocol unit" as claimed in claim 5 and the "second protocol unit" as claimed in claim 6. (Ex. 1010 ¶¶ 56, 59)
Smith's illustration of the Tachyon chip in Fig. 4 includes multiple "first-in-first-out queue[s]" (as claimed in '035 Patent claim 5), including an inbound data queue, an outbound data queue, and an acknowledgment queue. (See Ex. 1005 at 5, Fig. 4; see also Ex. 1010 ¶ 57)
Fig. 4 of Smith illustrates an inbound sequence manager and an outbound sequence manager that perform DMA transfers of inbound data into buffers as well as DMA transfers of outbound data from host memory to the outbound sequence manager. (Ex. 1005 at 7, 9; see also Ex. 1010 ¶ 58) Thus, the combined system, and in particular, Smith, also teaches the "direct memory access (DMA) interface" as set forth in claims 5 and 6. (Ex. 1010 ¶¶ 58, 59) Further, one or more of the inbound data FIFO, acknowledgment FIFO, and outbound Frame FIFO described and illustrated in Fig. 6 of Smith as being internal to the Tachyon chip also teaches the "internal buffer coupled to the second protocol unit" as recited in claim 6. (Ex. 1005 at Figs. 5 and 6; Ex. 1010 ¶ 59)
7. a) A storage network, comprising: b) a first transport medium; c) a second transport medium; d) a plurality of workstations connected to the first transport medium; e) a plurality of storage devices connected to the second transport medium; f) a storage router interfacing between the first transport medium and the second transport medium, the storage router providing virtual local storage on the storage devices to the workstations and operable: g) to map between the workstations and the storage devices; h) to implement access controls for storage space on the storage devices; and i) to allow access from the workstations to the storage devices using native low level, block protocol in accordance with the mapping and access controls.
Claim 7 recites similar limitations as in claims 1, 3, and 4, and so the discussion set forth above for claims 1, 3, and 4 applies with equal force to claim 7. (See also Ex. 1010 ¶¶ 46, 60-66)
10. The storage network of claim 7, wherein the storage router comprises: a) a buffer providing memory work space for the storage router;
The CRD-5500 controller includes an onboard cache with up to 512 megabytes of memory. (See Ex. 1003 at 1-4; see also Ex. 1010 ¶ 69)
b) a first controller operable to connect to and interface with the first transport medium, c) the first controller further operable to pull outgoing data from the buffer and to place incoming data into the buffer; d) a second controller operable to connect to and interface with the second transport medium, e) the second controller further operable to pull outgoing data from the buffer and to place incoming data into the buffer; and
As discussed supra, in the combined system, the FC-enabled host device interface module installed in the CRD-5500 controller is coupled to FC links, a serial transport medium, on the host device side; thus, the combined system includes the "first controller" as claimed in '035 Patent claim 1. (See e.g., Ex. 1010 ¶ 70; see also Ex. 1005 at Fig. 8) Further, through a FC-enabled storage device interface module configured with a Tachyon chip, the combined system includes a "second controller" as claimed in '035 Patent claim 1, also connected to a FC link on the storage device side. (Id. ¶ 72)
Moreover, in the combined system, the Tachyon chip "transfer[s] outbound data from [CRD-5500 Controller] memory to the outbound sequence manager" via DMA (see Ex. 1005 at 12; see also Ex. 1010 ¶¶ 71, 73) and "transfers . . . inbound data into buffers specified by the multiframe sequence buffer queue, the single frame sequence buffer queue, the inbound message queue, or the SCSI buffer manager" (see Ex. 1005 at 9; see also Ex. 1010 ¶¶ 71, 73).
f) a supervisor unit coupled to the first controller, the second controller and the buffer, the supervisor unit operable: g) to map between devices connected to the first transport medium and the storage devices,
h) to implement the access controls for storage space on the storage devices and i) to process data in the buffer to interface between the first controller and the second controller to allow access from workstations to storage devices.
Claim 10 recites similar limitations as in claim 1, elements e)-h), and so the discussion set forth above for claim 1 applies with equal force to claim 10. (See also Ex. 1010 ¶¶ 74-77)
11. a) A method for providing virtual local storage on remote storage devices connected to one transport medium to devices connected to another transport medium, comprising: b) interfacing with a first transport medium; c) interfacing with a second transport medium; d) mapping between devices connected to the first transport medium and the storage devices and that implements access controls for storage space on the storage devices; and e) allowing access from devices connected to the first transport medium to the storage devices using native low level, block protocols.
Claim 11 recites similar limitations as in claim 1, and so the discussion set forth above for claim 1 applies with equal force to claim 11. (See also Ex. 1010 ¶¶ 78-83)
Dependent claims 8, 9, and 12-14
Dependent claims 8, 9, and 12-14 correspond to dependent claims 2-4. The discussion set forth above for claims 2-4 therefore applies with equal force to claims 8, 9, and 12-14. (See also Ex. 1010 ¶¶ 67, 68, and 84-86)
B. Claims 1-4 and 7-14 Are Rendered Obvious by Kikuchi taken in Combination with Bergsten
1. Introduction of the Kikuchi Reference
U.S. Patent No. 6,219,771 to Kikuchi (Ex. 1006) was filed on August 18, 1997, before the earliest priority date of the '035 Patent (December 31, 1997). Therefore, Kikuchi is prior art to the '035 Patent under 35 U.S.C. § 102(e). Kikuchi was among the hundreds of references before the Examiner but was not discussed by the Examiner in any office action or referenced in any rejection.
Kikuchi describes a control device that receives commands from host
devices via a host device interface, determines whether the host device is
authorized to access a specified storage device, and interprets and executes the
commands from the host device. (Ex. 1006 at the Abstract) The host devices can
be connected, for example, via a FC or SCSI transport medium to the control
device and the control device is, in turn, connected to a storage unit via, for
example, a FC or SCSI transport medium. (Id. at 1:31-36; see also id. at 5:37-39)
Kikuchi executes access control by extracting a host address from each host device
command and determining whether the address is registered in an address
registration unit. (Id. at 4:35-44; see also id. at 5:3-6)
Kikuchi enables multiple host devices to access different partitions of a
single storage device, which may be a storage area. (Id. at 1:31-43) Kikuchi
describes extracting a host address from a command sent from a host device and
using the host address to obtain offset information, which indicates a disk partition
corresponding to each host device. (See e.g., id. at 2:52-62; 3:6-17) Kikuchi also
describes extracting a disk partition address from each command. (Id. at 7:50-63)
An actual disk partition address is obtained by combining the offset information
and the disk partition address. (Id. at 7:64-8:1) Thus, in Kikuchi, a single storage
device can appear as a different disk to each host device. (Id. at 8:43-45)
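The two-part addressing summarized above can be sketched as follows. This is a hedged illustration only: the table, names, and flat numeric address model are assumptions for the sketch, not details from Kikuchi. The idea shown is that a per-host offset obtained from the host address is combined with the disk partition address extracted from the command to yield the actual partition address.

```python
# Hypothetical offset information table: host address -> partition offset
offset_table = {"host_1": 0, "host_2": 1000}

def actual_partition_address(host_address, partition_address):
    if host_address not in offset_table:        # host address not registered
        raise PermissionError("host address not registered")
    # Combining the per-host offset with the partition address from the
    # command yields the actual location, so each host sees its own
    # partition as if it were a whole disk.
    return offset_table[host_address] + partition_address
```

Because each host's addresses are shifted by its own offset, the same storage device appears as a different disk to each host, as the petition describes.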
2. Introduction of the Bergsten Reference
U.S. Patent No. 6,073,209 to Bergsten (Ex. 1007) was filed on March 31, 1997, before the earliest priority date of the '035 Patent (December 31, 1997). Therefore, Bergsten is prior art to the '035 Patent under 35 U.S.C. § 102(e). Bergsten was among the hundreds of references before the Examiner but was not discussed by the Examiner in any office action or referenced in any rejection.
Like Kikuchi, Bergsten describes serial communication transport media, including FC, which connect a host device to a storage controller and which connect storage devices to the storage controller. (Ex. 1007 at 4:22-38) As was well known by those of ordinary skill in the art during the relevant timeframe, FCP messages as well as SCSI messages are in a NLLBP. (Ex. 1010 ¶ 106) Access by the host devices to the storage devices is facilitated by the storage controller, which operates using standard SCSI commands. (Ex. 1007 at 4:19-22)
An exemplary arrangement shown in Figure 1 of Bergsten illustrates a series of storage controllers daisy-chained together, where each storage controller is dedicated to service one or more host devices and a particular storage device array. (See, e.g., Ex. 1007 at Fig. 1) The storage controllers are capable of intercommunicating to store back-up copies of data on each other's disk arrays and to access the back-up copies, if necessary. (Id.)
Bergsten also teaches that the storage controller virtualizes the remote
storage devices for each host device such that each host device can access the
remote storage devices using virtual addressing, independent of which physical
storage device is eventually accessed. (Id. at 3:14-15; 4:47-50)
The storage controller emulates a local storage array for the host computer system which it services and emulates a host computer system for the local storage array which it accesses. Host computer systems access stored data using virtual device addresses, which are mapped to real device addresses by the storage controller. . . . A local host computer accesses data by transmitting a (virtual) host address to its local storage controller. The host address is then mapped to a real address representing a location on one or more physical MSDs[.] . . . The mapping is completely transparent to all of the host computers. . . . [I]n the above described mapping process, a single host address may map to multiple physical addresses[.]
Insofar as data is replicated across a number of storage device arrays
controlled by a number of storage controllers, Bergsten employs a tree-style
mapping construct to map logical addresses to physical data locations. (Id. at 3:14-
15; 4:47-50; and Fig. 8)
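The virtual-to-real mapping that the block quote above attributes to Bergsten can be sketched minimally. All names here (`virtual_to_real`, the device and block labels) are assumed for illustration: hosts present virtual device addresses, and the controller maps each to one or more real addresses, transparently to the host.

```python
# Hypothetical mapping table: (host, virtual device address) -> real addresses
virtual_to_real = {
    ("host_1", "vdev_0"): ["msd_3:block_0"],
    # a single host address may map to multiple physical addresses,
    # e.g., where data is replicated across storage device arrays
    ("host_1", "vdev_1"): ["msd_1:block_8", "msd_2:block_8"],
}

def map_address(host, virtual_address):
    # The host never sees the real addresses returned here; the mapping
    # is performed entirely inside the storage controller.
    return virtual_to_real[(host, virtual_address)]
```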
3. The Combined System of Kikuchi and Bergsten
In the combined system of Kikuchi and Bergsten, the multi-protocol intercommunication capabilities of the command interpretation and execution unit described in Kikuchi are enhanced by incorporating Bergsten's emulation drivers 21 and physical drivers 22, which are detailed in Bergsten with a high degree of specificity. (Ex. 1010 ¶ 94) To the extent that Patent Owner may argue that Kikuchi fails to explicitly detail every nuance of FCP-based encapsulation and de-encapsulation, the details of Bergsten's emulation drivers 21 and physical drivers 22 more than sufficiently provide those details. (Id.) A figure representing the combined system is illustrated at right.
Additionally, the correlation chart and address conversion units described in Kikuchi are modified to include the virtual mapping functionality of Bergsten's storage controller. (Ex. 1010 ¶ 125) In the combined system, host devices send NLLBP storage commands along a FC link to the storage controller. (Ex. 1010 ¶ 124) The storage controller allows multiple host devices to access different partitions of a single storage device and also assigns various storage partitions to various host devices in a manner which is transparent to the host devices. (Ex. 1010 ¶¶ 127, 129, and 144) At the storage controller, Kikuchi's address verification unit provides host-level access controls, denying any host device not registered in the system via the address registration unit access to the storage array. (Ex. 1006 at 7:13-27) The storage controller includes the correlation chart, which associates each host device with certain logical units to which the host has access and further associates each logical address to a physical address in the SCSI-based storage device array. (Ex. 1010 ¶¶ 125-27) As explained by Professor Chase, because it is sufficient to consider only a single Bergsten controller in isolation from the others, the mapping tree described in Bergsten is rendered unnecessary. (Ex. 1010 ¶ 126) Instead, the correlation chart of Kikuchi may be enhanced to implement the logical addressing constructs of Bergsten.
An artisan skilled in network storage during the relevant timeframe would combine the Kikuchi and Bergsten systems in this way in order to improve the Kikuchi system with the advantage of virtualized, networked storage. As explained in Bergsten, as of early 1997 it was desirable that such a storage controller "not be dependent upon any particular hardware or software configuration of any host computer or mass storage device which it services." (Ex. 1007 at 1:48-51) Professor Chase explains that a skilled storage engineer would have been motivated to incorporate the virtual storage emulation of Bergsten into the disk apparatus of Kikuchi to increase both the number of storage devices accessible to hosts connecting to the disk apparatus and the storage address range available within the combined system. (Ex. 1010 ¶ 127) The combined system also benefits from increased restructuring capabilities, because an administrator could replace or update equipment and reassign host storage regions without requiring host-side involvement. (Id.) He further explains that the functionality described in Bergsten could have been readily added to Kikuchi because the architectures are generally compatible with one another, and the modifications could have been routinely carried out by a person of ordinary skill in the art. (See generally id. ¶¶ 123-28)
4. Correspondence Between Claims 1-4 and 7-14 and the Combined System of Kikuchi and Bergsten
The discussion below demonstrates the correspondence between the claim terms and the structure and/or operation of the combined system of Kikuchi and Bergsten. As before, Arabic letter element identifiers have been inserted into the claim language in a manner consistent with that used in Exhibit 1010, the declaration of Professor Chase.
1. a) A storage router for providing virtual local storage on remote storage devices, comprising:
As noted above and explained in the declaration of Professor Chase, in the combined system, the storage controller provides a FC host device with transparent, virtualized access to an array of SCSI disks. (Ex. 1010 ¶ 97)
Especially in view of Patent Owner's assertion in litigation that this limitation covers systems that connect to storage via a host serial network transport medium, making the storage remote from the hosts, in which "[t]he storage appears to the hosts to be local," such limitation is provided by the combined system under the USPTO's broadest reasonable interpretation standard that is applied in PTAB proceedings. (Ex. 1009 at p. 9; see also Ex. 1010 ¶ 97)
b) a buffer providing memory work space for the storage router;
In the combined system, a random access memory (RAM) acts as a buffer that provides a memory work space. (See Ex. 1006 at 5:20-22; see also Ex. 1010 ¶ 100)
c) a first controller operable to connect to and interface with a first transport medium;
d) a second controller operable to connect to and interface with a second transport medium; and
In the combined system, the limitation of a "first controller" is met by the emulation drivers 21 described in Bergsten, which are included within the host device interface described in Kikuchi and which are coupled to FC, a serial transport medium. (Ex. 1010 ¶ 101) The "second controller" is met by the physical drivers 22 described in Bergsten and included within the disk interface of Kikuchi, as discussed supra with respect to the combined system.
In light of Patent Owner's assertion in litigation that these limitations cover systems that include a host controller that is "operable to connect to and interface with a Fibre Channel transport medium" and a second controller "operable to connect to and interface with a Fibre Channel transport medium," such limitations are provided by the combined system under the USPTO's broadest reasonable interpretation standard that is applied in PTAB proceedings. (Ex. 1009 at p. 9 (emphasis added); see also Ex. 1010 ¶ 101)
e) a supervisor unit coupled to the first controller, the second controller and the buffer, f) the supervisor unit operable to map between devices connected to the first transport medium and the storage devices, g) to implement access controls for storage space on the storage devices and h) to process data in the buffer to interface between the first controller and the second controller to allow access from devices connected to the first transport medium to the storage devices using native low level, block protocols.
In the combined system of Kikuchi and Bergsten, a CPU is programmed to enact certain functionalities of the address registration unit, address verification unit, address offset information conversion unit, actual partition address conversion unit, and command interpretation and execution unit, and is coupled to the "first controller," "second controller," and the "buffer" as recited in claim 1. (Ex. 1010 ¶ 103) As described above, the combined system includes an enhanced correlation chart to map between host addresses and partition address offsets in the two-stage, virtualization mapping described in Bergsten, in which host addresses are mapped to logical addresses and, subsequently, to physical addresses of associated storage device(s). (Id. ¶ 104) The combined system identifies a requesting host address, and subsequently, if the host is authorized, the requesting host address is allowed to access allocated storage space matching a virtual address supplied by the host device; if no match exists in the correlation chart, the host is not allowed to access stored data using the requested address. (Id. ¶ 105) When a match does exist within the correlation chart, the host accesses data stored in the storage space, using the virtual host address mapping described in Bergsten to map the requested address to a remote storage device via commands issued over FCP, which is an NLLBP. (Id. ¶ 106) In the combined system, Kikuchi and Bergsten describe a RAM to temporarily buffer commands and data that pass between an OS 20 and the emulation drivers 21 and the physical drivers 22 described in Bergsten. (Id. ¶ 107)
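The lookup sequence described above for the combined system can be sketched in a few lines. This is a hypothetical model under assumed names (`registered_hosts`, `correlation_chart`, `logical_to_physical`), not an implementation drawn from either reference: verify the requesting host is registered, match the host-supplied virtual address in the enhanced correlation chart, then resolve the logical address to a physical address.

```python
# Hypothetical tables standing in for the described units:
registered_hosts = {"host_1", "host_2"}                  # address registration unit
correlation_chart = {("host_1", "virt_0"): "logical_7"}  # host/virtual -> logical
logical_to_physical = {"logical_7": "disk_2:partition_1"}

def access(host, virtual_address):
    if host not in registered_hosts:          # address verification unit check
        raise PermissionError("host not registered")
    logical = correlation_chart.get((host, virtual_address))
    if logical is None:                       # no match: access is denied
        raise PermissionError("no matching allocation for this address")
    return logical_to_physical[logical]       # physical location to access
```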
Further in view of Patent Owner's assertion in litigation that these limitations cover systems which "assign LUNs to subsets of storage," "create a correspondence between a host computer and LUN . . . through LUN mapping," and receive "SCSI commands (native low level block storage commands) transported over Fibre Channel" which "do not include the overhead of high level protocols or file systems," such limitations are provided by the combined system under the USPTO's broadest reasonable interpretation standard that is applied in PTAB proceedings. (Ex. 1009 at p. 9; see also Ex. 1010 ¶¶ 103-07)
2. The storage router of claim 1, wherein the supervisor unit maintains an allocation of subsets of storage space to associated devices connected to the first transport medium, wherein each subset is only accessible by the associated device connected to the first transport medium.
As described above, in the combined system of Kikuchi and Bergsten, the correlation chart described in Kikuchi that maps between host device addresses and partition address offsets is modified to include the two-stage virtualization mapping described in Bergsten. (See Ex. 1006 at 3:18-29; see also Ex. 1007 at 8:62-9:8; see also Ex. 1010 ¶ 108) Moreover, as described above, if a match exists between a requesting host device address and an allocated storage space, the host device is allowed to access stored data using the requested address; otherwise, the host device is denied access. (See Ex. 1010 ¶ 108)
In view of Patent Owner's assertion in litigation that this limitation covers systems which "maintain an allocation of subsets of storage to hosts by mapping one or more LUNs to one or more hosts," such limitation is provided by the combined system under the USPTO's broadest reasonable interpretation standard that is applied in PTAB proceedings. (Ex. 1009 at p. 10; see also Ex. 1010 ¶ 108)
3. The storage router of claim 1, wherein the devices connected to the first transport medium comprise workstations.
The combined system of Kikuchi and Bergsten describes host devices; as described in the Chase declaration, the host devices include multiple workstations. (Id. ¶ 109)
4. The storage router of claim 1, wherein the storage devices comprise hard disk drives.
Bergsten describes magnetic hard disk drives. (See Ex. 1007 at 4:34-36; see also Ex. 1010 ¶ 110)
7. a) A storage network, comprising: b) a first transport medium; c) a second transport medium; d) a plurality of workstations connected to the first transport medium; e) a plurality of storage devices connected to the second transport medium;
f) a storage router interfacing between the first transport medium and the second transport medium, the storage router providing virtual local storage on the storage devices to the workstations and operable: g) to map between the workstations and the storage devices; h) to implement access controls for storage space on the storage devices; and i) to allow access from the workstations to the storage devices using native low level, block protocol in accordance with the mapping and access controls.
Claim 7 recites limitations similar to those in claims 1, 3, and 4, and so the
discussion set forth above for claims 1, 3, and 4 applies with equal force to claim 7.
See also Ex. 1010 at 111-17.
10. The storage network of claim 7, wherein the storage router comprises: a) a buffer providing memory work space for the storage router;
Both Kikuchi and Bergsten describe a RAM that serves as a buffer and
provides a memory work space. (See Ex. 1006 at 5:20-22; see also Ex. 1010 at
120)
b) a first controller operable to connect to and interface with the first transport medium, c) the first controller further operable to pull outgoing data from the buffer and to place incoming data into the buffer; d) a second controller operable to connect to and interface with the second transport medium, e) the second controller further operable to pull outgoing data from the buffer and to place incoming data into the buffer; and
In the combined system, the data storage controller/disk apparatus described
in Kikuchi interfaces with an FC-enabled host device and an FC-enabled storage
device. (See Ex. 1006 at 5:37-39; see also Ex. 1007 at 4:25-28; see also Ex. 1010
at 121) Bergsten details data transfer, via emulation drivers 21 and via
physical drivers 22, which includes moving data into and out of an attached
memory; this meets the limitation of "pull[ing] outgoing data from the buffer
and . . . plac[ing] incoming data into the buffer," as recited in '035 Patent claim 10
with respect to both the first and second controllers. (See Ex. 1010 at 122, 124)
f) a supervisor unit coupled to the first controller, the second controller and the buffer, the supervisor unit operable: g) to map between devices connected to the first transport medium and the storage devices, h) to implement the access controls for storage space on the storage devices and i) to process data in the buffer to interface between the first controller and the second controller to allow access from workstations to storage devices.
Claim 10 recites limitations similar to claim 1, elements e)-h), and so the
discussion set forth above for claim 1 applies with equal force to claim 10. See
also Ex. 1010 at 125-128.
11. A method for providing virtual local storage on remote storage devices connected to one transport medium to devices connected to another transport medium, comprising:
interfacing with a first transport medium; interfacing with a second transport medium; mapping between devices connected to the first transport medium and the
storage devices and that implements access controls for storage space on the storage devices; and
allowing access from devices connected to the first transport medium to the storage devices using native low level, block protocols.
Claim 11 recites limitations similar to those in claim 1, and so the discussion set
forth above for claim 1 applies with equal force to claim 11. See also Ex. 1010 at
129-134.
Dependent claims 8, 9, and 12-14
Dependent claims 8, 9, and 12-14 correspond to dependent claims 2-4.
The discussion set forth above for claims 2-4 therefore applies with equal force to
claims 8, 9, and 12-14. (See also Ex. 1010 at 118, 119, and 135-137)
C. Claims 5 and 6 Are Rendered Obvious by Kikuchi taken in Combination with Bergsten and Smith
Smith discloses each of the features of claims 5 and 6 of the '035 Patent, as
discussed supra. (See also Ex. 1010 at 138, 139) The combined system of
Kikuchi, Bergsten, and Smith therefore meets these claims as well. More
particularly, one of ordinary skill would understand that the emulation and physical
drivers of Bergsten are designed to incorporate the functionality of the Tachyon
chip of Smith. (Id.) One of skill in the art would also have understood that the
internal architecture of the Tachyon chip was readily combinable with the disk
apparatus of Kikuchi to provide for communication between the FC interface (e.g.,
the physical drivers and emulation drivers) and the internal operating system of
Kikuchi's disk apparatus. (Id.)
5. The storage router of claim 1, wherein the first controller comprises: a) a first protocol unit operable to connect to the first transport medium;
b) a first-in-first-out queue coupled to the first protocol unit; and c) a direct memory access (DMA) interface coupled to the first-in-first-out queue and to the buffer.
6. The storage router of claim 1, wherein the second controller comprises: a) a second protocol unit operable to connect to the second transport medium; b) an internal buffer coupled to the second protocol unit; and c) a direct memory access (DMA) interface coupled to the internal buffer and to the buffer of the storage router.
Smith describes a Frame Manager that transmits and receives FCP
communications (e.g., FCP frames and primitives) and implements the FCP
specification(s). (See Ex. 1005 at pp. 5, 9; see also Ex. 1010 at 140) Thus, the
combined system, and in particular Smith, teaches the "first protocol unit" as
claimed in claim 5 and the "second protocol unit" as claimed in claim 6. (Ex. 1010
at 140, 143)
Smith's illustration of the Tachyon chip in Fig. 4 includes multiple "first-in-
first-out queue[s]" (as claimed in claim 5), including an inbound data queue, an
outbound data queue, and an acknowledgment queue. (See Ex. 1005 at 5, Fig. 4
(reproduced in part above); see also Ex. 1010 at 141)
Fig. 4 of Smith illustrates an inbound sequence manager and an outbound
sequence manager that perform DMA transfers of inbound data into buffers as
well as DMA transfers of outbound data from host memory to the outbound
sequence manager. (Ex. 1005 at 7, 9; see also Ex. 1010 at 142) Thus, the
combined system, and in particular Smith, also teaches the "direct memory access
(DMA) interface" as set forth in claims 5 and 6. (Ex. 1010 at 142-3)
Further, one or more of the inbound data FIFO, acknowledgment FIFO, and
outbound Frame FIFO described and illustrated in Fig. 6 of Smith as being internal
to the Tachyon chip also teaches the "internal buffer coupled to the second
protocol unit" as recited in claim 6. (Ex. 1005 at Figs. 5 and 6; Ex. 1010 at 59)
D. Claims 1-4 and 7-14 Are Rendered Obvious by Bergsten taken in Combination with Hirai
1. Introduction of the Hirai Reference
JP Patent Application Publication No. Hei 5[1993]-181609 to Hirai (Ex.
1008) was published in 1993, more than one year before the earliest priority date of
the '035 Patent (December 31, 1997). Therefore, Hirai is prior art to the '035
Patent under 35 U.S.C. 102(b). Hirai was among the hundreds of references
before the Examiner but was not discussed by the Examiner in any office action or
referenced in any rejection.
Hirai describes a microcomputer system that has a magnetic disk controlling
device that allows sharing of multiple magnetic disk devices by multiple
microcomputers by handling data of a size that exceeds the capacity of one
magnetic disk device by accessing all the memory regions of multiple magnetic
disk devices as if the multiple memory regions were the memory region in 1
magnetic disk device from individual personal computers and managing the access
right of individual personal computers. (See Ex. 1008 at 5) Hirai describes
managing access to remote storage partitions using a partition control table; "the
security management divides the memory region of the virtual magnetic disk
device described above, sets up the access right for each personal computer in each
divided part (will be referred to as partition), and prevents illegal access." (See id.
at 12) This is implemented by "preparing a partition control table 7." (Id.) The
access right to a partition includes R (read), W (write), C (create), D (delete), and
X (execute). (Id.)
Through Hirai's partition control table, access rights are assigned to host
computing systems on a per-partition basis, such that a first host system may have
read and write access, while a second host system may have only read access. (Id.
at 12, 13, Fig. 2) Using the table of Figure 2, the controller maps hosts to
virtual local storage and translates commands from the hosts and sends them to the
virtual local storage locations. Hirai further describes how "an access request from
the personal computers 1 and 2 to the magnetic disk devices 8-12 is notified to the
magnetic disk controlling mechanism 6 through the magnetic disk interface boards
4 and 5 and is converted to an access request to a virtual magnetic disk device that
extends over the magnetic disk devices 8-12 in the magnetic disk controlling
mechanism 6." (Id. at 11) Through the process above, "the magnetic disk devices
8-12 can be handled from the personal computer main body as one virtual
magnetic disk device with all of the memory regions of the magnetic disk devices
8-12 as its own [memory] region." (Id.)
2. The Combined System of Bergsten and Hirai
In combining features of Bergsten and Hirai, an exemplary configuration for
Bergsten's storage controller includes an FC host device and a SCSI-based storage
device array. As explained by Bergsten, transport media on either side of the
storage controller may be FC:
Such emulation is implemented, in part, by using a common
communication interface for data communication paths 7 and 8,
such as SCSI. Again, in other embodiments, the data
communication paths 7 and 8 may conform to other protocols
and standards, such as serial SCSI, Fibre Channel, or ESCON.
(Ex. 1007 at 6:4-9)
Furthermore, "a common protocol [is] implemented by all storage controllers in
the system to support intercommunication of storage commands." (Id. at 4:55-58)
Professor Chase discusses, in his declaration, the functionality of the emulation
drivers 21 and physical drivers 22 of Bergsten's storage controller in allowing a
"plug and play" system where devices communicating in any supported (SCSI-
compatible) transport medium may be connected to each of the host device side
and the storage device side of the storage controller, with emulation drivers 21 and
physical drivers 22 performing translation with the SCSI-compatible core
operating system of Bergsten's storage controller. (Ex. 1010 at 147) As
explained by Bergsten, the emulation drivers 21 convert host commands into a
format recognized by the OS of the storage controller and the physical drivers 22
"transform commands from the format used by the storage controller [OS] to the
format recognized by the local storage array," such as FCP. (Ex. 1007 at 7:35-49)
In the combined system, Hirai's access controls are incorporated into
Bergsten's storage controllers. Bergsten notes that the storage controller is
configured to "allow[] data blocks to be write protected, so that a block cannot be
modified from any host computer." (Ex. 1007 at 15:40-42) To the extent that
Patent Owner attempts to argue that Bergsten may lack explicit and nuanced detail
regarding the implementation of access controls and the ramifications of write-
protecting data upon a single storage controller of a daisy-chained storage
controller network, the access control map described in Hirai is detailed with a
high degree of particularity. In the resulting system, a host computer sends an FCP
(NLLBP) message containing a SCSI command along a FC transport medium to
the storage controller. (Ex. 1010 at 147-151) The emulation drivers 21
described in Bergsten de-encapsulate the SCSI command from the FCP message
and provide the command to the processing system of the storage controller. The
storage controller, in turn, maps the host address within the message to a logical
storage location and verifies that the access type (which is requested within the
storage command) matches the access controls specified for the host device for the
particular logical storage location. (Id.) If so, the storage controller further maps
the logical storage location to a physical storage location in the SCSI-based storage
device array and sends the SCSI (i.e., NLLBP) storage command to the physical
drivers which encapsulate the command in an FCP wrapper prior to issuing the
command to the appropriate disk in the storage array. (Id.)
An artisan skilled in network storage during the relevant timeframe would
combine the Bergsten and Hirai teachings in the above-described manner in order
to provide additional levels of granularity to the access controls of the Bergsten
system based on the mapping-based access controls of Hirai. (See generally Ex.
1010 at 147-151) Professor Chase explains that, in the combined system, blocks
of host computers may be allocated varying levels of access to particular sets of
data, such that certain systems within a business entity could modify the data while
other systems within the business entity have read-only access, and further systems
within the business entity may be denied access altogether (e.g., due to a level of
sensitivity of the stored data). (See e.g., Ex. 1010 at 149)
In applying access controls at the logical addressing level, as Professor
Chase explains, access controls may be applied uniformly throughout all of the
copies of particular data within a daisy-chained networking system of storage
controllers, thus supporting and enhancing Bergsten's goal of "creat[ing] and
manag[ing] multiple back-up copies . . . in a manner that is both non-disruptive of,
and transparent to, the host computer systems and their users." (Ex. 1007 at 3:4-8;
Ex. 1010 at 150) He further explains that these functionalities could have been
readily added to Bergsten predictably by a person having routine skill in the field
of network storage design. (Ex. 1010 at 151)
3. Correspondence Between Claims 1-4 and 7-14 and the Combined System of Bergsten and Hirai
The discussion below demonstrates the correspondence between the claim
terms and the structure and/or operation of the combined system of Bergsten and
Hirai. Here again, Arabic letter element identifiers have been inserted into the
claim language in a manner consistent with that used in Exhibit 1010, the
declaration of Professor Chase.
1. A storage router for providing virtual local storage on remote storage devices, comprising:
As noted above and explained in the declaration of Professor Chase, in the
combined system, Bergsten's storage controller emulates a local storage space for
attached host devices, thus providing the host devices with virtualized local
access to a remote FC storage disk array. (See, e.g., Ex. 1007 at 5:65-6:1; see
also Ex. 1010 at 152)
Especially in view of Patent Owner's assertion in litigation that this
limitation covers systems that connect to storage via a host serial network
transport medium, making the storage remote from the hosts, in which "[t]he
storage appears to the hosts to be local," such limitation is provided by the
combined system under the USPTO's broadest reasonable interpretation
standard that is applied in PTAB proceedings. (Ex. 1009 at p. 9; see also Ex. 1010
at 152)
b) a buffer providing memory work space for the storage router;
Bergsten describes a RAM that buffers message data communications
between the storage controller and a storage device. (See Ex. 1007 at 6:24-26;
see also id. at 10:23-29; see also Ex. 1010 at 153)
c) a first controller operable to connect to and interface with a first transport medium; d) a second controller operable to connect to and interface with a second transport medium; and
Bergsten's storage controller interfaces with an FC-enabled host device and
an FC-enabled storage device array. (See Ex. 1007 at 4:25-28; see also Ex. 1010 at
154-5) Bergsten's emulation drivers, which constitute the claimed "first
controller," and physical drivers, which constitute the claimed "second controller,"
enable communications between host device(s) and storage device(s) in the FC-
enabled storage device array. (Ex. 1010 at 154-5)
In light of Patent Owner's assertion in litigation that these limitations cover
systems that include a host controller that is "operable to connect to and interface
with a Fibre Channel transport medium" and a second controller "operable to
connect to and interface with a Fibre Channel transport medium," such
limitations are provided by the combined system under the USPTO's broadest
reasonable interpretation standard that is applied in PTAB proceedings. (Ex.
1009 at p. 9 (emphasis added); see also Ex. 1010 at 154-5)
e) a supervisor unit coupled to the first controller, the second controller and the buffer, f) the supervisor unit operable to map between devices connected to the first transport medium and the storage devices, g) to implement access controls for storage space on the storage devices and h) to process data in the buffer to interface between the first controller and the second controller to allow access from devices connected to the first transport medium to the storage devices using native low level, block protocols.
Bergsten describes a supervisor unit that resides in the operating system
and that is coupled to the emulation drivers (i.e., the first controller), the physical
drivers (i.e., the second controller), and local memory including the RAM (i.e., the
buffer). (See Ex. 1007 at 7:29-31; see also id. at 7:24-28; see also Ex. 1010 at
156) As described above, Bergsten's multi-stage virtualization mapping a logical
address from a requesting host device to an internal logical address and