A Throughput-World View of Information Systems: By Rick Denison, 2010

Throughput Accounting (TA) is an accounting method meant to support the Theory of Constraints (TOC) as originally described in the book The Goal, by Eli Goldratt and Jeff Cox. This wasn't the first shot fired across the bow: Dr. Goldratt presented a paper at the 1983 APICS conference titled "Cost Accounting: Enemy Number One to Productivity." Studies and books have been written on the subject of Throughput Accounting for more than 20 years. Software packages have been written as add-ons to Enterprise systems, and there have been many attempts at creating TA reporting in organizations across the world. However, all of these rest on an assumption about the main information system: TA is an afterthought to the Enterprise System. All the functions of the system are based upon the structures of Cost Accounting that have been written into the software over the evolution of the Enterprise system. There have been refinements, such as Activity Based Costing (ABC) and Lean Accounting (Value Stream Accounting); however, these are just different views of the base allocation accounting methodology that is currently the driving force behind the modern Enterprise information system. Throughput Accounting, as a basic premise, allocates no cost to the products produced by the system. The measures associated with TA are very simple:

• Throughput – The rate at which the system generates money through sales, calculated as Sales Price minus Total Variable Cost (TVC).

• Investment (Inventory) – All the money the system invests in purchasing items the system intends to sell.

• Operating Expense – All the money the system spends on turning Investment into Throughput.

I am proposing taking a fresh look at the subject of Throughput Accounting. I will attempt to implement a system that is first and foremost concerned with the flow of materials through the system. The idea is to increase the throughput of the system by configuring the information system for flow first, then applying Throughput Accounting as the system for Management Accounting, and only once these are functioning, applying Financial Accounting. The reasoning behind this approach is to go back to basics: a simple and effective transaction system that subordinates to the flow of the operations that produce value in the organization. Then, to support a Process of Ongoing Improvement (POOGI), the main view presented to everyone working within the system (Sales, Operations, Supply Chain, and Transportation) would be the reports that support the Throughput Accounting method. Once the configuration accomplishes this view of the system, functionality can be added to accommodate Generally Accepted Accounting Principles (GAAP), Sarbanes-Oxley compliance, and the other financial requirements.
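As a minimal sketch of how the three measures above combine into TA's global metrics (Net Profit = T - OE, and Return on Investment = Net Profit / I), consider the following; all the figures are hypothetical:

```python
# Sketch of the three Throughput Accounting measures described above.
# All figures are hypothetical; TVC is typically little more than raw material cost.

def throughput(sales_price: float, total_variable_cost: float) -> float:
    """Throughput per unit: Sales Price minus Total Variable Cost (TVC)."""
    return sales_price - total_variable_cost

# Hypothetical period figures for one product line.
units_sold = 1_000
t_per_unit = throughput(sales_price=50.0, total_variable_cost=30.0)  # 20.0 per unit

T  = t_per_unit * units_sold      # Throughput generated in the period
I  = 250_000.0                    # Investment (inventory) tied up in the system
OE = 12_000.0                     # Operating Expense for the period

net_profit = T - OE               # TA's Net Profit
roi        = net_profit / I       # TA's Return on Investment

print(T, net_profit, roi)         # 20000.0 8000.0 0.032
```

Note there is no allocation step anywhere: cost never attaches to a product, and the three measures are enough to evaluate a decision's effect on the whole system.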


The reason to set up a system this way is due to the switch in system design over the years. Information systems have transitioned from systems designed to simplify the production planning function to systems designed to simplify the accounting process. Although both of these requirements are necessary in today's organizations, this switch has caused the operational aspects of the organization to subordinate to the accounting methods. There are many reasons this constrains an organization's effort to increase throughput. This was not a malicious attempt by anyone; it is just the nature of adding functionality to systems once the current limitation is overcome. Since the data in a system serves many purposes, depending upon the question a function is trying to answer, the change over time is very subtle. Before we discuss the growth of information systems, we need to review the first organizational approximation of allocation costing. This was done by the giants of organizational management, DuPont and General Motors, in the early 20th century to create a reasonable approximation of "Product Cost," and is what we today call Cost Accounting. It is actually based upon the approximation of a 10:1 ratio of Direct Labor employees to indirect employees in early corporations. Allocation accounting goes back further in history, to the merchant fleets of the East India Company in England in the late 16th and early 17th centuries, but for our purposes we will restrict this narrative to the effects on modern information systems. This view of information doesn't become apparent until much later in the development of information systems, but it will come about and change the course of development and of how data within the information system is understood. To fully understand my reasoning, you have to go back to the very first generation of information systems. One of the first-generation systems was known as the Bill of Material Processor (BOMP) from IBM.
The problem this system was trying to overcome was the amount of work required to figure out how much material was needed to make large, complex assemblies such as battleships and aircraft carriers. To figure out all the requirements, it took a huge staff of clerks weeks to summarize and simplify material requirements across the many levels of product structure contained within the assembly bills of material. Before the BOMP, all material requirements were calculated and summarized manually. This is one subject Eli Goldratt discussed in the book Necessary, But Not Sufficient (NBNS). In this early system (BOMP), there were no financial restrictions on processing a bill of material, no organizational lines that the materials crossed, and no mark-up of inventory cost or cost allocation of any kind in the inventory planning process. The limited processing power of the computers used to calculate material requirements, and the limited methods for data storage and retrieval, restricted the number of transactions that could be calculated in a reasonable timeframe. So simplicity of calculation was necessary to accomplish this once very labor-intensive task.
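The core of what the BOMP automated can be sketched in a few lines: exploding a multi-level bill of material into total raw-material requirements. The product structure and quantities below are hypothetical, purely for illustration:

```python
# Minimal sketch of a multi-level BOM explosion, the task the Bill of
# Material Processor automated. Product structure is hypothetical.

from collections import defaultdict

# bom[parent] = list of (component, quantity per one parent)
bom = {
    "assembly":  [("sub_frame", 2), ("panel", 4)],
    "sub_frame": [("steel_bar", 3), ("rivet", 20)],
    "panel":     [("steel_sheet", 1), ("rivet", 8)],
}

def explode(item: str, qty: float, totals: dict) -> None:
    """Recursively roll component requirements down the BOM levels."""
    for component, per_unit in bom.get(item, []):
        required = qty * per_unit
        if component in bom:              # sub-assembly: keep descending
            explode(component, required, totals)
        else:                             # purchased item: accumulate
            totals[component] += required

totals = defaultdict(float)
explode("assembly", qty=10, totals=totals)
print(dict(totals))
# steel_bar: 10*2*3 = 60; rivet: 10*2*20 + 10*4*8 = 720; steel_sheet: 10*4 = 40
```

Note how the calculation needs nothing but quantities and structure: no costs, no organizational boundaries, exactly as the early system worked.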


Behind the scenes, the accounting function still calculated the cost of materials through the use of large journals in which material costs were collected and summarized for accounts payable, inventory valuation, and product cost. As computing power, data storage, and retrieval improved, the abilities of the software could also be improved. This improvement led to what was termed Material Requirements Planning (MRP I). A new addition, "time phasing," was now possible: with the lead-time of materials and the sequence of operations, planners could determine when materials would be needed, for a level-by-level view of inventory requirements. The other aspect added was perpetual inventory, so that orders (Purchase Orders and Work Orders) could predict what the inventory would be over time. Simple lot-sizing techniques such as Order Point and Min-Max methods allowed organizations to simplify the inventory ordering process (internal and external). More complicated lot-sizing techniques, such as Economic Order Quantity (EOQ) formulas, were also created so that inventory levels and investment in inventory could be optimized. Since lead-time was available, these systems could also create crude capacity models to check for overloads in areas and provide the ability to address future problems proactively. The basis of the system was infinite capacity, so there was no optimization of plans based upon limitations in the organization: if you entered the order, the system would plan it based upon the configuration rules you used. Still, no burden and overhead were added to the system, and again artificial or real organizational barriers did not restrict the planning of material and process. However, the conflict of sub-optimization of the system first became institutionalized in the lot-sizing techniques and static lead-time calculations.
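The lot-sizing formulas mentioned above are simple enough to sketch directly. The demand, cost, and lead-time figures below are hypothetical; the EOQ formula itself is the standard one, balancing ordering cost against carrying cost:

```python
# Sketch of the classic MRP-era lot-sizing rules: Economic Order Quantity
# and a simple order point. All input figures are hypothetical.

from math import sqrt

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Classic EOQ: sqrt(2 * D * S / H), where D is annual demand,
    S is the cost per order, and H is the annual holding cost per unit."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

def order_point(daily_demand: float, lead_time_days: float,
                safety_stock: float = 0.0) -> float:
    """Reorder when on-hand falls to demand over lead time plus safety stock."""
    return daily_demand * lead_time_days + safety_stock

q  = eoq(annual_demand=10_000.0, order_cost=50.0, holding_cost=4.0)
rp = order_point(daily_demand=40.0, lead_time_days=10.0, safety_stock=100.0)
print(round(q), rp)   # 500 500.0
```

Rules like these optimize each item in isolation, which is exactly the sub-optimization the text says was first institutionalized here.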
These lot-sizing and lead-time methods were accepted as a reasonable accommodation for material and capacity planning purposes. They provided a good enough view of the planning events, and operations could be organized around the plans. The effect upon the company was that it no longer needed an army of inventory planners to calculate material requirements. The computer could do what it does best: crunch the numbers and summarize the requirements. The emphasis moved from calculation to executing the plans. As time marched on and computer capabilities improved, new functions were again possible. This led to a system termed Manufacturing Resources Planning (MRP II), in which higher-level capacity planning was now possible, again in the infinite mode. In the previous generation of systems, the functions were not all connected, meaning that if data changed in one part of the system, the affected systems wouldn't automatically update. This was called "Open-Loop" processing. The system was heavily dependent upon the planners and schedulers to manually realign plans once something changed. In MRP II, these processing loops were all connected, and the system would suggest to the planners and schedulers to adjust the plans so they would be in phase with the required activities. This was called "Closed-Loop" processing. Finite scheduling was just starting to be used outside the MRP systems, but MRP finite capacity planning was still not fully developed. The improvements to the software caused other functions within the company to take notice: if all material and labor transactions were tracked within the system, the Cost Accounting calculations could be added to the system, simplifying the accounting process. Since Direct Labor transactions and overhead assignments to inventory costs were stored in the inventory master data, adding the financial aspects to the system was now possible. These were only on a single-company basis, or, if you looked at your multi-plant organization as a single entity, you could account for all transactions in an organization. Transactions across legal entities were typically handled by creating two separate companies; the transactions used for dealing with outside vendors and customers were used for intercompany transactions as well. Additionally, cost roll-up was possible, and basic financial reporting became part of the system. But in order to allow the system to process information more effectively, the software architects and programmers started to take some shortcuts in data processing to speed up these systems. One of these summarizations was having inventory transact into the system at a Standard Cost. To simplify the rolling up of product cost, the inventory cost would then be stored at the Standard Cost level, and so finding a Total Variable Cost (TVC) was no longer possible. In the actual transactions the costs existed, but since it was not a needed functionality, this summarization was again a reasonable accommodation to the user community and accounting community.
At this time, the summarization of cost was accomplished on a "level-by-level" basis, not "order-by-order." Moving across company lines was difficult, as the systems did not have much multi-company functionality; the computers and storage capabilities still limited much of the multi-entity functionality of the larger organizations. The General Ledger and other financial instruments were added to help with variances, because the data was perpetual and not on-line, real-time. Accounting processes had to be created to deal with the variances of the information systems. The inventory transactions took place based upon standards in the Item Master, and part of the end-of-month process was for accounting to deal with the variances this type of data summarization produced when reconciling the financial data. Again, this was a reasonable accommodation for the company, as it would speed up the individual company's financial closing process. It did require a level of scrutiny to make sure the "standards" were a good enough approximation of inventory value. Decisions to evaluate individual inventory items were now possible, and a company could compare internal costs with the costs of the outside world. This capability helped purchasing and operations in Make/Buy decisions. Of course, with the level of summarization the system was doing, these decisions were made at a sub-system level, and the effect upon the complete system was not fully understood.
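A small sketch makes the standard-cost shortcut concrete: inventory transacts at a fixed Standard Cost, and the gap between that and the actual cost falls out as a variance that accounting must reconcile at month end. All figures below are hypothetical:

```python
# Sketch of the Standard Cost summarization described above. Inventory is
# valued at a fixed standard from the Item Master; the difference from
# actual spend becomes a variance. All figures are hypothetical.

standard_cost = 10.00        # per-unit value stored in the Item Master
receipts = [                 # (quantity, actual unit cost) for the month
    (100, 10.40),
    (250,  9.80),
    (150, 10.10),
]

inventory_value = sum(qty * standard_cost for qty, _ in receipts)
actual_spend    = sum(qty * cost for qty, cost in receipts)
variance        = actual_spend - inventory_value   # what month-end must reconcile

print(inventory_value, round(actual_spend, 2), round(variance, 2))
# 5000.0 5005.0 5.0
```

The point of the sketch is what gets lost: once the 500 units sit in inventory at 10.00 each, the actual unit costs (and so the Total Variable Cost) are no longer recoverable from the inventory records.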


If a company had multi-site operations, internal Purchase Orders and Inventory Receivers would be created to handle the transfer pricing between business entities. Again, this was a reasonable accommodation to be able to transact materials across the organization from one entity to another. Since all the functionality of transfer pricing fit within the structure of buying materials outside the company, this was a good enough answer to the problem. The individual companies' financial reports could be consolidated at a higher organizational level, typically as another level in the organizational structure, or through the newfangled spreadsheet applications like VisiCalc and Lotus 1-2-3. Now one business unit could be compared with another. The ability to have the basic transactional accounting functions performed as a derivative of the operations and sales processes opened up a whole new market for the software companies: the software could now be sold as a solution to Operations and to Finance. As the systems continued to add more and more functionality, the accommodations enacted allowed each company to function as a standalone company, a Profit Center, with its own Profit and Loss statements. The structures of the companies were changed to accommodate the limitations of the systems, and policies and procedures were put into place to overcome these limitations. The greatest area for these accommodations was where material moved between different business entities. Even though it was just one company, the entities no longer acted as one company, but as many separate companies. The issue at hand is that the software companies kept building systems based upon the assumptions and accommodations of the past, and did not stop to reexamine these previous decisions. This further institutionalized the behavior in the companies buying the software.
There are many other issues, not detailed in this narrative, that have nothing to do with the evolution of information systems, such as the accommodations and approximations regarding "Cost," "Direct Labor," "Overhead," "Variances," "Budgeting," and "Forecasting." These also play into the fact that once the limitations were overcome, the structures built for the accommodations were rarely reexamined or changed. Again, these types of decisions were made based upon all the assumptions institutionalized in the information system from the previous generations, and we started to see our companies moving toward what Goldratt calls the "Hollow Company Syndrome": the vertically integrated company was rapidly becoming an institution of the past. For example, when Henry Ford designed the Ford Motor Company, vertical integration was one of its greatest strengths; in his autobiography, the time from when ore was mined until a Model T rolled off the assembly line was 88 hours. He was probably the first manufacturer to have the car sold before he paid for the raw materials. Today the modern automobile is many times more complicated than the Model T, but just how long is the supply chain to supply a vehicle? At least many hundreds of days, if not longer, for some manufacturers.

Then came Enterprise Resources Planning (ERP). Here is a system that has the processing power to link an enterprise together financially. However, this structure is still limited by all the accommodations and limitations that have become common practice in the modern enterprise. The legal business entity is another subject that causes organizational barriers. Many larger companies comprise multiple business entities. These companies have periods where they diversify their product offering by buying other businesses and bringing them under the corporate structure. Then, several years later, the opposite happens: when money is short, these same companies tend to refocus on their "core competency" and sell off the non-core businesses to strengthen the company's financial statements. The tendency is to keep the companies as separate legal entities for simplicity of measurement. This also makes it easier to buy and sell companies, as there is a natural break in the financial statements. The typical implementation of these systems starts with the configuration of the financial system first, and then addresses the rest of the organization. Sales and Operations are then subjected to the financial constraints imposed upon them, rather than to a logical structure for delivering value and service to the customers. This structure is by nature focused on measurement, not flow. A tool commonly used to graphically illustrate the two sides of a conflict is the conflict diagram, also known as a "cloud." In this case, we can see the classic cloud of measuring the system in isolation versus measuring the system as a whole. This is the fundamental difference between the Cost World view of the organization and the Throughput World view of the organization.


The cloud is read in the following way. Upper branch: In order to manage the organization well, I must (upper need) Control Cost; in order to Control Cost, I must judge decisions based on local impact. Lower branch: In order to manage the organization well, I must (lower need) Protect Throughput; in order to Protect Throughput, I must not judge decisions based upon local impact. The conflict: I cannot both judge decisions based on local impact and not judge decisions based on local impact at the same time. Thus I have a conflict between two important needs that have no reasonable compromise. This is why Throughput Accounting was created: to overcome the compromise and evaporate the conflict. If our information systems are built upon this conflict, how do we overcome it? Our systems have been designed to accommodate decisions made on a local basis, not a global decision-making process. There are legal and regulatory issues, such as Generally Accepted Accounting Principles (GAAP) and the Sarbanes-Oxley Act (SOX), that must be addressed in system design; however, these are not barriers to Throughput Accounting. It is my proposal to implement Throughput Accounting as the main data the organization operates under. All transactions would take place at Total Variable Cost, inside and outside the company. The parent organization would accommodate Tax, GAAP, and SOX compliance outside the transactional system. All internal reporting would follow Throughput Accounting methods, including the TA measures used to gauge local performance of the system: Throughput Dollar Days and Inventory Dollar Days. This is in an effort to insulate Operations and Sales from the distortions inherent in the accounting methods institutionalized in our ERP systems. The structure of the company would be multi-site and multi-company, with several legal entities; these structures would also be accounted for outside the formal system.
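The two local TA measures named above lend themselves to a short sketch: Throughput Dollar Days (TDD) penalizes commitments delivered late, while Inventory Dollar Days (IDD) penalizes inventory held too long. The order and stock data below are hypothetical:

```python
# Sketch of the two local TA performance measures: Throughput Dollar Days
# (reliability) and Inventory Dollar Days (effectiveness). Data is hypothetical.

def throughput_dollar_days(late_orders) -> float:
    """Sum of (throughput value of order) * (days late) over late orders.
    The target is zero: nothing delivered late."""
    return sum(t_value * days_late for t_value, days_late in late_orders)

def inventory_dollar_days(stock) -> float:
    """Sum of (inventory value at TVC) * (days held) over items in stock.
    Lower is better: inventory should not sit."""
    return sum(value * days_held for value, days_held in stock)

late_orders = [(2_000.0, 3), (500.0, 10)]    # ($ throughput, days late)
stock       = [(8_000.0, 12), (1_500.0, 45)] # ($ at TVC, days on hand)

print(throughput_dollar_days(late_orders))   # 11000.0
print(inventory_dollar_days(stock))          # 163500.0
```

Because both measures are expressed in dollar-days rather than allocated cost, a local unit can be judged without distorting the global picture.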
It is not that the software companies knew the evolution of the systems would make it difficult to accommodate TA. The evolution of the systems took 50 years to get to this point, in a series of reasonable accommodations to overcome true barriers to the organizations at the time the systems were built. As Einstein is said to have put it, the definition of insanity is to repeat the same behavior and expect different results. We could try yet another way to overcome the accommodations and assumptions of the system using the current structure, or we could start over from scratch. Neither of these is a good choice. If I have learned anything, it is that a compromise is the result of a misunderstanding of basic principles. We have all been trying to fit our view of Throughput Accounting into systems that were never designed to address the existence of TA. However, we cannot ignore the capabilities and processing power of these systems; we have to find a way to enhance how they are constructed. That is when I came to the conclusion that rather than build another add-on or custom reporting function, we should use the system's inherent ability to function as a system of flow, but construct the configuration in such a way that the accounting accommodations of the past 40 years do not push the configuration into areas where flow is subjected to artificial accounting and organizational constraints. Further, rather than have TA be an afterthought, have it be the primary driver of measurement for the organization. In this way, the operations and sales systems will be aligned for flow, and TA will support a holistic decision-support structure for the organization. With the power and flexibility of these systems, once these two aspects (Flow and TA) are constructed, financial accounting can then be subordinated to the system's flow characteristics. Does this overcome the assumptions built into the system? Basically, rather than implement accounting first, let's implement operations first. Base the implementation upon a view of the organization as one system; rather than a set of independent systems linked by accounting, let's link them by operations. The model in question will need a multi-site organization that has operations in two countries, with materials that begin in one site, then flow to other sites for completion, in some cases across borders, and then flow to a distribution system. The distribution system will have operations in the two countries. The plants will have similar capabilities in some production sites, and will have alternative routes.
The system will operate with TOC Replenishment, TOC Drum-Buffer-Rope scheduling, and, most importantly, Throughput Accounting that transcends all the entities into a holistic view of the organization. For purposes of this discussion I will choose a steel products company with Coil Processing (3 locations), Tube Mills (5 locations), Roll Forming (1 location), and Distribution service centers (7 locations). The figure below shows the supply chain network of the plants. Like many companies of this nature, there has been a series of acquisitions over the years, so the machinery and capabilities of the plants can be quite different; the age and condition of the machinery is also quite variable. The Distribution portion of the company was added by buying a network within a region of the market, and the two distribution centers in Canada were recently added to simplify the delivery process to customers, as these products had been delivered directly from the plants where they were produced, and other products had to be brought to the tube mill for consolidating deliveries to the customers.
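To make the TOC Replenishment mechanism mentioned above concrete, here is a minimal sketch of a buffer-status check of the kind such a system would run for each stocked item. The zone boundaries (thirds of the buffer, a common TOC convention) and all data are assumptions for illustration, not a specification of any particular product:

```python
# Sketch of a TOC Replenishment buffer check: buffer penetration determines
# the color zone used to prioritize replenishment. Zone boundaries (thirds
# of the buffer) and figures are hypothetical, a common TOC convention.

def buffer_zone(buffer_size: float, on_hand: float) -> str:
    """Classify buffer penetration into green / yellow / red / black."""
    penetration = 1.0 - on_hand / buffer_size   # fraction of buffer consumed
    if penetration <= 1 / 3:
        return "green"       # no action needed
    if penetration <= 2 / 3:
        return "yellow"      # replenishment already on its way is enough
    if penetration <= 1.0:
        return "red"         # expedite replenishment
    return "black"           # stock-out: throughput is being lost

print(buffer_zone(buffer_size=300, on_hand=250))  # green
print(buffer_zone(buffer_size=300, on_hand=120))  # yellow
print(buffer_zone(buffer_size=300, on_hand=40))   # red
```

A priority list sorted by penetration, deepest first, is all the distribution centers would need to decide what to replenish today.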


So, what do you see as the drawbacks to configuring a system in this way? Please feel free to send me your comments on this website.

