Up to this point I have covered application inventory as a cost-savings initiative, followed by a discussion of how an application inventory starts with a definition. In our specific implementation, we started with a base set of attributes. Some were very obvious, while others were necessary for managing some of our base enterprise capabilities. Items captured in a 1:1 (one-to-one) relationship to any single application were:

- Description
- Owning Group
- Status (or state of the implementation)
- Component/Module
- Alias (alternate naming; the key to our success)
- Data Classifications (for information security and control)
- Manufacturer (if purchased)

This was sufficient information for us to move along and begin consolidating data. As we engaged more and more teams and discovered localized stores of this data, our metamodel expanded to include a few more elements. Some of these also brought an associated increase in our own inventory tool's capability. As this capability was implemented, we were able to start turning off applications through consolidation (one of our key goals).

Additional items (one-to-one):

- Interface (consumed and provided)
- Type (of application)
- Product Line (for ease of grouping and management)
- Version
- Capability
- Importance (a tiered level detailing the impact to our company)
- Customer Located External (to Intel)
- End of Life Tracking (legal and recovery data)
- Cross-Site Consumption
- Cost (develop, host, support, license)
- User Count

Additional items (one-to-many):

- Customer Country/Region
- Disaster Recovery Details
- Contact
- Hosting Platform Name

We also had some further 1:M (one-to-many) related attributes which we cataloged in order to build out the metadata for each instance:
- Network Ports/Protocol
- Support Link (to external data)
- Technology Product
- Testing (results, for future enterprise releases)

Many of these are specific to how we do business inside our company; however, you might find value in some of our learnings.

As I mentioned, we discovered pockets of data and some little (and big) applications utilizing some of this data. It has become increasingly easy to implement an additional module that relates to and consumes the data from the larger metamodel. From an architecture standpoint, we need to be careful not to develop this into a "jack-of-all-trades" application that does everything for everyone.

Up to this point we still only capture data (and functionality) that is related to the application through a direct relationship. As an example, we associate the application with the network ports/protocols it uses, but not necessarily the networks it can pass across. We capture the hosting platform name but not the specifics of that host. Instead, we rely on interrelated systems to draw the larger picture of the whole enterprise.

Are we done? Not even close. As noted in our Intel Information Technology 2007 Performance Report (page 12), this application and the associated capabilities we are developing are having a big impact. During 2007 we were instrumental in the end-of-life of over 450 applications. The metadata we capture and maintain has helped us identify instances of duplication, as well as opportunities where support and consumption have dropped to the point where we can turn off the application.

In my next entry I will talk about how we were able to use two people and build an application in four weeks to solve this problem, and how that solution has been running non-stop for fifteen months with no downtime or impact to customers, all while increasing capability and usability through releases on average every two weeks.
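To make the shape of the metamodel concrete, here is a minimal sketch in Python of how the 1:1 attributes map to scalar fields and the 1:M attributes map to list fields, plus one way an alias attribute can surface duplicate inventory entries. All names here are illustrative assumptions for this post, not the actual schema or tooling described above.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Application:
    """Sketch of an application inventory record (illustrative only).

    Scalar fields model one-to-one (1:1) attributes; list fields
    model one-to-many (1:M) attributes.
    """
    # A few of the base 1:1 attributes
    name: str
    description: str = ""
    owning_group: str = ""
    status: str = "active"              # state of the implementation
    alias: Optional[str] = None         # alternate naming
    data_classification: str = ""       # information security and control
    manufacturer: Optional[str] = None  # if purchased
    # A few of the 1:M attributes
    hosting_platforms: list = field(default_factory=list)  # names only, not host specifics
    contacts: list = field(default_factory=list)
    network_ports: list = field(default_factory=list)      # port/protocol pairs

def find_duplicates(apps):
    """Group applications whose name or alias collide --
    one way an alias field can help spot duplicate inventory entries."""
    seen = {}
    for app in apps:
        for key in filter(None, (app.name, app.alias)):
            seen.setdefault(key.lower(), []).append(app.name)
    return {k: names for k, names in seen.items() if len(names) > 1}
```

For example, two records named "PayTool" and "Payroll System" that both carry the alias "Payroll" would be grouped together by `find_duplicates`, flagging them for a consolidation review.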
Future posts will cover some planned enhancements to get us through the next year and the further reduction in application inventory we are charged with. Have you had similar issues at your company? Do you currently face this challenge? I'm curious to hear some of those challenges and potential solutions.