Introduction:
A typical data center management solution will have a "data store" for storing
a. configuration data
b. monitoring data
At regular intervals, the solution collects this data from the managed environment and uploads it to the central data store.
Hosts, databases, listeners, application server instances, and so on are the managed entities that need monitoring and management.
Before implementing such a solution, every organization tries to "size" its infrastructure requirements.
Sizing involves estimating the solution's consumption of:
1. disk storage
2. memory
3. CPU cycles
4. network bandwidth
The Problem:
The complexity of sizing a data center management solution comes mainly from the lack of a clear definition of what is being monitored.
For example, a customer wants to monitor 10 databases on 10 servers (one database on each server).
Depending on the database version, whether it uses ASM for storage or sits on a cold failover cluster, and several other considerations, the number of managed entities will vary. A database can have a single tablespace with one datafile, or 10K tablespaces with 100K datafiles. If the customer wants to monitor database space usage by tablespace, a database with 10K tablespaces will produce 10K times more data than a database with one tablespace.
Each metric (a monitored data point) may be collected once every 5 minutes or once an hour; the collection frequency varies with customer requirements.
A simple formula: the number of metrics (counted at the lowest granule) multiplied by the number of collections per day gives the total number of metric values per managed entity per day. Multiplying this by the bytes required to store an average metric value gives the bytes required to store one day's metric values for that managed entity.
If the customer wants to keep this data at raw granularity for a week, the storage requirement is the number of days of retention multiplied by the bytes required to store one day's data. Multiplying that by the number of managed entities gives the total storage requirement for this type of managed entity (e.g., database).
The same exercise needs to be repeated for every type of managed entity to arrive at the overall storage requirement.
Collecting that many bytes over a day means transferring that data over the network from the host where the managed entity resides to the host where the management data store resides. That gives the network bandwidth requirement.
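To make this arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every input (300 metrics per database, a 5-minute collection interval, 50 bytes per stored value, 7 days of raw retention, 1,000 entities) is an assumed placeholder chosen only for illustration; plug in your own counts per entity type.

```python
# Back-of-the-envelope sizing sketch for one type of managed entity.
# All input figures below are hypothetical placeholders for illustration,
# not measurements from any particular management tool.

def daily_bytes_per_entity(num_metrics, collections_per_day, bytes_per_value):
    """Bytes of metric values produced by one entity per day."""
    return num_metrics * collections_per_day * bytes_per_value

def raw_storage_bytes(daily_bytes, retention_days, num_entities):
    """Raw-granularity storage for this entity type across the whole fleet."""
    return daily_bytes * retention_days * num_entities

# Hypothetical "database" entity type.
num_metrics = 300               # metrics at the lowest granule (e.g., per tablespace)
collections_per_day = 24 * 12   # one collection every 5 minutes
bytes_per_value = 50            # average stored size of one metric value
retention_days = 7              # keep raw data for a week
num_entities = 1000

per_day = daily_bytes_per_entity(num_metrics, collections_per_day, bytes_per_value)
storage = raw_storage_bytes(per_day, retention_days, num_entities)
avg_bandwidth_bps = per_day * num_entities * 8 / 86_400  # uploads spread over the day

print(f"per entity per day      : {per_day / 1e6:.1f} MB")
print(f"raw storage, 1000 dbs   : {storage / 1e9:.1f} GB")
print(f"average network load    : {avg_bandwidth_bps / 1e3:.1f} kbit/s")
```

With these assumed numbers the fleet produces roughly 4.3 MB per database per day, about 30 GB of raw storage for a week, and an average network load of a few hundred kilobits per second; the point is the shape of the calculation, not the specific figures.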
This data then needs to be rolled up into hourly and daily averages for historical trending. The space and processing requirements of this roll-up process need to be calculated as well.
Old data needs to be purged from the data store, which again consumes processing cycles.
Every managed entity also has a set of configuration data that needs to be collected, compared on a regular basis, and kept up to date. One needs to compute the resources required to collect, compare, and store the original and changed values of the different configurations as they evolve.
Managing all the "heartbeats" between the central management solution and the monitored hosts requires a proactive two way mechanism. This needs processing capacity.
Monitoring websites with synthetic transactions, by periodically replaying a pre-recorded transaction to make sure the application works, requires an additional set of processing, storage, and memory. This needs to be added to the above estimate.
A provisioning solution used to deploy new instances of databases and application servers will need the GOLD IMAGES stored somewhere. Patching enterprise applications needs a considerable amount of patch download space and propagation at regular intervals. This also needs to be considered.
Another thing to consider is the periodic tasks (jobs) executed to perform routine operations. Each job may produce a few KB of output that needs to be collected and stored in the central store for a period of time.
The next thing to consider is the number of users of the application, the number of concurrent users, and the amount of data they retrieve from the data store to perform their operations.
Collecting all this information, and adding the block header, segment header, and similar overheads of the physical structures in the target solution, is a complex task. By the time this exercise is almost complete, the tool vendor will have released the next version of the data center management tool with some modifications!!
The Solution:
It is nearly impossible to "size" any application to last-byte accuracy for storage, memory, network, and CPU utilization.
Instead, work with budgets: no managed entity should generate more than 25 MB of data in the monitoring store, and no managed entity should have more than 100 KB of configuration data.
So, allocating 250 GB of storage for 1,000 monitored entities and configuring the solution in such a way that it will not exceed this budget is the wise man's solution.
Assuming 1,000 monitored entities, a maximum of 100 administrators, and at most 30 of them logged in concurrently, an average OLTP database application with a 250 GB database and 24 * 4 * 1,000 transactions per day (4 transactions an hour, i.e., a 15-minute collection frequency, times 1,000 entities) would require a 2-processor database server and a 2-processor application server with 4 GB of RAM each.
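The budget-based approach can be sanity-checked with a few lines of arithmetic. The entity count, per-entity caps, and 250 GB allocation come from the discussion above; the check itself is a rough sketch, not a sizing guarantee.

```python
# Sanity check of the budget-based sizing described above.
# Entity count, per-entity caps, and total allocation are taken from the text;
# everything derived is simple arithmetic, not a measured workload.

NUM_ENTITIES = 1000
MONITORING_CAP_MB = 25        # maximum monitoring data per managed entity
CONFIG_CAP_KB = 100           # maximum configuration data per managed entity
ALLOCATED_STORAGE_GB = 250    # storage set aside for the whole solution

monitoring_gb = NUM_ENTITIES * MONITORING_CAP_MB / 1024
config_gb = NUM_ENTITIES * CONFIG_CAP_KB / (1024 * 1024)
capped_data_gb = monitoring_gb + config_gb

# 15-minute collection frequency => 4 uploads per hour per entity.
uploads_per_day = 24 * 4 * NUM_ENTITIES
uploads_per_second = uploads_per_day / 86_400

print(f"capped monitoring + config data : {capped_data_gb:.1f} GB of {ALLOCATED_STORAGE_GB} GB")
print(f"headroom for rollups, job output, patches, gold images: "
      f"{ALLOCATED_STORAGE_GB - capped_data_gb:.1f} GB")
print(f"upload rate into the data store : {uploads_per_second:.1f} per second")
```

With the stated caps, raw monitoring and configuration data consume roughly a tenth of the 250 GB allocation, leaving generous headroom for roll-ups, job output, patch staging, and gold images, which is consistent with treating 250 GB as a budget rather than a precise estimate.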
Starting with this configuration and implementing the solution with an iterative model is the best approach to balance up-front sizing against the risk of impacting the service level of this critical service.
Conclusion:
In today's world of virtualization and cloud technology, it is easier to build scalable, grid-like applications, and scaling solutions horizontally is a trend that is here to stay and will pick up even more momentum. Data center management solutions are not exempt from this trend. For every next 1,000 monitored entities, we will have to add a node to the DB grid and another node to the AppServer grid, along with additional storage to the storage grid.
IS IT NOT??
Wednesday, December 9, 2009
Wednesday, December 2, 2009
Improved Application Performance Management
In April 2008 I wrote this post: Application Performance Management
With
a. Oracle Enterprise Manager 10.2.0.5 Service Level Management,
b. REUI 6.0,
c. SOA Management Pack, and
d. Diagnostic Pack for Middleware, the solution is available.
An enterprise can now implement a completely integrated APM solution based on the above four components of Oracle Enterprise Manager.
The new capabilities of REUI 6.0 include:
a. Full capture and replay of end user sessions.
b. Ability to link a slow-performing JSP to the overall configuration in the SOA Management Pack and then jump directly into the diagnostic tool (AD4J)
c. Customizable dashboards from the collected data.
What is REUI? "Real End User Experience Insight"! And what is 6.0? The new version, released on 1 December 2009.
The ACTIVE MONITORING functionality is already provided by the Enterprise Manager Service Level Management pack.
We hope these new tools will truly improve the Real End User Experience of the web applications.
Tuesday, October 20, 2009
Exadata V2 - World's First OLTP database machine
Last year, the Exadata Database Machine was announced, improving data warehouse performance by moving database query processing into the storage layer.
This year, the new Exadata V2 has proven to be the fastest database machine for OLTP applications.
The trick lies in the flash solid-state storage and the 40 Gbps InfiniBand internal network fabric used between the database servers and the storage servers. The full rack contains 8 database servers and 14 storage servers; other configurations are also available.
All the flash storage (600 GB per storage server) is viewed as extended memory of the servers, and each database server has 72 GB of DDR3 memory as cache....
Guess what, an open challenge to IBM can be found at http://www.oracle.com/features/exadatachallenge.html
So, why not try out for a $10 Million prize?
Friday, September 4, 2009
Oracle Open World 2009 a mega event
Three years back, I was at Oracle Open World 2006, staffing a demo pod on "Deployment Best Practices for Enterprise Manager".
This year I am back at the event, presenting one session and three hands-on lab sessions to attendees.
Link to the Sessions
S309533 - Applications Management Best Practices with Oracle Enterprise Manager
Oracle Enterprise Manager Hands-on Lab: Oracle Application Testing Suite
Oracle Enterprise Manager Hands-on Lab: Oracle Data Masking Pack
Oracle Enterprise Manager: Oracle Real Application Testing
Event dates: October 11-15 2009
Catch me there!!
Wednesday, September 2, 2009
Oracle Database 11g R2 released
Oracle has released the latest version of the world's most widely used database!
Key new features are:
"Shared Grid" with server pools for consolidating multiple applications into a single RAC cluster
"RAC One Node" for consolidating less critical application systems transparently get the benefits of easy movement etc.,
"ASM Cluster File System (ACFS)" in the storage management
"Instance Caging" for confining the database to specific cores in a SMP environments
"In-Memory Database Cache (IMDB Cache)" for moving the OLTP processing to the middle tier
Capabilities to use the (otherwise idle) Disaster Recovery infrastructure for reporting and backup operations
It has several security enhancements as well.
This Oracle White paper has more details.
It is nice to hear about the new features from Tom Kyte. Being with Oracle, I had the opportunity to hear about the features directly from the "Architect" of those features!
Saturday, August 8, 2009
Another Year....
Another year gone by, making it 20 + 1 = 21 years of association with computers.
This year is mostly focused on data center software infrastructure provisioning, public and private clouds, and application quality management and application performance management solutions.
Tuesday, July 21, 2009
Operations Management vs Quality Management
One of the challenges Information Technology faces eternally is to:
a. reduce the operational costs of a system/application in production
b. improve the quality of system/application in development
They are interrelated: if a company manages one of these areas perfectly, the other area becomes redundant.
Let us assume company A has perfect operations management. It can operationally manage any sort of product implemented, no matter how many bugs the product has; the operations management process and techniques are so strong and efficient that the business is never impacted by bugs in IT. This is the nirvana of IT asset operations management: all systems are proactively monitored, capacity problems are predicted up front and fixed before hitting the limit, and all the business KPAs are always met, and that too within the limited budget of operations management.
Let us assume company B has perfect quality management. Right from architecture through implementation, all the quality processes are perfected and the product goes live with absolutely no defects. Once implemented, there is no need for any operational management: absolutely no monitoring is required, there will be no performance issues, and there are no bugs to fix.
The "Perfection" in one of the areas means the "Redundancy" of the other area.
Company A's method of perfecting operations is a "short term solution" until all the applications are replaced through company B's method of perfect quality.
The FACT is, none of today's corporations have the patience to perfect either one of them. They concentrate on "improving" both areas slightly, in cycles. This attitude leads to never-ending cycles of replacing the applications and continuing to spend a lot of money maintaining them.
When management starts to reduce operational costs without any improvement in the QUALITY of new IT implementations, the quality of operations will go down and impact the business KPAs.
So, the sure method of reducing operational expenditure is to perfect the quality of IT systems and applications. That is the only way....
Wednesday, June 17, 2009
Private Clouds Again
As mentioned in the previous blog post on Private Clouds, vendors are coming out with products built on this technology.
IBM has launched CloudBurst, and HP the BladeSystem Matrix.
All three types of private clouds look promising at the moment: IaaS, PaaS, and SaaS. The service-oriented, self-provisioning model of the cloud and pay-as-you-go billing are key success factors for clouds.
Monday, June 15, 2009
Google Fusion Tables
Q. Is Google experimenting with "Clouding" the "database"?
It looks to be.....
One can now upload tables of up to 100 MB each, with a total limit of 250 MB as of now, to this new experiment.
It is surely a step toward taking spreadsheets to the cloud. But will it also pose a problem for the BIG database vendors in the future?
Let us wait and watch.....
To know more about Fusion Tables, go through the FAQs and play around.
Thursday, June 4, 2009
Next dimension in search - Google Squared
Google has released a new search tool, Google Squared. It provides structure to unstructured data: it searches for all the related information and puts the data into a "Square", a typical spreadsheet format. One can add rows and columns to it and save the Square.
It looks very good at first glance. All the best, Google!!
http://www.google.com/squared
Wednesday, May 13, 2009
Private Clouds
An interesting trend emerging is the concept of "Private Cloud" in the Data Center Management Area.
This trend eventually combines the strengths of
a. Standardization of overall infrastructure based on open standards
b. Virtualization of server resources
c. Seamless provisioning of applications
d. Pay-as-you-go billing models and charge back of IT resources based on consumption
e. Services oriented architecture (SOA)
Their main advantages over the "PUBLIC clouds" are
1. less risk of security related issues
2. more control over the network bandwidth and availability
In my opinion, some big IT houses will adopt "Private Clouds" in the short term to get the benefits of cloud computing while keeping control in-house. As the technology advances and matures, it will also benefit the PUBLIC CLOUD providers!
Thursday, April 23, 2009
Desktop Widgets
"Desktop Widgets" is an emerging design pattern in delivering desktop applications with a Rich Interface to the Internet applications running on different client environments like Windows, Mac and Linux.
Adobe AIR provides a runtime environment for running such applications.
How will it compare with the browser? As per Adobe, the comparison is here.
As per the AIR blog, it passed 100 million installations a few months back!!!
Tuesday, March 3, 2009
Oracle Enterprise Manager 10g Release 5
The latest release of Oracle Enterprise Manager 10g, called Release 5 or OEM 10gR5, has been released... It is more stable and more manageable, with new features across overall enterprise management.
Wednesday, February 25, 2009
New Technology and its Manageability
However great a new technology is, it requires a manageability solution along with it. "Manageability" is the overall ease of dealing with the technology, from its commissioning until its decommissioning.
Even with great manageability features, Oracle RAC took some time to move into mainstream production. [Ref.: a Gartner report on Oracle RAC claims it took nearly eight years]
In that case, what is the importance of the "Manageability" of an "Enterprise Manageability Solution"???