For data warehouses holding terabytes of data, the key problem is moving the data from the large storage arrays into the servers that perform the processing. This is termed the "data bandwidth problem".
Oracle and HP came together and unveiled a new technology called Exadata. The key difference is that query processing moves closer to the storage, into a "programmable storage server".
This technology should eventually improve OLTP performance too.
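The idea of moving the query nearer to the storage can be illustrated with a small sketch. This is a toy model, not Oracle's implementation: filtering at the storage layer ships only matching rows across the interconnect, while conventional processing ships every row to the database server first.

```python
# Toy illustration (not Oracle's implementation) of pushing a filter
# down to the storage layer versus filtering on the database server.

def storage_side_scan(rows, predicate):
    """The storage server applies the predicate; only matches are shipped."""
    return [r for r in rows if predicate(r)]

def server_side_scan(rows, predicate):
    """All rows are shipped; the database server filters them."""
    shipped = list(rows)  # models full blocks crossing the wire
    return [r for r in shipped if predicate(r)], len(shipped)

orders = [{"id": i, "amount": i * 10} for i in range(1000)]
big = lambda r: r["amount"] > 9900

pushed = storage_side_scan(orders, big)            # ships only 9 rows
filtered, shipped = server_side_scan(orders, big)  # ships all 1000 rows
```

The query result is identical either way; what changes is how much data has to cross the "wire" between storage and server.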
Thursday, September 25, 2008
Thursday, September 4, 2008
Technology Transformation and Hiding
The latest trend seems to be "hiding technology" from the business.
Various layers of virtualization, service-oriented architecture, virtual appliances, and automated transformation and testing tools are making this trend dominate the current technology market.
Custom business applications and ERP-based applications are turning more and more service oriented, reducing their dependency on the underlying technical infrastructure.
There will be more and more automation of testing and migration tools at the infrastructure layers, be it the application server or the database. Does this mean we are going to have a virtual database and a virtual application server?
The pay-as-you-go model of technology sourcing is already here. Along with virtualization, the rental model, and fast transformation tools, each layer will hide its internal implementation and move towards an open-standards-based model.
Maybe we are not very far from service providers offering CPU capacity, memory, disk storage, and network bandwidth on a pay-as-you-go basis.
We should also see providers putting these basic services together to offer technology such as databases and application servers on the same model, as well as top-end business services like BI and analytics.
Let us wait and watch...
Friday, August 8, 2008
Quick Recap of 20 years 8/8/88 till 08/08/08
My association with computer software has just turned 20 years: I joined my graduate course in computer science on 8/8/88.
My first BASIC program
10 PRINT "Hello Prasad"
20 GOTO 10
went into an infinite loop.
An algorithm to interchange the values of A and B without using a third temporary variable:
A = A + B
B = A - B
A = A - B
This illustrates the classic trade-off: a program can be optimized to use the least possible memory or the least processor time, but rarely both at once. Optimizing for processor utilization tends to need more memory, and vice versa.
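The three-step swap above can be verified directly. A minimal sketch in Python (where integers are unbounded, so the intermediate sum cannot overflow as it can in fixed-width C):

```python
def swap_no_temp(a, b):
    """Interchange two numbers without a temporary variable, using the
    addition/subtraction trick: A = A + B; B = A - B; A = A - B.
    Note: in fixed-width integer languages the sum A + B can overflow;
    Python integers are unbounded, so this sketch is safe here."""
    a = a + b
    b = a - b
    a = a - b
    return a, b

print(swap_no_temp(3, 5))  # -> (5, 3)
```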
Fascinated that knowledge of the direction of current flow through a diode can build the basic logic of AND, OR and NOT, and from there a microprocessor with highly sophisticated processing logic, I studied digital electronics and microprocessor fundamentals with great regard. The von Neumann architecture of stored-program computers, digital electronics and Boolean algebra were my favorite subjects.
In 1991 I joined my Masters in Computer Science, studying algorithms, data structures, operating systems and assemblers. I developed an Intel 8085 two-pass assembler in C.
Interested in image processing, expert systems, and operations research and optimization, I elected Operations Research and Expert Systems in my Masters, and did a project using expert systems to provide training and advice for the purchase process.
In 1993 I joined the Indian Space Research Organisation and worked on the scheduling problem for low-Earth-orbit satellite operations, successfully developing the algorithm and implementing it in C to produce optimized operations schedules at the ISRO telemetry, tracking and command network.
In 1996 I moved out of scientific research and joined Tata Consultancy Services, implementing branch automation software at various banks in Bangalore. The main achievements during this period include installing the first ATM at UTI Bank Bangalore and implementing a voice response system at the same bank.
In 1997 I traveled to Belgium for a complex design involving a multilingual HR system, where an employee registered in one language, worked for an employer speaking another, and the payroll was processed by a company that wanted to see error messages in a third. I successfully implemented the solution using a CASE tool then called IEF, which was later renamed Composer and then Cool:Gen.
I traveled to Citibank Singapore and to Transco in the UK on different assignments, and was involved in implementing Domestic Competition Phase III at BG Transco, providing a supplier of choice to all 21 million households in the UK.
In 1999 I worked on providing integrated information systems to TPML and BEML at Bangalore, using varied technologies on UNIX and Windows platforms.
In 2000, back in the UK at National Grid Transco, I was involved in consolidating the UK-Link system architecture, originally partitioned across 9 Oracle databases, into a single Oracle database. Key challenges included migrating Cool:Gen and the database to newer technology, as well as getting the whole business process to work as before after the technology change.
I worked on the reform of gas metering arrangements and metering unbundling projects for Transco.
I technically led a team spanning multiple organizations, nationalities and cultures through one of the most complex spatial data migration projects. The project involved migrating legacy vector data in 56 layers to a new Oracle Spatial based ArcGIS system. Over 108 million geographical features were migrated, including over 28 million of Transco's gas pipeline network assets. As this was a safety-critical project, the work was audited internally and externally to ensure the accuracy of the migration.
In 2004, working as a Technical Architect for the unbundled metering companies TMS and OnStream, I provided complete as-is and to-be architectures and a high-level transformation strategy to the customer, along with tactical improvements and a strategic roadmap that improved SAP back-end system performance by 300%.
In 2005, after returning to India, I led the offshore TCS team implementing important changes to the metering business in the meter asset management area, delivering the changes to high customer satisfaction (98%).
In 2006 I left Tata Consultancy Services and joined Oracle Corp. in the Enterprise Management product suite as a Technical Lead.
I have worked with internal and external customers of Oracle to quickly adopt data center management solutions, realizing the benefits of a centralized management console and simplifying, standardizing and automating several monitoring and management activities.
Overall it has been a great experience, learning and working across various technologies, with people from different cultural backgrounds and corporate business processes.
It has been 7,305 days since 8/8/88, and every day is a new learning experience for me...
Tuesday, August 5, 2008
Oracle Database - Real Application Testing
When a pure technology upgrade is planned without any application change, or when performance improvements via patches or parameter changes are made to a critical production database, it is very important to test the patch, upgrade or parameter change.
Traditionally this requires a test infrastructure with database, application and web tier components.
This is not only expensive but also tedious to build, validate and use for testing.
The solution is RAT (Real Application Testing), which can capture the workload from the production database and replay it on a test database.
Advantages:
1. No application or web tier infrastructure needs to be set up
2. The testing can be done entirely by DBAs
3. Much superior to testing the application with simulated load or scripts
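In the database itself, RAT is driven through PL/SQL packages (DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY). The capture-and-replay idea can be sketched generically; the class and method names below are hypothetical and are not the RAT API:

```python
class WorkloadRecorder:
    """Toy capture-and-replay harness illustrating the RAT idea:
    record every call made against the production system, then replay
    the same calls, in the same order, against a test system.
    Names here are made up for illustration; real RAT is driven via
    the DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY packages."""

    def __init__(self):
        self.capture = []

    def execute(self, db, sql):
        self.capture.append(sql)   # record the production workload
        return db(sql)             # still run it against production

    def replay(self, test_db):
        # Re-drive the captured workload against the test system
        return [test_db(sql) for sql in self.capture]
```

The point of the pattern is that the replayed workload is the real one, not a synthetic approximation, so the test database sees production-shaped load.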
Thursday, June 26, 2008
Workaround - Best Practices
I find myself in a lot of situations where I need to provide a "workaround".
A few quick properties of workarounds:
A workaround generally leaves the root cause unattended. A conscious effort is required to get development to identify and fix the root cause of the problem.
A workaround is generally forgotten in the documentation and can lead to an unknown problem at a future time. (In most cases, another workaround nullifying the original workaround is then suggested.)
So:
Document the workaround and the possible root cause, and follow the process to get the original problem identified and fixed in the base product. Also,
clearly communicate with the customer and make sure they mention the workaround when logging any new related problem (or bug) against the product in the future.
Otherwise, the customer will have "workarounds", the support team a "nightmare", and the product "a lot of bugs".
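The record-keeping discipline above can be made concrete. A minimal sketch of a workaround log entry; the field names are illustrative, not any standard:

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Workaround:
    """Minimal record for the practice described above: every workaround
    is written down with its suspected root cause, the bug tracking the
    permanent fix, and who was told about it. Field names are made up."""
    summary: str
    suspected_root_cause: str
    tracking_bug: Optional[str] = None          # the "process" reference
    customers_notified: List[str] = field(default_factory=list)

    def is_closed_loop(self) -> bool:
        # A workaround is only safe if the real fix is being tracked
        # and the customer knows to mention it in future bug reports.
        return self.tracking_bug is not None and bool(self.customers_notified)
```

An entry that fails `is_closed_loop()` is exactly the forgotten workaround the post warns about.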
Friday, June 13, 2008
Of Memory Leaks and Core Dumps
Even a very well tested product can have these basic problems when finally put into production.
That statement is not an exaggeration; it is the truth. There are multiple software components, and multiple platforms on which those components run. For example, an embedded JVM may process the logoff signal from a remote desktop connection on a Windows server,
and a Java runtime bug can then cause the embedded application to crash and dump core on the Windows platform.
Errors like this are hard to debug and fix.
Memory leaks are another complex class of problems to debug and fix when they happen in a live/production environment. It is best to identify leaks during the development and testing phases. An example of such an error is ORA-04030, indicating out-of-process memory on an Oracle database running a production application.
When applications use connection pooling, hold connections for a long duration, and carry out complex and divergent transactions through those connections, it gets quite challenging to identify and fix the problems.
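The pooling failure mode can be sketched in miniature. This toy pool (illustrative names, no real driver) shows how callers that check out connections and never return them slowly exhaust a shared resource, much like per-session memory growth behind an ORA-04030:

```python
class ConnectionPool:
    """Toy pool illustrating a connection 'leak': a caller that acquires
    a connection and never releases it exhausts the pool over time."""

    def __init__(self, size):
        self.free = [object() for _ in range(size)]
        self.in_use = []

    def acquire(self):
        if not self.free:
            raise RuntimeError("pool exhausted: connections leaked?")
        conn = self.free.pop()
        self.in_use.append(conn)
        return conn

    def release(self, conn):
        self.in_use.remove(conn)
        self.free.append(conn)

pool = ConnectionPool(2)
c1 = pool.acquire()
c2 = pool.acquire()   # neither is released: the next acquire() will fail
```

In production the symptom appears far from the cause, which is why such leaks are so much cheaper to catch in testing.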
Bottom line: rigorous performance and scalability testing is crucial in a product development life cycle.
Thursday, May 22, 2008
Avoid Gobbledygook
Especially when reading proposals and technical documents, I come across a lot of gobbledygook.
Some examples from http://www.unfg.org/:
"The relevant modules of the project are, at the inter-governmental level, mapped onto local non-donor-driven poverty reduction and sustainable growth strategies."
"In the context of improving the legal and regulatory framework, the Member States will develop inter-agency-coordinated administrative targeted policy implementation aimed at mitigating negative effects of globalization."
It is best to keep things simple and write in plain English. The Plain English Campaign website has more information: http://www.plainenglish.co.uk/
...
Thursday, May 8, 2008
Managing Data Center With Multiple Networks
In this world of mergers, acquisitions and consolidation, large application hosting companies are being formed from several small hosting providers. This leads to diverse networks connected, and separated, by multiple firewalls and DMZs.
The challenge is to manage and standardize the whole data center in one consolidated view.
Possible monitoring/management solution options:
1. A centralized monitoring solution that can see all the assets, using the necessary policies on firewalls, proxies, etc.
==> Complex network management, but simple administration of the management solution.
==> Additional load on the WAN.
2. A distributed management solution on each network, with data consolidation to a central management repository.
==> The management solution needs to provide a data synchronization option, and there is a possible "time lag" between the local and central console views.
==> Maintainability of the solution is complex, but it involves minimal changes to network/security policies.
An ideal data center management product should provide the ability to configure both of the above options.
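Option 2 and its "time lag" can be sketched in a few lines. This is a toy model with made-up names, not any vendor's product:

```python
class LocalCollector:
    """Toy model of option 2: each network segment runs its own
    collector and periodically synchronizes to a central repository,
    so the central console can lag behind the local view."""

    def __init__(self):
        self.buffer = []

    def record(self, metric):
        self.buffer.append(metric)   # visible locally immediately

    def sync(self, central):
        central.extend(self.buffer)  # visible centrally only after sync
        self.buffer.clear()

central_repo = []
dmz = LocalCollector()
dmz.record({"host": "web01", "cpu": 85})
# At this point central_repo is still empty: that gap is the "time lag".
dmz.sync(central_repo)
```

The trade-off in the text falls out directly: the sync step is extra machinery to maintain, but only the collector needs firewall access to the segment it monitors.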
Monday, April 14, 2008
Service Oriented Analysis and Design
Business Process Modeling (BPM), Enterprise Architecture (EA) and Object-Oriented Analysis and Design (OOAD) are currently being brought together to turn the overall software architecture life cycle into Service-Oriented Analysis and Design (SOAD).
The focus is to analyze the business from a service-oriented perspective and get business-aligned, loosely coupled, coarse-grained services identified and documented.
The complete SOA life cycle involves:
Service Identification - Analysis
Service Specification - Design
Service Realization - Development & Testing
Service Operation - Implementation and Maintenance
The concepts come from object orientation but are quickly being adapted to "service orientation".
The life cycle is supported by service governance, service security and other manageability aspects as the supporting "technical" SOA framework.
Tuesday, April 1, 2008
Application Performance Management
End-user experience monitoring and management is becoming key for websites. As "software as a service" grows, it is more important to manage services from an end-to-end perspective. End-to-end performance monitoring and management has two primary methods.
Active or synthetic monitoring:
This needs a simulated user action recorded and played back from different geographies. One can install multiple hosts across the globe and run the simulated tests periodically to monitor and identify possible availability and performance problems with a web application.
This method can be used to identify common availability problems and actively detect any degradation in performance.
The disadvantage is that one can only monitor "predefined access paths" through the application. One also needs the simulated "agents" across different geographies, and must modify the access paths whenever the application changes.
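The core of an active probe is small: time one scripted request against a predefined access path and flag unavailability or slowness. A hedged sketch, with the fetch function injected so the logic is testable without a network (the URL and threshold are illustrative):

```python
import time

def synthetic_check(fetch, url, threshold_seconds):
    """Sketch of a synthetic probe: run one scripted request, time it,
    and classify the result. 'fetch' is any callable returning an HTTP
    status code; in a real probe it would drive a recorded user action."""
    start = time.monotonic()
    try:
        status = fetch(url)
    except Exception:
        return {"url": url, "ok": False, "reason": "unavailable"}
    elapsed = time.monotonic() - start
    if status != 200:
        return {"url": url, "ok": False, "reason": "status %d" % status}
    if elapsed > threshold_seconds:
        return {"url": url, "ok": False, "reason": "slow"}
    return {"url": url, "ok": True, "reason": None}
```

A scheduler running this from several geographies, against each predefined access path, is essentially what the hosted synthetic-monitoring services do.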
Real end-user monitoring, or passive monitoring:
This method involves mining the access logs of the application and analyzing the data to identify performance problems. It can also be done using "network sniffing". Whatever method is used to get the real end-user access data, this data is very valuable for the business to analyze both performance and general usage patterns in the application.
As our focus is performance monitoring, the disadvantage is that we only get to know of an issue after it has impacted users.
Several products on the market do both active and passive application performance management, and one should develop a strategy with the right combination of both methods to achieve the required service levels.
A holistic strategy should also include diagnostic tools within the solution.
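The log-mining side can also be sketched briefly: parse response times out of access-log lines and report a high percentile. The log format below is made up for illustration; real servers have their own formats:

```python
def p95_latency(log_lines):
    """Passive-monitoring sketch: extract response times from access-log
    lines of the (made-up) form 'GET /path 200 123ms' and return the
    95th-percentile latency in milliseconds, or None for an empty log."""
    times = sorted(int(line.rsplit(" ", 1)[1].rstrip("ms"))
                   for line in log_lines)
    if not times:
        return None
    index = max(0, int(round(0.95 * len(times))) - 1)
    return times[index]

logs = ["GET /home 200 %dms" % t for t in range(1, 101)]  # 1ms..100ms
print(p95_latency(logs))  # -> 95
```

Unlike the synthetic probe, this sees every real user path, but only after the fact, which is exactly the trade-off described above.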