Blog Moved

Future posts related to technology are directly published to LinkedIn
https://www.linkedin.com/today/author/prasadchitta

Friday, December 17, 2010

Integration - Centralized or Distributed?

Let us imagine a big greenfield IT program that implements multiple packaged products and custom-developed applications for an enterprise.

Multiple software vendors, package implementation partners and custom development teams are involved in the complete life cycle: requirements analysis, architecture/design, code and test, integrate, and deploy to production.

In such a scenario, how should interface development and integration be handled?

Option 1: Centralized Integration Development
1. Agree on the interfaces as step 1 of the program.
2. Let an independent group of people (let us call it a center of excellence) do the integration/interface work.
3. As the different products and applications are deployed, the integration team will integrate them.

Option 2: Distributed Integration Development
1. Only the integration back end (say, an ESB) is controlled by a central team.
2. Each team working on the different projects of the program has its own interface development team.
3. As the different products and applications are deployed, the respective teams integrate them with the central ESB.

Which of the above options is better?

In my personal opinion, all the integration-related activities are better done from a central COE.

A central COE can share best practices, identify core patterns and leverage skills across the program.

The challenges in the COE model are acquiring and building the right skills and right-sizing the COE for the program.

Any thoughts or ideas?

Monday, December 6, 2010

Historize or Roll-Up

Information Technology is all about acquiring, processing, storing and presenting "data" to the right people at the right time, enabling them to derive meaningful information and, in some cases, useful intelligence or insight out of that data.

In a typical business system, a data point is captured only when a transaction changes the data. So each change to the data is captured, validated and stored. Such systems are called OLTP or On-Line Transaction Processing systems.

But when data is acquired at regular intervals, as in typical process control systems, not all the data points need to be stored. For example:
a. the utilization of a system processor sampled at 5-second intervals
b. the temperature of a steam turbine scanned at 400 ms intervals

In a typical process control system there will be several thousand such data points, scanned at very high frequency, typically every second.

What to do with all this data?
Storing all this data in raw format in a standard relational database is simply impractical. So there are two methods to make some sense out of such "time series" data.

1. Historize the data using a process historian. A process historian uses a compression algorithm that stores a data point only when there is a deviation beyond a set limit, using a variety of algorithms such as straight-line interpolation (SLIM1, 2, 3) or swinging door compression, to achieve a high degree of compression when storing the time series data.

2. Roll up the data on a periodic basis (e.g., hourly data for a few weeks) to store the max, min, average, standard deviation etc. values in one single record per data point. A next-level roll-up can then happen for a longer time interval (e.g., daily data for several years). This multi-level roll-up data can be used for historical trending purposes. A minimal sketch of both approaches follows below.
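To make the two ideas concrete, here is a minimal sketch in Python (my own illustration, not the algorithm of any historian product; all sample values are made up): a simple deadband filter that stores a sample only when it deviates beyond a set limit, and a periodic roll-up that keeps one summary record per interval.

```python
import statistics

def deadband_compress(samples, deviation_limit):
    """Store a (timestamp, value) sample only when it deviates from the
    last stored value by more than the limit. A crude stand-in for the
    far more sophisticated SLIM / swinging-door algorithms of a historian."""
    stored = []
    for ts, value in samples:
        if not stored or abs(value - stored[-1][1]) > deviation_limit:
            stored.append((ts, value))
    return stored

def roll_up(samples, interval_seconds):
    """Roll raw samples up into one summary record (min/max/avg/stddev)
    per time interval."""
    buckets = {}
    for ts, value in samples:
        buckets.setdefault(ts // interval_seconds, []).append(value)
    return [
        {
            "interval_start": bucket * interval_seconds,
            "min": min(values),
            "max": max(values),
            "avg": statistics.mean(values),
            "stddev": statistics.pstdev(values),
        }
        for bucket, values in sorted(buckets.items())
    ]

# Hypothetical turbine temperatures sampled every second for one minute.
raw = [(t, 450.0 + (t % 10) * 0.05) for t in range(60)]
print(len(deadband_compress(raw, deviation_limit=0.2)))  # far fewer than 60 points
print(roll_up(raw, interval_seconds=30))                 # two summary records
```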

Both methods have their advantages. Recently I saw a patent on dynamic compression of system management data which, interestingly, combines historization with multiple compression algorithms for storing systems management data.

Sunday, November 28, 2010

Energy Storage - Pumped Storage Hydro

Q: How to store the energy?
A: In its potential form...

During off-peak hours of demand, electricity produced by a hydroelectric plant can be used to pump water to a higher reservoir using a reversible turbine/generator assembly.

Non-conventional sources such as "wind turbines" or "solar cells" can also be used to pump water to the higher reservoir, thereby making their electricity production more reliable when connecting the generated energy to the electricity grid!

Once the water is available in the higher reservoir, electricity can be generated as and when needed, depending on demand.

This is the largest-capacity form of stored (potential) energy available for grid-connected storage.
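As a rough, purely illustrative calculation (every figure below is an assumption, not the data of any actual plant), the recoverable energy is just the potential energy m x g x h of the water in the upper reservoir, derated by the round-trip efficiency:

```python
# Back-of-envelope potential energy of a pumped-storage reservoir.
# All figures below are assumptions for illustration only.
water_volume_m3 = 3_000_000          # usable volume of the upper reservoir
density_kg_per_m3 = 1000             # density of water
g = 9.81                             # gravitational acceleration, m/s^2
head_m = 300                         # height difference between reservoirs
round_trip_efficiency = 0.75         # assumed pump/generate cycle efficiency

energy_joules = water_volume_m3 * density_kg_per_m3 * g * head_m * round_trip_efficiency
energy_mwh = energy_joules / 3.6e9   # 1 MWh = 3.6e9 J
print(f"Recoverable energy: about {energy_mwh:,.0f} MWh")
# ~1,840 MWh here, enough to run a 1000 MW plant for a little under 2 hours.
```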

In India we have one 1000MW plant at Tehri Dam.

In my personal opinion, this is one of the best and most environmentally friendly solutions for storing electricity!

Thursday, November 18, 2010

smarter grids

I was at a conference on Smart Grids in Delhi last week. A BA student asked me, "What is this smart grid all about?"

I will try to answer this question as simply as I can in this post.

Electricity is traditionally supplied to consumers from generating plants over the "power GRID". The interconnected network of generating plant -> transmission grid (high voltage) -> distribution grid (medium voltage) -> consumer (at 220 volts) is the power grid.

Because of the amount of investment needed for this core GRID infrastructure, it was typically a public sector (government) agency that built the grid, and the way consumers buy or pay for the energy is dictated by the utility company.

As infrastructure development slowly got de-regulated, more and more private players entered power generation, and "open access" to power across the grid is being made available to consumers.

"Energy" is freely available in the nature in the form of Sun light, wind, tidal, flow of a river, coal, nuclear etc., forms but making it available in a house to the lights, fans, ACs and several electrical appliances like pump sets etc., costs money.

Energy has an initial "conversion" cost, i.e., converting fuel to heat in a boiler, heat to the mechanical movement of a turbine, and the mechanical energy to electricity in a generator. Then it needs to pass through the transmission and distribution channels to the consumer, which adds a "transportation and distribution" cost. Finally, consumption at each end point is measured by a meter, which involves the costs of metering, billing and collecting the bills from the final consumer through channels such as collection kiosks, bank direct debit and so on.

How is all this related to smartness of the GRID?

A customer does not have much of a role in the whole process... he uses the electricity and pays the bill monthly... It is a one-way flow of information and energy!

As the demand for energy increases and fossil fuels like coal become less available, one should think of methods to use energy wisely. A smarter grid is an enabler for wiser generation, distribution and consumption of energy.

A "smart meter" that can measure more accurate consumption and power quality at more frequent interval (e.g every one hour) gives a better understanding of consumption patterns for the consumer and utility, there by the energy pricing can be more dynamic as different price can be applied for peak hours and another price for non-peak hours.

A "smart meter" enables micro grids. Where ever possible small commercial customer can have their own "solar" or "wind" generation capability and when they have excess of energy they can reduce the consumption from the grid or even feed the energy back to the grid. So, the meter can measure two way i.e., import and export of the energy from and to the grid.

This makes the distribution grid more responsive to the demand and supply scenario. A distribution grid should instrument each transformer, feeder and substation to measure power quality and other key parameters, so that the equipment is used efficiently, asset maintenance costs and equipment failures are minimized, and the energy lost in transportation is kept to a minimum. (Even today, a significant share of the generated energy never reaches any customer!)

A smart grid is an interconnected set of devices that can exchange information along with energy, in both directions, for the purpose of making intelligent decisions about the wise consumption of energy/utilities.

The customer is better informed about the price of the energy he is using at different times of the day; the utility can remotely control some high-power equipment at the customer premises; and information is exchanged between the customer, distribution, transmission and generation entities in a bi-directional, automated mode.

All of this will finally lead to conserving energy, managing the demand and supply balance wisely, and making planet Earth greener! That is the whole idea of smarter grids. Advancements in telecommunication and information technology enable the transformation of power grids into smarter grids.

I hope this post is simple and smart enough to explain what a "Smarter Grid" is to a novice.

Sunday, October 10, 2010

Entering 888th work week

"If you can't explain it simply, you don't understand it well enough." — Albert Einstein

I have completed 17 years of working in the Information Technology industry. In these past 17 years I have realized the essence of the above statement by Einstein.

Most of the time, people can't explain things simply. The problem the IT industry is facing today is that the focus on "Information" has been lost completely and the whole industry is running behind "Technology"!

I wish all the best to the industry on this special day of 10/10/10 at 10AM IST.

Friday, October 8, 2010

Review of "Quality Reviews"

Review means a "critical inspection" of a product or a process by someone who was not involved in the creation or regular operation of it.

Review is considered to be one of the important methods of Quality Assurance (QA) of software systems.

Review in software product / development:
Now, let us take a "review" of the review process in software development.

1. A software deliverable is architected as per the business requirements. The software architecture is reviewed by a reviewer to assure that the architecture meets the functional and non-functional (scalability/performance/interoperability etc.) requirements of the business.

2. Software deliverables are designed as per the architecture. The design is reviewed by a reviewer, again to assure that the design meets the architecture.

3. The design is implemented in software programs (code), and the code is reviewed with reference to the design.

4. Each unit of code is also tested by the developer or an independent tester to make sure the code meets the design specifications!

5. All the different units of software are integrated to realize the overall architecture of the software product. The integrated product is tested for functional and non-functional requirements.

6. The product is released to the customer.

So, review is a preventive measure of quality assurance that helps avoid injecting bugs, whereas testing is a reactive method of finding and fixing bugs after they have been injected into the product.

Why does software still have bugs?

In my opinion, the primary cause is the lack of "self-review" of the work products delivered by individuals. Each individual should be encouraged to develop a culture of reviewing the work product (be it an architecture, a design, a unit of code, a test spec, or anything else) before it goes out into the software development process chain. Once an organization succeeds in this, quality will automatically improve and the cost of QA will reduce considerably.

Note: QA managers need not worry... They still have their jobs safe!! Guess WHY?


Review in process outsourcing:
Looking from the different perspective of BPO or ITO scenarios, where producing a "deliverable unit" costs nearly the same as reviewing it, the review adds a 100% overhead and is not a solution for quality assurance. The philosophy there should be to drive the workforce to "do it right the first time" within the execution of the process.

But a business process review and transformation to:
a. identify the possible scope of improvement,
b. evaluate the impact of change, and
c. implement the proposed transformation in a smooth manner

will help the process become more efficient, with improved productivity and customer satisfaction.

Conclusion:
a better process --> a better quality product.

Hence one should "review" the process first to improve process efficiency, and only then focus on "review" of the individual products produced by the process, for better quality.

Software development is also a process! Increased focus on reviewing the wrong thing will not result in improved quality.

Friday, October 1, 2010

Binding Energy of Software Systems

When we study the basics of atomic physics, we learn that "a bound system typically has lower potential energy (i.e., mass) than its constituent parts." In simple words, the total mass of all the nucleons is more than the mass of the nucleus they form. This mass deficit, when converted to its energy equivalent, is called the binding energy. That is the "force" which keeps the system together and does not let the constituent components fall apart!
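As a quick worked example of the physics (standard textbook values, rounded), here is the binding energy of a helium-4 nucleus computed from its mass defect:

```python
# Mass defect and binding energy of helium-4 (approximate atomic masses in u).
m_hydrogen_1 = 1.007825   # atomic mass of H-1, so the electron masses cancel out
m_neutron    = 1.008665
m_helium_4   = 4.002602
U_TO_MEV     = 931.494    # energy equivalent of 1 atomic mass unit

mass_defect = 2 * m_hydrogen_1 + 2 * m_neutron - m_helium_4
binding_energy_mev = mass_defect * U_TO_MEV
print(f"Mass defect: {mass_defect:.6f} u")
print(f"Binding energy: {binding_energy_mev:.1f} MeV")   # about 28.3 MeV
```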

So, in software terms, this is the effort that has gone into the "integration" of the different system components into their final form as a business application.

Traditionally, integration has followed different models in software systems.

Silos: Multiple software components are developed on a specific technology / programming language such as COBOL, C, etc., and are integrated vertically using procedure calls and RPCs. It is difficult to integrate with a component outside that technology.

Star/Spaghetti, i.e., point-to-point integration: In this method, different components of a business application talk to each other using flat files or other methods. As each need for integration arises, the necessary interface is developed and deployed on both of the interfacing components of the business application. Soon we have a very complex spaghetti that is very difficult to maintain.

Hub-and-spoke based EAI: To overcome the standardization problems of point-to-point interfaces, each component talks a "common language" with a central HUB that mediates all integration between the enterprise business systems in that common language. This technology has produced several standard adapters for common business components.

Enterprise Service Bus (ESB): This is the most modern integration technique available today. The key difference is that the central HUB is replaced with a more open set of protocols that can integrate business components beyond a single enterprise. It is more open and allows more loosely coupled, heterogeneous components to talk to each other by providing more sophisticated "translation" services to them.
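One way to see why the spaghetti model does not scale is a quick back-of-the-envelope count: with n components, point-to-point integration needs on the order of n(n-1)/2 interfaces, while a hub or ESB needs only about n adapters. A small sketch:

```python
# Number of interfaces to build and maintain as components are added.
def point_to_point(n):
    return n * (n - 1) // 2   # every pair of components gets its own interface

def hub_or_esb(n):
    return n                  # one adapter per component, to the hub/bus

for n in (5, 10, 20, 50):
    print(f"{n:>3} components: point-to-point={point_to_point(n):>5}, hub/ESB={hub_or_esb(n):>3}")
```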

So it is important that sufficient "binding energy" is built into the enterprise business system, and that the correct structure/method is used for integration, to keep the software strong (for operations) and flexible (for tackling changes) all the way through its lifetime.

For more information on binding energy (atomic physics) : http://en.wikipedia.org/wiki/Binding_energy

For more on Patterns in Business Service Choreography using ESB (IBM redbook): http://www.redbooks.ibm.com/redpapers/pdfs/redp3908.pdf

Monday, September 20, 2010

Condition Based Maintenance

CBM, as it is well known in Enterprise Asset Management terms, has been a buzzword for some time now.

This post aims at making CBM simple to understand.

Any "Asset" would need maintenance to give its optimal performance. Traditionally there are two strategies to manage the "Maintenance"

1. Wait till the equipment fails, and as soon as it fails, fix the equipment/asset. This is "reactive" maintenance (corrective maintenance). The disadvantage is the unplanned downtime; you do not want your car to break down before the maintenance is done, do you? But the advantage is that we fix only what is required, making it a very "cost effective" mode of maintenance.

2. Maintain the asset on a periodic schedule (preventive maintenance), just like a car going for a service after every 5000 km or every 6 months. This will make the asset perform without a breakdown, but the maintenance is expensive: we change the oil etc. even when it could be used for a further period. It amounts to higher maintenance costs.

So there should be a "smarter" approach to asset maintenance, shouldn't there? That is called "predictive maintenance" or Condition Based Maintenance. If we instrument the asset with sensors that can exactly "measure" some key parameters, those measurements can be used to compute the actual condition of the asset's health and recognize certain "conditions" before they lead to a failure. Using this information, we can carry out the required "maintenance" of the asset. This approach will surely help reduce maintenance costs and asset failures.

But it adds additional instrumentation of sensors, data collection, and processing of the data to "intelligently" identify the conditions. When all this additional instrumentation, data collection and processing is considerably cheaper than the maintenance expenditure, and it avoids a breakdown, it is worth implementing. Achieving this in a "smart" way is the key!
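A toy sketch of the idea (the thresholds and sensor readings are invented for illustration, not taken from any real EAM/CBM product): watch a rolling average of a sensed parameter and raise a maintenance work order when it crosses an alarm limit, before the asset actually fails.

```python
from collections import deque

def monitor(readings, window=5, alarm_limit=80.0):
    """Alert as soon as the rolling average of the last `window` readings
    exceeds the alarm limit -- a crude condition-based trigger."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window:
            rolling_avg = sum(recent) / window
            if rolling_avg > alarm_limit:
                return (f"Reading {i}: rolling average {rolling_avg:.1f} exceeds "
                        f"{alarm_limit}; raise a maintenance work order")
    return "No maintenance needed yet"

# Hypothetical bearing-vibration trend drifting upward over time.
vibration = [60, 62, 61, 63, 70, 75, 79, 83, 86, 90]
print(monitor(vibration))
```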

EAM with CBM will surely improve the operations of any "Asset Based" organization.

But when we look at God's own creation, we see CBM everywhere... The macrocosmic system of the "world" and the microcosmic system of the "body" both implement CBM to its nirvana. Both systems sense the conditions, process them, present them in a meaningful way, and they also self-HEAL the condition naturally, which is the nirvana of "Asset Management"!!

Tuesday, September 14, 2010

My Technical White Paper on oracle.com

This was the last piece of work I did at Oracle before I left the company in June 2010. The link to the published paper: http://www.oracle.com/technetwork/oem/app-mgmt/twp-next-gen-siebel-monitoring-164300.pdf

It is always a pleasure to see a customer happy after implementing the solution, making a comment like this:

“We use RUEI on technical administration to get information, how the application “feels” on customer side and for management point of view to get the information of SLA agreements in fact of guaranteeing a dedicated response time to our customers. We verify the SLA with RUEI KPIs which are implemented in a management dashboard.”

I came across this paper on oracle.com by chance and thought of sharing it on my blog!

Monday, September 13, 2010

Enterprise Solution Architect

My teenage son sometimes wonders about what I do in the office to make my living....

Let me try to define the phrase "Solution Architecture". To do so, I must first define the words that constitute the phrase, in the correct context.

Solution :

A SYSTEM continuously experiences environmental change! Change in the system's environment is a natural process. Sometimes the change is gradual and at other times it is sudden.

In any case, a system (an enterprise itself is a SYSTEM!) has to have a built-in mechanism to tackle the change.

When a "change" can't be handled by the system within its scope of operation, this change is called as a "problem" that needs a "Solution"

Architecture:
This is the tricky part of the phrase "Solution Architecture"!

Architecture is a discipline: a set of principles, methods and guidelines that can be used to create a "model" of a "system" from the viewpoints of its different stakeholders. Once the model is constructed, it can be used to give those stakeholders a holistic view of the system. It greatly helps in understanding the problem and channels the thinking towards the challenge caused by the "problem".

Overall, solution architecture is the discipline of finding opportunities to improve the system gently to tackle the challenges posed by environmental changes, as well as making the system more "responsive" to future challenges, by creating a multi-dimensional model of the system!

Enterprise Architecture

In the modern day, every business organization is seen as a SYSTEM. The Enterprise Architecture (EA) discipline is accordingly divided into four layers:

  1. Business (what makes up the business: people, processes and tools; its goals, structure and functioning)
  2. Data/Information (what key data comes in and what information is needed for business functions, both operational and strategic)
  3. Applications (how the data is captured, processed, stored, converted into useful information and knowledge, and presented to the right people at the right time!)
  4. Technology/Infrastructure (what physical servers, network and storage components are required to run the applications within budget while meeting service levels)

With the "Cloud Computing Paradigm" is in, the business is seen as a loosely coupled services and each of the service can have three layered "clouding" in SaaS - Software as a Service, PaaS - Platform as a Service, and IaaS - Infrastructure as a Service. This cloud computing has changed the way we look at architecture in the Data/Information, Application and Technology/Infrastructure layers. An architect should consider the possible options of public (external supplier hosting the cloud), private (hosting a cloud within enterprise datacenter0 or hybrid (a combination of public and private) deployment models of the Cloud in these layers.

To make it simple: as an Enterprise Solution Architect, I draw different shapes and name them differently; I connect those shapes to form a smooth-looking flow between the multiple layers of the enterprise, and I convince the key stakeholders that they have understood their current problem and the different possible solutions to it. I then help them select the solution option best suited to their budget and needs.....

It is FUN as I enjoy this!!


Sunday, September 5, 2010

Risk Management

A risk indicates a future event, with a probability of occurrence, that could cause an additional cost or reduced revenue for a portfolio.
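Put as a simple formula (a common simplified treatment, not any particular standard), the exposure of a risk is its probability of occurrence multiplied by its monetary impact, summed across a risk register:

```python
# Expected monetary exposure of a hypothetical risk register (figures assumed).
risks = [
    {"name": "key supplier default",   "probability": 0.10, "impact": 500_000},
    {"name": "commodity price spike",  "probability": 0.25, "impact": 200_000},
    {"name": "delivery delay penalty", "probability": 0.40, "impact": 50_000},
]

exposure = sum(r["probability"] * r["impact"] for r in risks)
print(f"Total expected exposure: {exposure:,.0f}")   # 50,000 + 50,000 + 20,000 = 120,000
```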

In a trade where a producer with stock in hand sells the product directly to a consumer who pays cash for it, there is no risk involved.

But when a producer sells the product to a consumer on a monthly installment basis, the producer evaluates and manages a possible "credit risk".
Similarly, when a consumer pays cash in advance for a product that will be delivered later, the consumer is evaluating and managing the "operational risks" of the producer.

When the transaction involves neither producer nor consumer and is handled by "traders", the risk is at its peak! When a trader buys a commodity or an instrument purely for the sake of selling it for profit in the future, and not for his own consumption, it calls for a methodical evaluation and management of "RISK".

In modern-day markets there are multiple layers of traders and exchanges involved, making "risk management" highly complex. There are spot, index-based, options and futures contract trades in the financial and commodity sectors, which have made "risk management" a key discipline in the trading business!

We have several mathematical simulation models and supporting IT solutions for trading risk management today. Several organizations are in the process of implementing and modernizing "risk management" solutions to make better sense of their business.

But there is a "risk" in choosing and implementing the right "risk management" solution itself. How to manage that risk? Sounding tricky??

Warren Buffett's quote on this goes: "Risk comes from not knowing what you're doing." Taking this as a simple guiding principle, one can tackle risk easily!

The impact of risk can surely be reduced by the right KNOWLEDGE of what is being done! If you do not have the right knowledge, transfer the risk to someone who does!

Personally, I have successfully transferred all the risks to the most knowledgeable one, thereby becoming completely risk free!

Tuesday, August 17, 2010

Identity and Access management philosophy

Security of information is a very important component of the Information Technology world.

IT always revolves around making the necessary information accessible to an authorized person as and when required, to carry out his/her duties and to help take crucial decisions.

So, this post is about "identity management" in the IT world.

Identity can be established by one or more of the three factors:
1. What the requester "knows", e.g., a password
2. What the requester "has", e.g., a token that generates random numbers every few seconds
3. What the requester "is", e.g., a fingerprint or retina scan

Before letting the requester access a resource of IT infrastructure, the first thing to be done is to establish the identity of the requesting entity. Once the identity of the requester is established, the second step is to make sure the requester has necessary access to the resource.

The requester may be authorized to
a. read or view the information resource
b. make some modification to the information resource
c. create a new information resource
d. delete one of the resources

Sometimes the access depends on the value of the resource (a bank officer can only approve a transaction whose value is less than some maximum limit).

Separation of duties:
One requester may only be able to create a table and maintain it, but not read the data from it.

so, the "access policy" could be very complex to define, maintain and enforce it on the IT infrastructure. One very evolved method is Role based access control - RBAC.

There are multiple vendors who provide the solutions around Identity and Access Management in the IT world.

But, interestingly, this problem is ancient: in the great Indian epic Ramayana, Hanuman establishes his identity to Mother Seetha using a two-factor model, with a token and a pass phrase! I think we are only applying an ancient solution in a modern way with IAM solution stacks...

Some modern architectural patterns are described in this RedPaper..... for those who are interested!

Monday, August 9, 2010

8035 days or 22 years!

It has been 22 years since I associated myself with computers and software. So what?

The key point to make is my recent move from "Grid Control" to "Smart Grid" ...

Having worked for the past 4 years at Oracle with computing grid management software, I have moved over to IBM to consult on solutions around power grids!

It is just a few weeks into the new role, currently focused on the domestic market. A new role, in a new place, with the only thing in common with my immediate past role being the word "GRID"; in the two contexts, the same word means completely different things!

With a new learning experience every day, the "Grid" is going to keep me active for several more cycles...

Is it??

Wednesday, June 23, 2010

The "Pull" mode process!

This is my favorite subject that is applicable in Marketing, Production, Consulting and also in Philosophy.

The question here is what is the best method of process control?
a. should a process push its output to the next stage? or
b. should a process pull its input from the previous stage?

In marketing:
Do we sell the product to customer or
Do we make the customer buy our product?

Who should be an active participant of a transaction?
Is it the giver?
or
the taker?

Pull mode is the natural and sustainable model in most situations. A process should be designed to make the next stage "pull" the product of the previous stage.

A customer sees the "value" in the product and starts the buying process.
Once the "buy" is initiated, all the necessary steps are executed to fulfill the request.

The skill of designing such a process lies in reducing the "waiting time" of the customer when he requires the product. Depending on the type of product, sufficient inventory should be maintained. But even if the customer has to wait for a reasonable period, it is fine. (As the value is attractive, the customer will wait!)
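Here is a small sketch of the difference (all quantities invented): in pull mode, production follows actual consumption, while push mode produces to a forecast regardless of demand.

```python
# Toy comparison of push vs pull over a few periods. All numbers are invented.
demand = [3, 5, 2, 7, 4]            # what customers actually pull, per period

# Push: produce a fixed forecast batch every period, regardless of demand.
push_produced = 6 * len(demand)

# Pull: each period, produce only what the next stage actually consumed.
pull_produced = sum(demand)

print(f"Actual demand        : {sum(demand)} units")
print(f"Push-mode production : {push_produced} units ({push_produced - sum(demand)} units over-produced)")
print(f"Pull-mode production : {pull_produced} units (nothing over-produced)")
```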

Pull will gently fill a gap and add value to the customer.
Push will generally "replace" something at the customer. (or cause over consumption or wastage!)

The present world's human population is suffering from a "great PUSH syndrome" in all its production-to-consumption processes. The customer really has no time even to "think" about it. That is the force of PUSH in the world as of today.

I can only hope that customers and producers will understand the gentle nature of the "pull" process as explained here, and make the world a better place by using it!

Wednesday, June 16, 2010

Send-off to Prasad (Good Luck Prasad)



Today, after having spent 4+ years at Oracle in the Enterprise Manager Grid Control Strategic Customer program, working with internal and external customers, I am moving on. My Oracle colleagues gave me a wonderful send-off treat at Little Italy. This picture is from when Gagan presented me with the gift of "Kautilya's Arthasastra" (three volume set!).

I would like to thank every one of my colleagues who made my stay at Oracle a really memorable one. It has gone by in a jiffy, starting with internal customers, training the Japan team, and working with customers in Australia, Korea, Hong Kong, China, India, Germany and North America. The trips to Oracle HQ for Open World 2006 and 2009 and the CAB are very memorable: the demo booths, sessions, hands-on labs and various activities around helping customers achieve their goals and enabling them to serve their customers better!

I wish all my Oracle colleagues all the best!

Saturday, May 1, 2010

Maximum Security Architecture

In one of my past posts (in Feb 2008), I simply listed the different technologies available in the data security area.

With the Oracle 11g database, the security focus has taken a more methodical and architectural approach.

To put things together, data security is placed under the following four broad categories:
1. User Management
2. Access Control
3. Encryption and Masking
4. Auditing/Monitoring

Just like the Maximum Availability Architecture for highly available architectural patterns, we can call this the Maximum Security Architecture for highly secure architectures....

One should choose the required options and implement them properly to really make the data SECURE!

This Link gives more details of MSA on Oracle 11g database.

Saturday, April 10, 2010

Enterprise Data Fabric - data grids

Enterprise Data Fabric or "in memory data grid" can improve the distributed or clustered application performance dramatically.

The Problem:
Distributed applications need to share "application objects" across multiple processing nodes. As application objects are not "relational", Object Relational Mapping (ORM) is needed to share the application objects through a relational database.

The ORM technology involves converting each object into a set of relational (table/column) data, and this turns out to be slow.

The Solution:
Use an in-memory data grid to store the application objects in a distributed, multi-node cache, and manage the transactions in a once-and-only-once manner.

This will dramatically improve the performance and scalability of the application.
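Here is a minimal cache-aside sketch (a plain in-process dict stands in here for a real distributed, partitioned data grid such as eXtreme Scale or Coherence; their actual APIs are different and far richer):

```python
import time

# A plain dict stands in for a partitioned, replicated in-memory data grid.
object_grid = {}

def load_from_database(key):
    """Simulated slow ORM/database load of an application object."""
    time.sleep(0.05)                       # pretend this is the expensive ORM round trip
    return {"id": key, "payload": f"object-{key}"}

def get_object(key):
    """Cache-aside read: hit the grid first, fall back to the database once."""
    if key not in object_grid:
        object_grid[key] = load_from_database(key)
    return object_grid[key]

start = time.perf_counter()
for _ in range(100):
    get_object("order-42")                 # only the first call pays the ORM cost
print(f"100 reads took {time.perf_counter() - start:.3f}s (first read ~0.05s, rest from the grid)")
```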

This is also going to be a key enabler of private PaaS cloud computing in the future.

IBM WebSphere eXtreme Scale and Oracle Coherence are examples of distributed caching platforms that provide enterprise data fabric or in-memory data grid solutions.

Let us see how this technology shapes up the future!

Tuesday, March 23, 2010

migration, upgrade, enhancement

Migration of a software application means moving it from one platform (32-bit OS1 on xxx hardware) to another platform (64-bit OS2 on yyy hardware).

Upgrade of a software application involves moving the software from a lower version to a higher version (like version XX of a database to version YY).

Both of the above activities are technical, and no business benefit is directly attributed to this type of activity on the software applications.

An application enhancement, on the other hand, involves providing a new feature or functionality to the end user of the application. Sometimes, when an application is enhanced in a major way, the enhanced application may also get migrated to a new platform and/or have its underlying technology stack upgraded to a higher version.

The key question I am dealing with here is:
When there is no "enhancement" involved and purely a migration and/or upgrade is planned:
a. is it good to do both together in a big bang? OR
b. is it good to migrate first to the new platform and upgrade the software later? OR
c. is it good to upgrade the software first and then migrate to the new platform?

a. One can do both together, but if there is an issue it is difficult to tell whether the cause of the problem is the new platform or the new software version.... each side can point fingers at the other....
b. is a good choice if the platform migration is more complex than the software upgrade.
c. is a good choice if the software is more stable on the higher version and if any platform-specific issues are fixed in the higher version.

If the software application has multiple tiers with a database, application server, etc., one should carefully choose the overall migrate-and-upgrade path, making sure the impact on business SLAs is minimized. It is also important to have a very good "back-out" plan in case something goes badly wrong!!

Thursday, March 11, 2010

Performance Analysis & Tuning

Performance of any system is defined as "accomplishment of a 'task' measured against agreed quality, cost and time."

The software tasks are primarily divided into "transactional" tasks and "analytical" tasks.

One big area in Software Systems management is the Performance Analysis and Tuning.

How to measure the performance?
The basic thing needed for "Performance" is the "Measurement" - What is to be measured and How?
Generally people measure the "response time" (the time taken to complete the task) if the task is transactional, and the throughput (the number of tasks performed in a fixed unit of time) if the task is analytical.

So, in both cases what is measured is "time". What happened to the original definition that included "quality" and "cost"?

In software systems the "tasks" are automated, and once the automation is tested well, "quality" is guaranteed for the task.
The "cost" of performing an automated task is the computing resources (CPU and memory) allocated to the application, and how well the application can use additional resources to carry out the tasks.

Assuming the application can take full advantage of the CPU and memory resources allocated to it, "TIME" becomes the best measure for analyzing the performance of a software system.

How to Tune the system?
Reduce the amount of time taken to perform the 'task'. As simple as that.

Is it that simple?
Let us take a closer look at this approach. Can we really reduce the time taken to perform a task?
The answer is "NO". If a task takes "x" seconds to do, it takes "x" seconds to finish it.
But every software task has two components to this "time". What are they?
a. The time taken to actually DO the work
b. The time spent waiting for a dependent sub-task to complete.

So, now if we ask the same question: can we reduce the time taken to perform a task?
The answer is YES - by reducing the time spent "waiting"!

Performance tuning is all about scheduling and executing the sub-tasks of a "task" optimally, such that the waiting time is minimized (if not completely removed), thereby using the available resources optimally and reducing the "cost" of performing the "task". That is the secret of performance tuning!
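A small illustration of the point (the timings are invented): two sub-tasks take the sum of their times when their waits are serialized, but only the longest of them when the waiting is overlapped.

```python
import asyncio
import time

async def call_database():        # sub-task that is mostly waiting (e.g., on I/O)
    await asyncio.sleep(0.3)

async def call_remote_service():
    await asyncio.sleep(0.5)

async def serial():
    await call_database()
    await call_remote_service()

async def overlapped():
    await asyncio.gather(call_database(), call_remote_service())

for name, task in (("serial", serial), ("overlapped", overlapped)):
    start = time.perf_counter()
    asyncio.run(task())
    print(f"{name:>10}: {time.perf_counter() - start:.2f}s")
# serial ~0.8s, overlapped ~0.5s: the work is unchanged, only the waiting shrinks.
```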

Wednesday, February 17, 2010

Platform As A Service - Private Cloud

Some time back, Oracle published this white paper.

It gives a clear vision of where Oracle is heading in the technological direction of cloud computing.

With standardization of the technology stack for enterprise applications and the Fusion Middleware enhancements, PaaS seems the natural direction for enterprises in the medium term.

Link to my past blog post on Private Clouds Here.

Tuesday, February 9, 2010

SOA Governance standalone? or merge with IT Management?

"Governance" in general and SOA Governance in particular has been a buzz word for sometime now.

My definition of governance is making sure that a "thing" works as it is supposed to work. Replace the "thing" with SOA, IT, a state, a country, etc.

One of the emerging trends of the past few years is "Business Transaction Management", for managing the configuration, performance and "governance" of composite IT applications.

Traditionally, IT management tools and frameworks have looked at IT through disciplines such as operations management, quality management, performance management, configuration management and so on.

In this trend, the problem of IT management is approached from a business point of view. That is good. So, what is happening in the technology?

1. Application Performance Management
2. Composite Application Management using run time discovery of relationships between the components (services) of the application
3. Business transaction management tools

Here is the question:

Will the new trend merge into traditional IT management tools,
or
will a standalone set of new SOA governance tools emerge because of it?

Based on my "UNIX" philosophy, I think an integrated set of tools that consist of traditional IT management with the new SOA Governance would be a best fit to tailor to the needs of contemporary corporates!

Let us wait and see....

Wednesday, February 3, 2010

Hybrid Columnar Compression (HCC)

Oracle has introduced a new data storage technology in Exadata V2 called Hybrid Columnar Compression. This method uses a "compression unit" made up of several rows; it is physically stored as compressed "column vectors" across multiple storage blocks.

There are two flavors of columnar compression, optimized for "QUERY" (warehouse compression, better suited to executing queries) and "ARCHIVE" (the densest storage, for archival data).

Each of QUERY and ARCHIVE comes with a further modifier to set the "LOW" or "HIGH" compression mode.
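A rough way to see why column-organized storage compresses so well (a generic demonstration with zlib, not Oracle's actual HCC algorithms): values within one column are far more alike than values within one row, so grouping them before compression pays off.

```python
import zlib

# A hypothetical table: (order_id, status, country), 10,000 rows.
rows = [(i, "SHIPPED" if i % 3 else "OPEN", "IN") for i in range(10_000)]

# Row-major layout: values of different columns interleaved row by row.
row_major = "|".join(",".join(map(str, r)) for r in rows).encode()

# Column-major layout: each column's values stored together (like a column vector).
columns = list(zip(*rows))
column_major = "|".join(",".join(map(str, col)) for col in columns).encode()

print(f"row-major   : {len(zlib.compress(row_major)):>7} bytes compressed")
print(f"column-major: {len(zlib.compress(column_major)):>7} bytes compressed")
# The column-major form typically compresses noticeably smaller.
```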

This technical white paper gives an overview of the new technology.

This article on Oracle Magazine may also be useful in understanding this new feature.

More data can be stored in "less" space. Good!