As 2011 draws to a close, we are entering a year that is very important for UK workplace pensions: the new regulation around workplace pensions comes into force in 2012.
Having worked in purely technical roles for a few years, I want to test my skills at understanding regulatory documents and extracting the "Functional Information Needs" for a business.
A good review published in October 2010 can be accessed here: http://www.dwp.gov.uk/docs/cp-oct10-full-document.pdf
Basically, all employers in the UK have to automatically enroll all eligible workers falling within an AGE range and an EARNINGS range into a suitable pension scheme. They also need to "certify" that the selected pension scheme meets the required quality criteria (refer to section 6.5 of the review document above). The requirement is that at least 8% of earnings is paid towards a pension fund.
The regulation defines "Qualifying Earnings" as gross earnings, including commissions, bonuses, overtime, etc. However, most employers use "Basic Pay" (i.e. pensionable pay) as the pension contribution basis. So an employer has the following primary options to "certify" the pension scheme and comply with the regulation.
Pseudo logic in plain English
1. IF the pension contribution basis IS qualifying earnings (within the band) THEN pay contributions of 8%; no certification is required.
2. IF the pension contribution basis IS pensionable pay THEN
2a. CASE pensionable pay is 100% of gross pay THEN pay contributions of 7%
2b. CASE pensionable pay is at least 85% of gross pay THEN pay contributions of 8%
2c. CASE pensionable pay is less than 85% of gross pay THEN pay contributions of 9%
AND self-certify the pension scheme for all employees participating in it.
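The pseudo logic above can be sketched as a small function. This is only an illustration of the tier selection, assuming pensionable pay and gross pay are annual figures; the function name and signature are my own, not from the regulation:

```python
def required_contribution_rate(pensionable_pay, gross_pay):
    """Minimum total contribution rate (percent) for a scheme whose
    contribution basis is pensionable pay, per the tiers above."""
    ratio = pensionable_pay / gross_pay
    if ratio >= 1.0:
        return 7  # pensionable pay is 100% of gross pay
    elif ratio >= 0.85:
        return 8  # pensionable pay is at least 85% of gross pay
    else:
        return 9  # pensionable pay is less than 85% of gross pay

# Examples
print(required_contribution_rate(30000, 30000))  # 7
print(required_contribution_rate(27000, 30000))  # 8 (90% of gross)
print(required_contribution_rate(24000, 30000))  # 9 (80% of gross)
```

Note that when the basis is qualifying earnings within the band (option 1), the rate is simply 8% and no certification is needed, so that case sits outside this function.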
So, the core information needed to implement this regulation is employee payroll data covering age and all components of qualifying earnings for every employee. A bit of intelligence is needed to "model" the best possible grouping of employees and assign them to suitable pension scheme(s) with one (or more) of the pension providers in the market.
The overall goal for a techno-functional consultant like me is to maximize the value of the new pension regulations for employees, employers, pension providers and IT consulting companies by optimizing the information flows across the various stakeholders!
Wishing everyone a great new year in 2012. May peace, security and prosperity be with one and all.
Friday, December 30, 2011
Sunday, December 4, 2011
Storing Rows and Columns
A fundamental requirement of a database is to store and retrieve data. In Relational Database Management Systems (RDBMS), data is organized into tables containing rows and columns. Traditionally, the data is stored in blocks of rows. For example, a "sales transaction row" may have 30 data items representing 30 columns. Assuming a record occupies 256 bytes, an 8KB block can hold 32 such records, so a million such transactions per day need 31,250 blocks. All this works well as long as we need the data as ROWS! When we access one row or a group of rows at a time to process the data, this organization has no issues.
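The block arithmetic above is simple enough to check directly (the figures are the illustrative ones from the text, not from any particular database engine):

```python
RECORD_BYTES = 256           # one sales transaction row
BLOCK_BYTES = 8 * 1024       # an 8KB block
ROWS_PER_DAY = 1_000_000     # a million transactions per day

records_per_block = BLOCK_BYTES // RECORD_BYTES          # records that fit in a block
blocks_per_day = -(-ROWS_PER_DAY // records_per_block)   # ceiling division

print(records_per_block)  # 32
print(blocks_per_day)     # 31250
```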
Now consider a query for the total value of type-x items sold in the past seven days. This query needs to retrieve 7 million records containing 30 columns each, just to aggregate two of them: item type and amount. This kind of analytical requirement leads us to store the data in columns. We group the values of each column together and store them in blocks, which makes retrieving individual columns from the overall table much faster for the purpose of analyzing the data.
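A toy in-memory sketch makes the difference concrete: the same sales data laid out row-wise and column-wise. The data and field names here are made up for illustration; in a real row a record would have 30 columns, of which the query touches only two:

```python
# Row layout: one dict per record (30 columns in reality, 3 here).
rows = [
    {"item_type": "x", "amount": 10.0, "other": "..."},
    {"item_type": "y", "amount": 5.0,  "other": "..."},
    {"item_type": "x", "amount": 7.5,  "other": "..."},
]

# Row store: every full record is read to answer the query.
total_row = sum(r["amount"] for r in rows if r["item_type"] == "x")

# Column store: only the two needed columns are read.
columns = {
    "item_type": [r["item_type"] for r in rows],
    "amount":    [r["amount"] for r in rows],
}
total_col = sum(a for t, a in zip(columns["item_type"], columns["amount"])
                if t == "x")

print(total_row, total_col)  # 17.5 17.5 -- same answer, far less data scanned
```

Both layouts give the same answer; the column layout simply avoids reading the 28 columns the query never uses.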
But column storage has its limitations when it comes to writes and updates.
With high-volume social data, where a high rate of writes is needed (messages, status updates, likes, comments, etc.), highly distributed, NoSQL-based column stores are emerging into the mainstream. Apache Cassandra is a new breed of NoSQL column store that was initially developed at Facebook.
So, we now have a variety of databases / data stores available: standard RDBMS engines with SQL support for OLTP applications, column-based engines for OLAP processing, NoSQL key-value stores for in-memory processing, highly clustered Hadoop-style big data platforms with the map/reduce framework for big data processing, and NoSQL column stores for high-volume social write and read efficiency.
Making the right choice of data store for the problem in hand is becoming tough with so many options. But that is the job of an architect; is it not?