Gear up for #AIOUG OTN Yathra’ 2016

Guys,

AIOUG is back again with OTN Yathra' 2016, a series of technology evangelist events organized by the All India Oracle Users Group in six cities, touring across the length and breadth of the country. It was my extreme pleasure to be a part of it in 2015, and I'm pleased to announce that I'll be speaking this year too. This year it starts on April 23rd and runs until May 1st, 2016. Check out the event schedule and objectives here: http://www.otnyathra.com/

I will be speaking in Bangalore (Apr 24th), Hyderabad (Apr 26th), and Mumbai (Apr 30th). My session abstracts are as below –

1. Backup your databases to the cloud using Oracle Database Backup Service

Oracle Database Backup Service is a secure, scalable, reliable, and on-demand Oracle public cloud storage solution for storing Oracle Database backups. Businesses can access additional storage capacity in minutes with zero hardware investment. The Oracle Database Backup Service can be accessed from anywhere, at any time, and from any Oracle database server connected to the Internet.

This session will touch upon several aspects of the backup cloud service, such as the subscription process, scalability, access, and security. The attendees will learn about the new backup-as-a-service offering that enables customers to store their backups securely in the Oracle cloud via a downloadable backup module that transparently handles the backup and restore operations.
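To give a flavor of how transparent the module is, here is a minimal RMAN sketch of a backup to the cloud once the backup module has been installed – the library and configuration paths are illustrative placeholders, not actual values:

-- Cloud backups must be encrypted; the paths below are illustrative
SET ENCRYPTION ON IDENTIFIED BY 'MyBackupPwd' ONLY;
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE sbt
    PARMS 'SBT_LIBRARY=/home/oracle/OPC/lib/libopc.so, ENV=(OPC_PFILE=/home/oracle/OPC/config/opcORCL.ora)';
  BACKUP DATABASE;
}
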
2. Use Oracle Big Data SQL to Query All Your Data

Oracle Big Data SQL provides unified query across Oracle Database, Hadoop, and NoSQL datastores. It uses a query franchising technique to maximize performance and avoid the pitfalls of language-level federation. Oracle Big Data SQL uses Hadoop Smart Scan to boost performance and minimize data movement. This session will discuss the limitations of language-level federation, the capabilities of Big Data SQL, and the latest updates on this cutting-edge technology.
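As a teaser for the session, here is a rough sketch of what unified query looks like with Big Data SQL: an external table defined over a Hive table, which can then be joined with ordinary Oracle tables in plain SQL. The table, column, and directory names below are hypothetical:

-- Hypothetical external table mapped to a Hive table via Big Data SQL
CREATE TABLE web_logs (
  ip_address VARCHAR2(20),
  page       VARCHAR2(200),
  hits       NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY DEFAULT_DIR
  ACCESS PARAMETERS (com.oracle.bigdata.tablename = 'default.web_logs')
)
REJECT LIMIT UNLIMITED;

-- Once defined, the Hadoop data is queried like any other table
SELECT page, SUM(hits) FROM web_logs GROUP BY page;
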

Please register for the event and confirm your availability. I'll see you there.

Regards


#C15LV Collaborate’15 updates!!

Hi All,


Last week, I registered my presence at Collaborate'15 in Las Vegas as a speaker, an attendee, and on Oracle demo booth duty. It was an awesome experience meeting some polished brains and smart minds. After the first day of registration, I attended the #DBIM12c session by Maria Colgan, #DB12c Multitenant by John McHugh, and the #Exadata session by Dan Norris – it's always good to hear the most recent updates on something you have been working on.


The topic of my talk was “I/O Resource Management on Exadata”. Thanks to all those who attended my session 662 in the Banyan D conference room (in spite of the Database In-Memory bootcamp and other concurrent sessions). The objectives of my session were to discuss the management of server resources in database consolidation scenarios, including configuration, monitoring, and best practices, and to share the latest updates on I/O Resource Management. I appreciate the audience members who came forward with their experiences and problems to understand how IORM can resolve resource conflict issues. The hour-long session was well received, which reminds me to remind all the session attendees – do fill in the session evaluation 🙂

After my stint as a speaker, I took over the demo booth duty for Oracle Database 12c. Thanks to all those who stopped by booth #54 and showed interest in understanding Oracle Database 12c, Multitenant, and In-Memory. Glad we were able to help, but sorry, our booth didn’t have freebies!! We discussed some complex database deployments and tried to figure out better and easier solutions for them. On and off, I also visited partner booths, and I truly appreciate their efforts in putting up really nice structures and innovative ideas to attract more people.

One of the challenges I could see and experience was planning which sessions to attend. Many concurrent sessions left the attendees confused about which one to attend and which one to leave. But believe me, that’s the beauty of this conference – you get things in extra size only. Just grab a cup of coffee and get set go.

Special mention to all those involved in coordination and event administration. Thanks to IOUG for bringing some great minds together. Hope to see you all next year as well. Thanks all.

Saurabh


Oracle 12c Technical Hands-On Workshop

I have been running many tech events and briefings on Oracle 12c for India Partners. This time around, I thought of posting my latest event reviews.

This week, I wrapped up the Oracle 12c Technical Hands-On workshop at the Oracle facility in Gurgaon. It was a 2-day event from Oracle Database Product Management. The focus of the workshop was primarily on the Oracle 12c Multitenant architecture, along with hands-on labs using a VirtualBox image. The audience comprised representatives from key Oracle partners in the NCR area. I appreciate their interest – in spite of odds like the previous night’s rain and massive traffic, they showed up on time. I was the lead instructor for this event along with my senior colleague Mr. Rick Pandya. Thanks to Rick, who flew in from Chicago to join me for several partner events in India.

Here was the agenda of the workshop –

  • Introduction to Oracle 12c Multitenant Architecture
  • Administration and Management of Multitenant Databases
  • Cloning, Consolidation, Relocating, Backup/Recovery, Security
  • Migrating to Multitenant Architecture
  • Upgrading to 12c CDB using DBUA
  • Performance Monitoring and Resource Management
  • Heat Maps and ILM, Temporal Validity, In-Database Row archiving

Understandably, Multitenant was the focus and area of interest. Topics like the shareable components of a container database, PDB provisioning, remote cloning, the spfile, the control file, and common users drew the most exploration. Some of the folks were intrigued to find that, from within a pluggable database, the instance name (from V$INSTANCE) and the database name (from V$DATABASE) show the container database’s values, for the obvious reason that a PDB shares the CDB’s instance and database.
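A quick way to see this for yourself – assuming a PDB named PDB1 inside a CDB instance named cdb1 (both names are illustrative):

ALTER SESSION SET CONTAINER = PDB1;

SELECT instance_name FROM v$instance;                -- shows cdb1, the CDB instance
SELECT name FROM v$database;                         -- shows the CDB name, not PDB1
SELECT sys_context('USERENV','CON_NAME') FROM dual;  -- this one shows PDB1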

The real challenge was the hands-on part of the workshop, where the participants were required to carry high-end configuration laptops, but only a few of them could manage it. An 8G laptop with 100G of space was expected to run the exercises on database upgrade, backup, and migration. We gave it a shot by trimming the vbox image to run on a 4G laptop. It did work, but the host OS performance took a toss.

My next stop is Chennai, where I’ll be driving this event for specific partners. See you all 🙂

Saurabh K. Gupta

Oracle Database 12c PRAGMA UDF and WITH clause enhancements

Here are two interesting enhancements in Oracle database 12c PL/SQL.

PL/SQL subprogram defined in the WITH clause of a subquery – Oracle Database 12c allows a PL/SQL declaration section in the WITH clause. One can define a PL/SQL function or procedure in the WITH clause. Functions declared in the PL/SQL declaration section can be invoked directly in the SELECT statement, while procedures can be invoked from the functions used in the declaration section.
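Before jumping into the benchmark below, here is a minimal sketch of the syntax – a throwaway function declared inline and used immediately in the query:

WITH
  function double_it(n IN number) return number is
  BEGIN
    return n * 2;
  END;
SELECT double_it(21) AS answer FROM dual
/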

PL/SQL functions defined using PRAGMA UDF – SQL and PL/SQL have different memory representations of values; therefore, a conversion is involved while “switching” from one engine to the other and vice versa. PRAGMA UDF tells the compiler that the function is meant to be called from SQL, which lets you define the PL/SQL subprogram outside the SQL statement while matching the performance of an inlined PL/SQL program.

Let us do a small test to see the performance gains –

1. Create a test table T

CREATE TABLE t
(
  PK integer not null,
  n1 integer not null,
  n2 integer not null,
  n3 integer not null,
  constraint t_PK primary key(PK)
)
/

2. Insert some random data into table T using DBMS_RANDOM

DECLARE
  commit_count constant pls_integer := 100000;
  nof_rows     constant pls_integer := 20*commit_count;
  Zero constant integer not null := 0;
  THS  constant integer not null := 1000;
  MIL  constant integer not null := THS*THS;
  BIL  constant integer not null := MIL*THS;
  TIL  constant integer not null := BIL*THS;

  M1 constant integer not null := 2*THS;
  M2 constant integer not null := 2*BIL;
  Hi constant integer not null := 2*TIL;
BEGIN
  DBMS_Random.Seed(To_Char(Sysdate, 'MM-DD-YYYY HH24:MI:SS'));
  for j in 1..nof_rows loop
    declare
      n1 integer not null := DBMS_Random.Value(Zero, M1);
      n2 integer not null := DBMS_Random.Value(M1, M2);
      n3 integer not null := DBMS_Random.Value(M2, Hi);
    begin
      insert into t(PK, n1, n2, n3) values(j, n1, n2, n3);
    end;
    if Mod(j, commit_count) = 0 then
      commit;
    end if;
  end loop;
  commit;
END;
/

3. The table has undergone a good number of transactions, so let’s gather the table stats

begin
  DBMS_Stats.Gather_Table_Stats('SCOTT', 'T');
end;
/

4. Here is the objective: let us create a PL/SQL function to display an integer as a multiple of the appropriate unit – “Thousand”, “Million”, “Billion”, or “Trillion”. We shall do this in different ways to compare the performance, recording the timing for each case.

a) Using a conventional pre-12c standalone function to set the BASELINE

CREATE OR REPLACE FUNCTION F_ShowVal_pre12c(n IN integer) return varchar2 is
  THS constant integer not null := 1000;
  MIL constant integer not null := THS*THS;
  BIL constant integer not null := MIL*THS;
  TIL constant integer not null := BIL*THS;
BEGIN
  RETURN
    CASE
      WHEN n <= THS-1 then To_Char(n, '999999')||' units'
      WHEN n/THS <= THS-1 then To_Char(n/THS, '999999')||' Thousand'
      WHEN n/MIL <= THS-1 then To_Char(n/MIL, '999999')||' Million'
      WHEN n/BIL <= THS-1 then To_Char(n/BIL, '999999')||' Billion'
      ELSE To_Char(n/TIL, '999999')||' Trillion'
    END;
END F_ShowVal_pre12c;
/

SET TIMING ON
SELECT F_ShowVal_pre12c(n1) n1, F_ShowVal_pre12c(n2) n2, F_ShowVal_pre12c(n3) n3 FROM t
/

b) Using pure SQL – without using a function or any 12c enhancement

SET TIMING ON
SELECT PK,
  case
    when n1 <= 999 then To_Char(n1, '999999')||' units'
    when n1/1000 <= 999 then To_Char(n1/1000, '999999')||' Thousand'
    when n1/1000000 <= 999 then To_Char(n1/1000000, '999999')||' Million'
    when n1/1000000000 <= 999 then To_Char(n1/1000000000, '999999')||' Billion'
    else To_Char(n1/1000000000000, '999999')||' Trillion'
  end,
  case
    when n2 <= 999 then To_Char(n2, '999999')||' units'
    when n2/1000 <= 999 then To_Char(n2/1000, '999999')||' Thousand'
    when n2/1000000 <= 999 then To_Char(n2/1000000, '999999')||' Million'
    when n2/1000000000 <= 999 then To_Char(n2/1000000000, '999999')||' Billion'
    else To_Char(n2/1000000000000, '999999')||' Trillion'
  end,
  case
    when n3 <= 999 then To_Char(n3, '999999')||' units'
    when n3/1000 <= 999 then To_Char(n3/1000, '999999')||' Thousand'
    when n3/1000000 <= 999 then To_Char(n3/1000000, '999999')||' Million'
    when n3/1000000000 <= 999 then To_Char(n3/1000000000, '999999')||' Billion'
    else To_Char(n3/1000000000000, '999999')||' Trillion'
  end
FROM t
/

c) Declaring the PL/SQL function in the subquery’s WITH clause

SET TIMING ON
WITH
  function ShowVal(n IN integer) return varchar2 is
    THS constant integer not null := 1000;
    MIL constant integer not null := THS*THS;
    BIL constant integer not null := MIL*THS;
    TIL constant integer not null := BIL*THS;
  BEGIN
    return
      case
        when n <= THS-1 then To_Char(n, '999999')||' units'
        when n/THS <= THS-1 then To_Char(n/THS, '999999')||' Thousand'
        when n/MIL <= THS-1 then To_Char(n/MIL, '999999')||' Million'
        when n/BIL <= THS-1 then To_Char(n/BIL, '999999')||' Billion'
        else To_Char(n/TIL, '999999')||' Trillion'
      end;
  end ShowVal;
SELECT ShowVal(n1) n1, ShowVal(n2) n2, ShowVal(n3) n3
FROM t
/

d) Declaring the PL/SQL function using PRAGMA UDF

CREATE OR REPLACE FUNCTION F_ShowVal(n IN integer) return varchar2 is
  PRAGMA UDF;
  THS constant integer not null := 1000;
  MIL constant integer not null := THS*THS;
  BIL constant integer not null := MIL*THS;
  TIL constant integer not null := BIL*THS;
BEGIN
  RETURN
    CASE
      WHEN n <= THS-1 then To_Char(n, '999999')||' units'
      WHEN n/THS <= THS-1 then To_Char(n/THS, '999999')||' Thousand'
      WHEN n/MIL <= THS-1 then To_Char(n/MIL, '999999')||' Million'
      WHEN n/BIL <= THS-1 then To_Char(n/BIL, '999999')||' Billion'
      ELSE To_Char(n/TIL, '999999')||' Trillion'
    END;
END F_ShowVal;
/

SET TIMING ON
SELECT F_ShowVal(n1) n1, F_ShowVal(n2) n2, F_ShowVal(n3) n3
FROM t
/

I recorded the timings from cases (a), (b), (c), and (d) in the matrix below. Here is the performance comparison across the scenarios –

[Image: timing comparison matrix for cases (a) through (d)]

Exadata Hybrid Columnar Compression

The basic idea behind Exadata Hybrid Columnar Compression (hereafter referred to as EHCC) is to reap the benefits of column-based storage while sustaining the fundamental row-based storage principle of the Oracle database. Oftentimes, databases following column-based storage claim that they need comparatively less I/O to retrieve data than a row-based store; in a row-based store, a search may require the entire table to be scanned, which needs multiple I/Os. Hybrid columnar compression uses the column-based storage philosophy to compress column values while retaining row-based storage within a logical unit known as a compression unit. It saves storage space through compression and yields performance benefits by reducing I/Os. EHCC is best suited for databases with few updates and low concurrency. Also, it applies only to table and partition segments – not to index and LOB segments.

How does EHCC work? What is a Compression Unit?
EHCC is one of the exclusive smart features of Exadata, targeting storage savings and performance at the same time. (EHCC can also be enabled on other Oracle storage systems like Pillar Axiom and the ZFS storage servers.) Traditionally, the rows within a block are placed sequentially in row format, one next to another. The collision of columns of unlike data types restricts how well the data within a block can compress. EHCC instead analyzes a set of rows and encapsulates them into a compression unit, where like columns are compressed together. As Oracle designates a column vector to each column, compressing like columns holding like values ensures considerable savings in space. Column compression gives a much better compression ratio than row compression.

Don’t run away with the thought that Exadata offers columnar storage through EHCC. It is still row-based database storage – the stress is on the word “hybrid” columnar. The rows are placed in a compression unit where like columns are compressed together efficiently. Kevin Closson explains the structure of a CU in one of his blog posts (http://kevinclosson.wordpress.com/2009/09/01/oracle-switches-to-columnar-store-technology-with-oracle-database-11g-release-2/) as: “A compression unit is a collection of data blocks. Multiple rows are stored in a compression unit. Columns are stored separately within the compression unit. Likeness amongst the column values within a compression unit yields the space savings. There are still rowids (that change when a row is updated by the way) and row locks. It is a hybrid.”

Notice that EHCC kicks in only for direct path operations, i.e., operations that bypass the buffer cache.

A table or partition segment on an Exadata system can accommodate compression units, OLTP-compressed blocks, and uncompressed blocks at the same time. A CU is independent of a block or the block size, but it is certainly larger than a single block, as it spans multiple blocks. Read performance benefits from the fact that a row can be retrieved in a single I/O by picking up the specific CU instead of scanning the complete table. Hence, EHCC reduces both storage space (through compression) and disk I/Os by a considerable factor. A compression unit cannot be further compressed.

Compression algorithms – The three compression algorithms used by EHCC are LZO, ZLIB, and BZ2. LZO offers the lightest and fastest compression, ZLIB promises a fair and balanced compression, and BZ2 offers the highest level of compression.

CU size – On average, a typical CU size is 32k to 64k for warehouse compression, while for archival compression the CU size is between 32k and 256k. In warehouse compression, around 1MB of row data (16 to 20 rows, depending on row size) is analyzed in a single CU. In archival compression, around 3MB to 10MB of row data is analyzed to build up a CU.

EHCC types – EHCC comes in two formats: warehouse compression and archival compression. Warehouse compression is aimed at data warehouse applications, and the compression ratio hovers between 6x and 10x. Archival compression suits historical data, which has a low probability of updates and transactions.

EHCC DDLs – Here are a few scripts to demonstrate basic compression operations on tables.

–Create new tables/partitions with different compression techniques–
create table t_comp_dwh_h ( a number ) compress for query high;
create table t_comp_dwh_l ( a number ) compress for query low;
create table t_comp_arch_h ( a number ) compress for archive high;
create table t_comp_arch_l ( a number ) compress for archive low;

–Query compression type for a table–
select compression, compress_for from user_tables where table_name = '[table name]';

–Enable EHCC for newly loaded data in an existing table/partition–
alter table t_comp_dwh compress for query low;

–Rebuild an existing table/partition so current rows are EHCC compressed–
alter table t_comp_dwh move compress for query low;

–Disable EHCC feature–
alter table t_comp_dwh nocompress;

–Specify multiple compression types in a single table–
create table t_comp_dwh_arch
( id   number,
  name varchar2(100),
  yr   number(4))
partition by range (yr)
( partition p1 values less than (2001) compress for archive high,
  partition p2 values less than (2002) compress for query);

Language support for CUs – A CU is fully compatible with indexes (B-tree and bitmap), materialized views, partitioning, and Data Guard. It is fully supported with DML, DDL, parallel queries, and parallel DML and DDL. Let us examine certain operations on a CU.

Select – EHCC with Smart Scan enables query offloading to the Exadata storage servers. All read operations are marked as direct path reads, i.e., they bypass the buffer cache. If the database reads multiple columns of the table and does frequent transactions, the benefits of EHCC are compromised. This is how a read operation proceeds –

A CU is buffered => Predicate processing => Predicate columns decompressed => Predicate evaluation => CUs rejected if no row satisfies the predicate => For satisfying rows, the projected columns are decompressed => A small CU is created with only the projected and predicate columns => Returned to the DB server.

Locking – When a row in a compression unit is locked, the whole compression unit is locked until the lock is released.

Inserts – By design, hybrid columnar compression works only at load time, with direct path operations. The data load technique can be any data warehouse load technique or a bulk load. For conventional or single-row inserts, data still resides in blocks, which can be either uncompressed or OLTP compressed. New CUs are only created during bulk inserts or a table move to the columnar compression state, as the sketch below shows.
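
A small sketch of the difference, using the t_comp_dwh_h table created above (the staging table is hypothetical):

-- Conventional single-row insert: the row lands in an uncompressed
-- or OLTP-compressed block, not in a compression unit
insert into t_comp_dwh_h values (1);

-- Direct path (bulk) insert: rows are analyzed and packed into new CUs
insert /*+ APPEND */ into t_comp_dwh_h
select a from t_comp_dwh_stage;   -- hypothetical staging table
commit;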

Updates – Updating a row in a CU causes the CU to be locked, and the row moves out of the CU to a less-compressed state. This hinders the concurrency of the CU and negatively affects the compression. The effect can be observed in warehouse compression, but it is certainly more pronounced in archival compression. The ROWID of the updated row changes after the transaction.

Delete – Every row in a CU has an uncompressed delete bit, which is set when the row is marked for deletion.

Compression Advisor – The DBMS_COMPRESSION package serves as the compression advisor. You can find the compression paradigm of a row by using the DBMS_COMPRESSION.GET_COMPRESSION_TYPE subprogram. It returns a number indicating the compression technique for the input ROWID. Possible return values are 1 (No Compression), 2 (OLTP Compression), 4 (EHCC – Query High), 8 (EHCC – Query Low), 16 (EHCC – Archive High), 32 (EHCC – Archive Low). In addition, the GET_COMPRESSION_RATIO subprogram can be used to suggest a compression technique based on the estimated compression ratio for a segment.
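
A minimal sketch of checking a row’s compression type with the advisor, reusing the t_comp_dwh_h table from the DDL examples above:

SET SERVEROUTPUT ON
DECLARE
  l_rowid ROWID;
  l_type  NUMBER;
BEGIN
  SELECT rowid INTO l_rowid FROM t_comp_dwh_h WHERE rownum = 1;
  l_type := DBMS_COMPRESSION.GET_COMPRESSION_TYPE(
              ownname => USER,
              tabname => 'T_COMP_DWH_H',
              row_id  => l_rowid);
  DBMS_OUTPUT.PUT_LINE('Compression type code: ' || l_type);  -- e.g. 4 = Query High
END;
/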

Critical look
EHCC is one of the most-discussed smart features of Exadata database systems. It promises at least 10x storage benefits – though certain benchmarks have shown better results too. A famous citation which I see in almost every other session on EHCC: a 100TB database can be compressed to 10TB, thus saving 90TB of space on the storage, and hence nine other 100TB databases can be placed on the same storage – so IT management can be relieved of storage purchases for at least 3-4 years, assuming the data grows by a factor of two. I’ll say the claim looks pretty convincing from the marketing perspective but quite impractical on technical grounds. I would rather read it as: 1000TB of historical data can be accommodated on 100TB of storage.

A lot has been written and discussed on whether Oracle is on its way to embracing columnar storage techniques. I’ll say no, because EHCC is just an application of the concept, and it does no harm. The biggest hurdle for the EHCC feature is its own comfort zone, i.e., databases with few transactions and low concurrency. On a database which does frequent transactions and reads, the feature stands defeated.

References – Some of the best blog references on this topic from around the web:

http://dbmsmusings.blogspot.com/2010/01/exadatas-columnar-compression.html
http://www.oracle.com/technetwork/issue-archive/2010/10-jan/o10compression-082302.html
http://www.rittmanmead.com/2010/01/hybrid-columnar-compression-in-oracle-exadata-v2/
http://flashdba.com/history-of-exadata/66-2/

Any conflicts, comments, observations, or feedback on the write-up are invited.

Oracle NoSQL Kvstore (1.2.123) deployment demo

With the unfolding of Web 2.0, database paradigms have molded their behavior to map to new requirements and expectations. NoSQL is a database of modern times where the database platform is need-based rather than rule-based. Rather than being generic, traditional, or conventional, it is more suited for limited – perhaps specific – requirements. Leading software giants like Facebook, Amazon, and Oracle have already come up with their own ideas of NoSQL databases. It’s difficult to digest the fact that more than 130 flavors of NoSQL databases are currently available.

I have tried to deploy a 3-node datacenter on VirtualBox using Oracle’s NoSQL product offering. I would like to share the demonstration along with a fair understanding of it. Here are the abstract and the demo doc for download.

Oracle NoSQL Overview

Oracle NoSQL offers a non-relational, distributed, and horizontally scalable datastore to achieve high availability with a simplified data model.

The article describes how to set up a 3-node Oracle NoSQL KVStore datacenter on Oracle VirtualBox. The objective of the demonstration is to allow users to set up the datacenter in their local environment and carry out hands-on activities related to NoSQL administration.

The article starts with a brief introduction to NoSQL and the Oracle offering. It lists the prerequisite products used for the demonstration and summarizes the setup information. Later, the datacenter deployment is demonstrated in detail, supported by the required screenshots.

NoSQL: The Introduction

NoSQL is one of the emerging database solutions in the world today. Very often, it is misunderstood as anti-SQL, but that is not true. It carries the name “Not Only SQL” for the fact that it does not rely solely on SQL, contrary to conventional database systems.

The key objective behind the evolution of NoSQL solutions was to cope with the high growth of semi-structured data and real-time processing. The Web 2.0 platforms expressed their concerns over the availability, scalability, and flexibility of RDBMS products. NoSQL databases, on the other hand, are meant for high availability, flexibility, and scale-out capability. Several top-tier websites like Amazon, Facebook, and Twitter employ NoSQL solutions to yield maximum availability and quick performance by virtue of their high-throughput and low-latency features.

Overall, NoSQL doesn’t provide a full database management system but just a data repository. Contrary to ACID (Atomicity, Consistency, Isolation, and Durability) compliance, a NoSQL database solution complies with the BASE (Basically Available, Soft state, Eventually consistent) model. Most NoSQL databases are key-value stores, while several database products have also been registered as columnar, document-based, and graph-based under the NoSQL category.

Oracle brings its own NoSQL database product, which is built on the Berkeley DB Java platform. Berkeley DB has been a proven storage technology over the last decade. Oracle NoSQL Database is a distributed key-value store which is built on a set of storage nodes. In recent times, the Oracle NoSQL product has caught the attention of the community due to its inclusion in the Oracle Big Data solution for real-time data processing.

Currently, the NoSQL database is available in two flavors: Community Edition and Enterprise Edition.

The Oracle NoSQL Database product can be downloaded from the link below:

http://www.oracle.com/technetwork/database/nosqldb/downloads/default-495311.html

Download the complete demo document from the link below:

OracleNoSQL 3 node KVstore Deployment on Virtualbox

 

Do share your feedback, suggestions, and observations.

Oracle Engineered Systems: Hardware and Software Engineered to Work Together

Oracle Engineered Systems are highly efficient integrated systems which combine hardware and software to provide a complete enterprise solution to customers and partners. The focus of Oracle Engineered Systems is to deliver extreme performance, high scalability, and maximum availability while reducing infrastructure complexity and setup cost. These pre-assembled, innovative systems have greatly simplified the requirements of a data center. Among the most complete enterprise systems of current times, Oracle Engineered Systems have everything to offer and fit the requirement. An engineered system can be understood as a packaged box comprising application support, operating system, virtualization, hardware management, networking support, and an optimized storage scheme, all assembled to offer a highly efficient and scalable solution to customers.

The advantages drawn from Oracle Engineered Systems are listed below:
1. Integration of hardware and software components
2. Enhanced performance, claimed to be 10x faster than a conventional database deployment
3. Low risk during installation and upgrades; high security
4. Accelerated deployment
5. Reduced complexity, IT cost, and TCO (Total Cost of Ownership)
6. Single-vendor support for purchase, deployment, and support

Oracle Engineered Systems can be classified into the six solutions below, which we shall discuss briefly:

1. Exadata
2. Exalogic
3. Exalytics
4. Oracle Database Appliance
5. Oracle Big Data Appliance
6. SPARC SuperCluster

1. Exadata
Exadata is one of the fastest database machines and works for both OLTP and data warehousing applications. It is a packaged integration of hardware and software comprising servers (Oracle 11g database servers), storage (Exadata Storage Servers), networking (InfiniBand), and virtualization. Exadata machines can store up to 10 times more data and yield 10x to 50x better execution performance. The Exadata machine runs on the latest database version, i.e., Oracle 11g Release 2.

Currently, there are two versions of Oracle Exadata: X2-2 and X2-8. The X2-2 is the smaller version, with 2 to 8 twelve-core database servers. The X2-8 is best suited for huge requirements, with 2 eighty-core database servers. Depending upon the database size and performance requirements, these versions can be deployed in quarter-rack, half-rack, and full-rack configurations. Lower configurations can be upgraded to the next level with zero downtime, making it a scalable solution.

Key features of Exadata which contribute to its extreme performance are Smart Scan, hybrid columnar compression, Smart Flash Cache, intelligent I/O Resource Management, Smart Flash Logging, and storage indexes.

Oracle encourages its partners and customers (ISVs) to get hands-on with Exadata and Exalogic through Oracle’s Exastack program. OPN members can utilize Oracle resources to achieve Oracle Exastack Ready or Oracle Exastack Optimized status. Oracle Exastack Ready status qualifies an OPN member for program benefits based on their applications running on Oracle products. A gold partner with Oracle Exastack Optimized status has full access to technical resources from Oracle and to lab environments.

2. Exalogic
Exalogic is the high-performance engineered system specifically designed for running Oracle Fusion Middleware and Oracle’s Fusion and Java-based applications. Besides enterprise applications, Exalogic works equally well for Linux- or Solaris-based applications. For a Java-based application mounted on Exalogic, performance can improve up to 10x with 5x more active users. Oracle applications run 4x faster compared to normal servers, with 3x more active users. Key software engineered with the Exalogic hardware includes WebLogic Server, Coherence, JRockit and HotSpot, the Exalogic Elastic Cloud software, Oracle Linux, and Enterprise Manager for cloud monitoring and control. Exalogic is available in quarter-rack, half-rack, full-rack, and even multi-rack (2-8) versions. Upgrades are possible from a lower configuration to a higher one with zero downtime and negligible maintenance issues.

An Exalogic unit is configured with cloud capability too. The Exalogic Elastic Cloud can mount an application on a secure private cloud with extreme performance and simple management. All types of applications, ranging from small-scale ones to large-scale ones like mainframe applications, can be based on Exalogic. The cloud capability associated with Exalogic contributes to enhanced application capacity and performance, reduced latency, and intensive database communication.

Oracle encourages its partners to take up the Exalogic EX-CITE program. The program aims to demonstrate the efficiency and effectiveness of Exalogic for customers’ and partners’ business prospects.

3. Exalytics
After the database and application engineered systems, Exalytics is the engineered system which focuses on Business Intelligence applications. The Exalytics machine enables speedy analysis of data using an in-memory (In-Memory Parallel Analytics) processing engine.

The Exalytics architecture includes the BI Foundation Suite (OBIEE), In-Memory Parallel Essbase, and the In-Memory Parallel TimesTen database for Exalytics, along with networking components (InfiniBand). Oracle TimesTen is a relational in-memory database where tables are cached under cache groups in memory. Its existing capabilities have been enhanced for analytic processing by supporting columnar compression. Oracle Essbase is an OLAP server for analytic applications.

BI query reporting time improves by 18x when Exalytics works with an Oracle database. The combination of Exalytics and Exadata improves BI query reporting time by 23x.

The Oracle Exalytics machine is equipped with four Intel Xeon E7-4800 processors, each providing 10 cores for computational purposes.

4. Oracle Database Appliance (ODA)
The Oracle Database Appliance is the engineered system which serves lower-capacity database requirements for OLTP and data warehousing applications. It is a smaller format of the Exadata idea, for customers who do not need the expandable, higher-capacity rack systems. In contrast to Exadata, ODA is affordable and offers easy implementation in place of a skilled, riskier deployment.

The Oracle Database Appliance comes as a 4U rack unit (2 server nodes and 12TB of storage capacity) running Oracle Linux with an 11gR2 RAC-supported database. A complete ODA system is engineered with Oracle Linux, Oracle 11gR2 Database (Enterprise Edition), RAC, Grid Infrastructure, Enterprise Manager, Oracle Automatic Service Request, and Appliance Manager. Automatic Service Request is an intelligent facility which can record hardware failures and generate replacement requests. Appliance Manager is a self-sufficient tool which gets started at the deployment stage for assembly, installation, and configuration tasks. In the later stages of maintenance and support, Appliance Manager can apply patches and report fixes for troubleshooting (if any).

It is best suited for lower-capacity customers with non-expandable systems. ODA is easy to implement, affordable, and ensures high performance and serviceability.

5. Oracle Big Data Appliance (OBDA)
The Oracle Big Data Appliance is the engineered system from Oracle to handle the growth of large-scale enterprise data in varied sections of the industry. The term Big Data refers to the techniques to handle large enterprise data, structured or unstructured, which grows at an exponential rate – like web data from Twitter, LinkedIn, mapping sites, etc. In a single rack, the Big Data Appliance has 216 CPU processing cores and 648TB of raw storage. Starting with a single rack, the appliance can be scaled up to eight racks.

The Big Data Appliance runs on Oracle Linux and the Oracle JVM. Cloudera’s distribution of Apache Hadoop is used, while a NoSQL database (built on Oracle Berkeley DB) stores the data sets in key-value pairs. Using the MapReduce framework, the available data is organized and loaded into the Oracle Exadata database machine. Key components which operate at this stage are Oracle Loader for Hadoop and Oracle Data Integrator. Once the organized data is loaded, it is ready for analytical processing and business decision making. The analysis is done on the Oracle Exalytics in-memory machine. The statistical environment R is also used for advanced analytics at the decision stage. All the major hardware components, i.e., the Big Data Appliance, Exadata, and Exalytics, share InfiniBand connectivity so as to boost the network speed. In addition, the Big Data Connectors handle data movement between the Big Data Appliance and the Oracle Exadata machine.

The large data is diversified, organized, and then analyzed. The Big Data platform operates in three stages: Acquire, Organize, and Analyze. The infrastructure required for the Big Data platform can be divided according to these stages.

Acquire: The stage where all available data is pulled in and kept. Important components at this stage are the Hadoop Distributed File System and the NoSQL database.

Hadoop is an open-source framework originally developed by Doug Cutting (now at Cloudera) to handle large numbers of incoming data requests. It takes requests in large batches, breaks them into smaller pieces, and feeds them into a distributed file system for parallel processing. The Cloudera Manager tool is used to manage Hadoop.

Organize: Mapping, reducing, and organization of data. Components at this stage are the Oracle Exadata database machine, Oracle Loader for Hadoop, Oracle Data Integrator, and the Hadoop MapReduce framework.

Analyze: The analysis and decision-making stage. Oracle Exalytics does the job at this stage.

Decide and Visualize: Advanced analytics using the R statistical environment.

6. SPARC SuperCluster
The SPARC SuperCluster is the engineered system from Oracle built to fit customers’ general-purpose requirements. It runs all sorts of workloads. It integrates high-performance components like SPARC T4 compute nodes, Exadata storage cells, the Exalogic Elastic Cloud software, the ZFS Storage Appliance, Solaris 11, and Enterprise Manager. The components share InfiniBand connectivity.