Using D3 and OBIEE

Standard

OBIEE is an integration of several pieces of technology that creates an enterprise-grade, scalable platform for delivering business analytics.  Because of the capabilities of OBIEE, people with no programming skills can create rich and complex visualizations using just the base functionality of the product.

When organizations do need special visualizations that can’t be accomplished with OBIEE’s delivered visualizations, there can be a tendency to turn to other technology stacks because of the perception that OBIEE isn’t easy to integrate with, or that using JavaScript libraries like D3 within OBI will make it harder to upgrade OBIEE in the future.

The proliferation of open source technologies for visualizing data in recent years has made it far easier for organizations to build rich and dynamic visualizations targeted at specific analytics needs that simply are not available inside of purchased applications.

OBI provides a number of key capabilities that are difficult, expensive and time consuming to reproduce, including:

  1. A scalable web server with performance and logging capabilities that has 24/7 worldwide support
  2. The capability to query and federate hundreds of different data sources and create a reusable and extensible business metadata layer
  3. Security services that can concurrently integrate with multiple corporate identity management systems and extend that security to row-level results
  4. An analytics deployment mechanism that already extends across the organization

Given a business case where (Gasp!) OBIEE can’t provide the needed functionality out of the box, how can an organization take advantage of the functionality OBIEE provides and still satisfy its internal requirements?

OBIEE has always provided the capability to integrate with other technologies, and some of the easiest technologies to integrate are open source JavaScript and CSS libraries such as D3, jQuery and jQueryUI.

The steps to using a D3 visualization within OBI are easy:

  1. Make libraries available
  2. Create an analysis to get data you can use with D3
  3. Create visualization

The first two steps are super easy. The third step is where the actual effort is because you have to write some JavaScript code.

Note: you will need to have, or develop, some skills to do step 3 but those skills are on par with other skills within the scope of OBI development. Don’t let this stop you from trying this out!

If you have ever done RPD or ETL work then you can handle JavaScript development.  If you haven’t ever worked on the RPD or ETL, but you are sure you could if someone would just show you how, then you can handle JavaScript development.  If you have ever created an Access Database, or an absurdly complicated Excel spreadsheet with formulas and macros, then you can handle JavaScript development.  If you are obsessive compulsive and capable of using Google then you can handle JavaScript development.  If all of these things sound beyond you then you should probably stop now and save yourself the time and effort.

Step 1 – Making Libraries Available

There are 3 possible approaches to getting the JavaScript/CSS you need depending on your situation.

Situation 1 – I have control of my OBI server (or the person who does is reasonable and will do stuff if I ask), so I can put files on my OBI server and deploy the analyticsRes directory.

Oracle documents detailing how to deploy analyticsRes can be found here – https://docs.oracle.com/middleware/12212/biee/BIESG/GUID-237D613D-CC0D-464D-BBC2-9A223CD12567.htm#BIESG9186

There are also at least 100 blog entries on how to do it.  It takes about 15 minutes.

Here’s a screenshot from sampleApp Weblogic console showing the mapping of the analyticsRes deployment to the /app/oracle/biee/user_projects/domains/bi/custom_wls_apps/analyticsRes/ file system directory on the OBI server.
Pic1

Here’s a screenshot of the contents of that directory on the filesystem –

Pic2

The libraries directory is where I have stashed all my stuff, so if I wanted to prove that the directory can be served over the web I could enter http://MyOBIServer:7780/analyticsRes/libraries/d3/d3.js into a browser and I should see the JavaScript code for D3 –

Pic3

At this point all of the libraries I need to make my very own visualization are deployed on the OBI server and available by using a script tag like this (the exact path depends on where you stashed your files under analyticsRes):

<script src="/analyticsRes/libraries/d3/d3.min.js"></script>

Or a link tag like this for CSS (the jquery-ui path here is illustrative):

<link rel="stylesheet" href="/analyticsRes/libraries/jquery-ui/jquery-ui.min.css">

Situation 2 – You can reference links outside your corporate firewall

This is super-duper easy and takes about 1 minute.  D3 and jQuery will get you started nicely and you can reference them with script tags like these (the exact version numbers will vary):

<script src="https://d3js.org/d3.v4.min.js"></script>
<script src="https://code.jquery.com/jquery-3.2.1.min.js"></script>

If it’s easier you can also load all the libraries you need into an AWS S3 bucket (or your favorite bucket storage) and make the bucket available via URL.  The script tags will be exactly as above except referencing the URL for the bucket you have created.   You can read up on buckets here: http://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html

Situation 3 – Nobody trusts you or it’s just too painful and I wanna do it right now!

You can just copy and paste the code from D3 and jQuery (the contents of the minified library files, e.g. d3.min.js) into a text area on your dashboard.

Pic4

It’s quick, it’s effective, it’s lemon scented, but for the love of all that is holy please don’t make this your production deployment approach: it’s maddeningly slow every time you edit or save your dashboard page, it’s not reusable, and it will offend anyone with a sense of decency.  But I’m not here to tell you how to live, so do what you have to do.

Step 2 – Create an analysis to get data you can use with D3

For this example, I am going to produce a visualization that shows all of the sales reps for a given manager as boxes next to that manager’s name. This example pulls data from the commonly-used SampleApp application.

This is as simple as creating a narrative view on an analysis and having it translate data into JSON.

The basic format is as follows:

Prefix

var myVar={};

myVar.data=[];

Narrative

myVar.data.push({"col1":@1,"col2":@2});

Postfix

//do something with myVar.data
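Putting the three pieces together, the sketch below reproduces by hand what the narrative view would generate for a two-row result set (the names and numbers are made up for illustration):

```javascript
// Prefix runs once, before any rows:
var myVar = {};
myVar.data = [];

// The Narrative line runs once per row; @1 and @2 are replaced by column values:
myVar.data.push({ "col1": "Angela Richards", "col2": 1500 });
myVar.data.push({ "col1": "Keith Clarke", "col2": 2200 });

// Postfix runs once, after all rows:
console.log(myVar.data.length + " rows ready for D3");
```

The end result is a plain JavaScript array of objects, which is exactly the shape D3’s data() function expects.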

Using Sample App we can create a simple example using the Sample Sales subject area:

Pic5

Pulling the Manager Name, Sales Rep Name and Revenue we get a table view like the following –

Pic6

Next we create a narrative view using our format from above, and then using the Developer Tools console of our browser (Internet Explorer in this case) we can check whether our JavaScript variable has been created by typing its name (myVar) in the console and hitting enter.  When we do this we see that our object has been created and we can drill down on it to see the data we expect:

Pic7

With this working we are now ready to move on to creating a visualization.

Step 3 – Create Visualization

Here comes the actual work.  JavaScript and D3 (a JavaScript library) are both magical and perplexing.  The hardest part about JavaScript is realizing it doesn’t wait to finish one thing before it gets started on the next.  This means if you need it to wait for something to happen (like fetch your data) then you need to use a callback function.  This will hurt your brain but it’s actually relatively simple after you get the hang of it.  We won’t need it for this example but just keep it in mind for later when you get all crazy.
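A minimal sketch of the callback pattern, assuming a hypothetical fetchData helper (here it calls back immediately; a real request would call back later, which is the whole point):

```javascript
// Hypothetical helper standing in for an asynchronous data fetch.
function fetchData(onReady) {
  var rows = [{ manager: "Angela Richards", revenue: 1500 }];
  // A real implementation would invoke onReady only when the response arrives.
  onReady(rows);
}

var received = null;
fetchData(function (rows) {
  // Anything that depends on the data belongs inside the callback.
  received = rows;
});
```

Code placed after the fetchData call, but outside the callback, cannot assume the data has arrived yet.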

D3 binds data to HTML elements (so you can build awesome visualizations) but what’s strange about it is D3 actually creates placeholders for the visual representations of your data before they exist.  It’s not terribly difficult to get used to but it’s a new way of thinking.

So let’s create our very own visualization.  First we need to add references to the code libraries in our narrative view and then add a div element that will contain the svg canvas D3 uses to draw our visualization.  Then we add the svg to our div element:

Pic8

Now if we run our narrative view and use the DOM explorer on our developer tools we can see the element we’ve created:

Pic9

The first thing we will do with our data is create an array of our distinct Manager names using some simple JavaScript and then we will begin using the magic of D3 by creating an html group element for each manager.
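A sketch of that distinct-managers step, using made-up rows in the shape the narrative view produces:

```javascript
// Sample rows as pushed by the narrative view (values are illustrative).
var myVar = { data: [
  { manager: "Angela Richards", rep: "Chris Jones", revenue: 1500 },
  { manager: "Angela Richards", rep: "Dana Lee", revenue: 900 },
  { manager: "Keith Clarke", rep: "Evan Moss", revenue: 2100 }
] };

// Build the distinct list of manager names.
var distinct = [];
myVar.data.forEach(function (row) {
  if (distinct.indexOf(row.manager) === -1) {
    distinct.push(row.manager);
  }
});
// distinct is now ["Angela Richards", "Keith Clarke"]
```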

Let’s dig in to the function we used to add groups to the page:

myGrp=mySVG.selectAll('g').data(distinct).enter().append('g').attr('id',function(d){return d.replace(" ", "");});

First you’ll notice that d3 allows us to chain methods.  Each period introduces another method call, and we can chain these together to get a lot of work done without being too verbose about it.

myGrp is a variable we assign the results of our function to and will be very useful for the next bit of code.

mySVG is the variable we assigned our drawing canvas to when we appended an svg to the page.

selectAll(‘g’) is the place holder I mentioned earlier and tells d3 that we have some data coming that we would like to do something with

data(distinct) is a reference to the array we created for our manager names

enter() is a function for joining each element in our array to an html element

append(‘g’) actually appends the group elements to our svg

and finally attr('id',function(d){return d.replace(" ", "");}) gives the html element an attribute ‘id’ equal to the manager name with the space removed.  ‘d’ is d3’s way of referring to the data bound to an element and we use a function to return the value of d when assigning it to an attribute.  By inspecting the page with our browser DOM explorer we can see the elements created by our function.

Pic10

At this point we still don’t have anything visible on the page; we’ve just created a group element for each manager and bound it to our data.  But because we have bound the data, the rest is easy.  Using the myGrp variable we can append a rectangle and d3 will automatically do it for each data value.  Our rectangle just needs some basic information to render: width, height, and the x/y position to draw it.  You will see that we use a function referring to another automatic variable, i, that represents the element number in our data.  We have statically set x to 25 pixels from the edge of our svg and y is incremented 30 pixels down from the top for each element.  We can see the result of adding our rectangle below –

myGrp.append('rect')

.attr('width',150)

.attr('height',25)

.attr('x',function(d,i){ return 25})

.attr('y',function(d,i){return i*30});

Pic11

The obvious thing missing from our rectangles is some text to tell us what they represent.  We add this the same way also setting the font color, size and family with the style attribute.

myGrp.append('text')

.attr('x',function(d,i){return 30})

.attr('y',function(d,i){return (i*30)+18})

.text(function(d) { return d;})

.attr("style","fill:white;font-family:sans-serif;font-size:15px;");

Pic12

Now if we would like to add rectangles for the Sales Reps that report to each manager, it’s as easy as rinse and repeat: loop through our distinct manager array and append a group, rectangle and text element for each rep.
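The data-prep half of that rinse-and-repeat loop can be sketched in plain JavaScript (sample values only; the D3 append calls are the same as shown earlier):

```javascript
// Group rep names under each manager.
var rows = [
  { manager: "Angela Richards", rep: "Chris Jones" },
  { manager: "Angela Richards", rep: "Dana Lee" },
  { manager: "Keith Clarke", rep: "Evan Moss" }
];

var repsByManager = {};
rows.forEach(function (row) {
  if (!repsByManager[row.manager]) {
    repsByManager[row.manager] = [];
  }
  repsByManager[row.manager].push(row.rep);
});
// For each manager you would then append a group, and for each entry in
// repsByManager[manager] a rectangle and a text element.
```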

Pic13

While rectangles aren’t particularly exciting the basic concept shown here can be used to produce nearly any visualization you can think of.

Once you master the basics of creating a simple visualization, the possibilities are almost endless. Here is another example, similar to a project I recently completed for a customer.

Imagine that we want to render incidents along I-70 at Eisenhower Pass in Colorado, showing both the milepost where they occurred and the relative elevation and grade, along with context menus for detail information. We might create a visualization like this:

Pic14

While this may look much more complicated, it just adds some additional properties and uses the same basic techniques as the simple example above.

I hope this post gives you an idea of how easy it is to use d3 and other libraries inside of OBI, and a sense of what’s possible if you have a requirement for a custom visualization.  Happy coding!


Adding a coordinate system to Oracle 12c Database


I had a situation where I was given a shapefile from ESRI with EPSG 102005, which isn’t in the database.  When importing the shapefile through MapBuilder it of course flamed out, so I needed to figure out how to add the coordinate system to the database.

I found this on MOS – Example Adding A New Coordinate System based on the EPSG model introduced in 10gR2 (Doc ID 395171.1) – and was able to put together what I needed to import the shapefile and work with it using MapBuilder and the MapViewer Editor.

Here’s the SQL I used to insert the new definitions into the database –


/*
PROJCS[
"USA_Contiguous_Equidistant_Conic",
GEOGCS["GCS_North_American_1983",
DATUM[
"North_American_Datum_1983",
SPHEROID["GRS_1980",6378137,298.257222101]],
PRIMEM["Greenwich",0],
UNIT["Degree",0.017453292519943295]],
PROJECTION["Equidistant_Conic"],
PARAMETER["False_Easting",0],
PARAMETER["False_Northing",0],
PARAMETER["Longitude_Of_Center",-96],
PARAMETER["Standard_Parallel_1",33],
PARAMETER["Standard_Parallel_2",45],
PARAMETER["Latitude_Of_Center",39],
UNIT["Meter",1],
AUTHORITY["EPSG","102005"]]
*/

insert into MDSYS.SDO_COORD_OPS ( 
COORD_OP_ID, 
COORD_OP_NAME, 
COORD_OP_TYPE, 
SOURCE_SRID, 
TARGET_SRID, 
COORD_TFM_VERSION, 
COORD_OP_VARIANT, 
COORD_OP_METHOD_ID, 
UOM_ID_SOURCE_OFFSETS, 
UOM_ID_TARGET_OFFSETS, 
INFORMATION_SOURCE, 
DATA_SOURCE, 
SHOW_OPERATION, 
IS_LEGACY, 
LEGACY_CODE, 
REVERSE_OP, 
IS_IMPLEMENTED_FORWARD, 
IS_IMPLEMENTED_REVERSE) 
VALUES ( 
102005000, 
'USA_Contiguous_Equidistant_Conic', 
'CONVERSION', 
NULL, 
NULL, 
NULL, 
NULL, 
9802, 
NULL, 
NULL, 
NULL, 
NULL, 
1, 
'FALSE', 
NULL, 
1, 
1, 
1);

--8821: Latitude_Of_Origin
insert into MDSYS.SDO_COORD_OP_PARAM_VALS ( 
COORD_OP_ID, 
COORD_OP_METHOD_ID, 
PARAMETER_ID, 
PARAMETER_VALUE, 
PARAM_VALUE_FILE_REF, 
UOM_ID) 
VALUES ( 
102005000, 
9802, 
8821, 
39.0, 
NULL, 
9102);

--8822: Central_Meridian
insert into MDSYS.SDO_COORD_OP_PARAM_VALS ( 
COORD_OP_ID, 
COORD_OP_METHOD_ID, 
PARAMETER_ID, 
PARAMETER_VALUE, 
PARAM_VALUE_FILE_REF, 
UOM_ID) 
VALUES ( 
102005000, 
9802, 
8822, 
-96.0, 
NULL, 
9102);

--8823: Standard_Parallel_1
insert into MDSYS.SDO_COORD_OP_PARAM_VALS ( 
COORD_OP_ID, 
COORD_OP_METHOD_ID, 
PARAMETER_ID, 
PARAMETER_VALUE, 
PARAM_VALUE_FILE_REF, 
UOM_ID) 
VALUES ( 
102005000, 
9802, 
8823, 
33.0, 
NULL, 
9102);

--8824: Standard_Parallel_2
insert into MDSYS.SDO_COORD_OP_PARAM_VALS ( 
COORD_OP_ID, 
COORD_OP_METHOD_ID, 
PARAMETER_ID, 
PARAMETER_VALUE, 
PARAM_VALUE_FILE_REF, 
UOM_ID) 
VALUES ( 
102005000, 
9802, 
8824, 
45.0, 
NULL, 
9102);

--8826: False_Easting
insert into MDSYS.SDO_COORD_OP_PARAM_VALS ( 
COORD_OP_ID, 
COORD_OP_METHOD_ID, 
PARAMETER_ID, 
PARAMETER_VALUE, 
PARAM_VALUE_FILE_REF, 
UOM_ID) 
VALUES ( 
102005000, 
9802, 
8826, 
0.0, 
NULL, 
9001);

--8827: False_Northing
insert into MDSYS.SDO_COORD_OP_PARAM_VALS ( 
COORD_OP_ID, 
COORD_OP_METHOD_ID, 
PARAMETER_ID, 
PARAMETER_VALUE, 
PARAM_VALUE_FILE_REF, 
UOM_ID) 
VALUES ( 
102005000, 
9802, 
8827, 
0.0, 
NULL, 
9001); 

-- create the projected CRS
insert into MDSYS.SDO_COORD_REF_SYSTEM ( 
SRID, 
COORD_REF_SYS_NAME, 
COORD_REF_SYS_KIND, 
COORD_SYS_ID, 
DATUM_ID, 
SOURCE_GEOG_SRID, 
PROJECTION_CONV_ID, 
CMPD_HORIZ_SRID, 
CMPD_VERT_SRID, 
INFORMATION_SOURCE, 
DATA_SOURCE, 
IS_LEGACY, 
LEGACY_CODE, 
LEGACY_WKTEXT, 
LEGACY_CS_BOUNDS, 
GEOG_CRS_DATUM_ID) 
VALUES ( 
102005000, 
'USA_Contiguous_Equidistant_Conic',  
'PROJECTED', 
4530, 
NULL, 
4269, 
102005000, 
NULL, 
NULL, 
NULL, 
NULL, 
'FALSE', 
NULL, 
NULL, 
NULL, 
6269); 

--Check your work
select srid, wktext from cs_srs where srid = 102005000;

select mdsys.sdo_cs.transform( 
SDO_GEOMETRY( 2001, 4269, SDO_POINT_TYPE( -79.5, 36, NULL), NULL, NULL), 102005000) 
from dual;

 

Running MapViewer Editor on Windows


I downloaded the latest version of  MapViewer Editor the other day and double clicked the jar file and it opened right up.  But when I tried to connect to a server the dialog window opened behind the main program causing me to utter several phrases not appropriate for this post.

Turns out it’s just a Java issue.  Whatever is in my path doesn’t point to what MapViewer Editor wanted in order to act right, but instead of jacking around with JAVA_HOME or path variables I just opened it from the command line, telling it to use one of the many JDKs living on my desktop.

Here’s the command I used just in case this saves someone else some pain.

"C:\Program Files\Java\jdk1.8.0_65\bin\java.exe" -jar map_editor-12.2.1.2.0.jar

 

Twitter Live Feed with Oracle Database As A Service and Business Intelligence Cloud Service


 

In this post I am going to show how you can very quickly get a Twitter live feed streaming into your Oracle Database as a Service (DBAAS) instance.  Although this example will work on any machine with an Oracle client and against any Oracle Database instance, putting it on DBAAS makes it extremely easy to use our streaming Twitter data inside other Oracle Cloud applications such as Business Intelligence Cloud Service (BICS).

Acknowledgments:

  1. Thank you to the geniuses at Tweepy for making it so I don’t have to deal with oauth or really anything related to the Twitter API.  I don’t know who you are but I love you! ♥
  2. Twitter so I never have to wonder what Kim or Kanye are thinking.
  3. My homey Przemyslaw Piotrowski (I don’t know him either but his examples sustain my laziness in coding) for writing a super helpful series called The Mastering Oracle+Python Series which I fully intend to finish someday.
  4. Whoever this dude is (https://pythonprogramming.net/twitter-api-streaming-tweets-python-tutorial/) whose example made this easy.

The great thing about DBAAS is I actually have access to the underlying server, so I can take advantage of the operating system and languages like python as well as the database.  Some might be offended by such interloping but I’m not one of them, and as I said above there isn’t any requirement to do this on the database server – I’m just taking advantage of a machine that has the cycles to spare.

I will assume that you already have a DBAAS instance up and running, are familiar with how to administer it, and are comfortable using tools like MobaXterm to connect to it.

At the time of this example our Database as a Service is running Oracle Linux 6.7 and 12.1.0.2 of the database with all the bells and whistles.  I needed to install 5 things to make this work:

  1. pyliblzma, to make it so I can get the latest EPEL repository without having to deal with the “expletive” Error: xz compression not available message

wget https://kojipkgs.fedoraproject.org//packages/pyliblzma/0.5.3/3.el6/x86_64/pyliblzma-0.5.3-3.el6.x86_64.rpm

yum install pyliblzma-0.5.3-3.el6.x86_64.rpm

  2. The latest EPEL repository for this version of linux

wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

yum install epel-release-6-8.noarch.rpm

  3. Then I can install python-pip with ease

yum install python-pip

  4. And then we can fetch Tweepy with equal ease

pip install tweepy

  5. And last but not least, Oracle’s python library for interacting with the database

pip install cx_Oracle

Now that we have all the chunks make sure you can import the stuff you need with python and not get any errors.

[oracle@testdrive-01 ~]$ python
Python 2.6.6 (r266:84292, Jul 23 2015, 05:13:40)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tweepy
>>> import cx_Oracle
>>>

If you have issues with cx_Oracle you most likely just need to make sure the Oracle environment variables are set via .bashrc or however you like to set them.

I created a table in my database to store the JSON document that is the API response

CREATE TABLE "TWITTER_USER"."EAT_MY_TWEET"
( "ID" NUMBER,
"USERNAME" VARCHAR2(500 BYTE),
"TIMEWHYUPUNISHME" TIMESTAMP (6) DEFAULT systimestamp,
"TWEET_JSON" CLOB,
CONSTRAINT "ENSURE_JSON" CHECK (TWEET_JSON is JSON) ENABLE
)

In order to use the Twitter API you will have to register your app with Twitter at https://apps.twitter.com, at which point they will give you the keys to make oauth work.


Then all we need to do is write (by which I mean copy paste and modify) a little python code and we are ready to rock:

#import libraries
from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener
import cx_Oracle
import datetime
import json
import sys


#connection string for database
conn_str='WHODAT/Ap@ssw0rd@localhost:1521/MyPluggableDatabseServiceName'

#get me a connection
conn =cx_Oracle.connect(conn_str)

#turn on autocommit
conn.autocommit=1

#object for executing sql
c=conn.cursor()

#clob variable
bvar=c.var(cx_Oracle.CLOB)

#twitter api application keys
#consumer key, consumer secret, access token, access secret.
ckey='Dont'
csecret='Tell'
atoken='Anybody'
asecret='The Password'

#listen to the stream
class listener(StreamListener):

    #get some
    def on_data(self, data):
        try:

            #barf response into a json object
            all_data = json.loads(data)

            #parse out tweet text, screen name and tweet id
            tweet = all_data["text"]
            if (all_data["user"]["screen_name"]) is not None:
                username = all_data["user"]["screen_name"]
            else:
                username = 'No User'
            tid = all_data["id"]

            #set clob variable to json doc
            bvar.setvalue(0,data)
            try:
                #create sql string with bind for clob var
                sql_str="INSERT INTO EAT_MY_TWEET (ID,USERNAME,TWEET_JSON) Values("+str(tid)+",q'["+username.encode('utf-8').strip()+"]',:EATIT)"

                #insert into database
                c.execute(sql_str,[bvar])

            except Exception:
                sys.exc_clear()

            #watch tweets go by in console
            print((username,tweet))

            #in case you want to print the response
            #print(data)
            return(True)
        except Exception:
            sys.exc_clear()

    def on_error(self, status):
        print status

#log in to twitter api 
auth = OAuthHandler(ckey, csecret) 
auth.set_access_token(atoken, asecret)
 
#fire it up 
twitterStream = Stream(auth, listener()) 

#what to search for (not case sensitive)
#comma separated; for words use the word, for hashtags use #
#phrases are words separated by spaces (like this comment)
twitterStream.filter(track=["ProveFilterWorks,Oracle,Vlamis,OBIEE,BICS,DVCS,Data Visualization Desktop"])

I named my file stream.py, so to execute it I just fire it up with nohup –

nohup python -u /home/oracle/stream.py>/home/oracle/stream.out 2>/home/oracle/stream.err &

and if I tail my stream.out file with tail -f stream.out I can watch the tweets go by as they are inserted into the database

stream

Now that I am inserting the JSON document that the Twitter API sends back to me into the database

json

I can use Oracle Database 12c’s support for JSON to expose the document in my table as a view using the following SQL

  CREATE OR REPLACE FORCE EDITIONABLE VIEW "TWITTER_USER"."LIVE_TWITTER_FEED" ("CST_DATE", "UTC_DATE", "UTC_HOUR", "UTC_MINUTE", "ID", "CREATED_ON", "SCREEN_NAME", "LOCATION", "FOLLOWERS_CNT", "FRIENDS_CNT", "LISTED_CNT", "FAVOURITES_CNT", "STATUSES_CNT", "RETWEET_CNT", "FAVOURITE_CNT", "URL", "PROFILE_IMAGE_URL", "BANNER_IMAGE_URL", "HASHTAGS", "TWEET", "EMT_TIMEWHYUPUNISHME") AS 
  SELECT cast(TO_TIMESTAMP_TZ(REPLACE(upper(b.created_on),'+0000','-00:00'),'DY MON DD HH24:MI:SS TZH:TZM YYYY')  at Time zone 'CST' as date) CST_DATE,
cast(TO_TIMESTAMP_TZ(REPLACE(upper(b.created_on),'+0000','-00:00'),'DY MON DD HH24:MI:SS TZH:TZM YYYY') as date) UTC_DATE,
cast(to_char(cast(TO_TIMESTAMP_TZ(REPLACE(upper(b.created_on),'+0000','-00:00'),'DY MON DD HH24:MI:SS TZH:TZM YYYY') as date),'HH24') as number) UTC_HOUR,
cast(to_char(cast(TO_TIMESTAMP_TZ(REPLACE(upper(b.created_on),'+0000','-00:00'),'DY MON DD HH24:MI:SS TZH:TZM YYYY') as date),'MI') as number) UTC_MINUTE,
a."ID",b."CREATED_ON",a."USERNAME",b."LOCATION",b."FOLLOWERS_CNT",b."FRIENDS_CNT",b."LISTED_CNT",b."FAVOURITES_CNT",b."STATUSES_CNT",
b."RETWEET_CNT",b."FAVOURITE_CNT",b."URL",b."PROFILE_IMAGE_URL",b."BANNER_IMAGE_URL",b."HASHTAGS",b."TWEET",
a.TIMEWHYUPUNISHME
FROM EAT_MY_TWEET a,
json_table(tweet_json,
'$' columns(
    id varchar(50) path '$.id',
    created_on varchar2(100) path '$.created_at',
    screen_name varchar2(200) path '$."user".screen_name',
    location varchar2(250) path '$."user"."location"',
    followers_cnt number path '$."user".followers_count',
    friends_cnt  number path '$."user".friends_count',
    listed_cnt  number path '$."user".listed_count',
    favourites_cnt  number path '$."user".favourites_count',
    statuses_cnt  number path '$."user".statuses_count',
    retweet_cnt number path '$.retweet_count',
    favourite_cnt number path '$.favorite_count',
     url varchar2(250) path '$."user"."url"',
    profile_image_url  varchar2(500) path '$."user".profile_image_url',
    banner_image_url varchar2(500) path '$."user".profile_banner_url',
     hashtags varchar2(500) format json with wrapper path '$.entities.hashtags[*].text',
      tweet varchar2(250) path '$.text'
    -- nested path '$.entities.hashtags[*]' columns (ind_hashtag varchar2(30) path '$.text'    )
   
)
) b

Then I can analyze the real-time Twitter data I’m interested in using a simple SQL statement

SELECT * FROM LIVE_TWITTER_FEED ;

and while being able to hit real-time Twitter data with SQL is cool, the real goal is being able to surface that data inside of our BI tools.  Here we combine Answers and Visual Analyzer on a dashboard to show the latest tweets and aggregate information over time utilizing Oracle Business Intelligence Cloud Service (BICS)

bics

Maps in OBIEE Free and Easy – Part 2 – The Database


In the database, geometry objects are stored in a special column type called SDO_GEOMETRY that is specifically for geometric data.  The database stores geometry objects, such as states, as an ordered set of points, which are then connected by line segments by MapViewer to render the shape.

SDO_geometry

The type, and its associated metadata views and functions, are provided by the MDSYS schema which is included with your database installation.  Although the SDO_GEOMETRY type is associated with the MDSYS schema, it can be used by any database user to create geometry columns in any schema.  The only requirements are that there must be a spatial index on the table that contains the geometry type, and there must be an entry in the USER_SDO_GEOM_METADATA view that tells the database that a particular table and column contain geometric data, in order for functions and rendering operations to work correctly.

The geometry type is analogous to an XML type in that it is stored as a column in a table but also has its own internal structure that can be addressed by various database functions provided specifically for it.

grid

X/Y Grid

Using an example from the documentation, if we look at the polygon named cola_b rendered on an xy grid we can see that its definition should include the points (5,1, 8,1, 8,6, 5,7, 5,1).
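To make the ordinate encoding concrete, here is the cola_b point list expressed in JavaScript (the flat x,y layout mirrors how the points are stored in the geometry’s ordinate array):

```javascript
// The cola_b polygon from the example: x,y pairs, with the first point
// repeated at the end to close the ring.
var ordinates = [5, 1, 8, 1, 8, 6, 5, 7, 5, 1];

// Split the flat list into points for clarity.
var points = [];
for (var i = 0; i < ordinates.length; i += 2) {
  points.push({ x: ordinates[i], y: ordinates[i + 1] });
}

// A valid polygon ring closes on itself: last point equals first point.
var closed = points[0].x === points[points.length - 1].x &&
             points[0].y === points[points.length - 1].y;
// closed is true
```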

If we look at the COLA_MARKETS table in the database and at the row that contains cola_b, we can see that the SHAPE column (which is an SDO_GEOMETRY type) does indeed contain those points.  After checking that we have an entry in the USER_SDO_GEOM_METADATA view and a spatial index on the table, we can create a map view of our COLA_MARKETS table right inside of SQL Developer.

sqldevcola

COLA_MARKETS table in the database

Map-based objects such as a US state are merely a more complex example of a polygon, with a reference to a projection system (SRID) for rendering the earth’s three-dimensional surface on a two-dimensional plane.  If we look at a stored geometry for Colorado, we can see that the only difference between it and the simple polygon we created for cola_b above is a reference to an SRID code and the number of points (longitude and latitude in this case) needed to render it.
4corners

That covers the basics of what you need to know about geometry and the database for now.  The SDO_GEOMETRY column type is used to store geometry definitions in the database and the USER_SDO_GEOM_METADATA view gives the database the information it needs to create spatial indexes.

I’ll show you in another post that lines, points, circles, etc. can also be associated with and rendered on maps and it’s pretty easy to create and use these objects to add even more insight to your map visualizations.  Additionally, we’ll talk about several important and useful database procedures & functions that are provided for validating (and fixing if needed) your geometric data which is especially helpful when we import freely available shapefiles into our database to create our own maps for use inside of OBIEE.

 

 

Maps in OBIEE Free and Easy – Part 1 – The Basics


Many people mistakenly believe that implementing map views in Oracle Business Intelligence (OBI) is difficult, requires additional licensing on the Oracle database and requires them to pay for content from third party map providers.  Third party provided map data and Oracle Spatial and Graph (an additional option for the Enterprise Edition of the database) may offer significant value to your organization but neither of these options are necessary to create and use maps with OBI.  Country, state or province, county, city and even neighborhood level map data is freely available and easily found in a number of places.

Oracle Locator is included with all versions of the Oracle Database.  Oracle MapViewer is included with OBI, and the Map Builder and Map Editor tools included with MapViewer make it possible (and dare I say easy) to import and customize maps.  Map views are produced through an integration between the Oracle Database, MapViewer and OBI, but the hard part of the integration is already done for you, so all that is really required to utilize the power of maps is an understanding of the components involved in the integration and how they interact with each other.

As with most things Oracle, what at first seems complicated is actually pretty simple once you understand what’s going on.  I can’t count the number of times when first learning something new with Oracle technology I have marveled at how over complicated it seems but after taking the time to understand it have been equally impressed with how simple they have made very complicated things.  Maps are like this.  At the heart of map views we are dynamically associating dollar or quantity type measures with geometric shapes and rendering those shapes using colors, styles and other visualizations in such a way as to enhance the meaning of the underlying data.   That’s a complicated task.  Thankfully all you really need to know about it is what geometric shapes (US States as an example) do you want to visualize and what attribute of that shape (State name for example) do you want to use to associate with your measures.  Everything else involved is just giving the various components involved what they need to do their part of the task.

MapViewer1

Let’s take a look at a figure from the MapViewer documentation that helps to describe the architecture involved in producing maps and then we will discuss the important things to understand at each layer.  OBI is the “Client” application in our scenario so it passes map rendering requests to MapViewer which in turn interacts with the database to get the map definition information it needs to render a map and pass it back to OBI.

MapViewer (the “Middle Tier”) needs to know what map definition to use and how that definition is linked to OBI data.  This linkage is defined in the Map Administration page of OBI and ties a field (or fields) from a subject area to a map layer.  MapViewer is a J2EE application that comes pre-deployed with the included Weblogic instance that OBI is deployed on.  Although it is collocated with OBI it can also be deployed on a standalone server running Glassfish or Tomcat.

Oracle Database stores the geometric definition of shapes and what colors, line types and text styles to use when rendering them.  Additionally, various functions can be performed on geometric data such as distance calculations or merging several states into a territory.

So that gives us a basic understanding of the moving parts involved with creating Map views inside of OBI Answers.  OBI is a client application asking Oracle MapViewer to render an interactive map for it based on some business data.  Oracle MapViewer fetches the map definition and styles from the Oracle Database and uses that information to render a map that it passes back to OBI.

 

Deploying Oracle Business Intelligence 12c on AWS EC2 Instance

Standard

This is the third in a series of posts around putting together your own Oracle BI development environment on AWS.  Utilizing AWS can be a remarkably affordable option for most developers (under $50/month, assuming the system runs about 50 hours per week).

See Deploying Oracle Database 12c on AWS EC2 Instance and Deploying Oracle APEX on EC2 against Oracle Database on EC2 for more information.  (Note – deploying APEX is not necessary for following this post but you do need a database available)

In this post, I will show you how to install Oracle Business Intelligence 12c on an Amazon Web Services EC2 instance.

First let me list some super helpful resources –

Posts that I use as the basis for what I show in this blog post –

Automating Database Startup and Shutdown on Linux

Installing OBIEE 12c. Part 1: Prerequisites

SwapFaq and Swap

How To Install and Configure GUI for Amazon EC2 RHEL 7 Instance

Oracle WebLogic Server 12c: Creating a Boot Identity File for Easier Server Start Up

RPM for Linux Installation Notes

MobaXterm Download

Fix Firefox Already Running Error

Licensing Oracle Software in the Cloud Computing Environment

Workflow for Installing OBIEE 12c

This YouTube video follows the workflow I’ve created below –

  1. Make sure you have a database – See my post: Deploying Oracle Database 12c on AWS EC2 Instance
  2. Provision AWS ol7 Instance
  3. Attach volumes
    • 5GB Swap
    • 15GB /u01
    • 15GB /inv
    • 15GB Root
  4. Login via MobaXterm as ec2-user and change password
    • sudo passwd ec2-user
  5. Install packages needed for instance
    • sudo yum install wget zip unzip -y
    • sudo yum install perl-libwww-perl.noarch -y
    • sudo yum install oracle-rdbms-server-12cR1-preinstall -y (this is used for installing database but works wonderfully for OBIEE too)
  6. Make Swap and mount volumes
    • df -h
    • lsblk
    • sudo mkswap /dev/xvdb (the volume id (xvdb here) is instance dependent)
    • sudo swapon /dev/xvdb
    • sudo vi /etc/fstab
    • /dev/xvdb none swap defaults 0 0
    • sudo mkfs -t ext4 /dev/xvdc (the volume id (xvdc here) is instance dependent)
    • sudo mkdir -p /u01 /inv (create the mount points if they don’t already exist)
    • sudo mount /dev/xvdc /u01
    • /dev/xvdc /u01 ext4 defaults 0 0 (add to /etc/fstab)
    • /dev/xvdf /inv ext4 defaults 0 0 (add to /etc/fstab; this is for software install files, you may need to format and mount a drive separately for this)
    • sudo mount -a (remount everything to make sure it worked)
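The fstab edits in step 6 are easy to fat-finger, so here is a minimal shell sketch of that piece as an idempotent helper. The add_fstab_entry function and the scratch-file usage are my own illustration, not part of the stock workflow; the device names remain instance dependent.

```shell
#!/bin/sh
# Sketch: append an fstab entry only if that exact line is not already present,
# so rerunning the setup never duplicates entries.
add_fstab_entry() {
  entry="$1"
  fstab="${2:-/etc/fstab}"
  # -F matches the literal string; -q suppresses output
  if ! grep -qF "$entry" "$fstab" 2>/dev/null; then
    echo "$entry" >> "$fstab"
  fi
}

# Example usage (against a scratch copy rather than the real /etc/fstab):
# add_fstab_entry "/dev/xvdb none swap defaults 0 0" /tmp/fstab.test
```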
  7. Change password for oracle user and make it possible for user to connect remotely (Managing User Accounts on Your Linux Instance)
    • sudo passwd oracle
    • sudo chown -R oracle.oinstall /u01
    • sudo chown -R oracle.oinstall /inv
    • su oracle (switch to oracle user)
    • cd ~ (make sure you are in the oracle user home)
    • mkdir .ssh (create location for key file)
    • chmod 700 .ssh (set permissions)
    • touch .ssh/authorized_keys (create file)
    • chmod 600 .ssh/authorized_keys (set permissions)
    • GET http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > .ssh/authorized_keys (copy public key to file)
    • log out and login as oracle user
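The .ssh setup in step 7 boils down to a small permissions recipe, sketched below as a reusable function. The setup_authorized_keys name is mine, and the key is passed in as a parameter so the logic can be tried anywhere; on the actual instance you would fetch it from the metadata service as shown above.

```shell
#!/bin/sh
# Sketch: create ~/.ssh and authorized_keys with the permissions sshd requires.
setup_authorized_keys() {
  home_dir="$1"
  pubkey="$2"
  mkdir -p "$home_dir/.ssh"
  chmod 700 "$home_dir/.ssh"           # sshd refuses group/world-readable .ssh
  echo "$pubkey" >> "$home_dir/.ssh/authorized_keys"
  chmod 600 "$home_dir/.ssh/authorized_keys"
}

# On the instance itself, the key comes from the EC2 metadata service:
# curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
```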
  8. Install desktop (borrowed from this post at DevOpsCube)
    • su root
    • sudo yum groupinstall -y "Server with GUI"
    • sudo systemctl set-default graphical.target
    • sudo systemctl default
    • sudo rpm -ivh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
    • sudo yum install -y xrdp tigervnc-server
    • sudo chcon --type=bin_t /usr/sbin/xrdp
    • sudo chcon --type=bin_t /usr/sbin/xrdp-sesman
    • sudo systemctl start xrdp
    • sudo systemctl enable xrdp
    • sudo firewall-cmd --permanent --add-port=3389/tcp
    • For OBIEE go ahead and open 9500 and 9502 as well
      • sudo firewall-cmd --permanent --add-port=9500/tcp
      • sudo firewall-cmd --permanent --add-port=9502/tcp
    • sudo firewall-cmd --reload
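If you would rather not repeat firewall-cmd for each port, the openings can be wrapped in one small loop. A sketch follows; the DRYRUN hook is my own convention for previewing the generated commands without root, not a firewall-cmd feature.

```shell
#!/bin/sh
# Sketch: open the RDP port plus the two OBIEE ports in one pass.
# Set DRYRUN=echo to print the commands instead of running them via sudo.
open_ports() {
  runner="${DRYRUN:-sudo}"
  for port in 3389 9500 9502; do
    $runner firewall-cmd --permanent --add-port=${port}/tcp
  done
  $runner firewall-cmd --reload
}
```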
  9. Install gconf-editor and disable lock screen
    • yum -y install gconf-editor
    • alt-f2 gconf-editor
    • To disable the lock screen and log out functions, set the /desktop/gnome/lockdown/disable_lockscreen_and_logout key to true.
  10. Install java & set JAVA_HOME in bash profile
    • su root
    • rpm -ivh jdk-8u73-linux-x64.rpm
    • vi /home/oracle/.bash_profile
    • export JAVA_HOME=/usr/java/jdk1.8.0_73
    • export PATH=$JAVA_HOME/bin:$PATH
    • source ~/.bash_profile (reloads profile)
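The profile edits in step 10 can also be made idempotent so a rerun of the setup doesn’t stack duplicate export lines. A sketch, assuming the jdk1.8.0_73 path from the RPM above; the set_java_home helper is illustrative, not part of the stock workflow.

```shell
#!/bin/sh
# Sketch: append the JAVA_HOME exports to a profile file only once.
set_java_home() {
  profile="$1"
  jdk="${2:-/usr/java/jdk1.8.0_73}"   # path installed by the 8u73 RPM
  if ! grep -q "JAVA_HOME=" "$profile" 2>/dev/null; then
    {
      echo "export JAVA_HOME=$jdk"
      echo 'export PATH=$JAVA_HOME/bin:$PATH'
    } >> "$profile"
  fi
}

# Example usage:
# set_java_home /home/oracle/.bash_profile
```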
  11. Install SQL Developer
    • su root
    • rpm -ivh
    • find / -name "sqldeveloper*" (locate where the rpm installed SQL Developer)
  12. Install Infrastructure and OBI (borrowed from series of posts starting here by Red Stack Tech)
    • $JAVA_HOME/bin/java -d64 -jar fmw_12.2.1.0.0_infrastructure.jar
    • ./bi_platform-12.2.1.0.0_linux64.bin
    • cd /u01/home/oracle/Oracle/Middleware/Oracle_Home/bi/bin
      ./config.sh
  13. Setup auto start for your pluggable database
    • To set up your pluggable database to autostart on the database you can use a trigger.  Login as the SYS user on the container database and execute the following –
      • create or replace trigger sys.after_startup
        after startup on database
        begin
        execute immediate 'alter pluggable database YOUR_PDB_NAME open read write';
        end after_startup;
      • I have links to Tim Hall’s post on how to autostart your database in this post on Deploying Oracle Database 12c on AWS EC2 Instance
  14. Setup auto start for OBIEE 12c
    • To make sure your weblogic instance doesn’t prompt for user name and password set up a boot identity file
      • cd /u01/home/oracle/Oracle/Middleware/Oracle_Home/user_projects/domains/bi/servers/AdminServer
      • mkdir security
      • cd security
      • vi boot.properties
        • add lines –
          • username=YOUR_WEBLOGIC_USER
            password=YOUR_WEBLOGIC_PASSWORD
    • To make your OBIEE instance auto-start when you start the machine instance, I have borrowed from Tim Hall’s post on auto-starting the Oracle database; look under the heading The "rsh" Command for the database auto-start script
    • Create a file called "/etc/init.d/dbora" as the root user and copy the script from Tim Hall’s page into the file
      • Change the line ORACLE_HOME line to your OBI startup script location
        • ORACLE_HOME=/u01/home/oracle/Oracle/Middleware/Oracle_Home/user_projects/domains/bi/bitools
        • and the dbstart and dbshut lines to reference the start and stop scripts respectively
          • runuser -l $ORACLE -c "$ORACLE_HOME/bin/start.sh"
          • runuser -l $ORACLE -c "$ORACLE_HOME/bin/stop.sh"
      • Your final script should look something like the following
      • #!/bin/sh
        # chkconfig: 345 99 10
        # description: Oracle auto start-stop script.
        #
        # Change the value of ORACLE_HOME to specify the correct Oracle home
        # directory for your installation.
        ORACLE_HOME=/u01/home/oracle/Oracle/Middleware/Oracle_Home/user_projects/domains/bi/bitools
        #
        # Change the value of ORACLE to the login name of the
        # oracle owner at your site.
        #
        ORACLE=oracle
        PATH=${PATH}:$ORACLE_HOME/bin
        export ORACLE_HOME PATH
        #
        case $1 in
        'start')
            runuser -l $ORACLE -c "$ORACLE_HOME/bin/start.sh"
            touch /var/lock/subsys/dbora
            ;;
        'stop')
            runuser -l $ORACLE -c "$ORACLE_HOME/bin/stop.sh"
            rm -f /var/lock/subsys/dbora
            ;;
        *)
            echo "usage: $0 {start|stop}"
            exit
            ;;
        esac
        #
        exit

    • Lastly change the permissions and add entry for startup
      • chmod 750 /etc/init.d/dbora
      • chkconfig --add dbora