Feed aggregator
Establishing an initial connection can take an extraordinary amount of time.
Master Detail in APEX
Node Reliability Issues
AWS X-Ray Overview, Features, and Benefits
AWS X-Ray allows developers to analyze and create a service map that displays an application’s architecture, including its components and their dependencies. With the help of AWS X-Ray, we can understand how our application and its underlying services are performing, so we can identify and debug the root cause of performance issues and errors. […]
The post AWS X-Ray Overview, Features, and Benefits appeared first on Oracle Trainings for Apps & Fusion DBA.
Amazon DynamoDB: Fast And Scalable NoSQL Database
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. Check out the blog at https://k21academy.com/awsdev14 for a better understanding. In this blog, we are going to cover one […]
AWS SageMaker: Modeling With AWS Machine Learning
AWS SageMaker has been the industry leader, offering many services that complement the entire lifecycle from data preparation to model deployment. Once you have identified that machine learning is the right fit for your business problem, your next step will be to identify the right machine learning type, which helps you find the right model for […]
Google Cloud Storage And Database Services Rundown
Google Cloud Platform (GCP) delivers various storage and database service offerings that remove much of the burden of building and managing storage and infrastructure. Google Cloud offers 9 storage & database services. Want to know more about Google Cloud Storage & Database Services? Read the blog post at https://k21academy.com/gcp17 to learn more. The blog post […]
Exploratory Data Analysis with AWS Machine Learning
According to “The State of Data Science 2020” study, exploratory data analysis (EDA), data management, feature selection, and feature engineering account for more than 66% of a data scientist’s time. The data analysis process involves many steps on the data, such as cleansing and transformation, to find out what hidden information the data has and […]
Amazon Cognito Overview & Its Features
Among the perennial challenges in building applications are user authentication and management. Few developers want to build yet another user identification and authentication system for their application, nor would they want to make a user create yet another account unless needed. Amazon Cognito makes it simpler for developers to manage user identities, authentication, […]
Introduction To Application Express (APEX) in Oracle Cloud
As per Oracle’s new update, a few topics have been added to the 1Z0-998-20 version of the certification exam, including private endpoints, DB monitoring, and Application Express (APEX). Oracle APEX is also now available as a managed cloud service, the Oracle APEX Application Development service, which can be accessed via the cloud console. To read in brief about the […]
Data Engineering with AWS Machine Learning
Storing data for machine learning is challenging due to the varying formats and characteristics of data. Raw ingested data must first be transformed into the format necessary for downstream machine learning consumption, and once the data is ready to be used, it must be ingested from storage to the machine learning service. In this blog […]
[Q/A] AWS Certified Solution Architect Associate Day 1: Introduction To Cloud & AWS
In this blog, we will cover the Q/A’s from Day 1 of AWS Certified Solution Architect Associate covering Introduction To Cloud & AWS FAQs. This blog will help you to get started with AWS Solution Architect. The blog post at k21academy.com/awssa36 covers: 1. Cloud Services and Deployment Model 2. AWS Global Infrastructure Region AZ 3. […]
Capturing DBMS_OUTPUT.PUT_LINE messages in the heat of a Six Nations campaign
It’s not a good time to be English, not in my house. That’s right, the Six Nations is in full swing and my Welsh wife and her family are once again indulging in the time-honoured practice of taunting the English.
As deployment of my emergency backup nationality ( “born in New Zealand, I’m neutral, honest !”) has failed, I’m reduced to keeping a low profile until the Brains Bitter runs out.
Fortunately, there’s excellent Wi-Fi reception in the cupboard under the stairs, so I can drown out the sound of Welsh Nationalism and concentrate on the topic at hand.
Whatever language you’re coding in, and however sophisticated your IDE, sooner or later you will likely revert to the time-honoured debugging technique of writing out messages or variable values at runtime in an effort to find out what the heck is going on.
PL/SQL is no exception and DBMS_OUTPUT.PUT_LINE is the procedure that facilitates this practice.
However, whilst it’s a simple matter to view the messages interactively, how can you retrieve them if you are running the program as part of a batch process, where attaching a client to the session is not an option ?
WARNING – the code that follows may contain (memory) leeks…
How DBMS_OUTPUT works in SQL*Plus

To begin with, let’s find out what set serveroutput on in a SQL*Plus session actually does.
Yes, you could just have a quick look at the documentation, but you’re here now so…
alter session set tracefile_identifier = 'mike';

exec dbms_monitor.session_trace_enable(null, null, true, true);

set serveroutput on size unlimited

begin
    dbms_output.put_line(q'[What's occurrin' ?]');
end;
/

exec dbms_monitor.session_trace_disable;
We can find the location of the tracefile using :
select name from v$diag_info where name = 'Default Trace File';
To save mucking about with tkprof, I’ve just used an ftp client to retrieve the trace file from the server and then simply opened it in SQLDeveloper…

You can expand the nodes in the default Tree View. In this case however, things may become clearer if we simply open the Statistics View tab :

In addition to the statements we’ve entered for tracing, we can see that
set serveroutput on size unlimited has been translated into two statements :
declare
    l_line varchar2(32767);
    l_done number;
    l_buffer varchar2(32767) := '';
    l_lengthbuffer number := 0;
    l_lengthline number := 0;
begin
    loop
        dbms_output.get_line( l_line, l_done );
        if (l_buffer is null) then
            l_lengthbuffer := 0;
        else
            l_lengthbuffer := length(l_buffer);
        end if;
        if (l_line is null) then
            l_lengthline := 0;
        else
            l_lengthline := length(l_line);
        end if;
        exit when l_lengthbuffer + l_lengthline > :maxbytes
            OR l_lengthbuffer + l_lengthline > 32767
            OR l_done = 1;
        l_buffer := l_buffer || l_line || chr(10);
    end loop;
    :done := l_done;
    :buffer := l_buffer;
    :line := l_line;
end;
…followed by …
begin
    dbms_output.enable(1000000);
end;

Saving DBMS_OUTPUT.PUT_LINE messages to a table
The trace file has given us a number of pointers as to how we can do this.
First of all, let’s take a look at a stored program unit whose output messages we want to capture :
create or replace procedure six_nations(
    i_wal_pts in number,
    i_eng_pts in number)
is
begin
    dbms_output.put_line('Wales : '||i_wal_pts||', England : '||i_eng_pts);
    dbms_session.sleep(5);
    if nvl(i_wal_pts, 0) <= nvl(i_eng_pts, 0) then
        dbms_output.put_line('ERROR : User is English!');
    else
        dbms_output.put_line('Bread of Heaven!');
    end if;
    dbms_session.sleep(10);
    dbms_output.put_line('Program Complete');
end;
/
Running this from SQL*Plus…
set serverout on

exec six_nations(9,9);
exec six_nations(9,11);
exec six_nations(15,9);
…we get…
Wales : 9, England : 9
ERROR : User is English!
Program Complete

PL/SQL procedure successfully completed.

Wales : 9, England : 11
ERROR : User is English!
Program Complete

PL/SQL procedure successfully completed.

Wales : 15, England : 9
Bread of Heaven!
Program Complete

PL/SQL procedure successfully completed.
We can write these messages to a messages table…
create table six_nations_messages (
    id number generated always as identity,
    msg_ts timestamp,
    message varchar2(4000))
/
… by using the following anonymous block to wrap our calls…
declare
    v_msg six_nations_messages.message%type;
    v_status pls_integer := 0;
begin
    dbms_output.enable;
    six_nations(9,9);
    loop
        dbms_output.get_line(v_msg, v_status);
        if v_status = 0 then
            insert into six_nations_messages(msg_ts, message)
            values(systimestamp, v_msg);
        end if;
        exit when v_status != 0;
    end loop;
end;
/
NOTE – for the sake of simplicity, this code assumes messages will not be longer than 4000 characters, although the limit for dbms_output is actually 32K.
Whilst this approach does allow us to access all of the messages, the messages are only read after the procedure finishes executing :
select * from six_nations_messages;

        ID MSG_TS                         MESSAGE
---------- ----------------------------   ------------------------------------------------------------
        21 20-FEB-21 14.38.56.037399000   Wales : 9, England : 9
        22 20-FEB-21 14.38.56.042760000   ERROR : User is English!
        23 20-FEB-21 14.38.56.042837000   Program Complete

Multi-purpose logging routine
One alternative to peppering your code with dbms_output statements could be to amend your logging routine to output log messages to the screen if required.
A simple version of such a program might be :
create or replace procedure logit(
    i_msg in varchar2,
    i_interactive in boolean default true)
as
    pragma autonomous_transaction;
begin
    insert into six_nations_messages( msg_ts, message)
    values( systimestamp, i_msg);
    -- an autonomous transaction must commit (or roll back) before returning
    commit;
    if i_interactive then
        dbms_output.put_line(i_msg);
    end if;
end;
NOTE – once again, this code is written to keep things simple. You might consider it more realistic to have some kind of table based lookup to determine whether to output your message to the screen.
Either way, you could call this instead of dbms_output…
create or replace procedure six_nations(
    i_wal_pts in number,
    i_eng_pts in number)
is
begin
    logit('Wales : '||i_wal_pts||', England : '||i_eng_pts);
    dbms_session.sleep(5);
    if nvl(i_wal_pts, 0) <= nvl(i_eng_pts, 0) then
        logit('ERROR : User is English!');
    else
        logit('Bread of Heaven!');
    end if;
    dbms_session.sleep(10);
    logit('Program Complete');
end;
/
Now when we execute the procedure…
set serverout on

exec six_nations(21,19);
…we get the screen output…
Wales : 21, England : 19
Bread of Heaven!
Program Complete

PL/SQL procedure successfully completed.
…and also the log messages at the time they were issued by the procedure…
select * from six_nations_messages;

        ID MSG_TS                         MESSAGE
---------- ----------------------------   ------------------------------------------------------------
        30 20-FEB-21 15.00.05.298440000   Wales : 21, England : 19
        31 20-FEB-21 15.00.10.298579000   Bread of Heaven!
        32 20-FEB-21 15.00.20.300390000   Program Complete
Deb is threatening to do Bara brith for tea if Wales win. Fortunately, I’m quite partial to that particular Welsh delicacy. So much so that I’m off to find an inflatable daffodil and try to learn the words to “Land of My Fathers”.
Getting ORA-01404: ALTER COLUMN will make an index too large
Viewing columns for Oracle private temporary tables
Partner Webcast – Hitchhikers Guide to Oracle Cloud (Part 3)
We share our skills to maximize your revenue!
Boto3 Dynamodb TypeError: Float types are not supported. Use Decimal types instead
I was trying to ram data into AWS DynamoDB via Boto3 and the streaming failed with the following error:
File "C:\Program Files\Python37\lib\site-packages\boto3\dynamodb\types.py", line 102, in serialize
dynamodb_type = self._get_dynamodb_type(value)
File "C:\Program Files\Python37\lib\site-packages\boto3\dynamodb\types.py", line 115, in _get_dynamodb_type
elif self._is_number(value):
File "C:\Program Files\Python37\lib\site-packages\boto3\dynamodb\types.py", line 160, in _is_number
'Float types are not supported. Use Decimal types instead.')
TypeError: Float types are not supported. Use Decimal types instead.
I was actually getting some raw data points from CloudWatch for later analytics. These data points were in float format, which is not supported by DynamoDB. Instead of importing some decimal libraries or doing JSON manipulation, you can solve the above with a simple Python format expression like this:
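The snippet itself hasn't survived in this feed, so here is a minimal sketch of the idea, with a hypothetical datapoint and field names: push each float through a string format expression so the DynamoDB serializer never sees a raw float.

```python
# Hypothetical CloudWatch datapoint; real ones come back from
# get_metric_statistics() with float values.
datapoint = {"MetricName": "CPUUtilization", "Average": 42.756}

# A simple format expression turns the float into a fixed-precision
# string, which DynamoDB stores happily as a string attribute --
# no Decimal conversion or JSON round-tripping required.
item = {
    "MetricName": datapoint["MetricName"],
    "Average": "%.4f" % datapoint["Average"],
}
```

A `table.put_item(Item=item)` call then succeeds, because no raw float reaches the serializer; if you'd rather store the attribute as a DynamoDB number than a string, wrapping the same formatted string in `decimal.Decimal` works too.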
NOTE: Failed voting file relocation on diskgroup
After Oracle support performs maintenance for Exadata Cloud, there are many errors in the ASM alert log.
There have been many discussions about whether this is a bug or whether the alerts can simply be ignored.
Unfortunately, the bug explanation does not fit, and the alerts cannot simply be ignored either.
The simple solution is to check for a SUCCESS after each Failed message, as shown below.
egrep -n 'NOTE: Failed voting file relocation on diskgroup|SUCCESS: refreshed membership' /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log|grep DATAC1
1937:SUCCESS: refreshed membership for 1/0xacd2391b (DATAC1)
1972:SUCCESS: refreshed membership for 1/0xacd2391b (DATAC1)
4244:NOTE: Failed voting file relocation on diskgroup DATAC1
4250:SUCCESS: refreshed membership for 1/0x8f4036a8 (DATAC1)
9876:NOTE: Failed voting file relocation on diskgroup DATAC1
9881:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
10130:NOTE: Failed voting file relocation on diskgroup DATAC1
10135:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
11112:NOTE: Failed voting file relocation on diskgroup DATAC1
11425:NOTE: Failed voting file relocation on diskgroup DATAC1
11441:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
12410:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
13318:NOTE: Failed voting file relocation on diskgroup DATAC1
13717:NOTE: Failed voting file relocation on diskgroup DATAC1
13733:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
14703:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
15566:NOTE: Failed voting file relocation on diskgroup DATAC1
15865:NOTE: Failed voting file relocation on diskgroup DATAC1
15881:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
16836:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
Want more comfort?
$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 1b45f107ff974f1abf34ff7b005eaff1 (/dev/exadata_quorum/QD_DATAC1_QM828NAKVNXA1) [DATAC1]
2. ONLINE b3607b245b984f7ebfe5188c0775c44e (/dev/exadata_quorum/QD_DATAC1_QM738NAKVNXA2) [DATAC1]
3. ONLINE 5d26163d434a4fb6bf2cb173bae3cae1 (o/192.168.136.14;192.168.136.15/DATAC1_CD_05_phx302307exdcl07) [DATAC1]
4. ONLINE 3e1661086fed4f8bbf080db321282b23 (o/192.168.136.16;192.168.136.17/DATAC1_CD_04_phx302307exdcl08) [DATAC1]
5. ONLINE edac9a7822624fe4bfd59eb357d55d95 (o/192.168.136.18;192.168.136.19/DATAC1_CD_04_phx302307exdcl09) [DATAC1]
Located 5 voting disk(s).
# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 491684
Used space (kbytes) : 90748
Available space (kbytes) : 400936
ID : 1880339001
Device/File Name : +DATAC1
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
Text searching the SQL of BI Publisher Data Model contents in the OBIEE catalog
If you're working in an environment where there are a lot of Business Intelligence Publisher (BIP) reports in an on-site OBIEE instance, sometimes you want to impact-assess, or find usages of, a certain database table or column. The following shell script is a very simple way to search the Data Model SQL in BI Publisher reports in the filesystem. Note of course this only works if you have an on-site OBIEE instance. For Oracle eBusiness Suite you'd be better off querying the CLOBs in the stored XML Publisher tables, and for Cloud based Oracle Analytics - you'll need to work that one out!
Change the CAT= line to store the location of the shared folder that your catalog resides in.
Save the contents to a file called bipdmsearch and remember to chmod 755 bipdmsearch so you can execute it.
#!/bin/sh
if [ -z "$1" ]; then
    echo "Usage: bipdmsearch <search string>"
    exit 1
fi
if [ -z "$CAT" ]; then
    CAT=/u01/app/obi12c/config12.2.1.2/bi/bidata/service_instances/ssi/metadata/content/catalog/root/shared
fi
echo "cd $CAT"
cd $CAT
echo "Search for $1"
find . -type f -name \*xdm\* -exec grep -il "$1" {} \;
echo "Done"

This is a post from Gareth's blog at http://garethroberts.blogspot.com
... and we're back!
Wow - it has been a long time since I posted here! So much so that the world has changed - with an onslaught of content and apps and tools and technologies and Cloud! However that said I've found myself coming back to visit my old posts from time to time as technical references to especially fiddly things are often lost in the mix of content these days. I've always had a list of things to add here, and over the last few years I've been working in-depth with the Oracle Business Intelligence Suite and picked up a bunch of tips and tricks along the way, so the list is even longer now ;-)
Although there are more people documenting more things, there are still plenty of techniques that are obscure but warrant being referenced.
So, we'll start off with a couple of easy ones, maybe something using OBIEE back end catalog updates, then kick into some tougher ones .... maybe Google Earth KML output from Oracle SQL/BI Publisher or something like that :-)
Enjoy ...
This is a post from Gareth's blog at http://garethroberts.blogspot.com
