Feed aggregator

How can I register my SMTP service in Oracle Apex?

Tom Kyte - 4 min 49 sec ago
In the company where I work we have JD Edwards and Orchestrator Studio, but we use APEX for quick applications. The problem is that we have not managed to use the JD Edwards SMTP server (or an external one) in APEX to verify that the apex_mail.send() function works. So far we have not succeeded: we watched videos and followed the steps with no result, and the documentation offered by Oracle is not entirely clear, even confusing, about what has to be done. Another thing we tried was to create a service in Orchestrator Studio, and it works from JavaScript, but the drawback is that the code runs on the client side, not the server. The alternative was to use UTL_HTTP or apex_web_service.make_rest_request(), and neither worked: I read that UTL_HTTP has some limits with APEX, and apex_web_service.make_rest_request() blocks the call because the URL is not secure. I have read most of the forum and still cannot get APEX to send mail; it really needs a database developer, and I have been using this tool for a very short time. Could you help me with the steps I must follow to be able to send mail with APEX, both in the Cloud and on my local machine (localhost)? Sources consulted: https://docs.oracle.com/en/database/oracle/application-express/21.1/aeapi/Configuring-Oracle-Application-Express-to-Send-Email.html#GUID-596E11FE-9289-4238-A3CA-D33F508E40F7 https://stackoverflow.com/questions/65528885/invoke-a-rest-api-json-in-plsql https://www.youtube.com/watch?v=sI37Be2EZhk https://technology.amis.nl/database/invoke-a-rest-service-from-plsql-make-an-http-post-request-using-utl_http-in-oracle-database-11g-xe/
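For context, the usual setup is in two parts: register the SMTP server at the APEX instance level, then send through apex_mail. Below is a minimal sketch, assuming a reachable SMTP host and a network ACL that allows the database to contact it; the host, port and addresses are placeholders, not values taken from the question.

-- Run as a user with APEX instance-administration privileges
BEGIN
    APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_HOST_ADDRESS', 'smtp.example.com');  -- placeholder host
    APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_HOST_PORT', '25');                   -- placeholder port
    COMMIT;
END;
/

-- From the application (apex_mail normally needs an APEX session/workspace context
-- when it is called from outside an APEX application)
BEGIN
    APEX_MAIL.SEND(
        p_to   => 'someone@example.com',    -- placeholder
        p_from => 'apex@example.com',       -- placeholder
        p_subj => 'apex_mail test',
        p_body => 'Hello from APEX');
    APEX_MAIL.PUSH_QUEUE;   -- flush the mail queue now instead of waiting for the background job
END;
/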
Categories: DBA Blogs

Performance issues in production

Tom Kyte - 4 min 49 sec ago
Hi Tom, We are supporting a Siebel application with Oracle 11g as the backend. Last month we upgraded the database to 19c, and since then we have been facing performance issues on a daily basis. When we checked DB reports such as AWR and ASH, I made the observations below:

a) In the AWR report, random queries come out on top with no consistency, so I am assuming the issue is not related to a particular SQL.
b) In the DB reports the top event is "gc buffer busy acquire" most of the time, and this event is related to an insert statement.
c) All the top SQL visible in the AWR/ASH reports have multiple plans. Below is a snapshot for one of the SQLs:

SNAP_ID NODE BEGIN_INTERVAL_TIME        SQL_ID        PLAN_HASH_VALUE EXECS AVG_ETIME     AVG_LIO
------- ---- -------------------------- ------------- --------------- ----- --------- -----------
   2459    2 12-JAN-22 06.00.34.887     4hdhq06bahcnq       912142910   207    17.761          .0
   2460    1 12-JAN-22 06.30.10.663     4hdhq06bahcnq       728991164   293      .024     1,376.6
   2460    2 12-JAN-22 06.30.10.670     4hdhq06bahcnq       912142910   107    95.319 2,889,181.3
   2461    1 12-JAN-22 07.00.04.996     4hdhq06bahcnq       728991164   439      .035     1,251.3
   2461    2 12-JAN-22 07.00.04.990     4hdhq06bahcnq       912142910   149    66.469 2,106,097.7

My questions for you are:
1. To investigate the "gc buffer busy acquire" event, what should the sequence of investigation be?
2. Since most of the SQLs at the top of the DB reports are generating multiple plans, are there any parameters in 19c which could be responsible for so many plans?
Thanks
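As a first step on the multiple-plans side of the question, a rough sketch of a query that lists the plans recorded in AWR for one SQL_ID, with average elapsed time and logical I/O per plan (assumes the Diagnostics Pack is licensed; the SQL_ID is the one from the snapshot above):

select   sql_id,
         plan_hash_value,
         sum(executions_delta)                                                  execs,
         round(sum(elapsed_time_delta)/1e6/nullif(sum(executions_delta),0), 3)  avg_etime,
         round(sum(buffer_gets_delta)/nullif(sum(executions_delta),0), 1)       avg_lio
from     dba_hist_sqlstat
where    sql_id = '4hdhq06bahcnq'
group by sql_id, plan_hash_value
order by avg_etime;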
Categories: DBA Blogs

Library cache lock in wait class Concurrency

Tom Kyte - 4 min 49 sec ago
In the Production Application we randomly notice a hiccup (slowness) in the database. The AWR shows waits on event "library cache lock" in wait class "Concurrency". The respective SQL statement is one of the most frequently run statements (15,000 times per minute) in the application. The SQL is a simple read by primary key from a table which is cached in memory. The table holds just 40,000 records and doesn't grow at runtime. During normal periods the same SQL statement's elapsed time is in milliseconds, but during the hiccup (slowness) the same statement takes minutes to return its results. The other important thing to note is that the issue most often starts at either the 0th, 30th or 45th minute of an hour and resolves on its own in a couple of minutes. However, sometimes the issue lasts more than an hour before it resolves itself. We would appreciate it if you could help us understand what could cause the SQL to perform slower for a few minutes at random.
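One way to narrow this down, sketched here under the assumption that the Diagnostics Pack is licensed, is to ask Active Session History what the sessions were waiting on and who was blocking them during one of the slow windows (the time range below is a placeholder):

select   event,
         p1text, p1,          -- for "library cache lock", P1 is the library cache handle address
         blocking_session,
         sql_id,
         count(*) samples
from     v$active_session_history
where    sample_time between timestamp '2022-01-01 10:00:00'
                         and timestamp '2022-01-01 10:10:00'
and      event = 'library cache lock'
group by event, p1text, p1, blocking_session, sql_id
order by samples desc;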
Categories: DBA Blogs

Core Primary key concepts not mentioned in my Oracle training classes

Tom Kyte - 4 min 49 sec ago
Good Morning, Is there some underlying rule or DBA agreement that every table should have a primary key? To me it doesn't make logical sense that every table we define needs a primary key. Just recently I came across a primary key made up of two columns, each having about 60% unique values, in a 1.5 million row table. The optimizer chooses to perform a full table scan when both those columns are used in the WHERE clause, since the primary key isn't very selective. This table doesn't even have a child table. My assumption is that a primary key is only needed when the table has a child table. On the other side, if a large, frequently accessed table doesn't have any child tables and doesn't have any unique column, should we create a column based off the value of a sequence and add a unique index (instead of a primary key) on it? I would assume that isn't needed in all cases. An example where it would be needed is the case where the application inserts a row and then plans to access it within the same session. In summary, the questions are: - Is there some underlying rule or DBA agreement that every table should have a primary key? - Is my assumption correct that a primary key is only needed when the table has a child table? - On the other side, if a large, frequently accessed table doesn't have any child tables and doesn't have a unique column, should we create a column based off the value of a sequence and add a unique index (instead of a primary key) on it? Thanks, John
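For the last question, a minimal sketch of the sequence-based alternative on 12c or later (identity columns); whether you then declare it as a primary key or only add a unique index is exactly the judgement call being asked about. Table and column names are illustrative:

create table t_demo (
    id    number generated always as identity,
    col1  varchar2(30),
    col2  varchar2(30),
    constraint t_demo_pk primary key (id)
);
-- alternative: omit the constraint and use
-- create unique index t_demo_uk on t_demo(id);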
Categories: DBA Blogs

How to properly assign ENABLE_PARALLEL_DML to a profile?

Tom Kyte - Mon, 2022-01-17 14:26
Hi, In a 19.0 database I've found a daily update in a 3rd-party application that lasts for more than 30 minutes. I would like to execute it in parallel. PDML is disabled, and when I try to create a profile for that update, I cannot get the desired plan. I made a small test case to show my problem:

<code>
CREATE TABLE TESTCASE_PDML (keynum number, attribute1 number) PARALLEL 8;

--Load some data.
INSERT INTO TESTCASE_PDML SELECT ROWNUM, ROWNUM*1000 FROM DUAL CONNECT BY ROWNUM <=100;
commit;

--Plans for the update:
EXPLAIN PLAN FOR UPDATE TESTCASE_PDML SET attribute1 = keynum * 1001;
SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY());

Plan hash value: 3488641984

------------------------------------------------------------------------------------
| Id | Operation          | Name          | Rows | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|  0 | UPDATE STATEMENT   |               |  100 |  2600 |     2   (0)| 00:00:01 |
|  1 |  UPDATE            | TESTCASE_PDML |      |       |            |          |
|  2 |   TABLE ACCESS FULL| TESTCASE_PDML |  100 |  2600 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=3)
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

EXPLAIN PLAN FOR UPDATE /*+ parallel(8) */ TESTCASE_PDML SET attribute1 = keynum * 1001;
SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY());

Plan hash value: 2008974791

--------------------------------------------------------------------------------------------------------------------
| Id | Operation             | Name          | Rows | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
|  0 | UPDATE STATEMENT      |               |  100 |  2600 |     4   (0)| 00:00:01 |        |      |            |
|  1 |  UPDATE               | TESTCASE_PDML |      |       |            |          |        |      |            |
|  2 |   PX COORDINATOR      |               |      |       |            |          |        |      |            |
|  3 |    PX SEND QC (RANDOM)| :TQ10000      |  100 |  2600 |     4   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|  4 |     PX BLOCK ITERATOR |               |  100 |  2600 |     4   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|  5 |      TABLE ACCESS FULL| TESTCASE_PDML |  100 |  2600 |     4   (0)| 00:00:01 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=3)
   - Degree of Parallelism is 8 because of hint
   - PDML is disabled in current session

EXPLAIN PLAN FOR UPDATE /*+ parallel(8) enable_parallel_dml */ TESTCASE_PDML SET attribute1 = keynum * 1001;
SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY());

Plan hash value: 727441780

--------------------------------------------------------------------------------------------------------------------
| Id | Operation              | Name          | Rows | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
|  0 | UPDATE STATEMENT       |               |  100 |  2600 |     4   (0)| 00:00:01 |        |      |            |
|  1 |  PX COORDINATOR        |               |      |       |            |          |        |      |            |
|  2 |   PX SEND QC (RANDOM)  | :TQ10000      |  100 |  2600 |     4   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|  3...
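One commonly used way to attach hints to a statement you cannot edit is a manual SQL profile built from hint text. A hedged sketch follows: sqlprof_attr and DBMS_SQLTUNE.IMPORT_SQL_PROFILE do exist, but the sql_text must match the third-party statement exactly, the profile name is illustrative, and this should be tested on your own version first.

declare
    l_hints sys.sqlprof_attr;
begin
    l_hints := sys.sqlprof_attr('PARALLEL(8)', 'ENABLE_PARALLEL_DML');
    dbms_sqltune.import_sql_profile(
        sql_text    => 'UPDATE TESTCASE_PDML SET attribute1 = keynum * 1001',  -- must match the real statement exactly
        profile     => l_hints,
        name        => 'TESTCASE_PDML_PROFILE',   -- illustrative name
        force_match => true);
end;
/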
Categories: DBA Blogs

Hash Aggregation – 1

Jonathan Lewis - Mon, 2022-01-17 07:15

Here’s an observation I made some time in 2007, but didn’t mention online until a (possibly) relevant question appeared on the Oracle database forum in 2017; and the topic reappeared in a nearly unrelated question a little while ago. The 2017 question was this:

I am running following SQL, which it taking almost 4-5 seconds and returning only 18 records.

SELECT
   SUM(TOTAL_AMOUNT) C1,
   a.LEVEL2_ENAME AS c2,
   c.CURR_YEARMO AS c3
FROM TERRITORY_PRESET_MSNP a,
   CV_RESTATED_MSNP b
   LEFT OUTER JOIN
   MONTH_D c
   ON b.YEARMO = c.CURR_YEARMO,
   PRODUCT_OFFERING d
WHERE   b.PO_ID = d.ROW_ID
    AND b.DATASOURCE_ID = 10
    AND b.YEARMO = 201704
    AND b.OWNER_TERR_ID = a.TERR_ID
    AND c.CURR_YEARMO = 201704
    AND a.YEARMO = 201706
GROUP BY c.CURR_YEARMO, a.LEVEL2_ENAME
ORDER BY C3, C2;

If I remove the ORDER BY clause it is returning results in 1 second.

Before saying anything else, I’ll just make a couple of points about the SQL:

  • It’s not a good idea to mix traditional Oracle syntax with “ANSI” syntax – it’s likely to make things harder for the next person to read the code and there’s just a slight possibility that the rewrite that Oracle applies to hide the ANSI syntax may block some of the possible execution paths.
  • The C3, C2 in the order by clause are the column aliases for the curr_yearmo, level2_ename columns used in the group by clause.  Although Oracle allows you to use aliases in the order by (but not in the group by), doing so can only make the SQL a little harder to interpret (especially in a case like this where you have both clauses).
  • There’s a left outer join to month_d (aliased as c), but the where clause then includes the predicate c.CURR_YEARMO = 201704 which will eliminate any rows where curr_yearmo is null, thus converting (we hope – but the mix and match syntax might introduce a side-effect) the outer join to an inner join – so maybe that’s a design error in the SQL.

Addressing the question, though, the first thought (rapidly eliminated) is that perhaps this is the standard complaint of the GUI interface: “it’s fast until I add an order by clause”.

The commonest reason for this complaint is that the typical GUI interface shows you the first few rows and waits for you to page down, so your impression of the response time is “the time to see the first few rows” rather than “the time to get the complete result set” when it might take much more time to return the entire result set. When you add an order by clause it’s possible that Oracle will have to sort the entire result set before you see any of it. It’s often the difference between “first few rows” and “entire result set” that triggers the complaint.

In this case the “rapid elimination” of this thought is due to the OP saying the result set was only 18 rows, which is likely to produce the gut feeling that it shouldn’t take very long for Oracle to sort 18 rows if it had to find all of them before displaying them. On the other hand the thought might need a little follow-up, after all:

  • it’s possible that the GUI is only displaying 15 rows at a time and it takes a lot of time to find the extra 3 rows. Just think of a tablescan with a filter subquery when the rows you want are the first few in the table. Without an order by the rows can be displayed as they are found; with an order by Oracle will have to get to the end of the tablescan before the rows can be sorted and displayed.
  • the optimizer can produce terrible estimates and the order by clause might prompt it to say “if I start with a different table, driving through a specific index, and changing the join order then I won’t have to do any sorting for the order by clause”. The resulting path may be a very bad idea if the arithmetic produces the wrong results.

The OP hasn’t shown us the execution plan – and that’s what we really need to see; but there is an alternative  guess that we could make about what the optimizer is doing that would affect the performance this much.

The query is an aggregate query – we have a group by. Since 10g Oracle has been able to use “hash aggregation” – which shows up as the HASH GROUP BY operation in an execution plan. Here’s a little demo script, with a couple of sample queries:

rem
rem     Script:         sort_hash.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2012
rem 

create table t1
as
with generator as (
        select
                rownum id 
        from dual 
        connect by 
                level <= 1e4  -- > comment to avoid wordpress format issue
)
select
        trunc(dbms_random.value(0,262144))      n_256K,
        trunc(dbms_random.value(0,131072))      n_128K,
        trunc(dbms_random.value(0,8192))        n_8k
from
        generator       v1,
        generator       v2
where
        rownum <= 8 * 1048576  -- > comment to avoid wordpress format issue
;

set arraysize 1000
set timing on
set autotrace traceonly 

prompt  ===========
prompt  No Order by
prompt  ===========

select
        n_8K, count(*) ct
from
        t1
group by
        n_8k
;


prompt  =============
prompt  With Order by
prompt  =============

select
        n_8K, count(*) ct
from
        t1
group by
        n_8k
order by
        1
;


My table has 8M rows, and my queries target the column with 8K distinct values. I’ve enabled autotrace from SQL*Plus, set a large arraysize (to reduce time lost to SQL*Net round-trips), and set timing on so we can get an elapsed time for total execution. I’ve set autotrace to “traceonly” so that the SQL*Plus client will fetch the data but won’t waste resources formatting it, and I’m not actually interested in the handful of execution statistics that will be reported.

Here are the two sets of results from a test run on 19.11.0.0. Note, particularly, the figure for Elapsed:


===========
No Order by
===========

8192 rows selected.

Elapsed: 00:00:00.58

Execution Plan
----------------------------------------------------------
Plan hash value: 136660032

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   1 |  HASH GROUP BY     |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  8388K|    32M|  3460  (21)| 00:00:01 |
---------------------------------------------------------------------------

=============
With Order by
=============

8192 rows selected.

Elapsed: 00:00:03.03

Execution Plan
----------------------------------------------------------
Plan hash value: 3946799371

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   1 |  SORT GROUP BY     |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  8388K|    32M|  3460  (21)| 00:00:01 |
---------------------------------------------------------------------------

The time has jumped from slightly under 0.6 seconds to just over 3 seconds as the critical operation changes from a HASH GROUP BY to a SORT GROUP BY (even though the estimated cost, hence predicted run-time, of execution has not changed).

Your first thought at this point is probably along the lines of “surely it doesn’t take 2.4 seconds to sort 8,192 small rows, why doesn’t Oracle do a hash group by followed by a sort order by?” The answer seems to be “it just doesn’t”. So here’s one way to make it happen (with execution plan and elapsed time from 19.11.0.0 again):

select
        dist_8k, ct
from
        (
        select  /*+ no_merge */
                n_8K dist_8k, count(*) ct
        from
                t1
        group by
                n_8k
        )
order by 
        dist_8k
;

8192 rows selected.

Elapsed: 00:00:00.59

Execution Plan
----------------------------------------------------------
Plan hash value: 1705136228

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |  8192 |   208K|  6938  (61)| 00:00:01 |
|   1 |  SORT ORDER BY       |      |  8192 |   208K|  6938  (61)| 00:00:01 |
|   2 |   VIEW               |      |  8192 |   208K|  6938  (61)| 00:00:01 |
|   3 |    HASH GROUP BY     |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| T1   |  8388K|    32M|  3460  (21)| 00:00:01 |
-----------------------------------------------------------------------------


Again the estimated cost of execution doesn’t (to the nearest whole number) change from the basic cost of the hash aggregation – but we have brought the time back down to just under 0.6 seconds.

It’s worth reminding you at this point that if you can re-engineer any SQL that’s performing badly and can see that the driving core of the query can be reduced to something much simpler and more efficient, then wrapping that core into an inline view with the /*+ no_merge */ hint (and possibly putting it up into a “with subquery” clause) might be the safest first step and most effective way of improving performance.
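For completeness, here is the same rewrite expressed as a “with subquery” – a sketch using the same t1 table as above, where the no_merge hint now references the factored view by its alias:

with agg as (
        select
                n_8K dist_8k, count(*) ct
        from
                t1
        group by
                n_8k
)
select  /*+ no_merge(agg) */
        dist_8k, ct
from
        agg
order by
        dist_8k
;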

There is an option for avoiding the query rewrite here – hint the path you want to see:


select  /*+ use_hash_aggregation */
        n_8K, count(*) ct
from
        t1
group by
        n_8k
order by 
        1
;

8192 rows selected.

Elapsed: 00:00:00.59

Execution Plan
----------------------------------------------------------
Plan hash value: 2808104874

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   1 |  SORT ORDER BY      |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   2 |   HASH GROUP BY     |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| T1   |  8388K|    32M|  3460  (21)| 00:00:01 |
----------------------------------------------------------------------------

The nice thing about this, of course, is that you don’t actually have to edit the text; the hint could be attached to the query through an SQL Patch (or by abusing the SQL Profile or SQL Plan Baseline mechanisms).
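To illustrate the SQL Patch route, a hedged sketch: on 12.2 and later DBMS_SQLDIAG.CREATE_SQL_PATCH is the public interface (on older releases the equivalent call lived in dbms_sqldiag_internal), and the sql_id and patch name below are placeholders:

declare
        l_patch varchar2(128);
begin
        l_patch := dbms_sqldiag.create_sql_patch(
                        sql_id    => 'gcz6u8q0dcdbm',     -- placeholder sql_id
                        hint_text => 'use_hash_aggregation',
                        name      => 'hash_agg_patch');   -- placeholder name
end;
/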

The difficult part of hinting is finding the correct query block name for a more complex query. I simply added the hint /*+ use_hash_aggregation */ but the hint can be aimed at a query block so, in the absence of explicit query block names I could have used the hint /*+ use_hash_aggregation(@sel$1) */ using the default naming.

In a more complex case you can find the appropriate query block name by using the ‘alias’ format option when generating the execution plan. Consider the following query (where t2 and t3 are created from view all_objects), with its initial execution plan:


explain plan for
select 
        t2.owner, count(*)
from 
        t2 
where 
        t2.object_id in (
                select  t3.object_id 
                from    t3 
                where   t3.object_type = 'TABLE'
        )
group by 
        t2.owner
order by 
        t2.owner
/


select * from table(dbms_xplan.display(format=>'alias'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2646727453

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |    23 |   575 |   319   (8)| 00:00:01 |
|   1 |  SORT GROUP BY        |      |    23 |   575 |   319   (8)| 00:00:01 |
|*  2 |   HASH JOIN RIGHT SEMI|      |  2298 | 57450 |   317   (7)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL  | T3   |  2298 | 34470 |   158   (7)| 00:00:01 |
|   4 |    TABLE ACCESS FULL  | T2   | 57448 |   561K|   156   (6)| 00:00:01 |
------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$5DA710D3
   3 - SEL$5DA710D3 / T3@SEL$2
   4 - SEL$5DA710D3 / T2@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T2"."OBJECT_ID"="T3"."OBJECT_ID")
   3 - filter("T3"."OBJECT_TYPE"='TABLE')

The Query Block Name / Object Alias information tells us that the query block holding the sort group by at operation 1 is named SEL$5DA710D3 so we can use that as the target query block in the hint: /*+ use_hash_aggregation(@SEL$5DA710D3) */ and the plan changes to:


-------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |    23 |   575 |   319   (8)| 00:00:01 |
|   1 |  SORT ORDER BY         |      |    23 |   575 |   319   (8)| 00:00:01 |
|   2 |   HASH GROUP BY        |      |    23 |   575 |   319   (8)| 00:00:01 |
|*  3 |    HASH JOIN RIGHT SEMI|      |  2298 | 57450 |   317   (7)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL  | T3   |  2298 | 34470 |   158   (7)| 00:00:01 |
|   5 |     TABLE ACCESS FULL  | T2   | 57448 |   561K|   156   (6)| 00:00:01 |
-------------------------------------------------------------------------------

It’s possible that you’ll only notice a significant difference in performance (and, perhaps, PGA memory allocated) when you aggregate a large number of rows into a small result set. And it does seem that this is one case where the only way to get the plan you want without a significant code rewrite is through a hint.

tl;dr

If you have some code which does a “group by X, Y, Z order by X, Y, Z” (the order of the columns/expressions used doesn’t have to be the same for the two clauses) then Oracle will use a sort group by operation to handle the aggregation and ordering in a single step, even though we can find cases where hash aggregation followed by sort ordering is more efficient.

If you come across such a case then injecting the hint /*+ use_hash_aggregation(@query_block_name) */ may be the only way to change the execution plan if you’re not allowed to edit the SQL.

Footnote

In the second of the two links to the Oracle Developer Forum you’ll see that one poster pointed out that if the order by clause uses a suitable expression to substitute for one of the columns in the group by clause then you don’t need to hint the code, e.g.

group by
        numeric_column
order by
        numeric_column + 0

Very cute, but not a good idea.

Footnote 2

There is another part to the testing I started in 2007, and I’ll come back to that later in Hash Aggregation – 2.

Video : ANY_VALUE Aggregate Function in Oracle Database 21c

Tim Hall - Mon, 2022-01-17 03:12

In today’s video we’ll demonstrate the ANY_VALUE aggregate function, introduced in Oracle database 21c. The video is based on this article: ANY_VALUE Aggregate Function in Oracle Database 21c. The star of today’s video is Scott Spendolini, who I suspect has used APEX at least once or twice in his life… Cheers Tim…


The contents of the Database Controlfile -- 1 : Logical Structure

Hemant K Chitale - Sun, 2022-01-16 03:51

 (No, I am not referring to the "alter database backup controlfile to trace ..." command here.)

In Oracle, the controlfile for a database is the "master" reference to the physical structure of the database and "known" backups.  


This is what I see in my 19c database:

SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 383 383 161 160 1407 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 175 1 175 175 0
BACKUP PIECE 780 1006 136 1 136 136 0
BACKUP REDOLOG 76 215 153 1 153 153 0
BACKUP SET 96 1022 125 1 125 125 0
BACKUP SPFILE 124 131 25 1 25 25 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 4 1 4 4 0
DATAFILE 520 1024 32 0 0 1493 0
DATAFILE COPY 736 1000 3 1 3 3 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 635 1 635 635 0
FILENAME 524 4146 28 0 0 0 0
FLASHBACK LOG 84 2048 2 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 1 0 0 2 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 292 292 28 27 611 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 678 1 678 678 0
PDB RECORD 780 10 5 0 0 26 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 16 4 0 0 15 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 0 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 2 0 0 4 0
RMAN STATUS 116 141 141 58 57 339 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 1024 21 0 0 72 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 1024 7 0 0 36 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>


This structure is explained by :

SQL> select view_Definition from v$fixed_view_definition where view_name = 'GV$CONTROLFILE_RECORD_SECTION'
2 /

VIEW_DEFINITION
------------------------------------------------------------------------------------------------------------------------------------
select inst_id,decode(indx,0,'DATABASE',1, 'CKPT PROGRESS', 2, 'REDO THREAD',3,'REDO LOG',4,'DATAFILE',5,'FILENAME',6,'TABLESPACE',7
,'TEMPORARY FILENAME',8,'RMAN CONFIGURATION',9,'LOG HISTORY',10,'OFFLINE RANGE',11,'ARCHIVED LOG',12,'BACKUP SET',13,'BACKUP PIECE',
14,'BACKUP DATAFILE',15, 'BACKUP REDOLOG',16,'DATAFILE COPY',17,'BACKUP CORRUPTION',18,'COPY CORRUPTION',19,'DELETED OBJECT',20,'PRO
XY COPY',21,'BACKUP SPFILE',23,'DATABASE INCARNATION',24,'FLASHBACK LOG',25, 'RECOVERY DESTINATION', 26,'INSTANCE SPACE RESERVATION'
, 27, 'REMOVABLE RECOVERY FILES', 28, 'RMAN STATUS', 29, 'THREAD INSTANCE NAME MAPPING', 30, 'MTTR', 31, 'DATAFILE HISTORY', 32, 'ST
ANDBY DATABASE MATRIX', 33, 'GUARANTEED RESTORE POINT', 34, 'RESTORE POINT', 35, 'DATABASE BLOCK CORRUPTION', 36, 'ACM OPERATION', 3
7, 'FOREIGN ARCHIVED LOG', 38, 'PDB RECORD', 39, 'AUXILIARY DATAFILE COPY', 40, 'MULTI INSTANCE REDO APPLY', 41, 'PDBINC RECORD', 42
, 'TABLESPACE KEY HISTORY', 'UNKNOWN'),rsrsz,rsnum,rsnus,rsiol,rsilw,rsrlw, con_id from x$kccrs where indx not in (22)


SQL>


Thus, for example :

SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> create tablespace x;

Tablespace created.

SQL>
SQL> select *
2 from v$controlfile_record_section
3 where type in ('DATAFILE','TABLESPACE')
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
DATAFILE 520 1024 33 0 0 1494 0
TABLESPACE 180 1024 22 0 0 73 0

SQL>


The datafile and tablespace record counts incremented by 1 for the new tablespace and datafile in PDB ORCLPDB1.

oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Sun Jan 16 16:48:54 2022
Version 19.12.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)

RMAN> backup pluggable database orclpdb1;

Starting backup at 16-JAN-22
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=253 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=371 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
input datafile file number=00011 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
input datafile file number=00031 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
input datafile file number=00033 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_x_jy7po5gg_.dbf
channel ORA_DISK_1: starting piece 1 at 16-JAN-22
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00009 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
input datafile file number=00012 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
input datafile file number=00026 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
input datafile file number=00032 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf
channel ORA_DISK_2: starting piece 1 at 16-JAN-22
channel ORA_DISK_1: finished piece 1 at 16-JAN-22
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q04z2_.bkp tag=TAG20220116T164908 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:46
channel ORA_DISK_2: finished piece 1 at 16-JAN-22
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q08kj_.bkp tag=TAG20220116T164908 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:56
Finished backup at 16-JAN-22

Starting Control File and SPFILE Autobackup at 16-JAN-22
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143804_jy7q1xp1_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 16-JAN-22
RMAN-08591: warning: invalid archived log deletion policy

RMAN>

SQL> select *
2 from v$controlfile_record_section
3 where type like 'BACKUP%'
4 order by type
5 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 185 1 185 185 0
BACKUP PIECE 780 1006 140 1 140 140 0
BACKUP REDOLOG 76 215 153 1 153 153 0
BACKUP SET 96 1022 129 1 129 129 0
BACKUP SPFILE 124 131 27 1 27 27 0

6 rows selected.

SQL>


My RMAN Backup of ORCLPDB1 was for 8 datafiles and ran to 3 BackupPieces and 3 BackupSets.
Yet, the number of "BACKUP DATAFILE" records increased by 10, the number of "BACKUP PIECE" by 4 and the number of "BACKUP SET"s by 4.  Also, note the "BACKUP SPFILE" records also increased by 2.
The difference is explained by the AutoBackup created immediately after I added the new tablespace 'X' and its datafile to the Pluggable Database ORCLPDB1, and by the fact that the controlfile AutoBackup is included in the "BACKUP DATAFILE" count.
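If you want to verify that it is the controlfile AutoBackups that inflate the "BACKUP DATAFILE" count, a hedged check is that controlfile copies are recorded in V$BACKUP_DATAFILE with FILE# = 0 :

SQL> select file#, count(*)
2 from v$backup_datafile
3 group by file#
4 order by 1
5 /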


RMAN> list backup completed after "sysdate-1"
2> ;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
126 Full 18.36M DISK 00:00:02 16-JAN-22
BP Key: 137 Status: AVAILABLE Compressed: NO Tag: TAG20220116T164836
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143716_jy7pz5dd_.bkp
SPFILE Included: Modification time: 16-JAN-22
SPFILE db_unique_name: ORCLCDB
Control File Included: Ckp SCN: 16813310 Ckp time: 16-JAN-22

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
127 Full 342.80M DISK 00:00:42 16-JAN-22
BP Key: 138 Status: AVAILABLE Compressed: NO Tag: TAG20220116T164908
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q04z2_.bkp
List of Datafiles in backup set 127
Container ID: 3, PDB Name: ORCLPDB1
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
10 Full 16813384 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
11 Full 16813384 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
31 Full 16813384 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
33 Full 16813384 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_x_jy7po5gg_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
128 Full 635.72M DISK 00:00:51 16-JAN-22
BP Key: 139 Status: AVAILABLE Compressed: NO Tag: TAG20220116T164908
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q08kj_.bkp
List of Datafiles in backup set 128
Container ID: 3, PDB Name: ORCLPDB1
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
9 Full 16813386 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
12 Full 16813386 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
26 Full 16813386 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
32 Full 16813386 16-JAN-22 NO /opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
129 Full 18.36M DISK 00:00:02 16-JAN-22
BP Key: 140 Status: AVAILABLE Compressed: NO Tag: TAG20220116T165004
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143804_jy7q1xp1_.bkp
SPFILE Included: Modification time: 16-JAN-22
SPFILE db_unique_name: ORCLCDB
Control File Included: Ckp SCN: 16813424 Ckp time: 16-JAN-22

RMAN>


Thus, BackupSet 126 is the automatically-created fourth "BACKUP SET", and the AutoBackups in BackupSets 126 and 129 are the two additional "BACKUP DATAFILE"s.  Similarly, the SPFiles included in the two AutoBackups also incremented the "BACKUP SPFILE" count.

However, when you DROP a Tablespace (and remove its Datafile(s)), the record count does NOT decrement.


SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> drop tablespace X including contents and datafiles;

Tablespace dropped.

SQL>

SQL> select *
2 from v$controlfile_record_section
3 where type in ('DATAFILE','TABLESPACE')
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
DATAFILE 520 1024 33 0 0 1495 0
TABLESPACE 180 1024 22 0 0 74 0

SQL>


It simply means that the "33rd" DATAFILE and "22nd" TABLESPACE records are reusable later. (Note that LAST_RECID has also been incremented for the two entries.) Note how "RECORDS_TOTAL" is 1024 for "DATAFILE" and "TABLESPACE"; this allows for slots that are present but not currently in use.

What about ArchiveLogs ?


SQL> select *
2 from v$controlfile_record_section
3 where type like 'ARCHIVE%'
4 order by type
5 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ARCHIVED LOG 584 383 383 161 160 1407 0

SQL> alter system archive log current;

System altered.

SQL> alter system archive log current;

System altered.

SQL> select *
2 from v$controlfile_record_section
3 where type like 'ARCHIVE%'
4 order by type
5 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ARCHIVED LOG 584 383 383 165 164 1411 0

SQL>
SQL> select count(*)
2 from v$archived_log
3 /

COUNT(*)
----------
383

SQL>


Apparently, Oracle reuses "ARCHIVED LOG" records in the Controlfile, while adjusting the FIRST_INDEX, LAST_INDEX and LAST_RECID values (note how, in this case, "FIRST_INDEX" is actually greater than "LAST_INDEX"). So this seems to be a circular structure that will expand only when necessary.

So I generate two more ArchiveLogs and then check for and delete missing ArchiveLogs with:


SQL> alter system archive log current;

System altered.

SQL> alter system archive log current;

System altered.

SQL>


RMAN> crosscheck archivelog all; -- which returns a number of "validation failed for archived log" warnings, indicating ArchiveLogs that have been deleted at the OS level
and
RMAN> delete noprompt expired archivelog all; -- to delete all those marked expired after validation failed

SQL> select *
2 from v$controlfile_record_section
3 where type like 'ARCHIVE%'
4 order by type
5 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ARCHIVED LOG 584 383 383 169 168 1415 0

SQL>
SQL> select dest_id, standby_dest, deleted, status, count(*)
2 from v$archived_log
3 group by dest_id, standby_dest, deleted, status
4 order by 1,2,3,4
5 /

DEST_ID STA DEL S COUNT(*)
---------- --- --- - ----------
1 NO NO A 102
1 NO YES D 23
2 NO NO A 8
2 YES NO A 97
3 YES NO A 105
4 YES NO A 48

6 rows selected.

SQL>


23 ArchiveLog entries at DEST_ID=1 are now marked as DELETED (by the "DELETE .. EXPIRED ARCHIVELOG ALL" command).
The FIRST_INDEX and LAST_INDEX have changed again. 

 My query on v$archived_log shows a number of entries for DEST_IDs 2 to 4 as Standby Destinations (I have 3 Standby Databases, so this Database, as a Primary, is tracking the ArchiveLogs it has to send to the Standbys).  Only the entries for DEST_ID=1 are on the local filesystem of this server.

So : the 383 records in the Controlfile do not represent the actual count of physical ArchiveLogs for this Database present on this server.  At some point in the past the number of entries hit 383, but now there are "empty slots" that are being reused.

In the meantime, "DELETED OBJECT" count has increased by 48.


SQL> select *
2 from v$controlfile_record_section
3 where type = 'DELETED OBJECT'
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
DELETED OBJECT 20 818 683 1 683 683 0

SQL>


Is it some form of "garbage collector" ?
Categories: DBA Blogs

Running Hugging Face LayoutLM Model with PyCharm and Docker

Andrejus Baranovski - Sat, 2022-01-15 14:09
This tutorial explains how to run the Hugging Face LayoutLM model locally with a PyCharm remote interpreter. This is cool, because a remote interpreter allows you to run and debug your custom logic, while running the Hugging Face model and its dependencies in a Docker container. I share a Dockerfile, which helps to set up all the dependencies. Enjoy!

 

Why Cpu time so important

Tom Kyte - Thu, 2022-01-13 18:46
Hi Tom, I just finished studying TKPROF and SQL trace. When we use TKPROF we can analyze which part of the query is having a performance problem. My question is: why is CPU time so important? What does CPU time affect? And what is the best method to reduce CPU time for queries? Thanks
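As one concrete way to see why CPU time matters: compare CPU time with elapsed time per statement. When the two are close, the query is CPU-bound, so tuning means reducing the work done (typically logical I/O) rather than waiting on disk or locks. A rough sketch against v$sql (times are in microseconds; "fetch first" needs 12c or later):

select   sql_id,
         round(cpu_time/1e6, 2)      cpu_secs,
         round(elapsed_time/1e6, 2)  elapsed_secs,
         buffer_gets,
         executions
from     v$sql
order by cpu_time desc
fetch first 10 rows only;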
Categories: DBA Blogs

how does the result cache work

Tom Kyte - Thu, 2022-01-13 18:46
Hi Tom, I have a simple question about the result cache. 1. How exactly does Oracle access the result cache, and how does it know that it is the same SQL statement executed last time? 2. When we cache the result of a SQL statement in the result cache and re-run the query, does Oracle access the result cache straight away, or does it go to the shared SQL area first and then to the result cache to get the result? Thanks
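A small experiment, sketched here with an arbitrary table name, is to mark a query with the result_cache hint, run it twice, and then look at the execution plan (a RESULT CACHE operation appears above the rest of the plan) and at V$RESULT_CACHE_OBJECTS:

select /*+ result_cache */ count(*) from employees;   -- any table will do

select type, status, name
from   v$result_cache_objects
where  name like '%employees%';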
Categories: DBA Blogs

RMAN Backup disappeared from catalog

Tom Kyte - Thu, 2022-01-13 18:46
Hello, I've configured an RMAN backup with the following schedule:
- FULL Level 0, every Sunday at 01:05 AM
- INCREMENTAL Level 1, from Monday to Saturday at 01:05 AM
- Archive Log Backup every day, every 30 minutes

The retention policy is set to Redundancy 2:

<code>
RMAN> show all;

RMAN configuration parameters for database with db_unique_name DATABASE are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/app/oracle/backup/DATABASE/autobackup/cf_sp_%F';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 3;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/19.0.0/dbhome_1/dbs/snapcf_DATABASE.f'; # default
</code>

The backup scripts are the following:

- Level 0
<code>
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/app/oracle/backup/DATABASE/autobackup/cf_sp_%F';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 3;
run {
backup AS COMPRESSED BACKUPSET archivelog all format '/u01/app/oracle/backup/DATABASE/arch_%d_ARCH_bks%s_%T_%U.bck' delete input filesperset 10;
backup AS COMPRESSED BACKUPSET incremental level 0 database format '/u01/app/oracle/backup/DATABASE/data_%d_FULL_bks%s_%T_%U.bck' include current controlfile;
backup AS COMPRESSED BACKUPSET archivelog all format '/u01/app/oracle/backup/DATABASE/arch_%d_ARCH_bks%s_%T_%U.bck' delete input filesperset 10;
crosscheck backup;
delete noprompt obsolete;
}
</code>

- Level 1
<code>
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/app/oracle/backup/DATABASE/autobackup/cf_sp_%F';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 3;
run {
backup AS COMPRESSED BACKUPSET archivelog all format '/u01/app/oracle/backup/DATABASE/arch_%d_ARCH_bks%s_%T_%U.bck' delete input filesperset 10;
backup AS COMPRESSED BACKUPSET incremental level 1 database format '/u01/app/oracle/backup/DATABASE/data_%d_FULL_bks%s_%T_%U.bck' include current controlfile;
backup AS COMPRESSED BACKUPSET archivelog all format '/u01/app/oracle/backup/DATABASE/arch_%d_ARCH_bks%s_%T_%U.bck' delete input filesperset 10;
crosscheck backup;
delete noprompt obsolete;
}
</code>

- Archivelog
<code>
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/app/oracle/backup/DATABASE/autobackup/cf_sp_%F';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 3;
run {
backup AS COMPRESSED BACKUPSET archivelog all format '/u01/app/oracle/backup/DATABASE/arch_%d_ARCH_bks%s_%T_%U.bck' delete input filesperset 10;
crosscheck backup;
delete noprompt obsolete;
}
</code>

With Redundancy set to 2, I expected that RMAN keeps 2 FULL Level 0 copies of the database.
For example: I took a first FULL Level 0 on 26/12/2021 at 01:05 AM. I took a second FULL Level 0 on 02/01/2022 at 01:05 AM. On 09/01/2022 I expected RMAN to make a new FULL Level 0 and delete the Level 0 of 26/12/20...
Categories: DBA Blogs

COMPLICATED COMMINGLED DATABASE ENVIRONMENT

Michael Dinh - Thu, 2022-01-13 17:16

I have been reviewing RMAN RAC backup for environment having a total of 15 non-production and production databases on the same host excluding APX and MGMTDB.

That’s not a big deal, as I once had to manage 28 databases residing on the same host, right?

I am just too lazy, and it is too tedious, to change the RMAN configuration one database at a time.

Luckily, there is a naming convention where non-production instances end with T1 and production instances end with P1.

This allows me to make the same changes to non-production and production in 2 steps.

The goal is to configure RMAN PARALLELISM 2 for NON-PROD, PARALLELISM 4 for PROD, and a consistent RECOVERY WINDOW OF 14 DAYS.

### Current configuration is inconsistent across databases:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;

CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 2;
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 4;

====================
### NON-PROD: 
====================

--- RMAN commands: cat configure.rman:
set echo on
connect target;
show all;
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 2;
show all;
exit

--- Let's make sure the instances are correct:
$ ps -ef|grep [p]mon|egrep -v 'ASM|APX|MGMTDB'|cut -d_ -f3|grep -Ev '\P1'|sort
DB01T1
DB02T1
DB03T1

--- Make the change:
$ for db in $(ps -ef|grep [p]mon|egrep -v 'ASM|APX|MGMTDB'|cut -d_ -f3|grep -Ev '\P1'|sort); do echo 'RMAN configure' $db; . oraenv <<< $db; rman @configure.rman; done;

====================
### PROD:
====================

--- RMAN commands: cat configure.rman:
set echo on
connect target;
show all;
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 4;
show all;
exit

--- Let's make sure the instances are correct:
$ ps -ef|grep [p]mon|egrep -v 'ASM|APX|MGMTDB'|cut -d_ -f3|grep -E '\P1'|sort
DB01P1
DB02P1
DB03P1

--- Make the change:
$ for db in $(ps -ef|grep [p]mon|egrep -v 'ASM|APX|MGMTDB'|cut -d_ -f3|grep -E '\P1'|sort); do echo 'RMAN configure' $db; . oraenv <<< $db; rman @configure.rman; done;

Installing the cdata ODBC drivers for Excel

Yann Neuhaus - Thu, 2022-01-13 11:06

This article is part of a series that includes SQLite, Postgresql, Firebird, Oracle RDBMS, Microsoft SQL Server, HSQLDB, MongoDB, and MariaDB. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here to be completed. Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a debian v11 (bullseye).
Like MongoDB, Excel is not a relational data source, but the ODBC API will allow us to access its data using SQL statements. The idea is to access spreadsheet files as if they were relational databases, each sheet being a table, each line a table’s row and each column a table’s column. The first row of a sheet contains the column headers. This is quite self-evident; see the screen copy down below.
As the Excel sheet is the database, there is no server software to install but only the ODBC drivers.
Let’s create a spreadsheet with the data sample from the site. We will just query an existing database, e.g. the PostgreSQL one if the instructions here were applied, and extract the content of the tables, as shown below:

$ psql sampledb
\a
Output format is unaligned.
sampledb=> select * from regions;
region_id|region_name
1|Europe
2|Americas
3|Asia
4|Middle East and Africa
(4 rows)

sampledb=> select * from countries;
country_id|country_name|region_id
AR|Argentina|2
AU|Australia|3
BE|Belgium|1
BR|Brazil|2
...
(29 rows)

and so on with the other tables locations, departments, jobs, employees and dependents.
We will simply create one spreadsheet using LibreOffice Calc or Excel, create one tab per table with the table name as its name, paste the data from the above output with the column headers as the first line. Be sure to select the ‘|’ character as the column separator and remove spurious lines and columns. This step is manual and a bit tedious, but it’ll only take a few minutes.
The image below shows how the spreadsheet may look with some data in it:

If using isql to extract the data into a tabular format, spurious blanks are added, e.g.:

$ isql -v mysqldb -v debian debian
SQL> select * from regions;
+------------+--------------------------+
| region_id  | region_name              |
+------------+--------------------------+
| 1          | Europe                   |
| 2          | Americas                 |
| 3          | Asia                     |
| 4          | Middle East and Africa   |
+------------+--------------------------+
SQLRowCount returns 4

SQL> select * from countries;
+-----------+-----------------------------------------+------------+
| country_id| country_name                            | region_id  |
+-----------+-----------------------------------------+------------+
| AR        | Argentina                               | 2          |
| AU        | Australia                               | 3          |
| BE        | Belgium                                 | 1          |
...
| ZM        | Zambia                                  | 4          |
| ZW        | Zimbabwe                                | 4          |
+-----------+-----------------------------------------+------------+
SQLRowCount returns 29
29 rows fetched

These leading and trailing blanks must be removed as they don’t belong to the data but were added by isql to format the table so that it displays nicely. The following gawk script can be used to this effect:

for t in {regions,countries,locations,departments,jobs,employees,dependents}; do echo "select * from $t;" | isql -v mysqldb debian debian | gawk -v table_name=$t 'BEGIN {
   print "table", table_name
   do {
      getline
   } while (0 == index($0, "SQL>"))
   n = split ($0, t, "+")
   delete t[1]
   FIELDWIDTHS = "1"
   for (i = 2; i < n; i++)
      FIELDWIDTHS = FIELDWIDTHS sprintf(" %d 1", length(t[i]))
   FIELDWIDTHS = FIELDWIDTHS " 1"
   bHeader = 0
}
{
   bHeader++
   if (index($0, "+-"))
      if (2 == bHeader)
         next
      else
         exit
   for (i = 2; i < NF; i += 2) {gsub(/(^ +)|( +$)/, "", $i); printf("%s|", $i)}
   printf "\n"
}
END {
   printf "\n"
}';
done
Output:
table regions
region_id|region_name|
1|Europe|
2|Americas|
3|Asia|
4|Middle East and Africa|

table countries
country_id|country_name|region_id|
AR|Argentina|2|
...
etc...

If no prior database with the sample data is available, just save the data from here into a text file, say populate-tables.sql, and save the following gawk script as sql2flat.awk:

# convert sql statements into csv;
# INSERT INTO regions(region_id,region_name) VALUES (1,'Europe');
# becomes
#    1|Europe
# INSERT INTO locations(location_id,street_address,postal_code,city,state_province,country_id) VALUES (1400,'2014 Jabberwocky Rd','26192','Southlake','Texas','US');
# becomes
#    1400|2014 Jabberwocky Rd|26192|Southlake|Texas|US
# sql comes from here for example: https://www.sqltutorial.org/wp-content/uploads/2020/04/postgresql-data.txt;
# Usage:
#   gawk -v sep=separator -f sql2flat.awk data.sql
# Example:
#   gawk -v sep="|" -f sql2flat.awk populate-tables.sql | tee populate-tables.csv
{
   if (match($0, /INSERT INTO ([^)]+)\(.+\) VALUES \((.+)\)/, m)) {
      if (m[1] != ex_table) {
         ex_table = m[1]
         print "# tab", m[1] ":"
      }
      gsub(/'|"/, "", m[2])
      n = split(m[2], data, ",")
      for (i = 1; i <= n; i++)
         printf("%s%s", i > 1 ? sep : "", data[i])
      printf "\n"
   }
}

and run it on the data file:

gawk -v sep="|" -f sql2flat.awk populate-tables.sql

Snippet of output:

# tab regions:
1|Europe
2|Americas
3|Asia
4|Middle East and Africa
# tab countries:
AR|Argentina|2
AU|Australia|3
BE|Belgium|1
BR|Brazil|2
...
# tab locations:
1400|2014 Jabberwocky Rd|26192|Southlake|Texas|US
1500|2011 Interiors Blvd|99236|South San Francisco|California|US
...
# tab jobs:
1|Public Accountant|4200.00|9000.00
2|Accounting Manager|8200.00|16000.00
3|Administration Assistant|3000.00|6000.00
...
# tab departments:
1|Administration|1700
2|Marketing|1800
...
# tab employees:
100|Steven|King|steven.king@sqltutorial.org|515.123.4567|1987-06-17|4|24000.00|NULL|9
101|Neena|Kochhar|neena.kochhar@sqltutorial.org|515.123.4568|1989-09-21|5|17000.00|100|9
102|Lex|De Haan|lex.de haan@sqltutorial.org|515.123.4569|1993-01-13|5|17000.00|100|9
...
# tab dependents:
1|Penelope|Gietz|Child|206
2|Nick|Higgins|Child|205
3|Ed|Whalen|Child|200
...

Next, copy and paste the above output into the respective sheet’s tabs.
After the edition is completed, save the file as SampleWorkbook.xlsx (for compatibility, select the output format as Excel file if using LibreOffice Calc) and keep a copy of it somewhere in case it becomes corrupted during the tests.
We will use the commercial ODBC drivers provided by cdata (https://www.cdata.com/odbc/); other vendors are e.g. Easysoft (https://www.easysoft.com/developer/interfaces/odbc/index.html) and Devart (https://www.devart.com/odbc/).
Download the drivers from https://www.cdata.com/drivers/excel/odbc/ by following the instructions on the screen, and install them as root:

# dpkg -i ExcelODBCDriverforUnix.deb 

Run the licensing script and follow the on-screen instructions:

# cd /opt/cdata/cdata-odbc-driver-for-excel/bin/
# ./install-license.x64 

*************************************************************************
Please provide your name and email to install a trial license.

To install a full version license pass your product key as a parameter.
For instance: ./install-license MY_PRODUCT_KEY

Please refer to the documentation for additional details.
*************************************************************************

Name: debian
Email: ...
Installing TRIAL license...
Downloading license data...
Verifying license data...
License installation succeeded.

Check the system-wide installed drivers:

# odbcinst -q -d
...
[CData ODBC Driver for Excel]

# odbcinst -q -d -n "CData ODBC Driver for Excel"
[CData ODBC Driver for Excel]
Description=CData ODBC Driver for Excel 2021
Driver=/opt/cdata/cdata-odbc-driver-for-excel/lib/libexcelodbc.x64.so
UsageCount=1
Driver64=/opt/cdata/cdata-odbc-driver-for-excel/lib/libexcelodbc.x64.so

Check the system-wide DSN in /etc/odbc.ini:

# odbcinst -q -s
...
[CData-Excel-Source]

# odbcinst -q -s -n "CData-Excel-Source"
[CData Excel Source]
Driver=CData ODBC Driver for Excel
AWS Access Key=
AWS Region=NORTHERNVIRGINIA
AWS Role ARN=
AWS Secret Key=
Azure Access Key=
Azure Shared Access Signature=
Azure Storage Account=
Azure Tenant=
MFA Serial Number=
MFA Token=
Password=
Share Point Edition=SharePointOnline
SSL Mode=AUTOMATIC
User=

Copy the newly inserted DSN into debian’s own ~/.odbc.ini file and append the URI setting as follows:

$ vi ~/.odbc.ini
...
[CData-Excel-Source]
Driver=CData ODBC Driver for Excel
AWS Access Key=
AWS Region=NORTHERNVIRGINIA
AWS Role ARN=
AWS Secret Key=
Azure Access Key=
Azure Shared Access Signature=
Azure Storage Account=
Azure Tenant=
MFA Serial Number=
MFA Token=
Password=
Share Point Edition=SharePointOnline
SSL Mode=AUTOMATIC
User=
URI=/home/debian/odbc4gawk/SampleWorkbook.xlsx

As debian, check its DSN so far:

$ odbcinst -q -s
[mysqlitedb]
[mysqldb]
[myfbdb]
[myfbdb_Devart]
[mypostgresqldb]
[OracleODBC-21]
[mymssqlserverdb]
[myhsqldb]
[mymongodb]
[DEVART_MONGODB]
[DEVART_FIREBIRD]
[CData-Excel-Source]

$ odbcinst -q -s -n "CData-Excel-Source"
[CData-Excel-Source]
Driver=CData ODBC Driver for Excel
AWS Access Key=
AWS Region=NORTHERNVIRGINIA
AWS Role ARN=
AWS Secret Key=
Azure Access Key=
Azure Shared Access Signature=
Azure Storage Account=
Azure Tenant=
MFA Serial Number=
MFA Token=
Password=
Share Point Edition=SharePointOnline
SSL Mode=AUTOMATIC
User=
URI=/media/sf_customers/dbi/odbc4gawk/SampleWorkbook.xlsx

Try a connection to the spreadsheet via the ODBC Driver Manager’s isql tool:

# isql -v "CData Excel Source"
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> help
+-----------+-------------+-------------------------------------------------------------------------+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME    | TABLE_TYPE | REMARKS                                    |
+-----------+-------------+---------------+------------+--------------------------------------------+
| CData     | Excel       | regions       | TABLE      | Retrieve data from the "regions" sheet.    |
| CData     | Excel       | countries     | TABLE      | Retrieve data from the "countries" sheet.  |
| CData     | Excel       | locations     | TABLE      | Retrieve data from the "locations" sheet.  |
| CData     | Excel       | department    | TABLE      | Retrieve data from the "department" sheet. |
| CData     | Excel       | employees     | TABLE      | Retrieve data from the "employees" sheet.  |
| CData     | Excel       | dependent     | TABLE      | Retrieve data from the "dependent" sheet.  |
| CData     | Excel       | jobs          | TABLE      | Retrieve data from the "jobs" sheet.       |
+-----------+-------------+---------------+------------+--------------------------------------------+
SQLRowCount returns -1
7 rows fetched

Interesting how the driver presents the data dictionary.
Run the test query:

SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
Output:
+----------------------------+-------------+------------+-------------------------------------------+----------------------+
| country_name               | country_id  | country_id | street_address                            | city                 |
+----------------------------+-------------+------------+-------------------------------------------+----------------------+
| United Kingdom             | UK          | UK         | 8204 Arthur St                            | London               |
| United Kingdom             | UK          | UK         | Magdalen Centre, The Oxford Science Park  | Oxford               |
| United States of America   | US          | US         | 2014 Jabberwocky Rd                       | Southlake            |
| United States of America   | US          | US         | 2011 Interiors Blvd                       | South San Francisco  |
| United States of America   | US          | US         | 2004 Charade Rd                           | Seattle              |
| China                      | CN          |            |                                           |                      |
+----------------------------+-------------+------------+-------------------------------------------+----------------------+
SQLRowCount returns -1
6 rows fetched

The join was performed as expected.
Let's now see how the driver behaves when used from Python with the pyodbc module:

$ python3
Python 3.9.2 (default, Feb 28 2021, 17:03:44)
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.

import pyodbc

# connect using the DSN;
cnxn = pyodbc.connect(DSN='CData-Excel-Source')
# or:
# connect directly using the DRIVER definition, no DSN;
cnxn = pyodbc.connect('DRIVER={CData ODBC Driver for Excel};URI=/media/sf_customers/dbi/odbc4gawk/SampleWorkbook.xlsx')

cursor = cnxn.cursor()      
cursor.execute("""SELECT
               c.country_name,
               c.country_id,
               l.country_id,
               l.street_address,
               l.city
       FROM
               countries c
       LEFT JOIN locations l ON l.country_id = c.country_id
       WHERE
               c.country_id IN ('US', 'UK', 'CN')""") 

row = cursor.fetchone() 
while row:
     print (row) 
     row = cursor.fetchone()

Output:
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre - The Oxford Science Park', 'Oxford')
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')
('China', 'CN', None, None, None)

Excel spreadsheets are now accessible via ODBC under the debian account.
Admittedly, Excel sheets are neither natural nor reliable data sources, for too many reasons to list here (though they have other advantages); still, it is quite impressive, almost magical, to query them with SQL instead of some low-level, cell-oriented API!
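As a last check, the sheet listing displayed by isql's help command can also be obtained programmatically: SQLTables belongs to the core ODBC catalog API and pyodbc exposes it as cursor.tables(). A short sketch reusing the DSN defined above:

import pyodbc

# list the sheets the driver exposes as tables (ODBC SQLTables via cursor.tables())
cnxn = pyodbc.connect(DSN='CData-Excel-Source')
for t in cnxn.cursor().tables(tableType='TABLE'):
    print(t.table_cat, t.table_schem, t.table_name, t.table_type)
cnxn.close()
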
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
HSQLDB
MariaDB
PostgreSQL
Oracle
Microsoft SQLServer for Linux
MongoDB

Cet article Installing the cdata ODBC drivers for Excel est apparu en premier sur Blog dbi services.

Installing the ODBC drivers for MongoDB

Yann Neuhaus - Thu, 2022-01-13 10:57

This article is part of a series that includes SQLite, Postgresql, Firebird, Oracle RDBMS, Microsoft SQL Server, HSQLDB, Excel, and MariaDB. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here to be completed. Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a debian v11 (bullseye).
MongoDB is a NoSQL database, but the ODBC API hides this fact and allows its data to be accessed with SQL statements. This shows that, with the appropriate ODBC drivers doing the translation, any data source can be queried using SQL. Of course, not all native operations are rendered correctly (notably SELECT's JOIN clauses, see below), but the main CRUD statements, INSERT, SELECT, UPDATE and DELETE, are all available.
As root, install MongoDB Community Edition and its command-line tool from the official package repositories:

# apt install mongodb
# apt install mongodb-mongosh

Follow the instructions here to configure and start mongodb.
A systemd service has been created. Still as root, start the service as shown below:

# ulimit -n 64000
# chown -R mongodb:mongodb /var/lib/mongodb
# systemctl stop mongod
# systemctl start mongod
# systemctl status mongod
● mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
     Active: active (running) since Tue 2021-09-21 09:27:47 CEST; 3s ago
       Docs: https://docs.mongodb.org/manual
   Main PID: 38200 (mongod)
     Memory: 9.8M
        CPU: 26ms
     CGroup: /system.slice/mongod.service
             └─38200 /usr/bin/mongod --config /etc/mongod.conf

Check the MongoDB server availability by connecting to the server through the mongosh tool documented here:

# mongosh
Current Mongosh Log ID:	6149ba4e6d2a951c4e869dac
Connecting to:		mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000
Using MongoDB:		5.0.3
Using Mongosh:		1.0.6
test> quit

The data sample site does not provide scripts for creating and populating MongoDB collections. There are 2 ways to work around this:
1. use isql and ODBC to feed SQL statements to MongoDB;
2. translate the SQL statements into JavaScript ones suitable for MongoDB;
At this stage, the ODBC drivers are not installed yet, so isql cannot be used and we will go with the 2nd alternative. The script below does just that. As a Mongo database is schema-less, there is no need to create the collections beforehand; they will be implicitly created when populated with documents. Any sample populating script will do, so we'll take the MySQL one here and save it to the file data4mongodb.sql. The script pop-mongodb.awk converts the SQL INSERT statements into MongoDB statements. Here it is:

# format of the input lines, e.g:
#   INSERT INTO regions(region_id,region_name) VALUES (1,'Europe');
# or:
#   INSERT INTO locations(location_id,street_address,postal_code,city,state_province,country_id) VALUES (2500,'Magdalen Centre, The Oxford Science Park','OX9 9ZB','Oxford','Oxford','UK');
# Output:
#    db.regions.insert({
#    region_id: 1,
#    region_name: "Europe"
#    })
# or:
#    db.locations.insert({
#    location_id: 2500,
#    street_address: "Magdalen Centre, The Oxford Science Park",
#    postal_code: "OX9 9ZB",
#    city: "Oxford",
#    state_province: "Oxford",
#    country_id: "UK"
#    })
# Usage:
#    gawk -v Apostrophe="'" -f pop-mongodb.awk data4mongodb.sql > data4mongodb.json
BEGIN {
   ApoRE="(([^,]+)|(" Apostrophe "([^" Apostrophe "]+)" Apostrophe ")),?" # i.e. (([^,]+)|('([^']+)')),?
   print "use sampledb"
}
{
   if (!$0 || match($0, /^\/\*/)) next

   match($0, /INSERT INTO ([^(]+)\(([^)]+)\)/, stmt)
   table_name = stmt[1]
   if (!bemptied[table_name]) {
      print "db." table_name ".deleteMany({})"
      bemptied[table_name] = 1
   }

   nb_columns = split(stmt[2], columns, ",")

   nb = match($0, /VALUES \(([^)]+)\)/, stmt)
   S = stmt[1]
   nb_values = 0
   while (match(S, ApoRE, res)) {
      values[++nb_values] = res[1]
      S = substr(S, RLENGTH + 1)
   }

   print "db." table_name ".insert({"
   for (i = 1; i <= nb_columns; i++) {
      if ("NULL" == values[i])
         values[i] = "null"
      gsub(Apostrophe, "\"", values[i])
      print columns[i] ": " values[i] (i < nb_columns ? "," : "")
   }
   print "})"
   printf "\n"
}

Invoke it:

gawk -v Apostrophe="'" -f pop-mongodb.awk data4mongodb.sql > data4mongodb.json

Example of input:

INSERT INTO regions(region_id,region_name) VALUES (1,'Europe');
...
INSERT INTO countries(country_id,country_name,region_id) VALUES ('AR','Argentina',2);
...
INSERT INTO locations(location_id,street_address,postal_code,city,state_province,country_id) VALUES (1400,'2014 Jabberwocky Rd','26192','Southlake','Texas','US');
...
INSERT INTO jobs(job_id,job_title,min_salary,max_salary) VALUES (1,'Public Accountant',4200.00,9000.00);
...
INSERT INTO departments(department_id,department_name,location_id) VALUES (1,'Administration',1700);
...
INSERT INTO employees(employee_id,first_name,last_name,email,phone_number,hire_date,job_id,salary,manager_id,department_id) VALUES (100,'Steven','King','steven.king@sqltutorial.org','515.123.4567','1987-06-17',4,24000.00,NULL,9);
...
INSERT INTO dependents(dependent_id,first_name,last_name,relationship,employee_id) VALUES (1,'Penelope','Gietz','Child',206);
...

and their corresponding generated json statements:

use sampledb
db.regions.deleteMany({})
db.regions.insert({
region_id: 1,
region_name: "Europe",
})
...
db.countries.deleteMany({})
db.countries.insert({
country_id: "AR",
country_name: "Argentina",
region_id: 2,
})
...
db.locations.deleteMany({})
db.locations.insert({
location_id: 1400,
street_address: "2014 Jabberwocky Rd",
postal_code: "26192",
city: "Southlake",
state_province: "Texas",
country_id: "US",
})
...
db.jobs.deleteMany({})
db.jobs.insert({
job_id: 1,
job_title: "Public Accountant",
min_salary: 4200.00,
max_salary: 9000.00,
})
...
db.departments.deleteMany({})
db.departments.insert({
department_id: 1,
department_name: "Administration",
location_id: 1700,
})
...
db.employees.deleteMany({})
db.employees.insert({
employee_id: 100,
first_name: "Steven",
last_name: "King",
email: "steven.king@sqltutorial.org",
phone_number: "515.123.4567",
hire_date: "1987-06-17",
job_id: 4,
salary: 24000.00,
manager_id: null,
department_id: 9,
})
...
db.dependents.deleteMany({})
db.dependents.insert({
dependent_id: 1,
first_name: "Penelope",
last_name: "Gietz",
relationship: "Child",
employee_id: 206,
})

To be sure the populating step is idempotent, the collections are emptied each time the script is run.
Populate the collections by feeding the generated file to mongosh:

mongosh < data4mongodb.json

Back in mongosh, check the data:

$ mongosh
test> show databases
admin       41 kB
config     111 kB
local     81.9 kB
sampledb   516 kB

test> use sampledb
switched to db sampledb

sampledb> show collections
countries
departments
dependents
employees
jobs
locations
regions

sampledb> db.regions.find()
[
  {
    _id: ObjectId("616eef8e230e4e4893edd45f"),
    region_id: 1,
    region_name: 'Europe'
  },
...
sampledb> db.countries.find()
[
  {
    _id: ObjectId("616eef8f230e4e4893edd463"),
    country_id: 'AR',
    country_name: 'Argentina',
    region_id: 2
  },
...
sampledb> db.locations.find()
[
  {
    _id: ObjectId("616eef91230e4e4893edd47c"),
    location_id: 1400,
    street_address: '2014 Jabberwocky Rd',
    postal_code: '26192',
    city: 'Southlake',
    state_province: 'Texas',
    country_id: 'US'
  },
...
sampledb> db.jobs.find()
[
  {
    _id: ObjectId("616eef92230e4e4893edd483"),
    job_id: 1,
    job_title: 'Public Accountant',
    min_salary: 4200,
    max_salary: 9000
  },
...
sampledb> db.departments.find()
[
  {
    _id: ObjectId("616eef94230e4e4893edd496"),
    department_id: 1,
    department_name: 'Administration',
    location_id: 1700
  },
...
sampledb> db.employees.find()
[
  {
    _id: ObjectId("616eef95230e4e4893edd4a1"),
    employee_id: 100,
    first_name: 'Steven',
    last_name: 'King',
    email: 'steven.king@sqltutorial.org',
    phone_number: '515.123.4567',
    hire_date: '1987-06-17',
    job_id: 4,
    salary: 24000,
    manager_id: null,
    department_id: 9
  },
...
sampledb> db.dependents.find()
[
  {
    _id: ObjectId("616eef9c230e4e4893edd4c9"),
    dependent_id: 1,
    first_name: 'Penelope',
    last_name: 'Gietz',
    relationship: 'Child',
    employee_id: 206
  },
...
sampledb> quit

The MongoDB ODBC drivers are available here. They are a modified version of the MySQL ODBC driver. However, they do not work on this test machine because they require the old openssl v1.0.2 library, which is not surprising: those drivers are several years old while the machine runs the latest Debian Linux (v11). For this reason, a commercial replacement from Devart has been installed; it comes with a one-month evaluation period. Once registered, it can be downloaded and installed as root as follows:

# wget https://www.devart.com/odbc/mongodb/devartodbcmongo_amd64.deb
# apt install /home/debian/Downloads/devartodbcmongo_amd64.deb

Check the system-wide installed drivers:

# odbcinst -q -d
...
[MongoDB Unicode]
[MongoDB ANSI]
...
[Devart ODBC Driver for MongoDB]
# odbcinst -q -d -n "Devart ODBC Driver for MongoDB"
[Devart ODBC Driver for MongoDB]
Driver=/usr/local/lib/libdevartodbcmongo.so

Check the system-wide DSN in /etc/odbc.ini:

# odbcinst -q -s
...
[DEVART_MONGODB]
#  odbcinst -q -s -n "DEVART_MONGODB"
[DEVART_MONGODB]
Description=My MongoDB sample database
Driver=Devart ODBC Driver for MongoDB
Data Source=
Port=27017
Database=sampledb
User ID=
Password=
Client Library=
BSON Library=
Additional Servers=
Connection Options=

Copy the newly inserted DSN into debian’s ~/.odbc.ini file:

$ vi ~/.odbc.ini
...
[DEVART_MONGODB]
Description=My MongoDB sample database
Driver=Devart ODBC Driver for MongoDB
Data Source=
Port=27017
Database=sampledb
User ID=
Password=
Client Library=
BSON Library=
Additional Servers=
Connection Options=

As debian, check its DSN so far:

$ odbcinst -q -s
...
[DEVART_MONGODB]
...

$ odbcinst -q -s -n DEVART_MONGODB
[DEVART_MONGODB]
Description=My MongoDB sample database
Driver=Devart ODBC Driver for MongoDB
Data Source=
Port=27017
Database=sampledb
User ID=
Password=
Client Library=
BSON Library=
Additional Servers=
Connection Options=

Try a connection to the MongoDB database via ODBC:

isql -v DEVART_MONGODB
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> help
+-----------+-------------+-------------+------------+---------+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME  | TABLE_TYPE | REMARKS |
+-----------+-------------+-------------+------------+---------+
| sampledb  |             | countries   | TABLE      |         |
| sampledb  |             | dependents  | TABLE      |         |
| sampledb  |             | regions     | TABLE      |         |
| sampledb  |             | locations   | TABLE      |         |
| sampledb  |             | departments | TABLE      |         |
| sampledb  |             | jobs        | TABLE      |         |
| sampledb  |             | employees   | TABLE      |         |
+-----------+-------------+-------------+------------+---------+ 
SQLRowCount returns -1
7 rows fetched

It looks good. Run the test query now:

SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
Output:
+--------------+-----------------+-----------------+----------------+------+
| country_name | country_id      | country_id_1    | street_address | city |
+--------------+-----------------+-----------------+----------------+------+
| China        | CN              |                 |                |      |
+--------------+-----------------+-----------------+----------------+------+ 
SQLRowCount returns -1
1 rows fetched

Only one row is returned because the LEFT JOIN is not implemented by this MongoDB/ODBC driver combination. In effect, joins are a necessity in relational databases because of data normalization; as MongoDB does not mind data duplication and works with complete, self-standing documents, joins are usually not needed, although they can be simulated with aggregate() and $lookup, as illustrated by the following equivalent query:

db.countries.aggregate([
   {
      $match:{$or:[{"country_id" : "US"}, {"country_id" : "UK"}, {"country_id" : "CN"}]}
   },
   {
     $lookup:
       {
         from: "locations",
         localField: "country_id",
         foreignField: "country_id",
         as: "r"
       }
  }
])

with its result:

[
  {
    _id: ObjectId("616eef90230e4e4893edd469"),
    country_id: 'CN',
    country_name: 'China',
    region_id: 3,
    r: []
  },
  {
    _id: ObjectId("616eef91230e4e4893edd478"),
    country_id: 'UK',
    country_name: 'United Kingdom',
    region_id: 1,
    r: [
      {
        _id: ObjectId("616eef92230e4e4893edd480"),
        location_id: 2400,
        street_address: '8204 Arthur St',
        postal_code: null,
        city: 'London',
        state_province: null,
        country_id: 'UK'
      },
      {
        _id: ObjectId("616eef92230e4e4893edd481"),
        location_id: 2500,
        street_address: 'Magdalen Centre, The Oxford Science Park',
        postal_code: 'OX9 9ZB',
        city: 'Oxford',
        state_province: 'Oxford',
        country_id: 'UK'
      }
    ]
  },
  {
    _id: ObjectId("616eef91230e4e4893edd479"),
    country_id: 'US',
    country_name: 'United States of America',
    region_id: 2,
    r: [
      {
        _id: ObjectId("616eef91230e4e4893edd47c"),
        location_id: 1400,
        street_address: '2014 Jabberwocky Rd',
        postal_code: '26192',
        city: 'Southlake',
        state_province: 'Texas',
        country_id: 'US'
      },
      {
        _id: ObjectId("616eef91230e4e4893edd47d"),
        location_id: 1500,
        street_address: '2011 Interiors Blvd',
        postal_code: '99236',
        city: 'South San Francisco',
        state_province: 'California',
        country_id: 'US'
      },
      {
        _id: ObjectId("616eef91230e4e4893edd47e"),
        location_id: 1700,
        street_address: '2004 Charade Rd',
        postal_code: '98199',
        city: 'Seattle',
        state_province: 'Washington',
        country_id: 'US'
      }
    ]
  }
]

To ease the comparison, we save that output to the file mongodb.out and run the following commands to reformat it into a tabular presentation.
First, convert mongodb.out to correct json syntax:

$ gawk -v Apo="'" '{
   gsub(Apo, "\"", $0)
   if (match($1, /^_id/)) next
   print gensub(/^( +)([^:]+)(:.+$)/, "\\1\"\\2\"\\3", "g", $0)
}' mongodb.out > mongodb.json

Now, save the following Python script as mongodb2tab.py:

$ cat - <<eop > mongodb2tab.py
import os
import json
with open("./mongodb.json") as json_file:
   mongodb_out = json.load(json_file)
widths = [25, 10, 10, 42, 10]
print(f"{'country_name': <{widths[0]}}  {'country_id': <{widths[1]}}  {'country_id': <{widths[2]}} \ {'street_address': <{widths[3]}}  {'city': <{widths[4]}}")
for row in mongodb_out:
   if row['r']:
      for l in row['r']:
         print(f"{row['country_name']: <{widths[0]}}  {row['country_id']: <{widths[1]}}  {l['country_id']: <{widths[2]}}  {l['street_address']: <{widths[3]}}  {l['city']: <{widths[4]}}")
   else:
      print(f"{row['country_name']: <{widths[0]}}  {row['country_id']: <{widths[1]}}")
eop

Finally, execute it:

$ python3 mongodb2tab.py
Output:
country_name               country_id  country_id  street_address                              city      
China                      CN        
United Kingdom             UK          UK          8204 Arthur St                              London    
United Kingdom             UK          UK          Magdalen Centre, The Oxford Science Park    Oxford    
United States of America   US          US          2014 Jabberwocky Rd                         Southlake 
United States of America   US          US          2011 Interiors Blvd                         South San Francisco
United States of America   US          US          2004 Charade Rd                             Seattle   

Which shows that the output of the MongoDb query to simulate the left outer join was correct.
Let’s now test the DSN with pyodbc:

$ python3
import pyodbc 
cnxn = pyodbc.connect(DSN='DEVART_MONGODB')
cursor = cnxn.cursor()	
cursor.execute("""SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN')""")
row = cursor.fetchone() 
while row:
    print (row) 
    row = cursor.fetchone()
Output:
('China', 'CN', None, None, None)

Here too, there are missing values as expected.
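It is also instructive to see how the driver flattens the schema-less collections into relational columns. The ODBC catalog call SQLColumns is exposed by pyodbc as cursor.columns(); the lines below are a sketch to run in the same python3 session (they assume the Devart driver implements SQLColumns, which is not guaranteed):

# still in the same session: describe the columns derived from the employees collection
for col in cnxn.cursor().columns(table='employees'):
    print(col.column_name, col.type_name, col.nullable)
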
MongoDB is now accessible via ODBC under the debian account, although not all SQL statements are fully supported, which is understandable for a NoSQL database.
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
HSQLDB
MariaDB
PostgreSQL
Oracle
Microsoft SQLServer for Linux
Excel

Cet article Installing the ODBC drivers for MongoDB est apparu en premier sur Blog dbi services.

Installing the ODBC drivers for Microsoft SQLServer for Linux

Yann Neuhaus - Thu, 2022-01-13 10:51

This article is part of a series that includes SQLite, Postgresql, Firebird, Oracle RDBMS, HSQLDB, MariaDB, MongoDB, and Excel. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here to be completed. Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a debian v11 (bullseye).
As root, follow the documentation here to get and install sqlserver express for Linux. Here are the needed steps:

# wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
# add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/20.04/mssql-server-2019.list)"
# apt-get install -y mssql-server
# /opt/mssql/bin/mssql-conf setup

A systemd service was set up and launched:

# systemctl status mssql-server --no-pager
● mssql-server.service - Microsoft SQL Server Database Engine
     Loaded: loaded (/lib/systemd/system/mssql-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-08-27 15:22:14 CEST; 15s ago
       Docs: https://docs.microsoft.com/en-us/sql/linux
   Main PID: 7795 (sqlservr)
      Tasks: 120
     Memory: 880.5M
        CPU: 5.983s
     CGroup: /system.slice/mssql-server.service
             ├─7795 /opt/mssql/bin/sqlservr
             └─7817 /opt/mssql/bin/sqlservr

Get and install the ODBC drivers for sqlserver:

# apt install tdsodbc
# apt install curl
# curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list
# apt-get update 
# apt-get install mssql-tools unixodbc-dev

Check that the device has been added system-wide:

# odbcinst -q -d
...
[ODBC Driver 17 for SQL Server]

# odbcinst -q -d -n "ODBC Driver 17 for SQL Server"
[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.8.so.1.1

As debian, create the sampledb database using the native administration tool sqlcmd:

$ sqlcmd -S localhost -U SA -P admin2021!
CREATE DATABASE sampledb
go
1> SELECT Name from sys.Databases;
2> go
Name                                                                                                                            
-----------
master                                                                                                                          
tempdb                                                                                                                          
model                                                                                                                           
msdb                                                                                                                            
sampledb                                                                                                                        

(5 rows affected)

Populate the sampledb database; the table creation statements for SQL Server are available here and the ones to populate the tables here.
Save the files as create_tables_mssql.sql and populate_tables_mssql.sql respectively.
Execute the above SQL scripts still using the default command-line administration tool, sqlcmd:

$ sqlcmd -S localhost -U SA -P admin2021! -i create_tables_mssql.sql
$ sqlcmd -S localhost -U SA -P admin2021! -i populate_tables_mssql.sql

Let’s check the data:

cat - <<eot | sqlcmd -S localhost -U SA -P admin2021!
USE sampleDB
SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');
eot
Changed database context to 'sampledb'.
country_name                             country_id country_id street_address                           city                 
---------------------------------------- ---------- ---------- ---------------------------------------- ---------------------
China                                    CN         NULL       NULL                                     NULL                          
United Kingdom                           UK         UK         8204 Arthur St                           London                        
United Kingdom                           UK         UK         Magdalen Centre, The Oxford Science Park Oxford                        
United States of America                 US         US         2014 Jabberwocky Rd                      Southlake                     
United States of America                 US         US         2011 Interiors Blvd                      South San Francisco           
United States of America                 US         US         2004 Charade Rd                          Seattle                       

(6 rows affected)

Configure an ODBC DSN by editing the user’s DSN file:

$ vi ~/.odbc.ini
...
[mymssqlserverdb]
Driver = ODBC Driver 17 for SQL Server
Server = localhost
#Port = 1433
User = SA
Password = admin2021!
Database = sampledb
Language = us_english
NeedODBCTypesOnly = 1

Check the DSN:

$ odbcinst -q -s
...
[mymssqlserverdb]

$ odbcinst -q -s -n mymssqlserverdb
[mymssqlserverdb]
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.8.so.1.1
Server=localhost
Port=
Database=sampledb
User=SA
Password=admin2021!
Language=us_english
NeedODBCTypesOnly=1

Try a connection to the mssql db via ODBC using the ODBC Driver Manager test tool, isql:

$ isql -v mymssqlserverdb SA admin2021!
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
Output:
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| country_name                            | country_id| country_id| street_address                          | city          |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| China                                   | CN        |           |                                         |               |
| United Kingdom                          | UK        | UK        | 8204 Arthur St                          | London        |
| United Kingdom                          | UK        | UK        | Magdalen Centre, The Oxford Science Park| Oxford        |
| United States of America                | US        | US        | 2014 Jabberwocky Rd                     | Southlake     |
| United States of America                | US        | US        | 2011 Interiors Blvd                     | South San Francisco           |
| United States of America                | US        | US        | 2004 Charade Rd                         | Seattle       |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
SQLRowCount returns 6
6 rows fetched

Test the DSN with the python module pyodbc:

$ python3
import pyodbc 
cnxn = pyodbc.connect(DSN='mymssqlserverdb;UID=SA;PWD=admin2021!')
cursor = cnxn.cursor()	
cursor.execute("""SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN')""")
row = cursor.fetchone() 
while row:
    print (row) 
    row = cursor.fetchone()
Output:
('China', 'CN', None, None, None)
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')
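
As with the Excel driver earlier, the DSN can also be bypassed completely by putting the driver name and the server settings directly into the connection string; a short sketch reusing the SA credentials chosen above:

import pyodbc

# DSN-less connection: driver and server settings passed in the connection string
cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;'
                      'DATABASE=sampledb;UID=SA;PWD=admin2021!')
print(cnxn.cursor().execute("SELECT COUNT(*) FROM countries").fetchone()[0])
cnxn.close()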

SQL Server is now fully accessible from any ODBC application under the debian account.
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
HSQLDB
MariaDB
PostgreSQL
Oracle
MongoDB
Excel

Cet article Installing the ODBC drivers for Microsoft SQLServer for Linux est apparu en premier sur Blog dbi services.

Installing the ODBC drivers for HSQLDB

Yann Neuhaus - Thu, 2022-01-13 10:48

This article is part of a series that includes SQLite, Postgresql, Firebird, Oracle RDBMS, Microsoft SQL Server, MariaDB, MongoDB, and Excel. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here to be completed. Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a debian v11 (bullseye).
As user debian, follow the documentation here and here to get and install hsqldb for Linux. Here are the needed steps:

$ export workdir=~/odbc4gawk
$ mkdir $workdir
$ cd $workdir
$ wget https://sourceforge.net/projects/hsqldb/files/hsqldb/hsqldb_2_6/hsqldb-2.6.0.zip/download
$ mv download hsqldb-2.6.0.zip
$ unzip hsqldb-2.6.0.zip

As root, install a system-wide JDK from the official packages repository:

# apt install openjdk-17-jdk
# java -version
openjdk version "17-ea" 2021-09-14
OpenJDK Runtime Environment (build 17-ea+19-Debian-1)
OpenJDK 64-Bit Server VM (build 17-ea+19-Debian-1, mixed mode, sharing)

No special action is needed for the ODBC drivers because hsqldb, starting with v2.0, can use the PostgreSQL ones, a clever decision; why reinvent the wheel when a few modifications are enough to make HSQLDB compatible with mainstream ODBC drivers? Moreover, the choice of those drivers makes sense as PostgreSQL is here to stay. If needed, please refer to the article Installing the ODBC drivers for PostgreSQL for step-by-step instructions. Later, we will only have to create a DSN for the hsqldb database.
As debian, start the database server process and send it to the background:

$ cd $workdir/hsqldb-2.6.0/hsqldb/lib
$ java -cp ../lib/hsqldb.jar org.hsqldb.server.Server --database.0 file:sampledb --dbname.0 xdb &

Connect to the local hsqldb service using the hsqldb’s provided administration application:

$ java -jar ../lib/sqltool.jar --inlineRc=url=jdbc:hsqldb:localhost/runtime,user=sa

Note that the tool uses JDBC for the connection, the Java counterpart to ODBC. From that tool, create the tables for hsqldb using the Oracle script here; save it into the text file create_tables_hsqldb.sql, then edit it and change the occurrences of NUMBER to INT and of VARCHAR2 to VARCHAR using your favorite text editor.
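The same edit can be scripted; the snippet below is only a sketch (the helper name fix_types_hsqldb.py is made up) and assumes the type keywords appear in upper case, as in the original Oracle script:

# fix_types_hsqldb.py -- apply the two type substitutions described above
import re, pathlib

p = pathlib.Path("create_tables_hsqldb.sql")
sql = p.read_text()
sql = re.sub(r"\bVARCHAR2\b", "VARCHAR", sql)   # VARCHAR2 -> VARCHAR
sql = re.sub(r"\bNUMBER\b", "INT", sql)         # NUMBER -> INT
p.write_text(sql)

Finally, execute the edited script from within sqltool: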

\i /home/debian/odbc4gawk/create_tables_hsqldb.sql

Populate the tables for hsqldb using the Oracle script here; save it into the text file populate_tables_hsqldb.sql and execute it unchanged.

\i /home/debian/odbc4gawk/populate_tables_hsqldb.sql

Check the data:

sql> SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');
Output:
COUNTRY_NAME              COUNTRY_ID  COUNTRY_ID  STREET_ADDRESS                            CITY
------------------------  ----------  ----------  ----------------------------------------  -------------------
China                     CN                      [null]                                    [null]
United Kingdom            UK          UK          8204 Arthur St                            London
United Kingdom            UK          UK          Magdalen Centre, The Oxford Science Park  Oxford
United States of America  US          US          2014 Jabberwocky Rd                       Southlake
United States of America  US          US          2011 Interiors Blvd                       South San Francisco
United States of America  US          US          2004 Charade Rd                           Seattle

Fetched 6 rows.

The data are OK.
Configure an ODBC DSN by editing the user’s DSN file:

$ vi ~/.odbc.ini
...
[myhsqldb]
Description=My HSQLDB sample database
Driver=PostgreSQL Unicode
Database=sampledb

The Driver line above specifies that the PostgreSQL ODBC driver is to be used.
Check the DSN:

$ odbcinst -q -s
...
[myhsqldb]

$ odbcinst -q -s -n myhsqldb
...
[myhsqldb]
Description=My HSQLDB sample database
Driver=PostgreSQL Unicode
Database=sampledb

Try a hsqldb connection to the hsqldb database via ODBC using the native administrative tool isql:

isql -v myhsqldb
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
Output:
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| country_name                            | country_id| country_id| street_address                          | city          |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| United States of America                | US        | US        | 2014 Jabberwocky Rd                     | Southlake     |
| United States of America                | US        | US        | 2011 Interiors Blvd                     | South San Francisco           |
| United States of America                | US        | US        | 2004 Charade Rd                         | Seattle       |
| United Kingdom                          | UK        | UK        | 8204 Arthur St                          | London        |
| United Kingdom                          | UK        | UK        | Magdalen Centre, The Oxford Science Park| Oxford        |
| China                                   | CN        |           |                                         |               |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
SQLRowCount returns 6
6 rows fetched

It looks good. Test now the DSN from a python script using the pyodbc module:

$ python3
import pyodbc 
cnxn = pyodbc.connect(DSN='myhsqldb')
cursor = cnxn.cursor()	
cursor.execute("""SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN')""")
row = cursor.fetchone() 
while row:
    print (row) 
    row = cursor.fetchone()
Output:
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('China', 'CN', None, None, None)

Everything is OK. hsqldb is now fully accessible from ODBC under the debian account.
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
MariaDB
PostgreSQL
Oracle
Microsoft SQLServer for Linux
MongoDB
Excel

Cet article Installing the ODBC drivers for HSQLDB est apparu en premier sur Blog dbi services.

Installing the ODBC drivers for Oracle RDBMS

Yann Neuhaus - Thu, 2022-01-13 10:48

This article is part of a series that includes SQLite, Postgresql, Firebird, MongoDB, Microsoft SQL Server, HSQLDB, Excel, and MariaDB. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here to be completed. Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a debian v11 (bullseye).
As we already have a running Oracle remote instance, we don’t need to set one up and only the ODBC drivers need to be installed.
As user debian, get and install the ODBC drivers for Oracle (an account is needed), cf here.

$ id
uid=1000(debian) gid=1000(debian) groups=1000(debian)
$ cd ~/Downloads
$ wget https://download.oracle.com/otn_software/linux/instantclient/213000/instantclient-basiclite-linux.x64-21.3.0.0.0.zip
$ wget https://download.oracle.com/otn_software/linux/instantclient/213000/instantclient-sqlplus-linux.x64-21.3.0.0.0.zip
$ wget https://download.oracle.com/otn_software/linux/instantclient/213000/instantclient-odbc-linux.x64-21.3.0.0.0.zip
$ export workdir=~/odbc4gawk
$ mkdir $workdir
$ unzip instantclient-basiclite-linux.x64-21.3.0.0.0.zip -d ${workdir}/.
$ unzip instantclient-sqlplus-linux.x64-21.3.0.0.0.zip -d ${workdir}/.
$ unzip instantclient-odbc-linux.x64-21.3.0.0.0.zip -d ${workdir}/.

The instant client software is required; we also download and install Oracle’s native command-line administration tool sqlplus to populate the test schema.
Follow the installation instructions here.
Add the Instant Client path to the shared library path:

$ vi ~/.bashrc
export LD_LIBRARY_PATH=/home/debian/odbc4gawk/instantclient_21_3:$LD_LIBRARY_PATH

As there is a bug in the Oracle script odbc_update_ini.sh, a work-around is provided here.

$ cd $workdir/instantclient_21_3
$ mkdir etc
$ cp /etc/odbcinst.ini etc/.
$ cp ~/.odbc.ini etc/odbc.ini

Run the configuration script now:

$ ./odbc_update_ini.sh .

Oracle has updated the local copy of the odbcinst.ini file. Let’s copy it to /etc to make the changes system-wide:

$ sudo cp etc/odbcinst.ini /etc/.

Check the ODBC driver file /etc/odbcinst.ini:

$ odbcinst -q -d
...
[Oracle 21 ODBC driver]

Correct.
See here for further configuration of the ODBC driver.
Let’s test the connection to the remote db via the instant client:

$ cd ${workdir}/instantclient_21_3
$ ./sqlplus scott/tiger@'(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = db)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl)))'

SQL*Plus: Release 21.0.0.0.0 - Production on Mon Oct 18 19:34:07 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Last Successful login time: Sat Oct 16 2021 01:17:00 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> 

The remote database is available and reachable natively.
Still in the sqlplus session, let’s populate the schema scott with the sample data. As the sample’s date values assume a different format, let’s switch to it in the Oracle session and avoid formatting errors:

SQL> alter session set nls_date_format = 'yyyy-mm-dd';

Let’s download the tables creation script using from here and the data populating script from here:

wget https://www.sqltutorial.org/wp-content/uploads/2020/04/oracle.txt --output-document=oracle.sql
wget https://www.sqltutorial.org/wp-content/uploads/2020/04/oracle-data.txt --output-document=oracle-data.sql

Create the tables and load the data now:

@oracle
@oracle-data

-- test the query:
set pagesize 10000
set linesize 200
set tab off
col country_name format a25
col STREET_ADDRESS format a30
col city format a20
SQL> SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');

COUNTRY_NAME              CO CO STREET_ADDRESS                 CITY
------------------------- -- -- ------------------------------ --------------------
United States of America  US US 2014 Jabberwocky Rd            Southlake
United States of America  US US 2011 Interiors Blvd            South San Francisco
United States of America  US US 2004 Charade Rd                Seattle
United Kingdom            UK UK 8204 Arthur St                 London
United Kingdom            UK UK Magdalen Centre, The Oxford Sc Oxford
                                ience Park

China                     CN

6 rows selected.
SQL> quit

The test data are ready.
Let’s edit debian’s ODBC DSN definitions and add the settings below:

$ vi ~/.odbc.ini
...
[OracleODBC-21]
...
Driver=Oracle 21 ODBC driver
DSN=OracleODBC-21
...
ServerName=(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = db)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl)))
...
UserID=scott
Password=tiger

Check the DSN:

$ odbcinst -q -s -n "OracleODBC-21" 
[OracleODBC-21]
AggregateSQLType=FLOAT
Application Attributes=T
Attributes=W
BatchAutocommitMode=IfAllSuccessful
...
ServerName=(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = db)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl)))
...
UserID=scott
Password=tiger

Test it using the ODBC Driver Manager test tool, isql:

$ isql -v OracleODBC-21
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN')
Output:
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| COUNTRY_NAME                            | COUNTRY_ID| COUNTRY_ID| STREET_ADDRESS                          | CITY          |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| United States of America                | US        | US        | 2014 Jabberwocky Rd                     | Southlake     |
| United States of America                | US        | US        | 2011 Interiors Blvd                     | South San Francisco           |
| United States of America                | US        | US        | 2004 Charade Rd                         | Seattle       |
| United Kingdom                          | UK        | UK        | 8204 Arthur St                          | London        |
| United Kingdom                          | UK        | UK        | Magdalen Centre, The Oxford Science Park| Oxford        |
| China                                   | CN        |           |                                         |               |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
SQLRowCount returns -1
6 rows fetched

The ODBC connection is OK. Test the DSN with the python ODBC module pyodbc:

$ python3
import pyodbc 
cnxn = pyodbc.connect('DSN=OracleODBC-21')
cursor = cnxn.cursor()	
cursor.execute("""SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN')""")
row = cursor.fetchone() 
while row:
    print (row) 
    row = cursor.fetchone()
Output:
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('China', 'CN', None, None, None)
>>> 

Oracle is now fully accessible via ODBC under the debian account.
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
HSQLDB
MariaDB
PostgreSQL
MongoDB
Microsoft SQLServer for Linux
Excel

Cet article Installing the ODBC drivers for Oracle RDBMS est apparu en premier sur Blog dbi services.

Installing the ODBC drivers for MariaDB

Yann Neuhaus - Thu, 2022-01-13 10:42

This article is part of a series that includes SQLite, Postgresql, Firebird, Oracle RDBMS, Microsoft SQL Server, HSQLDB, MongoDB, and Excel. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here to be completed. Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a debian v11 (bullseye).
The MariaDB ODBC drivers can be installed from the platform’s default package repositories. As it is missing on this test environment, let’s install the MariaDB RDBMS (v10.5.11-1 ) too, and the ODBC drivers (v3.1.9-1) as root:

# apt install mariadb-server
# apt install odbc-mariadb

A systemd service was set up and launched:

# systemctl status mariadb
● mariadb.service - MariaDB 10.5.11 database server
     Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2021-08-23 21:10:48 CEST; 5min ago
       Docs: man:mariadbd(8)
             https://mariadb.com/kb/en/library/systemd/
...

Check the ODBC driver file /etc/odbcinst.ini:

# odbcinst -q -d -n "MariaDB Unicode" 
[MariaDB Unicode]
Driver=libmaodbc.so
Description=MariaDB Connector/ODBC(Unicode)
Threading=0
UsageCount=1

Grant the necessary permissions to the test user debian using the provided native administration client, mysql:

# mysql
MariaDB [(none)]> GRANT CREATE, INSERT, SELECT, DELETE, UPDATE, DROP, ALTER ON *.* TO 'debian'@'localhost';
MariaDB [(none)]> exit;
Bye

As the user debian, create the sampledb database and its tables, and populate them using mysql again:

$ mysql --user=debian --password=debian
create database sampledb;
use sampledb;
-- create & populate the tables for mariadb by copying and pasting the statements from the above files;
-- test the data;
SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');
Output:
+--------------------------+------------+------------+------------------------------------------+---------------------+
| country_name             | country_id | country_id | street_address                           | city                |
+--------------------------+------------+------------+------------------------------------------+---------------------+
| China                    | CN         | NULL       | NULL                                     | NULL                |
| United Kingdom           | UK         | UK         | 8204 Arthur St                           | London              |
| United Kingdom           | UK         | UK         | Magdalen Centre, The Oxford Science Park | Oxford              |
| United States of America | US         | US         | 2014 Jabberwocky Rd                      | Southlake           |
| United States of America | US         | US         | 2011 Interiors Blvd                      | South San Francisco |
| United States of America | US         | US         | 2004 Charade Rd                          | Seattle             |
+--------------------------+------------+------------+------------------------------------------+---------------------+
6 rows in set (0.001 sec)
\q

The data are OK. Configure ODBC by editing the user’s DSN file:

$ vi ~/.odbc.ini
[mysqldb]
Description=My mysql sample database
Driver=MariaDB Unicode
Database=sampledb

Check the DSN definition:

$ odbcinst -q -s -n mysqldb
[mysqldb]
Description=My mysql sample database
Driver=MariaDB Unicode
Database=sampledb
Socket=/var/run/mysqld/mysqld.sock

See below for an explanation about the highlighted “Socket=…” line.
Test the DSN via isql:

$ isql mysqldb -v debian debian
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
+---------------------------+-----------+-----------+-----------------------------------------+---------------------+
| country_name              | country_id| country_id| street_address                          | city                |
+---------------------------+-----------+-----------+-----------------------------------------+---------------------+
| China                     | CN        |           |                                         |                     |
| United Kingdom            | UK        | UK        | 8204 Arthur St                          | London              |
| United Kingdom            | UK        | UK        | Magdalen Centre, The Oxford Science Park| Oxford              |
| United States of America  | US        | US        | 2014 Jabberwocky Rd                     | Southlake           |
| United States of America  | US        | US        | 2011 Interiors Blvd                     | South San Francisco |
| United States of America  | US        | US        | 2004 Charade Rd                         | Seattle             |
+---------------------------+-----------+-----------+-----------------------------------------+---------------------+
SQLRowCount returns 6
6 rows fetched

The ODBC connection works fine. Test the DSN from a python script using the pyodbc module:

$ python3
Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
import pyodbc 
cnxn = pyodbc.connect(DSN='mysqldb;user=debian;password=debian')
cursor = cnxn.cursor()      
cursor.execute("""SELECT
...         c.country_name,
...         c.country_id,
...         l.country_id,
...         l.street_address,
...         l.city
... FROM
...         countries c
... LEFT JOIN locations l ON l.country_id = c.country_id
... WHERE
...         c.country_id IN ('US', 'UK', 'CN')""")

row = cursor.fetchone() 
while row:
...     print (row) 
...     row = cursor.fetchone()
... 
Output:
('China', 'CN', None, None, None)
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')

Everything is just fine.
Note: When the MariaDB database is restarted, an error message such as the one below from isql is displayed when connecting to a database as a non-root user:

$ isql -v mysqldb debian debian
[S1000][unixODBC][ma-3.1.7]Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
[ISQL]ERROR: Could not SQLConnect

or from pyodbc:

pyodbc.Error: ('HY000', "[HY000] [ma-3.1.7]Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) (2002) (SQLDriverConnect)")

If this happens, just make sure the line Socket=/var/run/mysqld/mysqld.sock is present in ~/.odbc.ini. Alternatively, but less conveniently because it must be redone each time the database is restarted, create the symlink below as root:

# ln -s /run/mysqld/mysqld.sock /tmp/mysql.sock
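For reference, the [mysqldb] DSN section in ~/.odbc.ini would then look roughly like the sketch below; apart from the Socket line, the keys and values are only placeholders and must match the DSN as it was defined earlier:

[mysqldb]
Description=MariaDB sample database
Driver=MariaDB ODBC Driver
Database=sampledb
Socket=/var/run/mysqld/mysqld.sock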

MariaDB is now fully accessible from any ODBC application under the debian account.
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
HSQLDB
PostgreSQL
Oracle
Microsoft SQLServer for Linux
MongoDB

The article Installing the ODBC drivers for MariaDB appeared first on Blog dbi services.

Installing the ODBC drivers for Firebird

Yann Neuhaus - Thu, 2022-01-13 10:31

This article is part of a series that includes SQLite, PostgreSQL, Microsoft SQLServer, Oracle RDBMS, HSQLDB, MariaDB, MongoDB, and Excel. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented in a companion article. Refer to the SQLite article for installing the required ODBC Driver Manager.
The test system is a Debian 11 (bullseye).
As root, install the Firebird RDBMS from the official repositories (the ODBC drivers are installed separately further below):

# apt install firebird3.0-server

A systemd service was set up and launched:

# systemctl status firebird3.0.service
● firebird3.0.service - Firebird Database Server ( SuperServer )
     Loaded: loaded (/lib/systemd/system/firebird3.0.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2021-10-18 15:00:25 CEST; 50min ago
    Process: 546 ExecStart=/usr/sbin/fbguard -daemon -forever (code=exited, status=0/SUCCESS)
   Main PID: 576 (fbguard)
      Tasks: 4 (limit: 4659)
     Memory: 10.1M
        CPU: 75ms
     CGroup: /system.slice/firebird3.0.service
             ├─576 /usr/sbin/fbguard -daemon -forever
             └─577 /usr/sbin/firebird

Oct 18 15:00:23 debian systemd[1]: Starting Firebird Database Server ( SuperServer )...
Oct 18 15:00:25 debian systemd[1]: Started Firebird Database Server ( SuperServer ).

The service runs as the newly created firebird account:

# ps -ef | grep firebird
firebird   28053       1  0 15:15 ?        00:00:00 /usr/sbin/fbguard -daemon -forever
firebird   28054   28053  0 15:15 ?        00:00:00 /usr/sbin/firebird
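Optionally, confirm that the server is also listening on Firebird's default TCP port 3050 (not strictly required for this setup, as the DSN will point at a local database file, and assuming the default network configuration):

# ss -ltnp | grep 3050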

Create the symlink below so that the ODBC driver can find the Firebird client library under its legacy name libgds.so:

# ln -s /usr/lib/x86_64-linux-gnu/libfbclient.so.3.0.7 /usr/lib/x86_64-linux-gnu/libgds.so
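A quick check that the symlink resolves to the client library:

# ls -l /usr/lib/x86_64-linux-gnu/libgds.so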

Download and install the binary ODBC drivers for Firebird:

# wget -O OdbcFb-LIB-2.0.5.156.amd64.gz https://sourceforge.net/projects/firebird/files/firebird-ODBC-driver/2.0.5-Release/OdbcFb-LIB-2.0.5.156.amd64.gz/download
# gunzip OdbcFb-LIB-2.0.5.156.amd64.gz
# tar xvf OdbcFb-LIB-2.0.5.156.amd64
# cp libOdbcFb.so /usr/lib/x86_64-linux-gnu/odbc/.
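Before registering the driver, it is worth checking that its shared-library dependencies resolve, in particular the Firebird client library the libgds.so symlink was created for above:

# ldd /usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so | grep -i -E 'gds|fbclient|not found'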

Edit the odbcinst.ini file:

# vi /etc/odbcinst.ini
...
[Firebird]
Description=InterBase/Firebird ODBC
Driver=/usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so
Setup=/usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so
Threading=1
FileUsage=1
CPTimeout=
CPReuse=

Check the ODBC driver file /etc/odbcinst.ini:

# odbcinst -q -d
...
[Firebird]

# odbcinst -q -d -n Firebird
[Firebird]
Description=InterBase/Firebird ODBC
Driver=/usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so
Setup=/usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so
Threading=1
FileUsage=1
CPTimeout=
CPReuse=

As debian, create the sampledb database using the native administrative tool isql-fb:

$ cd $workdir
$ mkdir firebird
$ cd firebird
$ isql-fb -user SYSDBA -password 'SYSDBA'

Use CONNECT or CREATE DATABASE to specify a database

SQL> create database "/home/debian/odbc4gawk/firebird/sampledb.fb" page_size 8192 user SYSDBA password 'SYSDBA';
SQL> commit;
SQL> quit;
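The same creation could equally be scripted non-interactively by feeding the statements to isql-fb on stdin, for example (a sketch equivalent to the interactive session above):

$ isql-fb -user SYSDBA -password 'SYSDBA' <<'EOF'
create database "/home/debian/odbc4gawk/firebird/sampledb.fb" page_size 8192 user SYSDBA password 'SYSDBA';
commit;
quit;
EOF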

As there are no sample scripts specifically for Firebird, reuse the PostgreSQL ones to create the tables, but first replace the SERIAL keyword in the CREATE TABLE statements with the equivalent Firebird syntax, INT GENERATED BY DEFAULT AS IDENTITY. Save the PostgreSQL script as create_tables_postgresql.sql and use your favorite text editor, or even sed (a one-liner is shown after the vi example), for the substitution. With the vi editor, this could be done as shown below:

$ vi create_tables_postgresql.sql
:1,$s/ SERIAL / INT GENERATED BY DEFAULT AS IDENTITY /g
:w create_tables_firebird.sql
:q
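The same substitution can be done non-interactively with sed, writing the result straight to the Firebird script (assuming the file names used above):

$ sed 's/ SERIAL / INT GENERATED BY DEFAULT AS IDENTITY /g' create_tables_postgresql.sql > create_tables_firebird.sql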

Launch isql-fb again, this time connecting to the newly created database, and copy/paste the statements from create_tables_firebird.sql, as there are just a few of them:

$ isql-fb /home/debian/odbc4gawk/firebird/sampledb.fb -u sysdba -p sysdba
Database: /home/debian/odbc4gawk/firebird/sampledb.fb, User: SYSDBA
SQL> show tables;

There are no tables in this database
Paste the CREATE TABLE statements at the SQL> prompt. Then copy/paste the INSERT statements for the sample data and append a final commit; so that the inserts are persisted. Finally, check the data:

SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');
Output:
COUNTRY_NAME                             COUNTRY_ID COUNTRY_ID STREET_ADDRESS                           CITY                           
======================================== ========== ========== ======================================== =====================
China                                    CN                                                                          
United Kingdom                           UK         UK         8204 Arthur St                           London                         
United Kingdom                           UK         UK         Magdalen Centre, The Oxford Science Park Oxford                         
United States of America                 US         US         2014 Jabberwocky Rd                      Southlake                      
United States of America                 US         US         2011 Interiors Blvd                      South San Francisco            
United States of America                 US         US         2004 Charade Rd                          Seattle                        
SQL> 

Configure an ODBC DSN by editing the user’s DSN file:

$ vi ~/.odbc.ini
...
[myfbdb]
Description=Firebird
Driver=Firebird
Dbname=/home/debian/odbc4gawk/firebird/sampledb.fb
User=SYSDBA
Password=SYSDBA
Role=
CharacterSet=UTF8
ReadOnly=No
NoWait=No

Check it:

$ odbcinst -q -s
...
[myfbdb]
...

$ odbcinst -q -s -n myfbdb
[myfbdb]
Description=Firebird
Driver=Firebird
Dbname=/home/debian/odbc4gawk/firebird/sampledb.fb
User=SYSDBA
Password=SYSDBA
Role=
CharacterSet=UTF8
ReadOnly=No
NoWait=No

Try a connection to the Firebird database via ODBC:

$ isql -v myfbdb sysdba sysdba
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
Output:
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------------+
| country_name                            | country_id| country_id| street_address                          | city                |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------------+
| China                                   | CN        |           |                                         |                     |
| United Kingdom                          | UK        | UK        | 8204 Arthur St                          | London              |
| United Kingdom                          | UK        | UK        | Magdalen Centre, The Oxford Science Park| Oxford              |
| United States of America                | US        | US        | 2014 Jabberwocky Rd                     | Southlake           |
| United States of America                | US        | US        | 2011 Interiors Blvd                     | South San Francisco |
| United States of America                | US        | US        | 2004 Charade Rd                         | Seattle             |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------------+
SQLRowCount returns 6
6 rows fetched

Test the DSN with pyodbc:

$ python3
Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyodbc 
>>> con = pyodbc.connect('DSN=myfbdb;UID=SYSDBA;PWD=SYSDBA;DBNAME=sampledb;')
Segmentation fault
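When a driver crashes like this, enabling the unixODBC trace can help narrow down which ODBC call fails. This is a generic unixODBC facility, independent of the Firebird driver; the trace file path below is an arbitrary choice, and tracing should be switched off again afterwards as it slows everything down:

# vi /etc/odbcinst.ini
...
[ODBC]
Trace=Yes
TraceFile=/tmp/odbc.trace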

Although isql has no problem with them, it looks like the official Firebird ODBC drivers don't work well with pyodbc. The same behavior was later noticed with the gawk extension (see the article here …). For this reason, we tried the commercial drivers from Devart, available here. After installing them by following the clear and simple instructions and creating the DSN myfbdb_devart with those drivers, pyodbc worked fine.
After installation of those drivers, check the DSN definition:

$ odbcinst -q -s -n myfbdb_devart
[myfbdb_devart]
Description=Firebird
Driver=Devart ODBC Driver for Firebird
Database=/media/sf_customers/dbi/odbc4gawk/sampledb.fb
Port=3050
User=SYSDBA
Password=SYSDBA
Role=
CharacterSet=UTF8
ReadOnly=No
NoWait=No

And the drivers:

$ odbcinst -q -d -n "Devart ODBC Driver for Firebird"
[Devart ODBC Driver for Firebird]
Driver=/usr/local/lib/libdevartodbcfirebird.so

Retry the pyodbc module:

$ python3
Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyodbc
>>> cnxn = pyodbc.connect('DSN=myfbdb_devart;UID=SYSDBA;PWD=SYSDBA;DBNAME=sampledb;')
>>> cursor = cnxn.cursor()
>>> cursor.execute("""SELECT
...         c.country_name,
...         c.country_id,
...         l.country_id,
...         l.street_address,
...         l.city
... FROM
...         countries c
... LEFT JOIN locations l ON l.country_id = c.country_id
... WHERE
...         c.country_id IN ('US', 'UK', 'CN')""")
>>> row = cursor.fetchone()
>>> while row:
...     print (row)
...     row = cursor.fetchone()
... 
Output:
('China', 'CN', None, None, None)
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')
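As a side note, the row loop above could also be written more compactly with cursor.fetchall(), which is part of the same pyodbc API; continuing the session above and re-running a query, since the previous result set is already exhausted:

>>> cursor.execute("SELECT c.country_name, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN')")
>>> for row in cursor.fetchall():
...     print (row)
... 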

The Devart drivers work fine. However, let's try to recompile the official ODBC drivers for Firebird from their source code, available here. To compile them, follow the steps below:

# apt install unixodbc-dev
# apt install firebird-dev

Once the source archive is downloaded and untarred, move to the directory OdbcJdbc/Builds/Gcc.lin and compile the drivers:

$ cd OdbcJdbc/Builds/Gcc.lin
$ export FBINCDIR=/usr/include/firebird
$ export FBLIBDIR=/usr/lib/x86_64-linux-gnu
$ make -e -d -f makefile.linux

The compiled libraries are put in OdbcJdbc/Builds/Gcc.lin/Release_x86_64. As root, install the shared library libOdbcFb.so in its usual sub-directory and make it readable for everyone:

# cp OdbcJdbc/Builds/Gcc.lin/Release_x86_64/libOdbcFb.so /usr/lib/x86_64-linux-gnu/odbc/.
# chmod a+r /usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so
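Since the Driver entry in /etc/odbcinst.ini still points to the same path, no re-registration is needed. A quick sanity check that the new library is the one in place and that its dependencies still resolve (same commands as used earlier):

# odbcinst -q -d -n Firebird
# ldd /usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so | grep -i 'not found'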

With the recompiled drivers, isql still works and pyodbc still fails, but only when executing the SELECT statement. As shown later, the gawk extension no longer has any problem: it works with the Devart drivers as well as with the recompiled ones, so free drivers remain an option.
Apart from the pyodbc issue, Firebird is now fully accessible from any ODBC application under the debian account.
Instructions for the other data sources can be accessed through the following links:
SQLite
HSQLDB
MariaDB
PostgreSQL
Oracle
Microsoft SQLServer for Linux
MongoDB
Excel

The article Installing the ODBC drivers for Firebird appeared first on Blog dbi services.
