Feed aggregator

Zero Downtime Migration In Oracle Cloud Infrastructure (OCI)

Online Apps DBA - Sat, 2020-02-08 05:56

Oracle has recently launched a cloud migration utility named Zero Downtime Migration (ZDM) that helps you migrate your on-premises Oracle Database to an Oracle Cloud database (VM DB, ExaCS, Bare Metal, Exadata Cloud at Customer), with the aim of keeping the migration process as simple as possible and minimizing downtime on production workloads. If […]

The post Zero Downtime Migration In Oracle Cloud Infrastructure (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

New Feature: Exadata (ExaCs) DB System: Shared Database Homes

Online Apps DBA - Sat, 2020-02-08 04:11

Oracle recently announced that you can now add Oracle Database Homes to an existing Exadata DB system by using the Oracle Cloud Infrastructure (OCI) Console, the REST API, or the CLI (dbaascli). Check out K21 Academy’s blog post at https://k21academy.com/clouddba53 that covers: • What Is Oracle Database Home? • How To Manage DB Home In […]

The post New Feature: Exadata (ExaCs) DB System: Shared Database Homes appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How to load an XML file into an Oracle table and use an XML query to get the result

Tom Kyte - Fri, 2020-02-07 18:03
Hi Tom, I have an XML file from Web Report. There are some error records in the file, and it is difficult to find the errors in the XML file, so I want to load this file into an Oracle table and then use XML queries. Below is a sample file and the expected result. This is fir...
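
One common approach (a sketch only, with hypothetical file, directory and element names, since the sample file above is truncated) is to load the document into an XMLTYPE column via a directory object and then shred it with XMLTABLE:

create table xml_stage (doc xmltype);

-- assumes a directory object DIR_XML pointing at the file's location
insert into xml_stage
values (xmltype(bfilename('DIR_XML', 'report.xml'), nls_charset_id('AL32UTF8')));

-- query the document relationally; suspect records become rows you can inspect
select x.id, x.val
from   xml_stage s,
       xmltable('/report/row' passing s.doc
                columns id  number        path 'id',
                        val varchar2(100) path 'value') x;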
Categories: DBA Blogs

ORA-29279: SMTP permanent error: 550 XXXsmtpXX: Host xxx.xxx.xxx.xxx: No unauthenticated relaying permitted

Tom Kyte - Fri, 2020-02-07 18:03
Hi Tom, I want to send email through PL/SQL by using the UTL_MAIL package. I have completed the steps below: 1. Installed scripts @$ORACLE_HOME/rdbms/admin/utlmail.sql and @$ORACLE_HOME/rdbms/admin/prvtmail.plb 2. granted execute to us...
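
For reference: UTL_MAIL itself cannot authenticate to the SMTP server, so a 550 "no unauthenticated relaying" error usually means smtp_out_server must point at a relay that accepts mail from the database host (or you need UTL_SMTP, which supports AUTH). A minimal sketch, with a hypothetical relay host and addresses:

-- point the database at a relay that permits the DB host (hypothetical host)
alter system set smtp_out_server = 'relay.example.com:25' scope=both;

begin
  utl_mail.send(sender     => 'db@example.com',
                recipients => 'dba@example.com',
                subject    => 'Test from UTL_MAIL',
                message    => 'Hello');
end;
/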
Categories: DBA Blogs

Should I go for ODA 19.5 or should I wait until 19.6?

Yann Neuhaus - Fri, 2020-02-07 05:25
Introduction

As you may know, Oracle Database 19c has been available for new (X8-2) and older Oracle Database Appliances for several weeks. The current version is 19.5. But when you go to the official ODA documentation, it still proposes version 18.7 first, which is not compatible with 19c databases. Here is why.

19c database is the final 12.2

First of all, 19c is an important release because it’s the terminal release of 12.2, as 11.2.0.4 was for 11.2. Please refer to my other blog to understand the new Oracle versioning. ODA always supports new releases a few months after they become available on Linux, which is why 19c is only available on ODA now.

Drawbacks of 19.5

19.5 is available on your ODA, but you will not be able to patch to this version. The reason is quite simple: it’s not a complete patch. You can only download the ISO for reimaging, plus the 19c grid and database software, and that’s it. The reason there is not yet a patch lies in the difficulty of updating the OS: 19.5 runs on Linux 7.7, while all previous releases are stuck on Linux 6.10, so the patch would have to include the OS upgrade, and this jump is not so easy. That’s the first drawback.

The second drawback is that you cannot run another database version. If you still need 18c, 12.2, 12.1 or 11.2, this 19.5 is not for you.

The third drawback is that you will not be able to patch from 19.5 to 19.6 or a newer version, simply because 19.5 is a dead-end release.

Another drawback concerns the documentation, which is not yet complete: many parts are copied and pasted from 18.7. For example, the initctl command described for restarting the DCS agent does not actually exist on Linux 7.

Moreover, my first tests on this version show annoying bugs related to database creation, which are under investigation by Oracle.

When will 19.6 be ready?

19.6 is planned for 2020, yes, but which month? There is no official date; it could come in March or during the summer, nobody knows. As a result, you will have to wait for this patch to be released to start your migration to 19c on ODA.

So, what to do?

Three solutions are possible:

  • You can deal with your old databases until the patch is released: buy extended support for 11gR2/12cR1. Premier support is still OK for 12.2.0.1 and 18c.
  • Migrate your old 11gR2 and 12cR1 databases to 18c to be prepared for 19c and avoid buying extended support; the differences between 18c and 19c should be minimal.
  • Deploy 19.5 for testing purposes on a test ODA and start your migration project to get prepared for 19.6. Once it is available, patch or redeploy your ODAs and migrate all your databases.
Conclusion

Not having 19.6 now is really annoying. After all, we choose ODA because it makes getting updates easier. But you can still prepare everything for the 19c migration, by first migrating to 18c or by giving 19c a try with this 19.5 release.

The post Should I go for ODA 19.5 or should I wait until 19.6? appeared first on Blog dbi services.

ROLLBACK TO SAVEPOINT;

Yann Neuhaus - Tue, 2020-02-04 14:07
By Franck Pachot

I love databases and, rather than trying to compare and rank them, I like to understand their difference. Sometimes, you make a mistake and encounter an error. Let’s take the following example:
create table DEMO (n int);
begin transaction;
insert into DEMO values (0);
select n "after insert" from DEMO;
update DEMO set n=1/n;
select n "after error" from DEMO;
commit;
select n "after commit" from DEMO;

The “begin transaction” is not valid syntax in all databases, because transactions may be started implicitly, but the other statements are valid syntax in all the common SQL databases. They all raise an error during the update because there’s one row with N=0, and we cannot calculate 1/N: it’s a division-by-zero error. But what about the result of the last select?

If I run this with Oracle, DB2, MS SQL Server, MySQL (links go to examples in db<>fiddle), the row added by the insert is always visible by my session: after the insert, of course, after the update error, and after the commit (then visible by everybody).

The same statements run with PostgreSQL have a different result. You cannot do anything after the error except roll back the transaction. Even if you “commit”, it will roll back.

Yes, no rows are remaining there! Same code but different result.

You can have the same behavior as the other databases by defining a savepoint before the statement, and rollback to savepoint after the error. Here is the db<>fiddle. With PostgreSQL you have to define an explicit savepoint if you want to continue in your transaction after the error. Other databases take an implicit savepoint. By the way, I said “statement” but here is Tanel Poder showing that in Oracle the transaction is actually not related to the statement but the user call: Oracle State Objects and Reading System State Dumps Hacking Session Video – Tanel Poder’s blog
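
A minimal sketch of that explicit-savepoint pattern in PostgreSQL, using the DEMO table from above:

begin transaction;
insert into DEMO values (0);
savepoint before_update;
update DEMO set n=1/n;                 -- fails: division by zero
rollback to savepoint before_update;   -- undoes only the failed update
select n from DEMO;                    -- the inserted row is still visible
commit;                                -- the insert is persisted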

In Oracle, you can run multiple statements in a user call with a PL/SQL block. With PostgreSQL, you can group multiple statements in one command but you can also run a PL/pgSQL block. And with both, you can catch errors in the exception block. And then, it is PostgreSQL that takes now an implicit savepoint as I explained in a previous post: PostgreSQL subtransactions, savepoints, and exception blocks
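
As a sketch, a PL/pgSQL block catching the same error; the exception clause gives you the implicit savepoint, so the transaction survives the failed update:

do $$
begin
  update DEMO set n=1/n;
exception
  when division_by_zero then
    raise notice 'update rolled back to the implicit savepoint';
end $$;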

This previous post was on Medium (you can read https://www.linkedin.com/pulse/technology-advocacy-why-i-am-still-nomad-blogger-franck-pachot/ where I explain my blog “nomadism”), but as you can see I’m back on the dbi-services blog for my 500th post there.

My last post here was called “COMMIT” (https://blog.dbi-services.com/commit/), where I explained that I was quitting consulting for CERN to start something new. But even if I decided to change, I was really happy at dbi-services (as I mentioned in a LinkedIn post about great places to work). And when people like to work together it creates an implicit SAVEPOINT where you can come back if you encounter some unexpected results. Yes… this far-fetched analogy is just to mention that I’m happy to come back to dbi services, and this is where I’ll blog again.

As with many analogies, it reaches the limits of the comparison very quickly. You do not ROLLBACK a COMMIT and it is not a real rollback because this year at CERN was a good experience. I’ve met great people there, learned interesting things about matter and anti-matter, and went out of my comfort zone like co-organizing a PostgreSQL meetup and inviting external people ( https://www.linkedin.com/pulse/working-consultants-only-externalization-franck-pachot/) for visits and conferences. 

This “rollback” is actually a step further, but back in the context I like: solving customer problems in a company that cares about its employees and customers. And I’m not coming back to exactly the same “savepoint”. I was mostly focused on Oracle and I’m now covering more technologies in the database ecosystem. Of course, consulting on Oracle Database will still be a major activity. But today, many other databases are rising: NoSQL, NewSQL… Open Source is more and more relevant. And in this jungle, replication and federation technologies are rising too. I’ll continue to share on these areas and you can follow this blog, the RSS feed, and/or my Twitter account.

The post ROLLBACK TO SAVEPOINT; appeared first on Blog dbi services.

maxquerylen

Jonathan Lewis - Tue, 2020-02-04 09:50

The view v$undostat holds summary information about undo activity that can be used by the automatic undo mechanism to deal with optimising the undo retention time (hence undo space allocation). The view holds one row for every ten-minute interval in the last 4 days (96 hours) and includes two columns called maxquerylen and maxqueryid – which tell you something about the query that was considered to be the longest running query active in the interval.

In this note I want to explain why the contents of these two columns are sometimes (possibly often) completely irrelevant despite there being a few notes on the internet about how you should investigate them to help you decide on a suitable setting for the undo_retention.

The descriptions in the 19c reference manual for these columns are as follows:

  • maxquerylen – Identifies the length of the longest query (in seconds) executed in the instance during the period. You can use this statistic to estimate the proper setting of the UNDO_RETENTION initialization parameter. The length of a query is measured from the cursor open time to the last fetch/execute time of the cursor. Only the length of those cursors that have been fetched/executed during the period are reflected in the view.
  • maxqueryid – SQL identifier of the longest running SQL statement in the period

It would be hard to misunderstand the meaning of the second column – if the first column tells us that Oracle has spotted a “longest query” then the second column gives us the sql_id so we can check v$sql to find out what it was. But what sort of queries are going to show up as the “longest query” in the interval?

Here’s an example from a few hours of a nearly idle instance, querying the begin and end times (formatted to show just day of month + time), with the two critical columns,


select  begin_time, end_time, maxquerylen, maxqueryid 
from    v$undostat 
order by 
    begin_time
;

BEGIN_TIME	END_TIME    MAXQUERYLEN MAXQUERYID
--------------- ----------- ----------- -------------
...
04 10:50:18	04 11:00:18	      0
04 11:00:18	04 11:10:18	    356 f3yfg50ga0r8n
04 11:10:18	04 11:20:18	    883 25u1mbkcr9rnu
04 11:20:18	04 11:30:18	   1486 25u1mbkcr9rnu
04 11:30:18	04 11:40:18	   2090 25u1mbkcr9rnu
04 11:40:18	04 11:50:18	      0
04 11:50:18	04 12:00:18	   3299 25u1mbkcr9rnu
04 12:00:18	04 12:10:18	   3903 25u1mbkcr9rnu
04 12:10:18	04 12:20:18	   4507 25u1mbkcr9rnu
04 12:20:18	04 12:30:18	      0
04 12:30:18	04 12:40:18	      0
04 12:40:18	04 12:50:18	      0
04 12:50:18	04 13:00:18	      0
04 13:00:18	04 13:10:18	      0
04 13:10:18	04 13:20:18	      0
04 13:20:18	04 13:30:18	      0
04 13:30:18	04 13:37:27	   9035 25u1mbkcr9rnu

173 rows selected.

Notice, particularly, that the SQL_ID 25u1mbkcr9rnu disappears from the 11:40 interval, then re-appears at 11:50, then disappears again from 12:20 through 13:20 (lunchtime), then reappears again at 13:30. And when it reappears after an absence the query length has increased in a way that’s consistent with the gap. So it looks as if the query wasn’t running during the gap, but turns out to have been running after the gap ended. (Quantum queries, perhaps?)

The explanation is in the detail of the definition: “from the cursor open time to the last fetch/execute time”. From an SQL*Plus session I “set pause on” then executed the query “select * from all_objects” and hit return a few times to get a few pages of output. Then, spread over the next couple of hours, I hit return a few more times to get a few more pages of output. Each time I hit return my session does another fetch call, and the code behind v$undostat notices that my query is still active.

I don’t know exactly how Oracle is keeping track of “active” statements because there seem to be some inconsistencies in the reporting (and I’ll comment on those later), but as a first approximation, until you close a cursor (either explicitly or implicitly) some piece of Oracle’s code registers the fact that the query might do further fetches, which means it might need to apply more undo to make current data read-consistent with the cursor’s start SCN, which means that it should take note of the time the cursor has been open because the undo retention time might need to be that long.

Inconsistencies.

I said there were some inconsistencies in the reporting. I’ve noticed three anomalies – perhaps due to the extreme idleness of the instance I was using for testing.

  1. At about 12:45 I hit return a few times to get the maxquerylen and maxqueryid refreshed – but the code didn’t seem to notice that I had done a few more fetches of a long running query. So it seems to be possible for Oracle to miss the query that should be reported.
  2. At about 11:52 I hit return a few times again, and you can see that the two critical columns were refreshed, but if you do the arithmetic Oracle has attributed 600 seconds to the query – the whole of the 10 minute refresh interval, not just the time up to the moment of the last fetch that I had done in that interval.
  3. I didn’t hit return again until about 12:45 (the moment in point 1 above when the two columns didn’t refresh the way I thought they should), but the columns kept updating through the 12:00 and 12:10 intervals anyway before disappearing from the 12:20 interval. So it looks like queries can be reported as long running even when they haven’t been executing or fetching.

There is one last anomaly – and it relates to the reason I started looking at these columns. If you check the entry for 11:00 you’ll see that the SQL Id ‘f3yfg50ga0r8n’ has been reported as “running” for 356 seconds. But here’s what I found in v$sql for that query:


select  executions, parse_calls, fetches, end_of_fetch_count, elapsed_time, cpu_time, rows_processed, sql_text 
from    v$sql 
where   sql_id = 'f3yfg50ga0r8n'
;

EXECUTIONS PARSE_CALLS	  FETCHES END_OF_FETCH_COUNT ELAPSED_TIME   CPU_TIME ROWS_PROCESSED SQL_TEXT
---------- ----------- ---------- ------------------ ------------ ---------- -------------- ------------------------------------------
        79          79         79                  0        20487      10667             79 select obj# from obj$ where dataobj# = :1


The SQL looks like a sys-recursive statement which, in 79 executions, has accumulated 20 milliseconds of elapsed time (rather than 356 seconds – but that difference could just be related to one or other of the anomalies I reported earlier). The key thing to note is that the value of column end_of_fetch_count is zero: this looks like a statement where Oracle has simply not bothered to fetch all the data and has then not bothered to close the cursor. As a result it’s possible that each time the statement is executed (note that parse_calls = executions, so we’re not looking at a “held” cursor) the code behind v$undostat looks back at the time the cursor was initially opened to measure the query length, rather than looking at the time the statement was re-executed.

This may go a long way to answering the question that came up on Oracle-l earlier on today as follows:

The following query (sql_id is 89w8y2pgn25yd) was recorded in v$undostat.maxqueryid in a 12.2 database during a period of high undo usage: select ts# from sys.ts$ where ts$.online$ != 3 and bitand(flags,2048) != 2048;

select
        undoblks,txncount,maxquerylen,maxconcurrency,activeblks
from    v$undostat u
where   maxqueryid='89w8y2pgn25yd'
;

UNDOBLKS  TXNCOUNT  MAXQUERYLEN  MAXCONCURRENCY  ACTIVEBLKS
--------  --------  -----------  --------------  ----------
   39199      4027         1378               5     2531960

What is this query, and how does it manage to report a maximum query length of 1,378 seconds? Just like the one above it’s a sys-recursive query; this one appears when you query dba_tablespaces – and it executes once for every row, taking just a fraction of a second each time. But if you trace a query like “select tablespace_name from dba_tablespaces” you’ll see that every time the query is called the trace file will show lines for: “Parse, Exec, Fetch, Close” until the last call – which doesn’t report a “Close”.

Just like my original “select * from all_objects” there’s a query dangling with an open cursor, leaving Oracle with the opportunity to go back to the moment it was opened and report it as a “long running query”.

tl;dr

The maxquerylen and maxqueryid in v$undostat don’t tell you about statements that have taken a long time to change a lot of data and generate a lot of undo; they tell you about statements that might need to apply a lot of undo to see read-consistent data and therefore might become victims in an “ORA-01555: snapshot too old” event.

The statements reported may also be completely irrelevant, for various reasons (some, possibly, bug-related): a cursor may be treated as if it opened a long time in the past when it was actually a re-use of an existing “dangling” cursor. It’s also possible that some bad client code will treat cursors in a way that does no harm to the client program but hides a more interesting query that would otherwise have been reported by these two columns.

Select returns rows that it should not

Tom Kyte - Tue, 2020-02-04 09:02
Hi. I have a reproduction of strange behavior of a select statement. Reproduction: --drop table test_a1 --drop table test_a2 create table test_a1 ( id1 number(19), value1 number(1) ) / create table test_a2 ( id2 number(19), v...
Categories: DBA Blogs

Creating index for interval values

Tom Kyte - Tue, 2020-02-04 09:02
Hi! Is it possible to create an index with some kind of "between" option? I have a dimension-table with 30 columns where the unique key is a combination of 2 columns (from_minute, to_minute). Primary key is a unique sequence. Total number of ...
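
The question is truncated, but the usual starting point (a sketch using the column names from the question and a hypothetical table name) is a composite index that lets the optimizer range-scan on one bound and filter on the other:

create index dim_minute_range_ix on dim_table (from_minute, to_minute);

select *
from   dim_table
where  :m between from_minute and to_minute;
-- access predicate: from_minute <= :m (range scan); filter: to_minute >= :m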
Categories: DBA Blogs

Archiving Using DBMS_DATAPUMP

Tom Kyte - Tue, 2020-02-04 09:02
I have a requirement to archive historical data, that is, delete data from a number of tables that is more than 36 months old, but only a subset of data; for argument's sake let's call them quotes given to customers that have not been taken up. We curre...
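
For context, a hedged DBMS_DATAPUMP sketch of that kind of subset export; the table name, directory and 36-month predicate are illustrative, not the poster's actual schema:

declare
  h     number;
  state varchar2(30);
begin
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE');
  dbms_datapump.add_file(h, 'quotes_archive.dmp', 'DATA_PUMP_DIR');
  dbms_datapump.metadata_filter(h, 'NAME_EXPR', 'IN (''QUOTES'')');
  dbms_datapump.data_filter(h, 'SUBQUERY',
      'WHERE quote_date < add_months(sysdate, -36)', 'QUOTES');
  dbms_datapump.start_job(h);
  dbms_datapump.wait_for_job(h, state);
end;
/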
Categories: DBA Blogs

How many not-guaranteed restore points can I use at a given point?

Tom Kyte - Tue, 2020-02-04 09:02
Hello! We have a development database where we won't do any backups of the database. Our method to recreate it will be RMAN duplicate from the production one. But we want to give the option to our development team to do a flashback of that database...
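
For illustration, the basic mechanics of a normal (not guaranteed) restore point; note that it is only usable while its SCN remains inside the flashback window, so flashback logging must be enabled. The restore point name is hypothetical:

create restore point before_refresh;
-- ... development work ...
shutdown immediate
startup mount
flashback database to restore point before_refresh;
alter database open resetlogs;
drop restore point before_refresh;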
Categories: DBA Blogs

How to tune a create table that runs in a loop

Tom Kyte - Tue, 2020-02-04 09:02
Hi All, can someone help me to tune this piece of code? It is taking 45 minutes to complete the execution as of now. We have tried taking off the loop and creating 5 different create statements but gained just 2 minutes. Each of the table have and...
Categories: DBA Blogs

Truncate Multiple tables in a single line statement.

Tom Kyte - Tue, 2020-02-04 09:02
My question: Is it possible to truncate multiple tables in a single line statement? Firstly, thanks for giving this opportunity to ask a question. I have developed a code to truncate multiple tables in a single line statement. Please refer the ...
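
There is no multi-table TRUNCATE in Oracle SQL, so the usual workaround is dynamic SQL in a PL/SQL block; a sketch with an illustrative table list:

begin
  for t in (select table_name
            from   user_tables
            where  table_name in ('T1', 'T2', 'T3'))
  loop
    execute immediate 'truncate table "' || t.table_name || '"';
  end loop;
end;
/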
Categories: DBA Blogs

Getting error -12842 : ORA-12842: CURSOR INVALIDATED DURING PARALLEL EXECUTION

Tom Kyte - Tue, 2020-02-04 09:02
Below is my scenario. We are getting the error randomly in production. When we restart the process, the error disappears. We are not able to reproduce it even in a lower environment. Can you please help in letting us know the potential issue with the insert statement ...
Categories: DBA Blogs

How to retrieve 200 million records into a txt file using a select query?

Tom Kyte - Tue, 2020-02-04 09:02
Hi Tom, We have 200+ million records in our table, and we have to retrieve all the records into a text file without impacting performance. I tried using DBMS_PARALLEL_EXECUTE but this doesn't have a return type. Is there a way to get all millions of...
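
A common low-overhead alternative is a straight SQL*Plus spool with a large array size; a sketch in which the table, columns and parallel degree are hypothetical:

set termout off
set feedback off
set heading off
set pagesize 0
set trimspool on
set arraysize 5000
spool /tmp/big_table.txt
select /*+ parallel(t 8) */ col1 || '|' || col2 from big_table t;
spool off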
Categories: DBA Blogs

EDB PEM – Monitor your Postgres cluster

Yann Neuhaus - Tue, 2020-02-04 01:00

In my last post, I explained how to set up an HA Postgres cluster using EDB Postgres Advanced Server and Failover Manager. As a next step, we want to install the EDB Postgres Enterprise Manager to monitor what we set up before.

Introduction

There are, of course, many good tools to monitor your Postgres cluster, but if you run your Postgres cluster using EDB tools, you should really think about using EDB Postgres Enterprise Manager. It allows you to monitor EDB Postgres Advanced Server clusters as well as open source Postgres clusters. But PEM is not only a nice GUI: it also alerts you, and you can tune your clusters from one single point.

Installation

Before you start with the installation of PEM, you need a PostgreSQL cluster on a host.
Once you have it, you can go on with the installation of edb-pem-server.

$ sudo yum install wxBase mod_wsgi mod_ssl edb-pem-server -y

Afterwards you need to set a password for the enterprisedb user in your cluster:

$ psql
postgres=# alter user enterprisedb with password '******';
ALTER ROLE
postgres=# \q

That’s it. So let’s go on with the configuration.

Configuration

EDB delivers the PEM Server with a nice script to configure the server. This makes it really easy!

$ sudo /usr/edb/pem/bin/configure-pem-server.sh
------------------------------------------------------
 EDB Postgres Enterprise Manager
 -----------------------------------------------------
Install type: 1:Web Services and Database, 2:Web Services 3: Database [ ] :1
Enter local database server installation path (i.e. /usr/edb/as10 , or /usr/pgsql-10, etc.) [ ] :/usr/edb/as11
Enter database super user name [ ] :enterprisedb
Enter database server port number [ ] :5444
Enter database super user password [ ] :
Please enter CIDR formatted network address range that agents will connect to the server from, to be added to the server's pg_hba.conf file. For example, 192.168.1.0/24 [ 0.0.0.0/0 ] :192.168.22.53/32
Enter database systemd unit file or init script name (i.e. edb-as-10 or postgresql-10, etc.) [ ] :edb-as-11
Please specify agent certificate path (Script will attempt to create this directory, if it does not exists) [ ~/.pem/ ] :
CREATE EXTENSION
[Info] Configuring database server.
CREATE DATABASE
..
..
[Info] Configured the webservice for EDB Postgres Enterprise Manager (PEM) Server on port '8443'.
[Info] PEM server can be accessed at https://192.168.22.53:8443/pem at your browser

Now you can try to access PEM with your web browser.

Agent installation and configuration

Once the PEM server is set up, you need to install edb-pem-agent on all hosts that you want to monitor.

$ sudo yum install edb-pem-agent -y

Be sure to have the pg_hba.conf entries correct on the agent and PEM nodes.
As soon as the installation is finished, you can register the agent with the PEM server. For this you need the IP, the port and the user for PEM.

$ sudo bash
$ export PEM_MONITORED_SERVER_PASSWORD=edb
$ export PEM_SERVER_PASSWORD=edb
$ /usr/edb/pem/agent/bin/pemworker --register-agent --pem-server 192.168.22.53 --pem-port 5400 --pem-user enterprisedb --allow_server_restart true --allow-batch-probes true --batch-script-user enterprisedb 

As a last step you need to add the configuration to agent.cfg and start/enable the pemagent service.

$ echo "allow_streaming_replication=true" >> /usr/edb/pem/agent/etc/agent.cfg
$ echo "ca_file=/usr/libexec/libcurl-pem/share/certs/ca-bundle.crt" >> /usr/edb/pem/agent/etc/agent.cfg
$ systemctl enable pemagent
$ systemctl start pemagent

Now you can have a look at the PEM Dashboard and you will see the registered agents.

As a next step you can add the PostgreSQL clusters to PEM; as I already explained that here, I won't go into it again.

Conclusion

Now you can enjoy the full monitoring experience using PEM, as well as all the metrics nicely put into graphs.

The post EDB PEM – Monitor your Postgres cluster appeared first on Blog dbi services.

SQL query join by GUID

Tom Kyte - Tue, 2020-02-04 00:02
Hello, Ask Tom Team. I have to create a report to show some business data. The data to show is stored in two different Oracle databases. I have to write a SQL query joining the two databases (dblink). The two databases were designed for very diffe...
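
The question is truncated, but a frequent snag when joining on GUIDs across a database link is a RAW(16) column on one side and a formatted VARCHAR2 on the other; a hedged sketch with hypothetical names:

select l.order_id, r.status
from   local_orders l
join   remote_orders@remote_db r
  on   rawtohex(l.guid_raw) = r.guid_char;   -- normalise RAW(16) to hex text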
Categories: DBA Blogs

AZORA with Viscosity lunch and learn Thursday 2/13/20

Bobby Durrett's DBA Blog - Mon, 2020-02-03 17:51

Quick note. The next AZORA meetup will be next Thursday, 2/13/2020.

Kickstart to the Cloud Begin Your Journey and Optimize Along the Way

Thursday, Feb 13, 2020, 11:00 AM

The Henry
4455 E Camelback Rd Phoenix, AZ

2 AZORAS Attending

Register for one of two Lunch & Learns at The Henry in Phoenix, or stay for both presentations! Seating is limited and an RSVP is required. To RSVP for Session 1, Session 2 or the entire event go to https://tips.viscosityna.com/modernize2020-phoenix/ and complete the form and make your selection. Oracle continues to be the fastest, state-of-the-art…


Be sure to RSVP on Viscosity’s web site: https://tips.viscosityna.com/modernize2020-phoenix/

I don’t know about you, but we are busy moving to or implementing on 19c, so these sessions related to 19c and other topics should be valuable.

Bobby

Categories: DBA Blogs

Oracle Analytics Server is here

Rittman Mead Consulting - Mon, 2020-02-03 09:47

Oracle has been talking about this product for months and we’re pleased to tell you the wait is over (for Linux users anyway): Oracle Analytics Server (OAS) 5.5.0 is now available for download on eDelivery (search for Oracle Analytics Server).

Oracle Fusion Middleware needs to be downloaded separately

If your organisation uses Oracle technology for data analytics, then the likelihood is you’ve heard of Oracle Analytics Cloud (OAC). You may have also come across the latest addition to the family, OAS, the new on-prem version of OAC, set to eventually replace OBIEE.


The umbrella term, Oracle Analytics, now includes:

  • Oracle Analytics Server (OAS)
  • Oracle Analytics Cloud (OAC)
  • Oracle Analytics for Applications (OAX)

Whilst OAC is the jewel in the crown, and will receive regular quarterly updates, these updates will be reflected in OAS. You’ll be pleased to hear current OBIEE users will be automatically licensed for OAS - the logic behind this is that OAS becomes a stepping stone in your journey to using Cloud. If you’re buying OAS new, the licensing model is the same as the current OBIEE model.

OAS looks almost the same as OAC, minus some features, like the Natural Language Generator. This feature generates explanations of your visualisations in 28 different languages and will probably be included in a later version of the tool.  

How does OAS compare to OBIEE and OAC?

  • Licensing: OAS now includes options like Data Visualization (DV) and Mobile which were previously considered extras.
  • Data Visualization: Oracle’s self-service visualisation tool does what it says on the tin: it allows you to decipher your enterprise data with intelligent visuals, and it now includes almost all the new features available in OAC. A big step forward compared to the DV version available in the latest OBIEE.
  • Data Flows: clean and transform your data via a GUI-based tool without leaving your analytical platform.
  • Machine Learning: all the goodies related to “one-click forecast” or “Explain” and the full ML capabilities are now included in the on-prem Oracle Analytics edition!
  • Configuration Options: OAS provides the “OBIEE”-type configuration options, where you can tweak each component individually.

Oracle’s aim is for users to achieve “100% data literacy”, and it plans to do this via its vision for analytics: augmented, collaborative and integrated. OAS really plays into this strategy, allowing users to employ data science and machine learning techniques to both analyse current trends and predict future ones (find out more in this blog post).



Talk to us about how to migrate from OBIEE to OAS or OAC. We can help you with every deployment scenario including on-prem, hybrid, full public cloud, or a mix and match of these suited to your needs. Email us: info@rittmanmead.com to arrange a chat with one of our team.

Categories: BI & Warehousing

[New Feature] Backup Boot Volume Across Region In Oracle Cloud (OCI)

Online Apps DBA - Mon, 2020-02-03 07:26

[New Feature] Backup Boot Volume Across Region In Oracle Cloud (OCI): a boot volume holds the operating system and its supporting files. In practice, we can now migrate a complete instance to another region after taking a backup of the boot and block volumes. Check out K21Academy’s post at https://k21academy.com/oci57 which covers: • Boot Volume Backup Overview […]

The post [New Feature] Backup Boot Volume Across Region In Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Pages

Subscribe to Oracle FAQ aggregator