Oracle Service Cloud (OSvC) comes with a data import capability that allows us to migrate data from legacy systems or databases into the OSvC database in the cloud.
Data can be migrated using data files with various types of delimiters – the most commonly used is CSV (comma-separated values).
It is possible to import data into most primary standard and custom objects, and associated secondary objects.
We can use Data Mapping Templates to map the columns in our files to the fields in the OSvC database, and also to set duplicate criteria.
To import the data, we can use the easy-to-use and intuitive Data Import Wizard, which not only allows import but also reports on success/failure.
It is a great and useful feature, but it has its limitations. Some important ones to bear in mind are:
- Opportunity and Task objects are not supported. If you want to import records to these two objects you will need to use the APIs.
- Importing data into associated (secondary) objects – e.g. Message Threads on Incidents, Notes on Contacts – is allowed on create but ignored on update.
- Products and Categories fields are not available for mapping when you are importing Answers. These have to be updated manually after import.
- Special characters (e.g. apostrophes, commas) or words (e.g. “Union”) might cause record import to fail when they exist in lookup fields (e.g. Contact Email, Organisation Name).
- Due to the Incident reference number format (YYYYMMDD-xxxxxx), we are limited to importing 999,999 Incidents per day.
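Because rows with risky lookup values only fail once the import is already running, it can save time to pre-screen data files before loading them. The sketch below is our own illustration, not an Oracle tool: the function name `flag_risky_rows`, the column names, and the exact character and word lists are assumptions you would adapt to your own data.

```python
import csv

# Characters and words we have seen break imports when they appear in
# lookup fields (apostrophes, commas, the word "Union") -- adjust as needed.
RISKY_CHARS = {"'", ","}
RISKY_WORDS = {"union"}

def flag_risky_rows(path, lookup_columns):
    """Return (row_number, column, value) tuples for rows likely to fail import."""
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        # Data rows start at 2 because row 1 is the header.
        for n, row in enumerate(reader, start=2):
            for col in lookup_columns:
                value = row.get(col) or ""
                if any(c in value for c in RISKY_CHARS) or any(
                    w in value.lower().split() for w in RISKY_WORDS
                ):
                    flagged.append((n, col, value))
    return flagged
```

Running this over a file before import gives you a list of rows to clean up (or to route through the APIs instead), rather than discovering them one failed import at a time.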
Apart from that, the Data Import capability seems to work well.
Data Import Bug
We recently bumped into a bug… While we were importing data into the OSvC instances of two Capventis customers, the system started creating loads of duplicates, forcing us to stop the import, delete the records, and import again.
Unfortunately this happened again and again, and we were not even using loads of data. Our files contained a few thousand to a couple of tens of thousands of records. When trying to import a file with 5,000 records, the system was creating 15,000 or 20,000, and kept going if we didn’t cancel the import.
We reported it to Oracle and, after many hours on the phone and emails exchanged with Oracle Support, they finally recognised the bug and promised to resolve it in the next few patches or releases. Hopefully they will resolve it soon, as this is crucial to all our projects.
In the meantime, the workaround we found was to break the data files down into batches of 500 records each. Alternatively, create tools that read the CSV files and use the APIs to import the data.
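The first workaround is easy to script. The following is a minimal sketch, assuming a plain CSV file with a single header row; the 500-record batch size comes from the workaround above, while the function names (`split_csv`, `_write_batch`) and output file naming are our own invention.

```python
import csv
from pathlib import Path

def split_csv(source, out_dir, batch_size=500):
    """Split a CSV file into files of at most `batch_size` data rows.

    The header row is repeated in every output file so each batch can be
    imported on its own through the Data Import Wizard.
    Returns the list of batch files written.
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    with open(source, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        batch, index = [], 1
        for row in reader:
            batch.append(row)
            if len(batch) == batch_size:
                written.append(_write_batch(out_dir, index, header, batch))
                batch, index = [], index + 1
        if batch:  # flush the final, partial batch
            written.append(_write_batch(out_dir, index, header, batch))
    return written

def _write_batch(out_dir, index, header, rows):
    path = out_dir / f"batch_{index:03d}.csv"
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
    return path
```

A 5,000-record file then becomes ten 500-record files, each imported (and, if the bug strikes, rolled back) independently, which keeps the blast radius of a duplicate run small.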
Data Import Performance
Sometimes the data import process via the Data Import Wizard can be slower than usual – obviously depending on the type and amount of data in your data files.
This can be fixed or improved. During the data import process, the data in the files is divided into batches that are processed one at a time.
If you feel the process is slower than usual, you may want to reduce the number of records in each batch by changing the value of the DATA_IMPORT_BATCH_LIMIT configuration setting.
The DATA_IMPORT_BATCH_LIMIT configuration setting limits the number of records processed in a single batch when performing a data import. The maximum is 5,000 and the default is 1,000.