Sometimes, before diving headfirst into code and technical specifics, it’s beneficial to step back and see the bigger picture. In this series of articles, we’ll explore how to integrate OpenAI’s capabilities with Oracle APEX to build a robust and customized file upload and management process. The goal is to facilitate secure and efficient interaction with the OpenAI API by leveraging the powerful tools of Oracle APEX while keeping a user-centric approach in mind.
In this first installment, we’ll focus on laying the groundwork for the integration: setting up a REST Data Source that connects to OpenAI’s /v1/files endpoint. This is a crucial first step, as it will enable us to display and manage information about the uploaded files (such as their IDs, statuses, and purposes) within an Interactive Report in Oracle APEX. Before we dive into the technical details, we’re establishing the framework necessary to build a solution that not only meets technical requirements but also offers an intuitive and friendly user experience.
Throughout this series, we’ll cover everything from file validation and pre-processing to error handling and adapting the API for specialized tasks like fine-tuning or batch processing. By balancing functionality and usability, we’ll discover how to optimize the interaction between enterprise applications and AI technologies.
The first step in integrating OpenAI with Oracle APEX is creating a REST Data Source that connects to the /v1/files endpoint. This will allow us to retrieve and display information about uploaded files (e.g., their IDs, statuses, and purposes) in an Interactive Report within Oracle APEX.

REST Data Source GIA_OPENAI_FILES
We will use the REST Data Source GIA_OPENAI_FILES to create an Interactive Report in Oracle APEX. This report will display key fields from the /v1/files endpoint, such as File ID, File Name, Purpose, Status, File Size (KB), and Created Date.
Transformations will be applied, including converting the file size from bytes to kilobytes and formatting timestamps into readable date formats. Additional features like sorting, filtering (e.g., by Status or Purpose), and conditional formatting will be configured to enhance usability and provide a user-friendly interface for managing files retrieved from OpenAI.
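As a rough illustration (not taken from the article), if the REST Data Source is synchronized to a hypothetical local table GIA_OPENAI_FILES_LOCAL, a report query applying these transformations might look something like this:

```sql
-- Sketch only: assumes the REST Data Source is synchronized to a hypothetical
-- local table GIA_OPENAI_FILES_LOCAL whose columns follow the /v1/files
-- response attributes (id, filename, purpose, status, bytes, created_at).
select id                                  as file_id,
       filename                            as file_name,
       purpose,
       status,
       round(bytes / 1024, 2)              as file_size_kb,
       to_char(
         timestamp '1970-01-01 00:00:00'
           + numtodsinterval(created_at, 'SECOND'),   -- created_at is a Unix epoch
         'DD-MON-YYYY HH24:MI')            as created_date
  from gia_openai_files_local
 order by created_at desc
```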
One of my passions is psychology, and I’ve uploaded a couple of psychology books and an image to the repository for analysis by OpenAI. With the proper design, the user interface now displays the uploaded files in an organized and visually appealing format, showing essential details such as file names, purposes, and statuses.

user interface displays uploaded files
Additionally, I’ve added a button to prepare files for upload through a modal window, which allows users to validate and configure files before sending them to OpenAI. This setup ensures a smooth and user-friendly experience while leveraging OpenAI’s capabilities to analyze the uploaded content.

modal window to upload file
When the Upload button is clicked, a modal window will appear, allowing users to select the purpose of the file they wish to upload. The available purposes include:
- Assistants and Message: This purpose is for uploading files with structured text content, such as messages or conversations, for analysis or AI-powered interaction. Examples include extracting insights, sentiment analysis, or preparing data for conversational AI tasks. File formats like TXT or CSV are typically used for this purpose.
- Assistants Image: For uploading image files to perform analysis tasks like object detection, classification, or extracting text (OCR). This purpose enables AI-driven processing of visual data to extract meaningful insights. Supported formats often include JPG or PNG.
- Batch API: This is designed for processing large datasets in bulk. Files must be in JSONL format (JSON Lines), where each line represents an individual data entry. This purpose is ideal for use cases like analyzing extensive customer datasets or processing structured data at scale.
- Fine-Tuning: Fine-tuning allows users to train custom AI models using their own data. Files must be in JSONL format, with properly formatted entries containing the prompt and completion fields required for model training. This purpose is critical for tailoring OpenAI models to specific business or project needs.
Each option is clearly explained in the modal, ensuring users understand the requirements for their selected purpose. For example, Batch API and Fine-Tuning will validate that the file is in JSONL format before allowing the upload to proceed. This ensures proper handling of file types and formats for OpenAI’s API capabilities.
In the modal window, the second field shown, File Object, refers to the file the user will upload. Once the user selects the file and sets the purpose, clicking the Create button will trigger a process to handle the upload. This process will validate and prepare the file before sending it to OpenAI’s API for the specified purpose.
Let’s take a look at the procedure. If we divide the file upload process into smaller parts, each part plays a critical role in ensuring smooth integration with OpenAI’s API. We’ll go through them step by step with the relevant code snippets:
1. File Retrieval
The first step is to retrieve the file from the APEX temporary storage table (APEX_APPLICATION_TEMP_FILES). This ensures that the file selected by the user exists and can be processed further.
Code Example:
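The original snippet isn’t reproduced here, so this is a minimal sketch, assuming the page item and text-message names described below:

```sql
DECLARE
  l_file_name VARCHAR2(400);
  l_blob      BLOB;
BEGIN
  -- Fetch the file the user selected from APEX temporary storage
  SELECT filename, blob_content
    INTO l_file_name, l_blob
    FROM apex_application_temp_files
   WHERE name = :P20020_FILE_OBJECT;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    -- GIA_FILE_NOT_FOUND is the application's text message for this error
    raise_application_error(-20001, apex_lang.message('GIA_FILE_NOT_FOUND'));
END;
```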
- The file’s name and content (FILENAME and BLOB_CONTENT) are retrieved using the unique file object identifier (:P20020_FILE_OBJECT).
- If the file doesn’t exist, an error message (GIA_FILE_NOT_FOUND) is displayed, and the process terminates.
2. File Extension Extraction
Next, the filename is analyzed to extract its extension. This is critical for determining whether the file format is valid for the selected purpose.
Code Example:
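GET_FILE_EXTENSION is the application’s own helper; a sketch of how such a function might be implemented, based on the behaviour described below:

```sql
-- Sketch of a GET_FILE_EXTENSION helper
FUNCTION get_file_extension (p_filename IN VARCHAR2) RETURN VARCHAR2
IS
BEGIN
  -- No dot in the name: the file has no extension
  IF p_filename IS NULL OR INSTR(p_filename, '.') = 0 THEN
    RETURN NULL;
  END IF;
  -- Everything after the last dot, lower-cased to simplify later comparisons
  RETURN LOWER(SUBSTR(p_filename, INSTR(p_filename, '.', -1) + 1));
END get_file_extension;
```

In the upload process the result is kept in a local variable, e.g. l_file_extension := get_file_extension(l_file_name);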
- The function GET_FILE_EXTENSION extracts the file extension by locating the last dot (.) in the filename.
- If the filename doesn’t contain an extension, it returns NULL.
3. Validation
The extracted file extension is validated against the selected purpose to ensure compatibility. For example, files intended for fine-tune and batch must be in JSONL format.
Code Example:
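The full rule set isn’t shown in this excerpt; a sketch of the JSONL check described above (the message name GIA_INVALID_FILE_FORMAT is hypothetical):

```sql
-- Sketch: fine-tune and batch uploads must be JSONL files
IF :P20020_PURPOSE IN ('fine-tune', 'batch')
   AND NVL(l_file_extension, 'none') <> 'jsonl'
THEN
  -- Hypothetical text message name for the validation error
  raise_application_error(-20002, apex_lang.message('GIA_INVALID_FILE_FORMAT'));
END IF;
```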
- The purpose (:P20020_PURPOSE) is checked against the file extension (l_file_extension) using specific rules.
- If the validation fails, an error message is displayed, and the process stops.
4. Multipart Request Construction
After validation, the file data and its purpose are added to a multipart request. This ensures that the data is sent in the format expected by OpenAI’s API.
Code Example:
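A sketch of the multipart construction using APEX_WEB_SERVICE.APPEND_TO_MULTIPART, with the variables carried over from the earlier steps:

```sql
DECLARE
  l_multipart apex_web_service.t_multipart_parts;
BEGIN
  -- File content goes into the part named "file"
  -- (l_file_name and l_blob were populated in step 1)
  apex_web_service.append_to_multipart(
    p_multipart    => l_multipart,
    p_name         => 'file',
    p_filename     => l_file_name,
    p_content_type => 'application/octet-stream',
    p_body_blob    => l_blob);

  -- The selected purpose goes into the part named "purpose"
  apex_web_service.append_to_multipart(
    p_multipart => l_multipart,
    p_name      => 'purpose',
    p_body      => :P20020_PURPOSE);

  -- l_multipart is now ready to be turned into the request body (next step)
END;
```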
- The APEX_WEB_SERVICE.append_to_multipart procedure is used to include both the file content (l_blob) and its purpose (:P20020_PURPOSE) in the request body.
- Each piece of data is added with its respective name (file or purpose).
5. REST API Call
Once the multipart request is ready, it is sent to OpenAI’s /v1/files endpoint using a POST method. The response from the API is stored for further processing.
Code Example:
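A sketch of the call itself; the Web Credential name OPENAI_CREDENTIALS (holding the Bearer API key) is an assumption:

```sql
-- l_multipart was built in step 4; l_response is declared as a CLOB
l_response := apex_web_service.make_rest_request(
  p_url                  => 'https://api.openai.com/v1/files',
  p_http_method          => 'POST',
  p_body_blob            => apex_web_service.generate_request_body(l_multipart),
  p_credential_static_id => 'OPENAI_CREDENTIALS');  -- hypothetical credential name
```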
- The APEX_WEB_SERVICE.generate_request_body function constructs the final request body from the multipart parts.
- The APEX_WEB_SERVICE.make_rest_request function sends the request to OpenAI’s endpoint and stores the response in l_response.
6. Error Handling
If the API call fails or returns an error, appropriate feedback is provided to the user. Success responses display a confirmation message.
Code Example:
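A sketch of the response check, assuming OpenAI’s error payload contains an "error" object and using the text messages named below:

```sql
-- Treat a non-200 status or an "error" object in the JSON payload as a failure
apex_json.parse(l_response);
IF apex_web_service.g_status_code <> 200
   OR apex_json.does_exist('error')
THEN
  raise_application_error(-20003, apex_lang.message('GIA_UPLOAD_FAILED'));
END IF;

-- On success, the page process's success message shows the
-- GIA_FILE_UPLOAD_SUCCESS text to the user.
```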
- If l_response contains an error, the process stops, and an error message (GIA_UPLOAD_FAILED) is shown.
- If successful, a confirmation message (GIA_FILE_UPLOAD_SUCCESS) is displayed.
With all this implemented, we can close the dialog once the procedure has executed and capture the dialog’s closure to trigger a refresh of the report. This ensures that the Interactive Report is updated with the information for the newly uploaded file.
Thank you for reading Part 1. Stay tuned for Part 2, where we’ll dive even deeper into customizing the integration and further enhancing the file management process.
For more information, please visit our company website or find us on APEX World for additional resources and updates!