Solving a Real-World Performance Bottleneck with Parallel Programming in ABAP
When I was responsible for the CRM Fiori application, I once met a performance issue.
When users perform synchronization for the first time on their mobile devices, the opportunities belonging to them are downloaded to the device, the so-called initial load phase. The downloaded data includes attachment header information.
Since there is no mass-enabled API for attachment read in CRM, we have to call the single-read API inside a LOOP, which means the reads are performed sequentially:
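A minimal sketch of that sequential version, assuming the same wrapper class `zcl_crm_attachment_tool` and its method `get_attachments_origin` used later in this post (the method name `sequential_read` is my own choice for the test code):

```ABAP
METHOD sequential_read.
  DATA(lo_tool) = NEW zcl_crm_attachment_tool( ).
  " One API call per opportunity: N orders mean N sequential round trips
  LOOP AT it_orders ASSIGNING FIELD-SYMBOL(<order>).
    DATA(lt_single) = lo_tool->get_attachments_origin(
      VALUE crmt_object_key_t( ( <order> ) ) ).
    APPEND LINES OF lt_single TO rt_attachments.
  ENDLOOP.
ENDMETHOD.
```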
We really suffered from this poor performance. As explained in my blog What should an ABAPer continue to learn as an application developer, I drew inspiration from the concept of parallel computing, which is usually associated with functional programming languages. A function that has no side effects and only operates on immutable data is a good candidate for concurrent execution. Looking back at my performance issue, the requirement to read opportunity attachment header data fits these criteria perfectly: it is read-only access to header data, and each read is independent of the others, so there are no side effects. As a result, it was worth trying to rewrite the read implementation as a parallel version.
The idea is simple: split the opportunities to be read into several parts and spawn new ABAP sessions via the keyword STARTING NEW TASK, with each session responsible for a dedicated part.
The screenshot below shows an example in which 100 opportunities in total are divided into 5 subgroups, handled by 5 ABAP sessions; each session reads the attachments of 100 / 5 = 20 opportunities.
The parallel read version:
```ABAP
METHOD parallel_read.
  DATA: lv_taskid        TYPE c LENGTH 8,
        lv_index         TYPE c LENGTH 4,
        lv_current_index TYPE int4,
        lt_task          LIKE it_orders,
        lt_attachment    TYPE crmt_odata_task_attachmentt.

  " TODO: validation on iv_process_num and lines( it_orders )
  DATA(lv_total) = lines( it_orders ).
  DATA(lv_additional) = lv_total MOD iv_block_size.
  DATA(lv_task_num) = lv_total DIV iv_block_size.
  IF lv_additional <> 0.
    lv_task_num = lv_task_num + 1.
  ENDIF.

  DO lv_task_num TIMES.
    CLEAR: lt_task.
    lv_current_index = 1 + iv_block_size * ( sy-index - 1 ).
    DO iv_block_size TIMES.
      READ TABLE it_orders ASSIGNING FIELD-SYMBOL(<task>) INDEX lv_current_index.
      IF sy-subrc = 0.
        APPEND INITIAL LINE TO lt_task ASSIGNING FIELD-SYMBOL(<cur_task>).
        MOVE-CORRESPONDING <task> TO <cur_task>.
        lv_current_index = lv_current_index + 1.
      ELSE.
        EXIT.
      ENDIF.
    ENDDO.
    IF lt_task IS NOT INITIAL.
      lv_index = sy-index.
      lv_taskid = 'Task' && lv_index.
      CALL FUNCTION 'ZJERRYGET_ATTACHMENTS'
        STARTING NEW TASK lv_taskid
        CALLING read_finished ON END OF TASK
        EXPORTING
          it_objects = lt_task.
    ENDIF.
  ENDDO.

  WAIT UNTIL mv_finished = lv_task_num.
  rt_attachments = mt_attachment_result.
ENDMETHOD.
```
The method READ_FINISHED:
```ABAP
METHOD read_finished.
  DATA: lt_attachment TYPE crmt_odata_task_attachmentt.

  ADD 1 TO mv_finished.
  RECEIVE RESULTS FROM FUNCTION 'ZJERRYGET_ATTACHMENTS'
    CHANGING
      ct_attachments        = lt_attachment
    EXCEPTIONS
      system_failure        = 1
      communication_failure = 2.
  APPEND LINES OF lt_attachment TO mt_attachment_result.
ENDMETHOD.
```
In the function module ZJERRYGET_ATTACHMENTS, I still use the attachment single-read API:
```ABAP
FUNCTION zjerryget_attachments.
"----------------------------------------------------------------------
"*"Local Interface:
"  IMPORTING
"     VALUE(IT_OBJECTS) TYPE CRMT_OBJECT_KEY_T
"  CHANGING
"     VALUE(CT_ATTACHMENTS) TYPE CRMT_ODATA_TASK_ATTACHMENTT
"----------------------------------------------------------------------
  DATA(lo_tool) = NEW zcl_crm_attachment_tool( ).
  ct_attachments = lo_tool->get_attachments_origin( it_objects ).
ENDFUNCTION.
```
So in fact I didn't spend any effort optimizing the single-read API itself. Instead, I call it in parallel. Let's see whether there is any performance improvement.
In this test report, I first generate an internal table with 100 entries, which are opportunity GUIDs. Then I perform the attachment read twice, once in parallel and once sequentially. Both results are compared in the method compare_read_result to ensure there is no functional loss in the parallel version.
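Such a test report could be sketched as follows. The report name, the wrapper class `zcl_attachment_reader`, and its helper method `generate_test_guids` are assumptions of mine; `parallel_read`, `sequential_read`, and `compare_read_result` correspond to the methods discussed above. `GET RUN TIME FIELD` returns microseconds:

```ABAP
REPORT zjerry_parallel_test.

" Assumption: zcl_attachment_reader hosts the read variants shown above
DATA(lt_orders) = zcl_attachment_reader=>generate_test_guids( iv_count = 100 ).

GET RUN TIME FIELD DATA(lv_start).
DATA(lt_parallel) = zcl_attachment_reader=>parallel_read(
  it_orders     = lt_orders
  iv_block_size = 20 ).
GET RUN TIME FIELD DATA(lv_end).
WRITE: / |Parallel read:   { ( lv_end - lv_start ) / 1000000 } seconds|.

GET RUN TIME FIELD lv_start.
DATA(lt_sequential) = zcl_attachment_reader=>sequential_read( lt_orders ).
GET RUN TIME FIELD lv_end.
WRITE: / |Sequential read: { ( lv_end - lv_start ) / 1000000 } seconds|.

" Ensure the parallel version loses no data compared with the sequential one
zcl_attachment_reader=>compare_read_result(
  it_first  = lt_parallel
  it_second = lt_sequential ).
```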
Testing result (unit: second)
It clearly shows that performance improves as the number of ABAP sessions handling the attachment read increases. When the block size = 100, the parallel solution degrades to the sequential one, and is even slightly worse due to the overhead of WAIT.
Of course, in productive usage the block size should not be hard-coded. In fact, the reason I use the variable name iv_block_size in my test code is to pay respect to the block size customizing in transaction code R3AC1 in CRM middleware.
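One way to avoid hard-coding (a sketch of my own, not from the original test code) is to derive the degree of parallelism from the free work processes of an RFC server group using the standard function module SPBT_INITIALIZE; the group name 'parallel_generators' is just an example and must exist in your system (transaction RZ12):

```ABAP
DATA: lv_max_wps  TYPE i,
      lv_free_wps TYPE i.

" Initialize the parallel processing environment for an RFC server group
CALL FUNCTION 'SPBT_INITIALIZE'
  EXPORTING
    group_name                   = 'parallel_generators'
  IMPORTING
    max_pbt_wps                  = lv_max_wps
    free_pbt_wps                 = lv_free_wps
  EXCEPTIONS
    invalid_group_name           = 1
    internal_error               = 2
    pbt_env_already_initialized  = 3
    currently_no_resources_avail = 4
    no_pbt_resources_found       = 5
    OTHERS                       = 6.
IF sy-subrc = 0 AND lv_free_wps > 1.
  " Size the blocks so each free work process handles roughly one block
  DATA(lv_block_size) = lines( it_orders ) DIV lv_free_wps.
ENDIF.
```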
For more of Jerry's original articles, please follow the WeChat public account "汪子熙".
From the ITPUB blog, link: http://blog.itpub.net/24475491/viewspace-2715906/. If you repost, please cite the source; otherwise legal liability will be pursued.