The 38 Subsystems of ETL (repost)
When asked about breaking down the three big steps, many designers say, "Well, that depends." It depends on the source, it depends on funny data idiosyncrasies, it depends on the scripting languages and ETL tools available, it depends on the skills of the in-house staff, and it depends on the query and reporting tools the end users have.
The "it depends" response is dangerous because it becomes an excuse to roll your own ETL system, which in the worst-case scenario results in an undifferentiated spaghetti-mess of tables, modules, processes, scripts, triggers, alerts, and job schedules. Maybe this kind of creative design approach was appropriate a few years ago when everyone was struggling to understand the ETL task, but with the benefit of thousands of successful data warehouses, a set of best practices is ready to emerge.
I have spent the last 18 months intensively studying ETL practices and ETL products. I have identified a list of 38 subsystems that are needed in almost every data warehouse back room. That's the bad news. No wonder the ETL system takes such a large fraction of the data warehouse resources. But the good news is that if you study the list, you'll recognize almost all of them, and you'll be on the way to leveraging your experience in each of these subsystems as you build successive data warehouses.
The 38 Subsystems
1. Extract system. Source data adapters, push/pull/dribble job schedulers, filtering and sorting at the source, proprietary data format conversions, and data staging after transfer to ETL environment.
2. Change data capture system. Source log file readers, source date and sequence number filters, and CRC-based record comparison in ETL system.
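The CRC-based record comparison mentioned above can be sketched as follows. This is a minimal illustration, not a production CDC design: it assumes records arrive as dictionaries keyed by a natural key, and that fingerprints from the prior load are kept in a lookup table.

```python
import zlib

def record_fingerprint(record: dict) -> int:
    """Compute a CRC32 fingerprint over a record's attribute columns."""
    payload = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return zlib.crc32(payload.encode("utf-8"))

def detect_changes(source_rows: dict, known_fingerprints: dict):
    """Compare incoming rows against fingerprints stored from the last load.

    Returns (inserts, updates): natural keys never seen before, and keys
    whose attributes changed since the previous extract.
    """
    inserts, updates = [], []
    for natural_key, row in source_rows.items():
        fp = record_fingerprint(row)
        if natural_key not in known_fingerprints:
            inserts.append(natural_key)
        elif known_fingerprints[natural_key] != fp:
            updates.append(natural_key)
    return inserts, updates
```

Comparing cheap fingerprints instead of full rows keeps the change scan fast even when the source cannot supply log files or reliable update timestamps.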
3. Data profiling system. Column property analysis including discovery of inferred domains; structure analysis including candidate foreign key to primary key relationships; data rule analysis; and value rule analysis.
4. Data cleansing system. Typically a dictionary-driven system for complete parsing of names and addresses of individuals and organizations, and possibly also products or locations. "De-duplication" covers identification and removal of duplicate records, usually for individuals and organizations, possibly products or locations, and often uses fuzzy logic. "Surviving" uses specialized data merge logic that preserves specified fields from certain sources as the final saved versions, and maintains back references (such as natural keys) to all participating original sources.
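A toy sketch of fuzzy matching and survivorship, assuming the standard-library `difflib` ratio as a stand-in for a real matching engine, and a simple "first non-empty value in source-priority order" survivorship rule (both are assumptions for illustration only):

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """Fuzzy string comparison; a stand-in for a real matching engine."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def survive(records: list) -> dict:
    """Merge duplicate records into one survivor.

    Survivorship rule (assumed): take the first non-empty value per field,
    scanning records in source-priority order; keep back references to
    every contributing natural key.
    """
    survivor = {"source_keys": [r["natural_key"] for r in records]}
    for field in ("name", "address"):
        survivor[field] = next((r[field] for r in records if r.get(field)), None)
    return survivor
```

Real cleansing tools add dictionary-driven name/address parsing before matching; the point here is only that the survivor preserves lineage back to every original source record.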
5. Data conformer. Identification and enforcement of special conformed dimension attributes and conformed fact table measures as the basis for data integration across multiple data sources.
6. Audit dimension assembler. Assembly of metadata context surrounding each fact table load in such a way that the metadata context can be attached to the fact table as a normal dimension.
7. Quality screen handler. In line ETL tests applied systematically to all data flows checking for data quality issues. One of the feeds to the error event handler (see subsystem 8).
8. Error event handler. Comprehensive system for reporting and responding to all ETL error events. Includes branching logic to handle various classes of errors, and real-time monitoring of ETL data quality.
9. Surrogate key creation system. Robust mechanism for producing a stream of surrogate keys, independently for every dimension. Independent of database instance, able to serve distributed clients.
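A minimal in-process sketch of such a generator, one instance per dimension. The thread lock hints at serving concurrent clients; a distributed version would put this behind a small key service rather than a database sequence:

```python
import itertools
import threading

class SurrogateKeyGenerator:
    """Thread-safe surrogate key source, one instance per dimension.

    Independent of any database sequence, so parallel ETL streams can
    draw keys without contending on the warehouse instance.
    """
    def __init__(self, start: int = 1):
        self._counter = itertools.count(start)
        self._lock = threading.Lock()

    def next_key(self) -> int:
        with self._lock:
            return next(self._counter)
```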
10. Slowly Changing Dimension (SCD) processor. Transformation logic for handling three types of time variance possible for a dimension attribute: Type 1 (overwrite), Type 2 (create new record), and Type 3 (create new field).
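Type 2 is the workhorse of the three. A sketch of the "create new record" logic, assuming the dimension carries `current`, `valid_from`, and `valid_to` housekeeping columns (the column names are assumptions for this illustration):

```python
from datetime import date

def apply_scd_type2(dimension_rows, natural_key, changes, today=None):
    """Expire the current row for a natural key and append a new version.

    dimension_rows: list of dicts with 'natural_key', 'current',
    'valid_from', 'valid_to', plus the attribute columns.
    """
    today = today or date.today()
    for row in dimension_rows:
        if row["natural_key"] == natural_key and row["current"]:
            row["current"] = False
            row["valid_to"] = today            # close out the old version
            new_row = {**row, **changes,
                       "current": True,
                       "valid_from": today,
                       "valid_to": None}       # open-ended new version
            dimension_rows.append(new_row)
            return new_row
    raise KeyError(f"no current row for {natural_key}")
```

Type 1 would simply overwrite the attribute in place, and Type 3 would move the old value into a dedicated "prior value" column instead of creating a row.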
11. Late arriving dimension handler. Insertion and update logic for dimension changes that have been delayed in arriving at the data warehouse.
12. Fixed hierarchy dimension builder. Data validity checking and maintenance system for all forms of many-to-one hierarchies in a dimension.
13. Variable hierarchy dimension builder. Data validity checking and maintenance system for all forms of ragged hierarchies of indeterminate depth, such as organization charts, and parts explosions.
14. Multivalued dimension bridge table builder. Creation and maintenance of associative (bridge) table used to describe a many-to-many relationship between dimensions. May include weighting factors used for allocations and situational role descriptions.
15. Junk dimension builder. Creation and maintenance of dimensions consisting of miscellaneous low cardinality flags and indicators found in most production data sources.
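One common way to build a junk dimension is to cross-join the domains of the miscellaneous flags up front, assigning one surrogate key per combination. A sketch under that assumption (small, enumerable flag domains):

```python
from itertools import product

def build_junk_dimension(flag_domains: dict):
    """Cross-join low-cardinality flag domains into one junk dimension.

    flag_domains: dict mapping flag name -> list of allowed values.
    Returns (rows, lookup), where lookup maps a tuple of flag values
    (in sorted flag-name order) to its surrogate key for fact loading.
    """
    names = sorted(flag_domains)
    rows, lookup = [], {}
    combos = product(*(flag_domains[n] for n in names))
    for key, combo in enumerate(combos, start=1):
        rows.append({"junk_key": key, **dict(zip(names, combo))})
        lookup[combo] = key
    return rows, lookup
```

The alternative design, creating rows only for combinations actually observed in the source, works better when the theoretical cross-join would be large.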
16. Transaction grain fact table loader. System for updating transaction grain fact tables including manipulation of indexes and partitions. Normally append mode for most recent data. Uses surrogate key pipeline (see subsystem 19).
17. Periodic snapshot grain fact table loader. System for updating periodic snapshot grain fact tables including manipulation of indexes and partitions. Includes frequent overwrite strategy for incremental update of current period facts. Uses surrogate key pipeline (see subsystem 19).
18. Accumulating snapshot grain fact table loader. System for updating accumulating snapshot grain fact tables including manipulation of indexes and partitions, and updates to both dimension foreign keys and accumulating measures. Uses surrogate key pipeline (see subsystem 19).
19. Surrogate key pipeline. Pipelined, multithreaded process for replacing natural keys of incoming data with data warehouse surrogate keys.
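The substitution step at the heart of this pipeline can be sketched as below. The multithreading is omitted; the column naming convention (`<dim>_natural_key` in, `<dim>_key` out) and the fallback to an "unknown" member are assumptions of this sketch, since unresolved keys might instead be routed to the late arriving fact handler (subsystem 20):

```python
def surrogate_key_pipeline(fact_rows, dimension_lookups, unknown_key=-1):
    """Swap each natural key in incoming fact rows for its warehouse
    surrogate key via in-memory lookup tables.

    dimension_lookups: dict mapping dimension name ->
    {natural_key: surrogate_key}.
    """
    for row in fact_rows:
        for dim, lookup in dimension_lookups.items():
            natural = row.pop(f"{dim}_natural_key")
            row[f"{dim}_key"] = lookup.get(natural, unknown_key)
        yield row
```

Writing it as a generator mirrors the streaming nature of the real pipeline: rows flow through key substitution and straight into the fact table loader without landing on disk.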
20. Late arriving fact handler. Insertion and update logic for fact records that have been delayed in arriving at the data warehouse.
21. Aggregate builder. Creation and maintenance of physical database structures, known as aggregates, that are used in conjunction with a query-rewrite facility, to improve query performance. Includes stand-alone aggregate tables and materialized views.
22. Multidimensional cube builder. Creation and maintenance of star schema foundation for loading multidimensional (OLAP) cubes, including special preparation of dimension hierarchies as dictated by the specific cube technology.
23. Real-time partition builder. Special logic for each of the three fact table types (see subsystems 16, 17, and 18) that maintains a "hot partition" in memory containing only the data that has arrived since the last update of the static data warehouse tables.
24. Dimension manager system. Administration system for the "dimension manager" who replicates conformed dimensions from a centralized location to fact table providers. Paired with subsystem 25.
25. Fact table provider system. Administration system for the "fact table provider" who receives conformed dimensions sent by the dimension manager. Includes local key substitution, dimension version checking, and aggregate table change management.
26. Job scheduler. System for scheduling and launching all ETL jobs. Able to wait for a wide variety of system conditions including dependencies of prior jobs completing successfully. Able to post alerts.
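The job-dependency part of a scheduler reduces to a topological ordering problem. A minimal sketch using the standard library (Python 3.9+); a real scheduler would also wait on files, time windows, and external conditions, and post alerts on failure:

```python
from graphlib import TopologicalSorter

def run_order(dependencies: dict):
    """Compute a valid launch order for ETL jobs.

    dependencies: dict mapping each job to the set of jobs that must
    complete successfully before it may start.
    """
    return list(TopologicalSorter(dependencies).static_order())
```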
27. Workflow monitor. Dashboard and reporting system for all job runs initiated by the Job Scheduler. Includes number of records processed, summaries of errors, and actions taken.
28. Recovery and restart system. Common system for resuming a job that has halted, or for backing out a whole job and restarting. Significant dependency on backup system (see subsystem 36).
29. Parallelizing/pipelining system. Common system for taking advantage of multiple processors, or grid computing resources, and common system for implementing streaming data flows. Highly desirable (eventually necessary) that parallelizing and pipelining be invoked automatically for any ETL process that meets certain conditions, such as not writing to the disk or waiting on a condition in the middle of the process.
30. Problem escalation system. Automatic plus manual system for raising an error condition to the appropriate level for resolution and tracking. Includes simple error log entries, operator notification, supervisor notification, and system developer notification.
31. Version control system. Consistent "snapshotting" capability for archiving and recovering all the metadata in the ETL pipeline. Check-out and check-in of all ETL modules and jobs. Source comparison capability to reveal differences between different versions.
32. Version migration system. Moves a complete ETL pipeline implementation out of development, into test, and then into production. Interfaces with the version control system to back out a migration. Single interface for setting connection information for an entire version. Independence from database location for surrogate key generation.
33. Lineage and dependency analyzer. Display the ultimate physical sources and all subsequent transformations of any selected data element, chosen either from the middle of the ETL pipeline, or chosen on a final delivered report (lineage). Display all affected downstream data elements and final report fields affected by a potential change in any selected data element, chosen either in the middle of the ETL pipeline, or in an original source (dependency).
34. Compliance reporter. Comply with regulatory statutes to prove the lineage of key reported operating results. Prove that the data and the transformations haven't been changed. Show who has accessed or changed any such data.
35. Security system. Administer role-based security on all data and metadata in the ETL pipeline. Prove that a version of a module hasn't been changed. Show who has made changes.
36. Backup system. Backup data and metadata for recovery, restart, security, and compliance requirements.
37. Metadata repository manager. Comprehensive system for capturing and maintaining all ETL metadata, including all transformation logic. Includes process metadata, technical metadata, and business metadata.
38. Project management system. Comprehensive system for keeping track of all ETL development.
If you've survived to the end of this list, congratulations! Here are the important observations I'd like you to carry away. It's difficult to argue that any of these subsystems is unnecessary, and the list makes clear that without dividing up the task (perhaps 38 ways), the descent into chaos is inevitable. The industry is ready to define best-practice goals and implementation standards for each of these 38 subsystems, and it would be a tremendous contribution for the ETL tool vendors to provide wizards or serious templates for each of them. We have a lot to talk about. Maybe 38 more columns!
Reposted from the "ITPUB Blog", link: http://blog.itpub.net/16723161/viewspace-1014176/. If reposting, please credit the source; otherwise legal liability may be pursued.