Data Architecture and Management Designer
The Data Architecture and Management Designer Certification is a credential developed for Salesforce professionals who have experience in designing Data Architecture and Management solutions on the Salesforce platform and are looking to verify their expertise. Working experience with the product is particularly important for this certification, as it is designed specifically for professionals who can architect a solution for a particular customer scenario.
The exam is made up of 60 multiple-choice questions
There are 105 minutes to complete the exam
The passing score is 67%
There are no prerequisites
The cost is USD $400, and the retake fee is USD $200 if you are unsuccessful
In the Data Architecture and Management Designer exam, there are 9 topics covered. Data Modeling is the area with the highest weighting at 20%. As it is weighted highest, this is an area that you must focus on to do well in the exam.
Data Architecture and Management Designer Topic Weighting Chart
The following are the core topic areas of the Data Architecture and Management Designer certification and what you’re expected to know:
The data modeling topic has 2 objectives and is the largest section of the exam.
The first objective requires you to compare and contrast various techniques for designing a Lightning Platform data model. This includes how to use standard and custom objects, standard and custom fields, different types of relationships, and object features that are available as part of the platform. A data model typically includes standard and custom objects, relationships among those objects, fields, and other features such as record types. An entity relationship diagram can be utilized to visualize the data model. The Metadata API can be used to retrieve, deploy, create, update, or delete customization information.
The second objective is related to designing a data model that is scalable, supports business processes and considers performance for large data volumes. There are various features in Salesforce that support different business processes, such as Person accounts which can be utilized to store information about individual customers. External objects can be used to make external data visible in Salesforce. Picklist fields allow users to select a value from a predefined list of values, which ensures high data quality. It is also important when designing a data model and storing data in Salesforce to ensure that there is sufficient storage space.
There are 4 objectives in the Conceptual Design section.
The first objective is: given a customer scenario, identify the issues impacting data quality along key dimensions. Data quality issues can exist along various dimensions, including age, accuracy, completeness, consistency, duplication, and usage. These issues can cause missing insights, wasted time and resources, poor customer service, and reduced adoption by users.
The second objective is: given a customer scenario, recommend approaches for improving data quality along key dimensions using various techniques. Data quality issues can exist along various dimensions, such as duplication, completeness, accuracy, and age. To improve the quality of data along these dimensions, techniques such as duplicate management, validation rules, data cleansing, standardization, and dependent picklists can be utilized.
The third objective is: given a customer scenario, recommend appropriate techniques and tools to monitor data quality on an ongoing basis. Various techniques and tools can be used to monitor data quality, including data quality reports, metrics, dashboards, and AppExchange applications.
The last objective is: given a customer scenario, recommend appropriate techniques and methods for ensuring high data quality at the point of entry. Validation rules can be utilized to ensure that users enter the correct data in the correct format. Workflow rules can be used for automatic field updates on records. Approval processes can be used to allow users to submit records for approval. Duplicate and matching rules can be used to prevent the creation of duplicate records and show duplicate alerts to users.
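The interplay of these point-of-entry controls can be sketched in plain Python (a hypothetical illustration of the behavior, not Salesforce code): a validation-rule-style format check runs first, followed by a matching-rule-style duplicate check.

```python
import re

# Hypothetical sketch: the kinds of checks a validation rule and a matching
# rule perform at the point of entry, expressed in plain Python.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_contact(record, existing_records):
    """Return a list of error messages; an empty list means the record saves."""
    errors = []
    email = (record.get("Email") or "").strip().lower()
    # Validation-rule-style check: require a correctly formatted email.
    if not EMAIL_PATTERN.match(email):
        errors.append("Email must be in the format name@domain.tld")
    # Matching-rule-style check: flag an exact match on normalized email.
    if any((r.get("Email") or "").strip().lower() == email for r in existing_records):
        errors.append("A contact with this email already exists")
    return errors

existing = [{"Email": "jane@example.com"}]
print(validate_contact({"Email": "JANE@example.com"}, existing))
# → ["A contact with this email already exists"]
print(validate_contact({"Email": "new.user@example.com"}, existing))
# → []
```

In Salesforce itself the format check would be a validation rule formula and the duplicate check a matching rule referenced by a duplicate rule; the sketch only illustrates the order and intent of the two controls.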
There is just 1 objective in the Performance Tuning section, which is understanding techniques for improving performance when migrating large data volumes into Salesforce, generating reports and dashboards, and querying large datasets in Salesforce.
When loading large data volumes, such as tens of millions of records, into Salesforce, Bulk API jobs should be used in parallel mode in order to achieve optimal performance. The timing and sequence of sharing rule configurations should be carefully planned. Loading lean and suspending sharing rules temporarily should be part of the strategy. Querying or extracting large amounts of data from Salesforce requires effective use of indexes. Fields can be indexed to make SOQL queries selective and improve their performance. Skinny tables can be created for a specific set of fields on a particular object in order to improve the performance of reports and list views. PK Chunking is recommended for extracting a large number of records, as it helps in partitioning the data and extracting it in smaller chunks.
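The idea behind PK Chunking can be sketched as follows. This is a simplified illustration using numeric IDs; real Salesforce record IDs are 15/18-character strings, and the Bulk API computes the chunk boundaries for you when PK Chunking is enabled.

```python
# Illustrative sketch of PK chunking: instead of one query scanning tens of
# millions of rows, the extraction is split into primary-key ranges so each
# sub-query only scans its own chunk of the table.

def pk_chunk_ranges(min_id, max_id, chunk_size):
    """Yield (start, end) ID ranges covering [min_id, max_id] inclusive."""
    start = min_id
    while start <= max_id:
        end = min(start + chunk_size - 1, max_id)
        yield (start, end)
        start = end + 1

# Each range would become a query along the lines of:
#   SELECT Id, Name FROM Account WHERE Id >= :start AND Id <= :end
print(list(pk_chunk_ranges(1, 1_000_000, 250_000)))
# → [(1, 250000), (250001, 500000), (500001, 750000), (750001, 1000000)]
```

Because every sub-query is bounded by the primary key, each one is selective regardless of how large the table is, which is why PK Chunking avoids full table scans.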
There are 2 objectives in the Business Intelligence, Reporting, and Analytics section.
The first objective is to compare and contrast approaches and techniques for creating analytical reports and dashboards, and to discuss Salesforce offerings such as Wave (now Einstein Analytics) and options available on the AppExchange for exposing data quality metrics and adoption metrics. Features such as Einstein Analytics and dashboard apps available on the AppExchange can be utilized to meet custom reporting requirements. Einstein Analytics dashboards allow exploring data through interactive views and creating ad-hoc filters. Various types of dashboard apps are available on the AppExchange for various use cases.
The second objective is: given a customer scenario, recommend approaches for leveraging Salesforce and partner analytics offerings to enhance and optimize the customer's enterprise analytics capabilities, as well as recommending techniques to maintain the desired performance level (SOQL queries, reports) given the customer's large data volumes. Einstein Analytics and AppExchange apps can be utilized if an organization requires additional ways of gaining insights into data. Custom indexes and skinny tables can be used to improve the performance of reports. Selective filter conditions can be used in SOQL queries to improve their performance. The performance of Visualforce pages that use SOQL can be improved by reducing the number of records that are returned.
There are 2 objectives in the Data Archiving section.
The first objective is to compare and contrast various approaches and considerations for arriving at a data archiving and purging plan. Various options are available for archiving Salesforce data, such as using an on-platform solution like big objects or storing data off-platform in an external system or data warehouse. The Bulk API can be considered for removing large volumes of data from Salesforce. An AppExchange solution can be considered to back up data when a company has a custom business requirement that cannot be met using a native solution.
The second objective is: given a customer scenario, recommend a data archiving and purging plan that is optimal for the customer's data storage management needs. Data can be removed from Salesforce and archived for the purpose of reference or reporting. Tools such as Data Loader and ETL solutions can be utilized to automate the process. Other Salesforce features, such as Apex triggers and batch Apex, can be used to store summarized data and field value changes instead of all the records.
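The "store summarized data instead of all the records" idea can be sketched as follows (a hypothetical Python illustration; the object and field names are invented): before archived records are purged, they are rolled up into one summary row per account per year, which stays in Salesforce while the raw rows move to the external archive.

```python
from collections import defaultdict

# Hypothetical sketch: roll closed opportunities up into per-account, per-year
# summary rows before the detail records are purged and archived off-platform.
def summarize_for_archive(opportunities):
    totals = defaultdict(lambda: {"count": 0, "amount": 0.0})
    for opp in opportunities:
        key = (opp["AccountId"], opp["CloseYear"])
        totals[key]["count"] += 1
        totals[key]["amount"] += opp["Amount"]
    return dict(totals)

opps = [{"AccountId": "001A", "CloseYear": 2018, "Amount": 100.0},
        {"AccountId": "001A", "CloseYear": 2018, "Amount": 50.0},
        {"AccountId": "001B", "CloseYear": 2019, "Amount": 75.0}]
print(summarize_for_archive(opps))
# → {('001A', 2018): {'count': 2, 'amount': 150.0}, ('001B', 2019): {'count': 1, 'amount': 75.0}}
```

On-platform, the equivalent rollup would typically be produced by a batch Apex job that writes the summaries to a custom object before the detail records are deleted.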
There are 2 objectives in the Data Migration section.
The first objective is to compare and contrast various techniques and considerations for importing data into and exporting data out of Salesforce. The Bulk API can be used in parallel mode to minimize the data migration time, but it can cause locking issues when migrating child records, which can be avoided by ordering them by the parent record IDs. Sharing rules can be deferred to improve migration performance. Using ‘insert’ and ‘update’ operations is faster than using the ‘upsert’ operation. It is also important to consider other aspects related to migration, such as data storage and API limitations.
The second objective is: given a customer scenario, recommend an optimal data migration plan taking into account parallelism, managing locks, and handling sharing rules. The Bulk API in parallel mode can be used to ensure maximum performance while exporting or importing millions of records. Records can be regularly exported by using the Data Export option. When extracting more than 10 million records, PK Chunking can be utilized to avoid a full table scan of records. External IDs can be used to avoid duplicates while importing records. Child records should be ordered by the parent record ID to avoid record locking errors.
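The parent-ID ordering technique can be sketched like this (a hypothetical Python illustration of how a load file might be prepared before submitting it to the Bulk API): sorting child rows by their parent ID before splitting them into batches keeps each parent's children together, so far fewer parallel batches contend for the same parent record lock.

```python
BATCH_SIZE = 200  # a typical Bulk API batch size

def batches_by_parent(child_records, batch_size=BATCH_SIZE):
    """Sort child rows by parent ID, then split them into fixed-size batches.

    When parallel batches insert children of the same parent (e.g. Contacts
    under one Account), each insert briefly locks the parent; grouping a
    parent's children into the same batch reduces cross-batch lock contention.
    """
    ordered = sorted(child_records, key=lambda r: r["AccountId"])
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

rows = [{"AccountId": "001B"}, {"AccountId": "001A"},
        {"AccountId": "001B"}, {"AccountId": "001A"}]
for batch in batches_by_parent(rows, batch_size=2):
    print([r["AccountId"] for r in batch])
# → ['001A', '001A'] then ['001B', '001B']
```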
There are 3 objectives in the Master Data Management section.
The first objective is to compare and contrast the various techniques, approaches, and considerations for implementing Master Data Management Solutions. An MDM solution requires choosing an implementation style, such as registry, consolidation, coexistence or transaction. Data survivorship techniques can be utilized to determine the best candidates for the surviving records. A matching policy can be utilized to determine how the records should be matched. Canonical modeling can be used for communication between different enterprise systems. Furthermore, a typical MDM solution should have certain hierarchy management features.
The second objective is: given a customer scenario, recommend and use techniques for establishing a "golden source of truth"/"system of record" for the customer domain. An MDM implementation requires outlining the golden record, or source of truth, and defining the system of record for different types of data elements. When there are multiple enterprise systems and data integrations, stakeholders can be brought together and data flows can be reviewed to determine which system should act as the system of record for a given object, field, or data element when it is modified.
The third objective is: given a customer scenario, recommend approaches and techniques for consolidating data attributes from multiple sources. When using an MDM solution, it is necessary to consider how different types of data attributes, such as field values, should be consolidated to create the master record. Data survivorship rules should be established to determine which field value from a particular data source should survive during the consolidation of two records. Factors and criteria can be defined for data survivorship.
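A survivorship policy based on source priority can be sketched as follows (the source rankings and field names below are invented for illustration): each field on the master record is taken from the most trusted source that actually has a value for it.

```python
# Hypothetical source-priority table for field-level survivorship;
# lower number = more trusted source for this illustration.
SOURCE_PRIORITY = {"CRM": 1, "ERP": 2, "Marketing": 3}

def consolidate(records, fields):
    """Build a master record, taking each field from the highest-priority
    source that has a non-empty value for it."""
    master = {}
    ranked = sorted(records, key=lambda r: SOURCE_PRIORITY[r["source"]])
    for field in fields:
        for record in ranked:
            if record.get(field):
                master[field] = record[field]
                break
    return master

crm = {"source": "CRM", "Phone": "555-0100", "Industry": ""}
erp = {"source": "ERP", "Phone": "555-0199", "Industry": "Retail"}
print(consolidate([erp, crm], ["Phone", "Industry"]))
# → {'Phone': '555-0100', 'Industry': 'Retail'}
```

Real survivorship rules often combine several factors (source trust, recency of update, completeness); the sketch shows only the source-priority criterion.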
There are 3 objectives in the Data Governance section.
The first objective is to compare and contrast various approaches and considerations for designing and implementing an enterprise data governance program while taking into account a framework for defining roles and responsibilities (for example, stewardship, data custodian, etc.), policies and standards, ownership and accountability, data rules and definitions, monitoring, and measurement. A data governance plan should focus on elements such as data definitions, quality standards, roles and ownership, security and permissions, and quality control. Data stewardship includes defining the teams, roles, and activities for data quality improvement and day-to-day maintenance.
The second objective is: given a customer scenario, recommend a data governance model in terms of roles and responsibilities, processes for establishing data standards, metrics and KPIs, classification of attributes by usage, identifying and prioritizing attributes to be used in match and merge, setting attribute scores, and weights. Each company needs to implement a suitable data governance model based on their specific requirements. A data governance framework can be based on a centralized, decentralized, or hybrid approach. A centralized approach focuses on the execution of rules, standards, policies, and procedures by a central data governance body, while a decentralized approach focuses on data quality maintenance at the individual level. A hybrid model that combines aspects of both these models can also be utilized.
The third objective is: given a customer scenario, recommend an approach for optimizing data stewardship engagement for mitigating duplicates in matching and merging of records. This includes discussing attribute selection in match and merge, criteria for auto merge, manual merge, and re-parenting considerations, and identifying options for auto-merge enablers available on the AppExchange. Salesforce provides matching and duplicate rules that can be used to identify duplicates and prevent their creation. Users can also be alerted when they try to create a duplicate record. Duplicate jobs can be used to identify duplicates across the org. Auto-merge enablers available on the AppExchange allow merging duplicate records automatically based on a schedule.
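The attribute weights, scores, and merge thresholds mentioned in these objectives can be sketched as follows (a hypothetical Python illustration; the weights and thresholds are invented for the example, not Salesforce defaults): each matching attribute contributes its weight to a score, and the score determines whether a pair is auto-merged, queued for a data steward's manual review, or treated as distinct.

```python
# Invented weights and thresholds for illustration only.
WEIGHTS = {"email": 50, "phone": 30, "name": 20}
AUTO_MERGE_THRESHOLD = 80
REVIEW_THRESHOLD = 50

def normalize(value):
    """Lowercase and strip non-alphanumerics so trivial differences match."""
    return "".join(ch for ch in (value or "").lower() if ch.isalnum())

def match_score(a, b):
    """Sum the weights of attributes whose normalized values match."""
    return sum(w for field, w in WEIGHTS.items()
               if normalize(a.get(field)) == normalize(b.get(field))
               and normalize(a.get(field)))

def disposition(a, b):
    score = match_score(a, b)
    if score >= AUTO_MERGE_THRESHOLD:
        return "auto-merge"
    if score >= REVIEW_THRESHOLD:
        return "manual review"
    return "distinct"

a = {"email": "Jo@Example.com", "phone": "555-0100", "name": "Jo Lee"}
b = {"email": "jo@example.com", "phone": "555-0222", "name": "Jo  Lee"}
print(disposition(a, b))  # email + name match: score 70 → "manual review"
```

The middle band between the two thresholds is where data stewardship engagement matters most: those pairs are too similar to ignore but too uncertain to merge automatically.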
There are 2 objectives in the Metadata Management section.
The first objective is to compare and contrast various techniques, approaches, and considerations for capturing and managing business and technical metadata (for example, business dictionary, data lineage, taxonomy, data classification). Various approaches can be utilized for metadata documentation. A data dictionary can consist of entity relationship diagrams (ERDs). A data taxonomy can be used to classify data into categories and subcategories using common terminologies. Defining data lineage includes specifying the origin of the data, how it is affected, and how records move within their lifecycle. Data classification can be used to identify unstructured metadata and categorize it based on security controls and risk levels.
The second objective is given a customer scenario, recommend appropriate approaches and techniques to capture and maintain customer metadata to preserve traceability and establish a common context for business rules. Salesforce provides various features for capturing metadata, such as Event Monitoring for user events. Setup Audit Trail can be used to view and download changes made by users in Setup. Field History Tracking allows tracking of new and old field values. Field Audit Trail allows defining a data retention policy for field history data. Furthermore, custom metadata types and custom settings can be created to store custom configuration information specific to business requirements.
Copyright 2019 - www.FocusOnForce.com