Data Architecture and Management Designer

Certification Information

The Data Architecture and Management Designer Certification is a credential developed for Salesforce professionals who have experience in designing Data Architecture and Management solutions on the Salesforce platform and are looking to verify their expertise. Hands-on experience with the platform is particularly important for this certification, as it is designed specifically for professionals who can architect a solution for a given customer scenario.

Key Facts

The exam is made up of 60 multiple choice questions

105 minutes to complete

The passing score is 67%

There are no prerequisites

Cost is USD $400 and the retake fee is USD $200 if you are unsuccessful

This information will assist you if you’re interested in becoming certified as a Data Architecture and Management Designer and includes an overview of the core topics in the exam.

The Data Architecture and Management Designer exam covers 9 topics. Data Modeling has the highest weighting at 20%, so it is an area you must focus on to do well in the exam.

Objective                                           Weighting
Data Modeling                                       20%
Conceptual Design                                   15%
Performance Tuning                                  11%
Business Intelligence, Reporting, and Analytics     10%
Data Archiving                                      10%
Data Migration                                      10%
Master Data Management                              10%
Data Governance                                     7%
Metadata Management                                 7%

Data Architecture and Management Designer Topic Weighting Chart

Data Architecture and Management Designer Certification Contents

The following are the core topic areas of the Data Architecture and Management Designer certification and what you’re expected to know:

Data Modeling

The Data Modeling topic has 2 objectives and is the largest section of the exam.

The first objective requires you to compare and contrast various techniques for designing a Lightning Platform data model. This includes how to use standard and custom objects, standard and custom fields, different types of relationships, and object features that are available as part of the platform. A data model typically includes standard and custom objects, relationships among those objects, fields, and other features such as record types. An entity relationship diagram can be utilized to visualize the data model. The Metadata API can be used to retrieve, deploy, create, update or delete customization information.
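
As a simple illustration of exploring an existing data model programmatically, the sketch below uses the open-source simple_salesforce Python library to describe an object and list its fields and relationships; the credentials and the object are placeholders only.

# Sketch: inspecting an object's fields and relationships with the
# open-source simple_salesforce library (placeholder credentials).
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="password", security_token="token")

# Describe the Account object to see its fields, data types and lookups.
describe = sf.Account.describe()
for field in describe["fields"]:
    if field["type"] == "reference":
        # Reference fields represent lookup or master-detail relationships.
        print(field["name"], "->", field["referenceTo"])
    else:
        print(field["name"], field["type"])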


The second objective relates to designing a data model that is scalable, supports business processes, and considers performance for large data volumes. Various features in Salesforce support different business processes; for example, Person Accounts can be utilized to store information about individual customers. External objects can be used to make external data visible in Salesforce. Picklist fields allow users to select a value from a predefined list of values, which helps maintain high data quality. When designing a data model and storing data in Salesforce, it is also important to ensure that there is sufficient storage space.
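
For the storage consideration, a minimal sketch of checking an org's remaining data storage before a large load is shown below; it uses the same simple_salesforce approach as above, and the key names are those returned by the REST /limits resource (they may vary by API version).

# Sketch: checking data storage headroom via the REST limits resource
# (key names may vary by API version; credentials are placeholders).
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="password", security_token="token")

limits = sf.limits()
storage = limits.get("DataStorageMB", {})
used = storage.get("Max", 0) - storage.get("Remaining", 0)
print(f"Data storage used: {used} MB of {storage.get('Max', 0)} MB")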

Conceptual Design

There are 4 objectives in the Conceptual Design section.


The first objective is: given a customer scenario, identify the issues impacting data quality along key dimensions. Data quality issues can exist along various dimensions, including age, accuracy, completeness, consistency, duplication, and usage. These issues can cause missing insights, wasted time and resources, poor customer service, and reduced adoption by users.


The second objective is: given a customer scenario, recommend approaches for improving data quality along key dimensions using various techniques. Data quality issues can exist along various dimensions, such as duplication, completeness, accuracy, and age. In order to improve the quality of data along these dimensions, techniques such as duplicate management, validation rules, data cleansing, standardization, and dependent picklists can be utilized.
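
As a simple illustration of standardization and duplicate detection performed outside the platform (for example, before a data load), the sketch below normalizes phone numbers and groups records by a matching key; the fields and matching logic are illustrative only.

# Sketch: client-side standardization and duplicate detection before a load
# (illustrative field names and matching key).
import re
from collections import defaultdict

def normalize_phone(phone):
    # Strip formatting so "(555) 010-2345" and "555.010.2345" compare equally.
    return re.sub(r"\D", "", phone or "")

records = [
    {"Email": "Jane.Doe@Example.com", "Phone": "(555) 010-2345"},
    {"Email": "jane.doe@example.com", "Phone": "555.010.2345"},
]

groups = defaultdict(list)
for rec in records:
    key = (rec["Email"].strip().lower(), normalize_phone(rec["Phone"]))
    groups[key].append(rec)

duplicates = {key: recs for key, recs in groups.items() if len(recs) > 1}
print(f"{len(duplicates)} potential duplicate group(s) found")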


The third objective is: given a customer scenario, recommend appropriate techniques and tools to monitor data quality on an ongoing basis. Various techniques and tools can be used to monitor data quality, including data quality reports, metrics, dashboards, and AppExchange applications.
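
For example, a basic completeness metric can be produced with a couple of SOQL counts and then surfaced in a report or dashboard; the sketch below uses Contact email as an illustrative attribute and placeholder credentials.

# Sketch: a simple completeness metric for a data quality dashboard.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="password", security_token="token")

# SELECT COUNT() returns the record count in the result's totalSize.
missing = sf.query("SELECT COUNT() FROM Contact WHERE Email = null")["totalSize"]
total = sf.query("SELECT COUNT() FROM Contact")["totalSize"]
print(f"{missing} of {total} Contacts are missing an email address")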


The last objective is: given a customer scenario, recommend appropriate techniques and methods for ensuring high data quality at the point of entry. Validation rules can be utilized to ensure that users enter the correct data in the correct format. Workflow rules can be used for automatic field updates on records. Approval processes can be used to allow users to submit records for approval. Duplicate and matching rules can be used to prevent the creation of duplicate records and show duplicate alerts to users.

Performance Tuning

There is just 1 objective in the Performance Tuning section, which is understanding techniques for improving performance when migrating large data volumes into Salesforce, generating reports and dashboards, and querying large datasets in Salesforce.


When loading large data volumes, such as tens of millions of records, into Salesforce, Bulk API jobs should be used in parallel mode in order to achieve optimal performance. The timing and sequence of sharing rule configuration should be carefully planned; loading lean and temporarily suspending sharing rule calculation should be part of the strategy. Querying or extracting large amounts of data from Salesforce requires effective use of indexes. Fields can be indexed to make SOQL queries selective and improve their performance. Skinny tables can be created for a specific set of fields on a particular object in order to improve the performance of reports and list views. PK chunking is recommended for extracting a large number of records, as it partitions the data and extracts it in smaller chunks.
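
As a rough sketch of what enabling PK chunking looks like at the API level, the snippet below creates a Bulk API (version 1.0) query job with the Sforce-Enable-PKChunking header; the instance URL, session Id, API version, and chunk size are placeholders.

# Sketch: creating a Bulk API 1.0 query job with PK chunking enabled
# (placeholder instance URL, session Id and API version).
import requests

instance = "https://yourInstance.my.salesforce.com"
headers = {
    "X-SFDC-Session": "<session id>",
    "Content-Type": "application/xml; charset=UTF-8",
    # Split the extract into 250,000-record chunks by primary key to
    # avoid a full table scan on a very large object.
    "Sforce-Enable-PKChunking": "chunkSize=250000",
}
job_xml = """<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>query</operation>
  <object>Account</object>
  <contentType>CSV</contentType>
</jobInfo>"""

response = requests.post(f"{instance}/services/async/47.0/job",
                         headers=headers, data=job_xml)
print(response.status_code, response.text)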

Business Intelligence, Reporting, and Analytics

There are 2 objectives in this section.

The first objective is to compare and contrast approaches and techniques for creating analytical reports and dashboards, including Salesforce offerings such as Wave and options available on AppExchange for exposing data quality metrics and adoption metrics.

The second objective is: given a customer scenario, recommend approaches for leveraging Salesforce and partner analytics offerings to enhance and optimize the customer's enterprise analytics capabilities, as well as techniques to maintain the desired performance level (SOQL queries, reports) given the customer's large data volumes.

Data Archiving

There are 2 objectives in the Data Archiving section.

The first objective is to compare and contrast various approaches and considerations for arriving at a data archiving and purging plan. Various options are available for archiving Salesforce data, such as using an on-platform solution like big objects or storing data off-platform in an external system or data warehouse. The Bulk API can be considered for removing large volumes of data from Salesforce. An AppExchange solution can be considered to back up data when a company has a custom business requirement that cannot be met using a native solution.


The second objective is: given a customer scenario, recommend a data archiving and purging plan that is optimal for the customer's data storage management needs. Data can be removed from Salesforce and archived for reference or reporting purposes. Tools such as Data Loader and ETL tools can be utilized to automate the process. Other Salesforce features, such as Apex triggers and Batch Apex, can be used to store summarized data and field value changes instead of all the records.
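
A minimal archive-and-purge sketch is shown below: it exports completed Tasks older than roughly two years to a CSV file and then removes them with the Bulk API. The object, filter, and file path are illustrative, and a real plan would verify the export and consider Recycle Bin behavior before deleting.

# Sketch: export old completed Tasks to CSV, then purge them via the Bulk API
# (illustrative object, filter and credentials).
import csv
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="password", security_token="token")

old_tasks = sf.query_all(
    "SELECT Id, Subject, Status, ActivityDate FROM Task "
    "WHERE Status = 'Completed' AND ActivityDate < LAST_N_DAYS:730"
)["records"]

with open("archived_tasks.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Id", "Subject", "Status", "ActivityDate"])
    writer.writeheader()
    for rec in old_tasks:
        writer.writerow({name: rec[name] for name in writer.fieldnames})

# Delete the archived records in bulk once the export has been verified.
sf.bulk.Task.delete([{"Id": rec["Id"]} for rec in old_tasks])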


Data Migration

There are 2 objectives in the Data Migration section.

The first objective is to compare and contrast various techniques and considerations for importing data into and exporting data out of Salesforce. The Bulk API can be used in parallel mode to minimize the data migration time, but it can cause locking issues when migrating child records, which can be avoided by ordering them by the parent record IDs. Sharing rules can be deferred to improve migration performance. Using ‘insert’ and ‘update’ operations is faster than using the ‘upsert’ operation. It is also important to consider other aspects related to migration, such as data storage and API limitations.
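
For the locking consideration, the sketch below orders child Contact records by their parent Account before a parallel Bulk API load, so records sharing a parent tend to fall into the same batch; the field values, batch size, and client library (simple_salesforce) are illustrative.

# Sketch: order child records by parent Id before a parallel bulk insert
# to reduce parent record lock contention (placeholder data and credentials).
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="password", security_token="token")

contacts = [
    {"LastName": "Doe", "AccountId": "001xx000003DGbXAAW"},
    {"LastName": "Roe", "AccountId": "001xx000003DGbYAAW"},
    # ...many more records...
]
contacts.sort(key=lambda rec: rec["AccountId"])

# simple_salesforce submits the list as Bulk API batches of batch_size records.
sf.bulk.Contact.insert(contacts, batch_size=10000)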


The second objective is: given a customer scenario, recommend an optimal data migration plan taking into account parallelism, managing locks, and handling sharing rules. The Bulk API in parallel mode can be used to ensure maximum performance while exporting or importing millions of records. Records can be exported regularly by using the Data Export option. When extracting more than 10 million records, PK chunking can be utilized to avoid a full table scan. External IDs can be used to avoid duplicates while importing records. Child records should be ordered by the parent record ID to avoid record locking errors.
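
As an illustration of the external ID approach, the sketch below upserts an Account using a hypothetical Legacy_Id__c external ID field, so re-running the import updates the existing record instead of creating a duplicate.

# Sketch: upsert by external ID to avoid duplicates on re-import
# (Legacy_Id__c is a hypothetical custom external ID field).
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="password", security_token="token")

sf.Account.upsert("Legacy_Id__c/A-10042", {
    "Name": "Acme Corporation",
    "Industry": "Manufacturing",
})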

Master Data Management

There are 3 objectives in the Master Data Management section.

The first objective is to compare and contrast the various techniques, approaches, and considerations for implementing Master Data Management Solutions. An MDM solution requires choosing an implementation style, such as registry, consolidation, coexistence or transaction. Data survivorship techniques can be utilized to determine the best candidates for the surviving records. A matching policy can be utilized to determine how the records should be matched. Canonical modeling can be used for communication between different enterprise systems. Furthermore, a typical MDM solution should have certain hierarchy management features.


The second objective is: given a customer scenario, recommend and use techniques for establishing a "golden source of truth"/"system of record" for the customer domain. In an MDM implementation, it is necessary to outline the golden record or source of truth and define the system of record for different types of data elements. When there are multiple enterprise systems and data integrations, stakeholders can be brought together and data flows can be reviewed to determine which system should act as the system of record for a given object, field, or data element when it is modified.


The third objective is: given a customer scenario, recommend approaches and techniques for consolidating data attributes from multiple sources. When using an MDM solution, it is necessary to consider how different types of data attributes, such as field values, should be consolidated to create the master record. Data survivorship rules should be established to determine which field value from a particular data source should survive during consolidation of two records. Factors and criteria can be defined for data survivorship.
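
A simplified survivorship rule might look like the sketch below, which picks a surviving field value using source-system priority first and recency as a tie-breaker; the source systems, priorities, and field are purely illustrative.

# Sketch: pick the surviving value for a field from multiple source records
# using source priority, then recency (illustrative sources and data).
SOURCE_PRIORITY = {"ERP": 1, "CRM": 2, "Marketing": 3}  # lower number wins

candidates = [
    {"source": "CRM", "Phone": "555-0100", "last_updated": "2019-03-01"},
    {"source": "ERP", "Phone": "555-0199", "last_updated": "2018-11-15"},
    {"source": "Marketing", "Phone": None, "last_updated": "2019-05-20"},
]

def surviving_value(candidates, field):
    populated = [c for c in candidates if c.get(field)]
    if not populated:
        return None
    best = min(
        populated,
        key=lambda c: (SOURCE_PRIORITY.get(c["source"], 99),
                       # within the same priority, the more recent update wins
                       -int(c["last_updated"].replace("-", ""))),
    )
    return best[field]

print(surviving_value(candidates, "Phone"))  # the ERP value wins on priority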

Data Governance

There are 3 objectives in the Data Governance section.

The first objective is to compare and contrast various approaches and considerations for designing and implementing an enterprise data governance program, taking into account a framework for defining roles and responsibilities (for example, stewardship, data custodian), policies and standards, ownership and accountability, data rules and definitions, monitoring, and measurement.


The second objective is: given a customer scenario, recommend a data governance model in terms of roles and responsibilities, processes for establishing data standards, metrics and KPIs, classification of attributes by usage, identifying and prioritizing attributes to be used in match and merge, and setting attribute scores and weights.


The third objective is: given a customer scenario, recommend an approach for optimizing data stewardship engagement to mitigate duplicates in matching and merging of records. This includes attribute selection in match and merge, criteria for auto merge, manual merge, and re-parenting considerations, as well as identifying options for auto merge enablers available on AppExchange.

Metadata Management

There are 2 objectives in the Metadata Management section.

The first objective is to compare and contrast various techniques, approaches, and considerations for capturing and managing business and technical metadata (for example, business dictionary, data lineage, taxonomy, data classification).


The second objective is: given a customer scenario, recommend appropriate approaches and techniques to capture and maintain customer metadata to preserve traceability and establish a common context for business rules. Salesforce provides various features for capturing metadata, such as Event Monitoring for user events. The Setup Audit Trail can be used to view and download changes made by users in Setup. Field History Tracking allows tracking of new and old field values. Field Audit Trail allows defining a data retention policy for field history data. Furthermore, custom metadata types and custom settings can be created to store custom configuration information specific to business requirements.
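
For example, tracked field changes can be read back from a standard history object such as AccountHistory; the sketch below assumes Field History Tracking is enabled on the relevant fields and uses placeholder credentials and a placeholder record Id.

# Sketch: reading tracked field changes from the AccountHistory object
# (requires Field History Tracking; the record Id is a placeholder).
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="password", security_token="token")

history = sf.query_all(
    "SELECT Field, OldValue, NewValue, CreatedById, CreatedDate "
    "FROM AccountHistory "
    "WHERE AccountId = '001xx000003DGbXAAW' "
    "ORDER BY CreatedDate DESC"
)
for row in history["records"]:
    print(row["CreatedDate"], row["Field"], row["OldValue"], "->", row["NewValue"])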

To prepare successfully for the certification exam, we recommend working through our Data Architecture and Management Designer Practice Exams.

Data Architecture and Management Study Guide

Every topic objective explained thoroughly.
The most efficient way to study the key concepts in the exam.



Data Architecture and Management Practice Exams

Test yourself with complete practice exams or focus on a particular topic with the topic exams. Find out if you are ready for the exam.


Copyright 2019 - www.FocusOnForce.com