ADF 11g Interview Questions

Q5) What are the differences between Backing Beans and Managed Beans?

Ans:

| Backing Beans | Managed Beans |
| --- | --- |
| A backing bean is any bean that is referenced by a form. | A managed bean is a backing bean that has been registered with JSF (in faces-config.xml) and is automatically created (and optionally initialized) by JSF when it is needed. |
| | The advantage of managed beans is that the JSF framework automatically creates these beans and optionally initializes them with the parameters you specify in faces-config.xml. |
| Backing beans should be defined only in the request scope. | The managed beans that are created by JSF can be stored within the request, session, or application scopes. |
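To make the registration step above concrete, here is a minimal sketch of a plain bean class together with the faces-config.xml entry (shown as a comment) that turns it into a managed bean. The class name, package, property, and EL expression are hypothetical examples, not part of the original answer.

```java
// UserBean.java - a plain bean class; all names here are illustrative.
// It becomes a managed bean once registered in faces-config.xml with an
// entry like the following:
//
//   <managed-bean>
//     <managed-bean-name>userBean</managed-bean-name>
//     <managed-bean-class>view.UserBean</managed-bean-class>
//     <managed-bean-scope>request</managed-bean-scope>
//   </managed-bean>
//
// After registration, JSF creates the bean on demand and pages can reference
// it with EL such as #{userBean.userName}.
package view;

public class UserBean {

    private String userName;   // a sample property exposed to the page

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }
}
```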

Q1) What is Oracle ADF?

Ans: The Oracle Application Development Framework (Oracle ADF) is an end-to-end application framework that builds on J2EE standards and open-source technologies to simplify and accelerate implementing service-oriented applications. If you develop enterprise solutions that search, display, create, modify, and validate data using web, wireless, desktop, or web services interfaces, Oracle ADF can simplify your job. Used in tandem, Oracle JDeveloper 10g and Oracle ADF give you an environment that covers the full development lifecycle from design to deployment, with drag-and-drop data binding, visual UI design, and team development features built in.

Q9) What is the difference between Azure Data Lake and Azure Data Warehouse?

| Azure Data Lake | Azure Data Warehouse |
| --- | --- |
| A Data Lake can store data of any type, size, and shape. | A Data Warehouse acts as a repository for already-filtered data from specific sources. |
| It is mainly used by data scientists. | It is more frequently used by business professionals. |
| It is highly accessible, and updates are quick. | Making changes in a Data Warehouse is a rigid and costly task. |
| It defines the schema after the data has been stored. | A Data Warehouse defines the schema before the data is stored. |
| It uses the ELT (Extract, Load, and Transform) process. | It uses the ETL (Extract, Transform, and Load) process. |
| It is an ideal platform for in-depth analysis. | It is the best platform for operational users. |
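The ELT-versus-ETL row in the table is easiest to see as code. The sketch below is purely conceptual: every method (extract sinks, loadToLake, loadToWarehouse, and so on) is a hypothetical placeholder standing in for real storage services, not an Azure API.

```java
// Conceptual contrast: ETL (warehouse) shapes data before loading,
// ELT (lake) lands raw data first and transforms it at read time.
// All methods below are hypothetical placeholders, not Azure SDK calls.
import java.util.List;
import java.util.stream.Collectors;

public class EtlVsElt {

    // ETL: transform first, then load the already-conforming rows into the
    // warehouse (the schema is defined before the data is stored).
    static void etlIntoWarehouse(List<String> sourceRows) {
        List<String> conformed = sourceRows.stream()
                .map(String::trim)              // transform before loading
                .filter(r -> !r.isEmpty())
                .collect(Collectors.toList());
        loadToWarehouse(conformed);
    }

    // ELT: load the raw data into the lake as-is; the schema is applied
    // later, when the data is read and transformed for analysis.
    static void eltIntoLake(List<String> sourceRows) {
        loadToLake(sourceRows);                 // load raw, untyped data
        List<String> forAnalysis = readFromLake().stream()
                .map(String::trim)              // transform at read time
                .collect(Collectors.toList());
        analyze(forAnalysis);
    }

    // Hypothetical sinks and sources standing in for real storage services.
    static void loadToWarehouse(List<String> rows) { /* ... */ }
    static void loadToLake(List<String> rows)      { /* ... */ }
    static List<String> readFromLake()             { return List.of(); }
    static void analyze(List<String> rows)         { /* ... */ }
}
```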

Q27) What has changed from private preview to limited public preview with regard to data flows?

  • You’ll no longer have to bring your own Azure Databricks clusters.
  • Data Factory will manage cluster creation and tear-down.
  • Blob datasets and Azure Data Lake Storage Gen2 datasets are separated into delimited text and Apache Parquet datasets.
  • You can still use Data Lake Storage Gen2 and Blob storage to store those files. Use the appropriate linked service for those storage engines.
Q19) What do you mean by Bean Scope?

Ans: Bean Scope typically holds beans and other objects that need to be available in the different components of a web application.
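To make the idea of a scope concrete, here is a small sketch that places values into the request, session, and application scopes through the standard JSF ExternalContext maps. The class and attribute names are hypothetical; only the three scope maps are standard JSF API.

```java
// Illustration of the three standard JSF scopes mentioned above.
// Class and attribute names are examples, not part of the question.
import java.util.Map;
import javax.faces.context.FacesContext;

public class ScopeExample {

    public void showScopes() {
        FacesContext ctx = FacesContext.getCurrentInstance();

        // Request scope: lives for a single request/response cycle.
        Map<String, Object> requestMap = ctx.getExternalContext().getRequestMap();

        // Session scope: lives for the duration of one user's session.
        Map<String, Object> sessionMap = ctx.getExternalContext().getSessionMap();

        // Application scope: shared by all users of the application.
        Map<String, Object> applicationMap = ctx.getExternalContext().getApplicationMap();

        requestMap.put("requestValue", "visible only in this request");
        sessionMap.put("sessionValue", "visible for this user's session");
        applicationMap.put("applicationValue", "visible to every user");
    }
}
```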

Q40) What are the steps involved in the ETL process?

The ETL (Extract, Transform, and Load) process follows four main steps (see the sketch after this list):

  • Connect and Collect – connects to on-premises and cloud source data stores and moves the data to a centralized location for further processing.
  • Transform – processes the collected data using compute services such as HDInsight Hadoop, Spark, etc.
  • Publish – loads the transformed data into targets such as Azure SQL Data Warehouse, Azure SQL Database, and Azure Cosmos DB.
  • Monitor – supports pipeline monitoring via Azure Monitor, API and PowerShell, Log Analytics, and health panels on the Azure portal.
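The sketch below strings the four steps together in plain Java as a conceptual outline only; the method names (connectAndCollect, transformWithCompute, publishToTargets, monitorPipeline) and the source/target strings are hypothetical stand-ins, not Azure Data Factory SDK calls.

```java
// Conceptual outline of the four pipeline steps described above.
// Every method here is a hypothetical placeholder, not an Azure SDK API.
import java.util.List;

public class PipelineOutline {

    public static void main(String[] args) {
        // 1. Connect and Collect: pull raw data from on-premises and cloud
        //    stores into a central staging location.
        List<String> rawData = connectAndCollect("on-prem-db", "blob-storage");

        // 2. Transform: process the collected data with a compute service
        //    (e.g., HDInsight Hadoop or Spark).
        List<String> transformed = transformWithCompute(rawData);

        // 3. Publish: load the transformed data into the destination store
        //    (e.g., a SQL data warehouse, SQL database, or Cosmos DB).
        publishToTargets(transformed, "sql-data-warehouse");

        // 4. Monitor: watch pipeline runs for success, failure, and latency.
        monitorPipeline("daily-load-pipeline");
    }

    static List<String> connectAndCollect(String... sources) { return List.of(); }

    static List<String> transformWithCompute(List<String> rows) { return rows; }

    static void publishToTargets(List<String> rows, String target) { /* ... */ }

    static void monitorPipeline(String pipelineName) { /* ... */ }
}
```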
