.NET Introduction

The .NET ecosystem is built on a foundation of runtime, libraries, and languages, with C# as the primary language. C# is object-oriented, supports garbage collection, and simplifies asynchronous programming with async/await. The .NET type system supports object orientation, generics, and value types, enabling efficient and safe programming. The runtime features a self-tuning garbage collector and supports both high-level abstraction and low-level control. Reflection allows dynamic programming, and exceptions are the standard error handling mechanism. App stacks like ASP.NET Core are built on top of core libraries and runtime. The SDK and tools, including NuGet for package management, support modern development practices like CI/CD and simple builds using dotnet build.

Components of .NET

  • Runtime – executes application code.
  • Libraries – provide utility functionality.
  • Compiler – compiles C# (and other languages') source code into executable code.
  • SDK and other tools – enable building and monitoring apps with modern workflows.
  • App stacks – such as ASP.NET Core and Windows Forms, enable writing applications.

Reference : https://learn.microsoft.com/en-us/dotnet/core/introduction#components

Enhancements in each .NET Framework version

Features in all versions

  • No pointers required! C# programs typically have no need for direct pointer manipulation.
  • Automatic memory management through garbage collection.
  • C# does not support a delete keyword.
  • Formal syntactic constructs for classes, interfaces, structures, enumerations, and delegates.
  • The C++-like ability to overload operators for a custom type, without the complexity.
  • Support for attribute-based programming.

.NET 2.0 (2005)

  • The ability to build generic types and generic members.
  • Support for anonymous methods.
  • The ability to define a single type across multiple code files (or, if necessary, as an in-memory representation) using the partial keyword.
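As a minimal sketch of these C# 2.0-era features, the following hypothetical Calculator type combines a generic method, an anonymous method, and the partial keyword:

```csharp
using System;
using System.Collections.Generic;

// One half of a partial class; the other half could live in a separate file.
public partial class Calculator
{
    // Generic method: one implementation for any comparable type.
    public static T Max<T>(T a, T b) where T : IComparable<T>
    {
        return a.CompareTo(b) >= 0 ? a : b;
    }
}

// The second half of the same type, merged by the compiler.
public partial class Calculator
{
    public static int SumAll(List<int> values)
    {
        int total = 0;
        // Anonymous method (C# 2.0-era delegate syntax).
        values.ForEach(delegate (int v) { total += v; });
        return total;
    }
}
```

Calculator.Max works for int, string, or any IComparable<T> type, with full compile-time type safety.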

.NET 3.5 (2008)

  • Support for strongly typed queries (e.g., LINQ) used to interact with various forms of data.
  • Support for anonymous types that allow you to model the structure of a type (rather than its behavior) on the fly in code.
  • The ability to extend the functionality of an existing type (without sub classing) using extension methods.
  • Inclusion of a lambda operator (=>), which even further simplifies working with .NET delegate types.
  • A new object initialization syntax, which allows you to set property values at the time of object creation.
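These .NET 3.5 features work well together; the following sketch (StringExtensions and Player are illustrative names) combines an extension method, object initializers, lambdas, LINQ, and an anonymous type:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class StringExtensions
{
    // Extension method: adds Shout() to string without subclassing it.
    public static string Shout(this string s)
    {
        return s.ToUpper() + "!";
    }
}

public class Player
{
    public string Name { get; set; }
    public int Score { get; set; }

    public static List<string> TopNames(int minScore)
    {
        // Object initializer syntax: properties set at creation time.
        var players = new List<Player>
        {
            new Player { Name = "Ann", Score = 90 },
            new Player { Name = "Bob", Score = 70 }
        };

        // LINQ with lambdas (=>) and an anonymous type holding the result shape.
        return players
            .Where(p => p.Score >= minScore)
            .Select(p => new { Loud = p.Name.Shout() })
            .Select(a => a.Loud)
            .ToList();
    }
}
```

Calling Player.TopNames(80) filters the initialized list with a lambda and projects the surviving names through the Shout extension method.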

.NET 4.0 (2010)

  • Support for optional method parameters, as well as named method arguments.
  • Support for dynamic lookup of members at run time via the dynamic keyword. This provides a unified approach to invoking members on the fly, regardless of where the member is implemented (COM, IronRuby, IronPython, or .NET reflection services).
  • Working with generic types is much more intuitive, given that you can easily map generic data to and from general System.Object collections via covariance and contravariance.
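The .NET 4.0 features above can be sketched in one small example; Mailer and its members are illustrative names, not a real API:

```csharp
using System.Collections.Generic;

public static class Mailer
{
    // Optional parameters: callers may omit subject and urgent.
    public static string Compose(string to, string subject = "(no subject)", bool urgent = false)
    {
        return (urgent ? "[URGENT] " : "") + subject + " -> " + to;
    }

    // dynamic: member lookup is deferred until run time, so Length is
    // resolved against the actual runtime type of the argument.
    public static int DynamicLength(object o)
    {
        dynamic d = o;
        return d.Length;
    }

    // Covariance: IEnumerable<string> converts implicitly to IEnumerable<object>.
    public static IEnumerable<object> AsObjects(IEnumerable<string> strings)
    {
        return strings;
    }
}
```

Named arguments let callers supply the optional parameters in any order, for example Mailer.Compose(subject: "Build failed", to: "dev@example.com", urgent: true).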

.NET 4.5 (2012)

  • C# received a pair of new keywords (async and await), which greatly simplify multithreaded and asynchronous programming. If you have worked with previous versions of C#, you might recall that calling methods via secondary threads required a fair amount of cryptic code and the use of various .NET namespaces. Given that C# now supports language keywords that handle this complexity for you, the process of calling methods asynchronously is almost as easy as calling a method in a synchronous manner.
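As a minimal illustration (Downloader and GetLengthAsync are hypothetical names), an async method awaits a task and returns a value while reading like ordinary synchronous code:

```csharp
using System.Threading.Tasks;

public static class Downloader
{
    // async/await: the method yields while the awaited task runs,
    // but the code reads like straight-line synchronous code.
    public static async Task<int> GetLengthAsync(string text)
    {
        await Task.Delay(50);   // stand-in for real I/O-bound work
        return text.Length;
    }
}
```

A caller simply writes int n = await Downloader.GetLengthAsync("hello"); with no explicit thread or callback management.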

.NET 4.6 (2015)

  • Inline initialization for automatic properties as well as support for read-only automatic properties
  • Single-line method implementations using the C# lambda operator
  • Support of “static imports” to provide direct access to the static members of a type
  • A null conditional operator, which helps check for null parameters in a method implementation
  • A new string formatting syntax termed string interpolation
  • The ability to filter exceptions using the new when keyword
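The C# 6 features listed above can be combined in one small sketch (Point and its members are illustrative names):

```csharp
using System;
using static System.Math;   // static import: call Sqrt instead of Math.Sqrt

public class Point
{
    // Auto-property with inline initialization; Y is a read-only auto-property.
    public double X { get; set; } = 3.0;
    public double Y { get; } = 4.0;

    // Single-line (expression-bodied) method using the lambda operator.
    public double Length() => Sqrt(X * X + Y * Y);

    // Null-conditional operator: no NullReferenceException when s is null.
    public static int SafeLength(string s) => s?.Length ?? 0;

    // String interpolation.
    public static string Describe(Point p) => $"({p.X}, {p.Y})";

    // Exception filter with the when keyword.
    public static string Classify(Exception ex)
    {
        try { throw ex; }
        catch (Exception e) when (e.Message == "transient") { return "retry"; }
        catch { return "fail"; }
    }
}
```

Note how the exception filter decides whether the first catch applies without unwinding the stack, and SafeLength returns 0 for a null argument instead of throwing.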

 

Benefits of the .NET Framework

Interoperability with existing code: This is (of course) a good thing. Existing COM software can commingle (i.e., interop) with newer .NET software, and vice versa. As of .NET 4.0 onward, interoperability has been further simplified with the addition of the dynamic keyword.

Support for numerous programming languages: .NET applications can be created using any number of programming languages (C#, Visual Basic, F#, and so on).

A common runtime engine shared by all .NET-aware languages: One aspect of this engine is a well-defined set of types that each .NET-aware language understands.

Language integration: .NET supports cross-language inheritance, cross-language exception handling, and cross language debugging of code. For example, you can define a base class in C# and extend this type in Visual Basic.

A comprehensive base class library: This library provides thousands of predefined types that allow you to build code libraries, simple terminal applications, graphical desktop applications, and enterprise-level websites.

A simplified deployment model: Unlike COM, .NET libraries are not registered into the system registry. Furthermore, the .NET platform allows multiple versions of the same *.dll to exist in harmony on a single machine.

About .NET Framework

.NET Framework is a platform that provides tools and technologies to develop Windows, web, and enterprise applications. It is divided into two components:

  • Common Language Runtime (CLR)
  • .NET Framework Class Library (FCL)


  • Common Language Runtime (CLR)
    The .NET Framework provides a runtime environment called the Common Language Runtime (CLR). It provides an environment to run all .NET programs, and code that runs under the CLR is called managed code. Programmers need not worry about managing memory when their programs run under the CLR, as it provides memory management and thread management. When a program needs memory, the CLR allocates it for the scope and de-allocates it once the scope is completed. Language compilers (e.g., C#, VB.NET, J#) convert the source code to Microsoft Intermediate Language (MSIL), which in turn is converted to native code by the CLR. See the figure below.

[Figure: source code compiled to MSIL, then converted to native code by the CLR]

  • .NET Framework Class Library (FCL)
    This is also called the Base Class Library, and it is common to all types of applications; i.e., the way you access the library classes and methods in VB.NET is the same in C#, and it is common to all other languages in .NET. The following types of applications can make use of the .NET class library:

    1. Windows applications
    2. Console applications
    3. Web applications
    4. XML web services
    5. Windows services

    In short, developers just need to import the BCL in their language code and use its predefined methods and properties to implement common and complex functions, such as reading and writing files, graphics rendering, database interaction, and XML document manipulation.

    Below are a few more concepts that we need to know and understand as part of the .NET Framework.

  • Common Type System (CTS)
    The CTS describes the set of data types that can be used in common across the different .NET languages; i.e., the CTS ensures that objects written in different .NET languages can interact with each other. For programs written in any .NET-compliant language to communicate, their types have to be compatible at a basic level. The common type system supports two general categories of types:
    Value types: Value types directly contain their data, and instances of value types are either allocated on the stack or allocated inline in a structure. Value types can be built-in (implemented by the runtime), user-defined, or enumerations.
    Reference types: Reference types store a reference to the value's memory address and are allocated on the heap. Reference types can be self-describing types, pointer types, or interface types. The type of a reference type can be determined from values of self-describing types. Self-describing types are further split into arrays and class types, where the class types are user-defined classes, boxed value types, and delegates.
  • Common Language Specification (CLS)
    The CLS is a subset of the CTS, and it specifies a set of rules that must be adhered to by all language compilers targeting the CLR. It helps in cross-language inheritance and cross-language debugging.
    The CLS rules describe the minimal and complete set of features required to produce code that can be hosted by the CLR, and they ensure that the products of different compilers will work properly in the .NET environment.
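The value-type versus reference-type distinction in the CTS can be demonstrated with a small sketch (PointValue, PointRef, and CtsDemo are hypothetical names):

```csharp
// Value type: instances are copied on assignment.
public struct PointValue { public int X; }

// Reference type: assignment copies the reference, not the object.
public class PointRef { public int X; }

public static class CtsDemo
{
    public static int MutateValueCopy()
    {
        var a = new PointValue { X = 1 };
        var b = a;      // b is a full, independent copy
        b.X = 99;
        return a.X;     // still 1: a was unaffected
    }

    public static int MutateRefCopy()
    {
        var a = new PointRef { X = 1 };
        var b = a;      // a and b point to the same heap object
        b.X = 99;
        return a.X;     // 99: both variables see the same object
    }
}
```

Mutating the copy leaves the original struct untouched, while mutating through either reference variable changes the single shared object.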

Object Oriented Programming Concepts

Objects are real-world entities such as books, pens, and houses. Object-Oriented Programming is a methodology or paradigm for designing a program using classes and objects. It simplifies software development and maintenance by providing the following concepts:

  • Object
  • Class
  • Abstraction
  • Encapsulation
  • Inheritance
  • Polymorphism

Object
Objects are all around you. If we think about cars, there are different kinds of cars; their colors, body shapes, and so on are not the same, but we call all of them cars. Real-world objects have two characteristics: state and behavior.

Ex: states of a car – body shape, color
behaviors of a car – starting, braking, and accelerating

Software objects are conceptually similar to real-world objects: they too consist of state and related behavior. An object stores its state in fields (variables in some programming languages) and exposes its behavior through methods (functions in some programming languages). Methods operate on an object's internal state and serve as the primary mechanism for object-to-object communication.

Class
In the real world, you can find many individual objects all of the same kind. There may be thousands of other cars in existence, all of the same make and model. Each car was built from the same set of blueprints and therefore contains the same components. In object-oriented terms, we say that your car is an instance of the class of objects known as cars. A class is the blueprint from which individual objects are created.
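As a sketch of the blueprint idea (Car is a hypothetical class), each object created from the class carries its own state while sharing the behavior the class defines:

```csharp
// Car is the blueprint (class); each new Car() is an object built from it,
// with its own state and the behavior the class defines.
public class Car
{
    // State
    public string Color { get; set; }
    public int Speed { get; private set; }

    // Behavior
    public void Accelerate(int amount) { Speed += amount; }
    public void Brake() { Speed = 0; }
}
```

Two instances, say new Car { Color = "red" } and new Car { Color = "blue" }, come from the same blueprint but keep fully independent state.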

 Abstraction
Abstraction (from the Latin abs, meaning away from and trahere, meaning to draw) is the process of taking away or removing characteristics from something in order to reduce it to a set of essential characteristics.

Abstraction is one of three central principles (along with encapsulation and Inheritance). Through the process of abstraction, a programmer hides all but the relevant data about an object in order to reduce complexity and increase efficiency. In the same way that abstraction sometimes works in art, the object that remains is a representation of the original, with unwanted detail omitted. The resulting object itself can be referred to as an abstraction, meaning a named entity made up of selected attributes and behavior specific to a particular usage of the originating entity. Abstraction is related to both encapsulation and data hiding.

Abstract classes and interfaces achieve abstraction in C# programming.

Encapsulation
Encapsulation is the inclusion within a program object of all the resources needed for the object to function – basically, the methods and the data. The object is said to "publish its interfaces." Other objects adhere to these interfaces to use the object without having to be concerned with how the object accomplishes its work. The idea is "don't tell me how you do it; just do it." An object can be thought of as a self-contained atom. The object interface consists of public methods and instantiated data.
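A minimal sketch of encapsulation (BankAccount is an illustrative name): the balance field is private, so callers can change it only through the published methods, which enforce the object's rules:

```csharp
using System;

public class BankAccount
{
    // The state is hidden; callers must go through the public interface below.
    private decimal balance;

    public decimal Balance { get { return balance; } }

    public void Deposit(decimal amount)
    {
        if (amount <= 0) throw new ArgumentException("amount must be positive");
        balance += amount;
    }

    public bool TryWithdraw(decimal amount)
    {
        // The invariant (never overdraw) is enforced here, in one place.
        if (amount <= 0 || amount > balance) return false;
        balance -= amount;
        return true;
    }
}
```

Because no outside code can touch the balance field directly, the class alone guarantees the account can never go negative.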

Inheritance
The concept that when a class of objects is defined, any subclass that is defined can inherit the definitions of one or more general classes. This means for the programmer that an object in a subclass need not carry its own definition of data and methods that are generic to the class (or classes) of which it is a part. This not only speeds up program development; it also ensures an inherent validity to the defined subclass object (what works and is consistent about the class will also work for the subclass).
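A small sketch of inheritance (Vehicle, Motorcycle, and Truck are illustrative names): the subclasses inherit Wheels and Describe from the general class without redefining them:

```csharp
public class Vehicle
{
    public int Wheels { get; protected set; }

    // Defined once in the base class; every subclass inherits it as-is.
    public string Describe()
    {
        return GetType().Name + " with " + Wheels + " wheels";
    }
}

public class Motorcycle : Vehicle
{
    public Motorcycle() { Wheels = 2; }
}

public class Truck : Vehicle
{
    public Truck() { Wheels = 6; }
}
```

Each subclass only states what makes it different; everything generic to vehicles lives, and is maintained, in one place.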

 Polymorphism
Polymorphism (from the Greek meaning “having multiple forms”) is the characteristic of being able to assign a different meaning or usage to something in different contexts – specifically, to allow an entity such as a variable, a function, or an object to have more than one form. There are several different kinds of polymorphism.

There are two kinds of polymorphism in programming: overloading and overriding.

Overloading (Compile Time Polymorphism)
Functions with the same name but different parameters.
The return type is not part of the method signature in C#; only the method name and its parameters are. You cannot, for example, have these two methods:

int DoSomething(int a, int b);
string DoSomething(int a, int b);

Definition of overloading:
The same method name can be used with different signatures.
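A minimal sketch of valid overloads (Printer is an illustrative name):

```csharp
public static class Printer
{
    // Three overloads: same name, different parameter lists.
    // The compiler picks one at compile time based on the arguments.
    public static string Format(int value)    { return "int: " + value; }
    public static string Format(double value) { return "double: " + value; }
    public static string Format(int a, int b) { return "pair: " + a + "," + b; }
}
```

Printer.Format(7), Printer.Format(7.5), and Printer.Format(2, 3) each resolve to a different overload at compile time.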

Overriding (Run Time Polymorphism)
Functions in the derived class with the same name and parameters as in the base class, but with different behavior. (The return type must be the same; otherwise it gives a compile error.)
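A small sketch of overriding (Animal and Dog are illustrative names): the virtual method in the base class is replaced in the derived class, and the call is dispatched at run time:

```csharp
public class Animal
{
    // virtual: subclasses are allowed to replace this behavior.
    public virtual string Speak() { return "..."; }
}

public class Dog : Animal
{
    // override: same name, same parameters, same return type.
    public override string Speak() { return "Woof"; }
}
```

Given Animal a = new Dog(); the call a.Speak() returns "Woof": the method is chosen from the runtime type (Dog), not the declared type (Animal).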

Business Intelligence and Data Warehouse


Business Intelligence is an umbrella term that includes the applications, infrastructure and tools, and best practices that enable access to and analysis of information to improve and optimize decisions and performance. (Gartner’s Definition)

BI Components
The data warehouse (Dimensional Modeling)
The data warehouse is the core of the BI system. A data warehouse is a database built for the purpose of data analysis and reporting, and this purpose changes the design of the database as well. As you know, operational databases are built on normalization standards; a 3NF-designed database for a sales system contains many tables related to each other, so a report on sales information may require more than 10 join conditions, which slows down the response time of the query and report. A data warehouse comes with a design that reduces the response time and increases the performance of queries for reports and analytics.

Extract Transform Load (ETL)
Usually more than one system acts as the source of the data required for the BI system, so there is a requirement for data consolidation: a process that extracts data from the different sources, transforms it into the shape that fits the data warehouse, and finally loads it into the data warehouse. This process is called Extract, Transform, Load (ETL). There are many challenges in the ETL process.

As the definition states, ETL is more than just a data integration phase. Let's explore it with an example: in an operational sales database, you may have dozens of tables that provide sales transactional data. When you design that sales data into your data warehouse, you denormalize it and build one or two tables for it. So, the ETL process should extract data from the sales database and transform it (combine, match, and so on) to fit it into the model of the data warehouse tables.

There are some ETL tools in the market that perform the extract, transform, and load operations. The Microsoft solution for ETL is SQL Server Integration Service (SSIS), which is one of the best ETL tools in the market. SSIS can connect to multiple data sources such as Oracle, DB2, Text Files, XML, Web services, SQL Server, and so on. SSIS also has many built-in transformations to transform the data as required.

Data model – BISM
A data warehouse is designed for the source of analysis and reports, so it works much faster than operational systems for producing reports. However, a DW is not that fast to cover all requirements because it is still a relational database, and databases have many constraints that reduce the response time of a query. The requirement for faster processing and a lower response time on one hand, and aggregated information on another hand causes the creation of another layer in BI systems. This layer, which we call the data model, contains a file-based or memory-based model of the data for producing very quick responses to reports.

Microsoft provides two technologies for the data model: the OLAP cube and the in-memory tabular model. The OLAP cube is a file-based data store that loads data from a data warehouse into a cube model. The cube contains descriptive information as dimensions (for example, customer and product) and cells (for example, facts and measures, such as sales and discount). The following diagram shows a sample OLAP cube:

In the preceding diagram, the illustrated cube has three dimensions: Product, Customer, and Time. Each cell in the cube shows a junction of these three dimensions. For example, if we store the sales amount in each cell, then the green cell shows that Devin paid $23 for a Hat on June 5. Aggregated data can be fetched easily as well within the cube structure; for example, the orange set of cells shows how much Mark paid on June 1 for all products. As you can see, the cube structure makes it easier and faster to access the required information.

Microsoft SQL Server Analysis Services 2012 comes with two different types of modeling: multidimensional and tabular. Multidimensional modeling is based on the OLAP cube and is fitted with measures and dimensions, as you can see in the preceding diagram. The tabular model is based on an in-memory engine for tables, which loads all data rows from tables into memory and responds to queries directly from memory. This is very fast in terms of response time.

The BI semantic model (BISM) provided by Microsoft is a combination of SSAS Tabular and Multidimensional solutions.

 

Data visualization

The frontend of a BI system is data visualization. In other words, data visualization is a part of the BI system that users can see. There are different methods for visualizing information, such as strategic and tactical dashboards, Key Performance Indicators (KPIs), and detailed or consolidated reports. As you probably know, there are many reporting and visualizing tools on the market.

Microsoft has provided a set of visualization tools to cover dashboards, KPIs, scorecards, and reports required in a BI application. PerformancePoint, as part of Microsoft SharePoint, is a dashboard tool that performs best when connected to SSAS Multidimensional OLAP cube. Microsoft’s SQL Server Reporting Services (SSRS) is a great reporting tool for creating detailed and consolidated reports. Excel is also a great slicing and dicing tool especially for power users. There are also components in Excel such as Power View, which are designed to build performance dashboards.

 

Master Data Management

Every organization has a part of its business that is common between different systems. That part of the data in the business can be managed and maintained as master data. For example, an organization may receive customer information from an online web application form or from a retail store’s spreadsheets, or based on a web service provided by other vendors.

Master Data Management (MDM) is the process of maintaining the single version of truth for master data entities through multiple systems. Microsoft’s solution for MDM is Master Data Services (MDS). Master data can be stored in the MDS entities and it can be maintained and changed through the MDS Web UI or Excel UI. Other systems such as CRM, AX, and even DW can be subscribers of the master data entities. Even if one or more systems are able to change the master data, they can write back their changes into MDS through the staging architecture.

Data Quality Services
The quality of data is different in each operational system, especially when we deal with legacy systems or systems that have a high dependence on user input. As the BI system is based on data, the better the quality of the data, the better the output of the BI solution. Because of this, working on data quality is one of the components of a BI system. As a solution to improve the quality of data, Microsoft provides Data Quality Services (DQS). DQS works based on Knowledge Base domains, which means a Knowledge Base can be created for different domains, and the Knowledge Base will be maintained and improved by a data steward as time passes. There are also matching policies that can be used to apply standardization to the data.

Dimensional modeling
To gain an understanding of data warehouse design and dimensional modeling, it's better to learn about the components and terminology of a DW. A DW consists of Fact tables and dimensions. The relationship between a Fact table and its dimensions is based on foreign keys and primary keys (the primary key of the dimension table is referenced in the Fact table as a foreign key).

Fact or measure
Facts are numeric and additive values in the business process. For example, in the sales business, a fact can be a sales amount, discount amount, or quantity of items sold. All of these measures or facts are numeric values, and they are additive. Additive means that adding the values of some records together provides a meaning; for example, adding the sales amount for all records gives the grand total of sales.
Dimension

Dimension tables are tables that contain descriptive information. Descriptive information, for example, can be a customer's name, job title, company, and even geographical information about where the customer lives. Each dimension table contains a list of columns, and the columns of a dimension table are called attributes. Each attribute contains some descriptive information, and attributes that are related to each other are placed in one dimension. For example, the customer dimension would contain the attributes listed earlier.

Each dimension has a primary key, which is called the surrogate key. The surrogate key is usually an auto increment integer value. The primary key of the source system will be stored in the dimension table as the business key.

The Fact table
The Fact table is a table that contains a list of related facts and measures with foreign keys pointing to surrogate keys of the dimension tables. Fact tables usually store a large number of records, and most of the data warehouse space is filled by them (around 80 percent).
Grain
Grain is one of the most important terms used in designing a data warehouse. Grain defines the level of detail that the Fact table stores. For example, you could build a data warehouse for sales in which the grain is the most detailed level of transactions in the retail shop, that is, one record per transaction at a specific date and time for a customer and salesperson. Understanding grain is important because it defines which dimensions are required.

The star schema and snowflake schema
There are two different schemas for creating relationships between facts and dimensions: the snowflake schema and the star schema. In the star schema, the Fact table is at the center as a hub, and dimensions are connected to the fact through single-level relationships. Ideally, there won't be a dimension that relates to the fact through another dimension. The following diagram shows the different schemas:

[Diagram: star schema versus snowflake schema]

Reference:
Microsoft SQL Server 2014 Business Intelligence Development Beginner’s Guide

What is the difference between out and ref?

[sourcecode language="csharp"]
using System;

class Test
{
    // out: the parameter need not be initialized by the caller,
    // but the method must assign it before returning.
    public void outSum(out int result, int a, int b)
    {
        result = a + b;
    }

    // ref: the argument must be initialized by the caller,
    // and the method may read as well as modify it.
    public void refSum(ref int result, int a, int b)
    {
        result += a + b;
    }
}

class Program
{
    static void Main(string[] args)
    {
        var test = new Test();

        int _result;                     // no initialization required for out
        test.outSum(out _result, 5, 5);
        Console.WriteLine(_result);      // 10

        int _result2 = 0;                // must be initialized before ref
        test.refSum(ref _result2, 6, 6);
        Console.WriteLine(_result2);     // 12

        Console.ReadKey();
    }
}
[/sourcecode]


Ref vs. Out

  • A ref argument must be initialized before it is passed to the called method; an out argument need not be initialized before it is passed.
  • The called method is not required to assign a ref parameter before returning; a called method is required to assign an out parameter before returning.
  • Passing a parameter by ref is useful if the called method also needs to read and modify the passed-in value; declaring an out parameter is useful if multiple values need to be returned from a method.
  • ref and out are treated the same at run time but differently at compile time.
  • Properties are not variables, therefore they cannot be passed as ref or out parameters.