IIS 7.0


Introduction

Visual Studio comes with an inbuilt web server. No doubt the inbuilt web server comes in handy during development. Ultimately, however, your web site needs to sit inside Internet Information Services (IIS). If you are an ASP.NET developer you are probably familiar with IIS6. The newer generations of Windows, namely Windows Vista and Windows Server 2008, come with IIS7. The new version of IIS differs from earlier versions in many areas. In fact, the entire architecture of IIS has been revamped for the better. In this article I am going to give you a jump start on IIS7. I will confine myself to the features that are most commonly needed by ASP.NET developers. If you wish to deploy your websites on IIS7, this article should give you a good start.

New Architecture of IIS7

As I mentioned earlier, IIS7 has been significantly revamped compared with its previous versions. The most significant areas of improvement (for developers) are the modular architecture, the IIS user interface, the request processing pipeline and ASP.NET integration. Let's see each of these improvements in brief.

Modular Architecture

The new architecture introduced in IIS7 is modular in nature. Individual features of IIS are organized into functionally related modules. This allows administrators to install only the required features, resulting in a smaller footprint for the web server. Additionally, they can install patches and upgrades related to the installed components only. These modules can be turned on or off using the "Windows Features" dialog of Windows Vista.

Request Processing Pipeline

In earlier versions of IIS there were essentially two request pipelines: one used by IIS and one used by ASP.NET. Request authentication, execution of ISAPI extensions and filters, and so on happened at the IIS level first; only then did the request reach ASP.NET, which ran its own authentication and its own HTTP handlers and modules. As you might have guessed, there was some duplication of work and responsibilities. IIS7, on the other hand, provides an integrated request processing pipeline that combines IIS and ASP.NET processing into a single pipeline.

ASP.NET integration

With ASP.NET 2.0, Microsoft added the ASP.NET tab to the IIS application property dialog. Taking this further, IIS7 adds a lot more integration that makes the administrator's job easier.

IIS User Interface

The IIS user interface has been greatly redesigned for better organization. The following screen shot shows the new user interface of IIS Manager.

Now that you have some idea of what IIS7 has to offer, let's see how some common tasks can be performed. I am going to use Windows Vista for all the discussion below. The concepts remain the same for IIS under Windows Server 2008 also.

IIS Manager

The IIS manager can be accessed from Control Panel > System and Maintenance > Administrative Tools > Internet Information Services Manager.

As shown in the figure above, the IIS Manager user interface consists of three panes: the left-hand pane is Connections, the middle pane is the Workspace and the right-hand pane is Actions.

The Connections pane lists application pools and websites. The Workspace pane has two tabs at the bottom, namely Features View and Content View. The Features View allows you to work with the settings of the item selected in the Connections pane, whereas the Content View displays all the child nodes (content) of the selected item. The following figure shows these two views for the "Default Web Site".

Working with Application Pools

An application pool is a group of IIS applications that are isolated from other application pools. Each application pool runs in its own worker process. Any problem with that process affects the applications residing in it, and not the rest of the applications. You can configure application pools individually.

In order to create a new application pool, select "Application Pools" in the Connections pane, then click "Add Application Pool" in the Actions pane. This will open a dialog as shown below:

Specify a name for the new pool. Select the .NET Framework version that all the applications in the pool will use. Also select the pipeline mode. There are two pipeline modes, viz. integrated and classic. The integrated mode uses the new integrated request processing model, whereas the classic mode uses the older request processing model. Click OK to create the application pool.

Your new application pool will now be displayed in the Workspace pane. To configure the application pool, click the "Advanced Settings" option in the Actions pane. The following figure shows many of the configurable properties of an application pool.

Creating Websites

One good feature of IIS7 under Vista is that it allows you to create multiple web sites. This feature was missing on Windows XP and Windows 2000 Professional; server editions of Windows obviously don't have such a limitation. To create a new web site, select the Web Sites node in the Connections pane and then click "Add Web Site" in the Actions pane. This opens a dialog as shown below:

Here, you can specify properties of the new web site including its application pool and physical location.

Creating IIS Applications

Creating an IIS application or a virtual directory is quick and simple. Just right-click on the web site and choose either "Add Application" or "Add Virtual Directory" to open the respective dialog (see below).

An existing virtual directory can be marked as an IIS application by right-clicking on it and selecting "Convert to Application".

Once you create a website or an IIS application, you can then set several ASP.NET related configuration properties via Workspace pane.

OK, that's it for now. In the next part I will discuss the hierarchical configuration used by IIS and feature delegation.


Using the ASP.NET 2.0 ReportViewer in Local Mode


Introduction

There is a good amount of material on the net about "SQL Reporting Services in Server Mode", but it took me a while to research using "Local Mode", especially when parameters are involved.

The reason to use "Local Mode" instead of "Server Mode" is that in "Server Mode", the client makes a report request to the server; the server generates the report and then sends it to the client. While this is more secure, a large report will degrade performance due to transit time from server to browser. In "Local Mode", reports are generated at the client, and no connection to the SQL Server Reporting Services server is needed. Large reports will not increase the wait time.

So here is an article on how to generate reports using the ASP.NET 2.0 ReportViewer web server control via Local Mode with a parameterized stored procedure. I am using ASP.NET 2.0, Visual Studio 2005, and SQL Server 2005 with Application Block. If you are not using Microsoft Application Block, just call the stored procedure via the SQL Command object without using the SQL Helper class in the example.

Using the Northwind database, our example will prompt the user for a category from a dropdown list and display all the products under the selected category.

Step 1: Create a parameterized stored procedure

ALTER PROCEDURE ShowProductByCategory (@CategoryName nvarchar(15))
AS
SELECT Categories.CategoryName, Products.ProductName,
Products.UnitPrice, Products.UnitsInStock
FROM Categories INNER JOIN Products ON
Categories.CategoryID = Products.CategoryID
WHERE CategoryName=@CategoryName
RETURN

Step 2: Create a DataTable in a typed DataSet using the DataSet Designer

Under Solution Explorer, right-click on the App_Code folder. Select "Add New Item". Select "DataSet". Name your dataset, e.g., DataSetProducts.xsd, and click Add. The TableAdapter Configuration Wizard should appear automatically; if not, right-click anywhere on the DataSet Designer screen and select "Add" from the context menu, then select "TableAdapter" to bring up the wizard. Follow the wizard to create your data table. I chose "Use existing stored procedures" as the command type and specified "ShowProductByCategory" as the Select command. I also highlighted "CategoryName" as the Select procedure parameter.

The results from the stored procedure created in step 1 will eventually be placed into this data table created in step 2 (Fig. 1). Report data is provided through a data table.

Fig. 1 DataSetProducts.xsd contains a DataTable to be used as a report data source.

Step 3: Create a report definition

Under Solution Explorer, right-click and select "Add New Item". Select the "Report" template. I will use the default name Report.rdlc in this example. Click "Add" to add Report.rdlc to your project. "rdl" stands for Report Definition Language, and the "c" stands for Client. Hence, the extension .rdl represents a server report, while the extension .rdlc represents a local report.

Drag a "Table" from the Toolbox onto the report designer screen (Fig. 2). The Toolbox display here is specific to the report template: it shows controls to be used in a report as opposed to controls to be used in a web form. The "Table" has three bands: the header, detail, and footer bands.

A “Table” is a data region. A data region is used to display data-bound report items from underlying datasets. Although a report can have multiple data regions, each data region can display data from only one DataSet. Therefore, use a stored procedure to link multiple tables into a single DataSet to feed the report.

Fig. 2 Toolbox contains controls specific to the report template.

Open up the "Website Data Sources" window (Fig. 3). Locate the "DataSetProducts" DataSet (created in Step 2). Expand it to see the columns in the DataTable "ShowProductByCategory". The table is named "ShowProductByCategory" because we chose "Use existing stored procedures" in the TableAdapter Configuration Wizard, and our procedure name is "ShowProductByCategory".

Drag the column "ProductName" from the "Website Data Sources" window and drop it in the Detail row (middle row). Drag "UnitPrice" into the second column of the middle row and "UnitsInStock" into the last column. The header is automatically displayed. You can right-click on any field in the detail row (e.g., right-click on "Unit Price") to bring up the context menu. Select Properties from the context menu, then select the Format tab to format "Unit Price" and "Units In Stock" accordingly.

Fig. 3 The Website Data Sources window shows the typed datasets in your app and their columns.

Step 4: Drag a ReportViewer web server control onto an .aspx form

Drag a DropDownList control onto a new web form (Fig. 4). Use the "Choose Data Source" option from the "DropDownList Tasks" menu to bind the CategoryName field from the Categories table. Remember to enable AutoPostBack. Users can then make their selection as an input to the stored procedure. While I am using a DropDownList in this example, you can use textboxes and other controls to prompt users for additional input.

Drag a ReportViewer web server control onto the web form. Set its Visible property to false. Also notice that the ReportViewer web server control in ASP.NET 2.0 provides exporting capability: you can select between Excel and PDF formats. However, I find that what you see on screen is not always what you get from the printer; you will have to experiment with the output format.

Fig. 4 Set this web page as the StartUp page.

Next, bring up the smart tag of the ReportViewer control (Fig. 5). Select "Report.rdlc" in the "Choose Report" dropdown list. "Report.rdlc" was created in Step 3. Local reports have the extension .rdlc; server reports have the extension .rdl.

Fig. 5 Associate the report definition file (.rdlc) to the ReportViewer control

Step 5: Write source code for the “Run Report” button to generate the report based on user selections

Don’t forget to include the “Microsoft.Reporting.WebForms” namespace in your code-behind file.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using Microsoft.ApplicationBlocks.Data;
using Microsoft.Reporting.WebForms;

public partial class ReportViewerLocalMode : System.Web.UI.Page
{
    public string thisConnectionString =
        ConfigurationManager.ConnectionStrings[
            "NorthwindConnectionString"].ConnectionString;

    /* If you have multiple input parameters, size the array
       accordingly, e.g. new SqlParameter[4]. */
    public SqlParameter[] SearchValue = new SqlParameter[1];

    protected void RunReportButton_Click(object sender, EventArgs e)
    {
        // ReportViewer1.Visible is set to false at design time
        ReportViewer1.Visible = true;

        SqlConnection thisConnection = new SqlConnection(thisConnectionString);
        System.Data.DataSet thisDataSet = new System.Data.DataSet();
        SearchValue[0] = new SqlParameter("@CategoryName",
            DropDownList1.SelectedValue);

        /* Put the stored procedure result into a dataset */
        thisDataSet = SqlHelper.ExecuteDataset(thisConnection,
            "ShowProductByCategory", SearchValue);

        /* Or, if you only have one input parameter:
           thisDataSet = SqlHelper.ExecuteDataset(thisConnection,
               "ShowProductByCategory", DropDownList1.SelectedValue); */

        /* Associate thisDataSet (now loaded with the stored
           procedure result) with the ReportViewer data source */
        ReportDataSource datasource = new
            ReportDataSource("DataSetProducts_ShowProductByCategory",
            thisDataSet.Tables[0]);

        ReportViewer1.LocalReport.DataSources.Clear();
        ReportViewer1.LocalReport.DataSources.Add(datasource);

        if (thisDataSet.Tables[0].Rows.Count == 0)
        {
            lblMessage.Text = "Sorry, no products under this category!";
        }

        ReportViewer1.LocalReport.Refresh();
    }
}

Step 6: Build and Run the Report

Press F5 to run the .aspx. Click on the “Run Report” button to see the list of products based on the selected category from the dropdown list (Fig. 6).

Fig. 6 Click on the “Run Report” button to generate a local report

Be sure to add a reference to the ReportViewer assembly in your web app, and note that the ReportViewer web server control registers an HTTP handler in the web.config file. Your web.config file should contain the following entry:

<httpHandlers>
  <add path="Reserved.ReportViewerWebControl.axd" verb="*"
    type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=8.0.0.0, Culture=neutral, PublicKeyToken=?????????????"
    validate="false" />
</httpHandlers>

When you use the Visual Studio 2005 ReportViewer web server control in your website, you will need to copy "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\BootStrapper\Packages\ReportViewer\ReportViewer.exe" to your server and run it before you deploy the web pages that contain the ReportViewer control.

The real reason SELECT * queries are bad: index coverage

Are SELECT * queries bad? Sure, everyone knows that. But why?

It's returning too much data, right?

That's the common answer, but I don't think it's the right one. If you're working with a reasonably normalized database, the actual difference in network traffic is pretty small.

Let's take a look at a sample. The following two queries select 326 rows from the TransactionHistoryArchive table in the AdventureWorks database (which has a total of 89K rows). The first uses a SELECT * query, the second selects a specific column:

SELECT * FROM Production.TransactionHistoryArchive
WHERE ReferenceOrderID < 100

SELECT ReferenceOrderLineID FROM Production.TransactionHistoryArchive
WHERE ReferenceOrderID < 100

In this case, the difference in network traffic is only 15K, roughly a 10% difference (180K vs. 165K). It's worth fixing, but not a huge difference.

SELECT * makes the Table / Index Scan Monster come

Often, the bigger problem with SELECT * is its effect on the execution plan. While SQL Server primarily uses indexes to look up your data, if an index contains all the columns a query requests, it doesn't even need to look in the table at all. That concept is known as index coverage. In the above example, the first query results in a Clustered Index Scan, whereas the second query uses a much more efficient Index Seek. In this case, the Index Seek is one hundred times more efficient than the Clustered Index Scan.

(Figure: the execution plans for the two queries, showing a Clustered Index Scan versus an Index Seek)

Unless you've indexed every single column in a table (which is almost never a good idea), a SELECT * query can't take advantage of index coverage, and you're likely to get (extremely inefficient) scan operations.

If you query just the columns you'll actually be using, it's more likely they'll be covered by indexes. And I think that's the biggest performance advantage of avoiding SELECT * queries.
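
To see index coverage in action, you can build an index whose key and included columns cover exactly what a query requests. The sketch below is illustrative (the index name and column choices are my own, and the INCLUDE clause requires SQL Server 2005 or later):

-- Sketch: a covering index for queries that filter on ReferenceOrderID
-- and return only Quantity and ActualCost.
CREATE NONCLUSTERED INDEX IX_TransactionHistoryArchive_Covering
ON Production.TransactionHistoryArchive (ReferenceOrderID)
INCLUDE (Quantity, ActualCost)

-- This query can now be answered entirely from the index (an Index Seek):
SELECT Quantity, ActualCost
FROM Production.TransactionHistoryArchive
WHERE ReferenceOrderID < 100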

The Stability Aspect

SELECT * queries are also bad from an application maintenance point of view, since they introduce another outside variable to your code. If a column is added to a table, the results returned to your application will change in structure. Well-programmed applications should refer to columns by name and shouldn't be affected, but well-programmed applications should also minimize the ways in which they are vulnerable to external changes.
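
The same fragility shows up inside T-SQL itself. As a hedged sketch (the table and column names are hypothetical), an INSERT that relies on SELECT * breaks, or silently misaligns, as soon as a column is added, while an explicit column list keeps working:

-- Fragile: depends on the column count and ordinal order of dbo.Archive.
INSERT INTO dbo.Archive
SELECT * FROM dbo.Orders

-- Stable: explicit column lists survive added columns.
INSERT INTO dbo.Archive (OrderID, CustomerID, OrderDate)
SELECT OrderID, CustomerID, OrderDate FROM dbo.Orders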

Shameless Plug: I go into this (and a lot other important performance tips) in more detail in a soon-to-be-released book for SitePoint.



SQL Server Architecture

Microsoft® SQL Server data is stored in databases. The data in a database is organized into the logical components visible to users. A database is also physically implemented as two or more files on disk.

When using a database, you work primarily with the logical components such as tables, views, procedures, and users. The physical implementation of files is largely transparent. Typically, only the database administrator needs to work with the physical implementation.

Each instance of SQL Server has four system databases (master, model, tempdb, and msdb) and one or more user databases. Some organizations have only one user database, containing all the data for their organization. Some organizations have different databases for each group in their organization, and sometimes a database used by a single application. For example, an organization could have one database for sales, one for payroll, one for a document management application, and so on. Sometimes an application uses only one database; other applications may access several databases.

It is not necessary to run multiple copies of the SQL Server database engine to allow multiple users to access the databases on a server. An instance of SQL Server is capable of handling thousands of users working in multiple databases at the same time. Each instance of SQL Server makes all databases in the instance available to all users that connect to the instance, subject to the defined security permissions.

When connecting to an instance of SQL Server, your connection is associated with a particular database on the server. This database is called the current database. You are usually connected to a database defined as your default database by the system administrator.
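
A minimal sketch of changing a login's default database (SQL Server 2000 syntax; the login name is illustrative):

EXEC sp_defaultdb @loginame = N'Akadia', @defdb = N'Northwind'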

SQL Server allows you to detach databases from an instance of SQL Server, then reattach them to another instance, or even attach the database back to the same instance. If you have a SQL Server database file, you can tell SQL Server when you connect to attach that database file with a specific database name.
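
A minimal sketch of a detach/attach round trip, assuming the file names shown later in this article:

USE master
EXEC sp_detach_db @dbname = N'MyDb'
GO
EXEC sp_attach_db @dbname = N'MyDb',
    @filename1 = N'E:\MsSQLServer\Data\MyDb_System.MDF',
    @filename2 = N'E:\MsSQLServer\Data\MyDb_Log_1.LDF'
GO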



Physical Database Files and Filegroups
Microsoft® SQL Server maps a database over a set of operating-system files. Data and log information are never mixed on the same file, and individual files are used only by one database.

SQL Server databases have three types of files:

Primary data files
The primary data file is the starting point of the database and points to the other files in the database. Every database has one primary data file. The recommended file name extension for primary
data files is .mdf.

Secondary data files
Secondary data files comprise all of the data files other than the primary data file. Some databases may not have any secondary data files, while others have multiple secondary data files. The recommended file name extension for secondary data files is .ndf.

Log files
Log files hold all of the log information used to recover the database. There must be at least one log file for each database, although there can be more than one. The recommended file name extension for log files is .ldf.


SQL Server does not enforce the .mdf, .ndf, and .ldf file name extensions, but these extensions are recommended to help identify the use of the file.

In SQL Server, the locations of all the files in a database are recorded in both the master database and the primary file for the database. Most of the time the database engine uses the file location information from the master database. For some operations, however, the database engine uses the file location information from the primary file to initialize the file location entries in the master database.
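
You can see the file location information that the master database holds by querying sysdatabases, which records each database's name and primary file:

SELECT name, filename FROM master.dbo.sysdatabases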

SQL Server files have two names:

logical_file_name is a name used to refer to the file in all Transact-SQL statements.
The logical file name must conform to the rules for SQL Server identifiers and must be unique to the database.

os_file_name is the name of the physical file.
It must follow the rules for Microsoft Windows NT®, Windows® Me, and Windows® 98 file names.


SQL Server data and log files can be placed on either FAT or NTFS file systems, but cannot be placed on compressed file systems.

Use the following SQL statement to list the logical and physical file names of a database (the output below shows examples from a default instance of SQL Server):

USE MyDb
SELECT SUBSTRING(name,1,20) Name,
SUBSTRING(filename,1,50) Filename
FROM dbo.sysfiles

Name Filename
-------------------- ------------------------------------
MyDb_System E:\MsSQLServer\Data\MyDb_System.MDF
MyDb_Log_1 E:\MsSQLServer\Data\MyDb_Log_1.LDF
MyDb_Data_1 E:\MsSQLServer\Data\MyDb_Data_1.NDF
MyDb_Index_1 E:\MsSQLServer\Data\MyDb_Index_1.NDF

If you have a backup and want to know the logical and physical file names it contains, use RESTORE FILELISTONLY:

RESTORE FILELISTONLY FROM
DISK = N'E:\MsSQLServer\Backup\MyDb.bak'
WITH FILE = 7

LogicalName PhysicalName
----------------------------------------------------------------
MyDb D:\sql2005\MSSQL.1\MSSQL\Data\MyDb.mdf
MyDb_log C:\DATA\MyDb_log.ldf

Recovery Model
SQL Server offers three recovery models for each database: full recovery, simple recovery and bulk-logged recovery. The recovery models determine how much data loss is acceptable in case of a failure and what types of backup and restore functions are allowed.

Most people select either full or simple for all of their databases and just stick with the same option across the board. In most cases, selecting the full recovery model is the smartest option, because it gives you the greatest flexibility and minimizes data loss in the event a restore has to take place.

Although using the full recovery model makes logical sense, there are reasons why the other two options are available. We will further define why there are three options and when you might want to use the different options to protect your databases. First, let's take a closer look at each model.

Simple

The simple recovery model allows you to recover data only to the most recent full database or differential backup. Transaction log backups are not available because the contents of the transaction log are truncated each time a checkpoint is issued for the database.

Full

The full recovery model uses database backups and transaction log backups to provide complete protection against failure. Along with being able to restore a full or differential backup, you can recover the database to the point of failure or to a specific point in time. All operations, including bulk operations such as SELECT INTO, CREATE INDEX and bulk-loading data, are fully logged and recoverable.

Bulk-Logged

The bulk-logged recovery model provides protection against failure combined with the best performance. In order to get better performance, the following operations are minimally logged and not fully recoverable: SELECT INTO, bulk-load operations, CREATE INDEX as well as text and image operations. Under the bulk-logged recovery model, a damaged data file can result in having to redo work manually based on the operations that are not fully logged. In addition, the bulk-logged recovery model only allows the database to be recovered to the end of a transaction log backup when the log backup contains bulk changes.
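
You can check which model a database currently uses with DATABASEPROPERTYEX (the database name is illustrative):

SELECT DATABASEPROPERTYEX(N'MyDb', 'Recovery')
--> FULL, BULK_LOGGED, or SIMPLE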

So once again, based on the information above it looks like the Full Recovery model is the way to go. Given the flexibility of the full recovery model, why would you ever select any other model? The following factors will help you determine when another model could work for you:

Select Simple if:

Your data is not critical.
Losing all transactions since the last full or differential backup is not an issue.
Data is derived from other data sources and is easily recreated.
Data is static and does not change often.
Select Bulk-Logged if:

Data is critical, but logging large data loads bogs down the system.
Most bulk operations are done off hours and do not interfere
with normal transaction processing.
You can live with recovering only to the end of a log backup (a log backup that contains bulk changes cannot be restored to a point in time).
Select Full if:

Data is critical and no data can be lost.
You always need the ability to do a point-in-time recovery.
Bulk-logged activities are intermixed with normal transaction processing.
You are using replication and need the ability to resynchronize all
databases involved in replication to a specific point in time.
Switching recovery models

For some databases, you may need to use a combination of these recovery models. Let's say you have a critical system and you cannot afford to lose any data during daily operations; but during off hours there are maintenance tasks and data loads that use way too much transaction log space to log every transaction. In a case like this, you may want to switch recovery models prior to your maintenance tasks. This can be automated using T-SQL in the job that runs your maintenance or data load tasks. After the maintenance task is completed, the recovery model can be switched back again.

Switching between full and bulk-logged models is probably the best scenario for changing recovery models and also the safest and easiest. You can switch from any recovery model to another recovery model, but prior to or after the switch, you may need to issue additional transaction log or full backups to ensure you have a complete backup set.

ALTER DATABASE Northwind SET RECOVERY FULL
GO
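
As a minimal sketch of the scenario above (the database name and backup paths are illustrative), a job might bracket its nightly load like this:

BACKUP LOG Northwind TO DISK = N'E:\Backup\Northwind_log1.bak'
GO
ALTER DATABASE Northwind SET RECOVERY BULK_LOGGED
GO
-- ... run the bulk load / maintenance tasks here ...
ALTER DATABASE Northwind SET RECOVERY FULL
GO
-- Capture the bulk-logged changes so the log backup chain stays intact.
BACKUP LOG Northwind TO DISK = N'E:\Backup\Northwind_log2.bak'
GO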

Transaction Log Architecture
Every Microsoft® SQL Server™ 2000 database has a transaction log that records all transactions and the database modifications made by each transaction. This record of transactions and their modifications supports three operations:

Recovery of individual transactions
If an application issues a ROLLBACK statement, or if SQL Server detects an error such as the loss of communication with a client, the log records are used to roll back the modifications made by an incomplete transaction.

Recovery of all incomplete transactions when SQL Server is started.
If a server running SQL Server fails, the databases may be left in a state where some modifications were never written from the buffer cache to the data files, and there may be some modifications from incomplete transactions in the data files. When a copy of SQL Server is started, it runs a recovery of each database. Every modification recorded in the log which may not have been written to the data files is rolled forward. Every incomplete transaction found in the transaction log is then rolled back to ensure the integrity of the database is preserved.

Rolling a restored database forward to the point of failure
After the loss of a database, as is possible if a hard drive fails on a server that does not have RAID drives, you can restore the database to the point of failure. You first restore the last full or differential database backup, and then restore the sequence of transaction log backups to the point of failure. As you restore each log backup, SQL Server reapplies all the modifications recorded in the log to roll forward all the transactions. When the last log backup is restored, SQL Server then uses the log information to roll back all transactions that were not complete at that point.


Truncating the Transaction Log

If log records were never deleted from the transaction log, the logical log would grow until it filled all the available space on the disks holding the physical log files. At some point in time, old log records no longer necessary for recovering or restoring a database must be deleted to make way for new log records. The process of deleting these log records to reduce the size of the logical log is called truncating the log.

The active portion of the transaction log can never be truncated. The active portion of the log is the part needed to recover the database at any time, so it must contain the log images needed to roll back all incomplete transactions. It must always be present in the database in case the server fails, because it will be required to recover the database when the server is restarted. The record at the start of the active portion of the log is identified by the minimum recovery log sequence number (MinLSN).

The recovery model chosen for a database determines how much of the transaction log in front of the active portion must be retained in the database. Although the log records in front of the MinLSN play no role in recovering active transactions, they are required to roll forward modifications when using log backups to restore a database to the point of failure. If you lose a database for some reason, you can recover the data by restoring the last database backup, and then restoring every log backup since the database backup. This means that the sequence of log backups must contain every log record that was written since the database backup. When you are maintaining a sequence of transaction log backups, no log record can be truncated until after it has been written to a log backup.

The log records before the MinLSN are only needed to maintain a sequence of transaction log backups.

In the simple recovery model, a sequence of transaction logs is not being maintained. All log records before the MinLSN can be truncated at any time, except while a BACKUP statement is being processed. NO_LOG and TRUNCATE_ONLY are the only BACKUP LOG options that are valid for a database that is using the simple recovery model.

In the full and bulk-logged recovery models, a sequence of transaction log backups is being maintained. The part of the logical log before the MinLSN cannot be truncated until those log records have been copied to a log backup.

Log truncation occurs at these points

At the completion of any BACKUP LOG statement.

Every time a checkpoint is processed, provided the database is using the simple recovery model. This includes both explicit checkpoints resulting from a CHECKPOINT statement and implicit checkpoints generated by the system. The exception is that the log is not truncated if the checkpoint occurs when a BACKUP statement is still active.


Transaction logs are divided internally into sections called virtual log files. Virtual log files are the unit of truncation. When a transaction log is truncated, all log records before the start of the virtual log file containing the MinLSN are deleted.

The size of a transaction log is therefore controlled in one of these ways:

When a log backup sequence is being maintained, schedule BACKUP LOG statements to occur at intervals that will keep the transaction log from growing past the desired size (see the sketch after this list).

When a log backup sequence is not maintained, specify the simple recovery model.
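
For the first approach, the sketch below shows the kind of statement such a schedule would run (the database name and path are illustrative):

-- Back up the log on a schedule so inactive log records can be truncated and reused.
BACKUP LOG MyDb TO DISK = N'E:\MsSQLServer\Backup\MyDb_log.bak'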


This illustration shows a transaction log that has four virtual logs. The log has not been truncated since the database was created. The logical log starts at the beginning of the first virtual log, and the part of virtual log 4 beyond the end of the logical log has never been used.



This illustration shows how the log looks after truncation. The rows before the start of the virtual log containing the MinLSN record have been truncated.



Truncation does not reduce the size of the physical log file; it reduces the size of the logical log.

Shrinking the Transaction Log

The size of a log file is physically reduced when:

A DBCC SHRINKDATABASE statement is executed.
A DBCC SHRINKFILE statement referencing a log file is executed.
An autoshrink operation occurs.
Shrinking a log is dependent on first truncating the log. Log truncation does not reduce the size of a physical log file, it reduces the size of the logical log and marks as inactive the virtual logs that do not hold any part of the logical log. A log shrink operation removes enough inactive virtual logs to reduce the log file to the requested size.

The unit of size reduction is a virtual log. For example, if you have a 600 MB log file that has been divided into six 100 MB virtual logs, the size of the log file can only be reduced in 100 MB increments. The file size can be reduced to sizes such as 500 MB or 400 MB, but it cannot be reduced to sizes such as 433 MB or 525 MB.

Virtual logs that hold part of the logical log cannot be freed. If all the virtual logs in a log file hold parts of the logical log, the file cannot shrink until a truncation marks one or more of the virtual logs at the end of the physical log as inactive.

When any file is shrunk, the space freed must come from the end of the file. When a transaction log file is shrunk, enough virtual logs from the end of the file are freed to reduce the log to the size requested by the user. The target_size specified by the user is rounded to the next highest virtual log boundary. For example, if a user specifies a target_size of 325 MB for our sample 600 MB file with 100 MB virtual log files, the last two virtual log files are removed and the new file size is 400 MB.

In SQL Server, a DBCC SHRINKDATABASE or DBCC SHRINKFILE operation attempts to shrink the physical log file to the requested size (subject to rounding) immediately:

If no part of the logical log is in the virtual logs beyond the target_size mark, the virtual logs after the target_size mark are freed and the successful DBCC statement completes with no messages.

If part of the logical log is in the virtual logs beyond the target_size mark, SQL Server frees as much space as possible and issues an informational message. The message tells you what actions you need to perform to get the logical log out of the virtual logs at the end of the file. After you perform this action, you can then reissue the DBCC statement to free the remaining space.


For example, assume that a 600 MB log file with six virtual logs has a logical log starting in virtual log 3 and ending in virtual log 4, when you execute a DBCC SHRINKFILE statement with a target_size of 275 MB:



Virtual logs 5 and 6 are freed immediately because they hold no portion of the logical log. To meet the specified target_size, however, virtual log 4 should also be freed, but it cannot be, because it holds the end portion of the logical log. After freeing virtual logs 5 and 6, SQL Server fills the remaining part of virtual log 4 with dummy records. This forces the end of the logical log to virtual log 1. In most systems, all transactions starting in virtual log 4 will be committed within seconds, meaning that the entire active portion of the log moves to virtual log 1, and the log file now looks like this:



The DBCC SHRINKFILE statement also issues an informational message that it could not free all the space requested, and indicates that you can execute a BACKUP LOG statement to make it possible to free the remaining space. Once the active portion of the log moves to virtual log 1, a BACKUP LOG statement will truncate the entire logical log that is in virtual log 4:



Because virtual log 4 no longer holds any portion of the logical log, if you now execute the same DBCC SHRINKFILE statement with a target_size of 275 MB, virtual log 4 will be freed and the size of the physical log file reduced to the size requested.

Example Shrinking the Transaction Log

Here is an example of how both steps can be performed:

Database is in FULL Recovery Mode

# For this example we switch to FULL Mode
USE master
ALTER DATABASE MyDb SET RECOVERY FULL;
GO
The command(s) completed successfully.

# Add logical Devices for the Backup (The directories must exist!)
EXEC sp_addumpdevice 'disk', 'MyDb_dat',
'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\MyDb_dat.dat'
GO
(1 row(s) affected)
'Disk' device added.

EXEC sp_addumpdevice 'disk', 'MyDb_log',
'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\MyDb_log.dat'
GO
(1 row(s) affected)
'Disk' device added.

# Create a Backup before Truncating / Shrinking
BACKUP DATABASE MyDb TO MyDb_dat
GO
Processed 26392 pages for database 'MyDb', file 'MigrationBasisplus_Data' on file 9.
Processed 1 pages for database 'MyDb', file 'MigrationBasisplus_Log' on file 9.
BACKUP DATABASE successfully processed 26393 pages in 9.756 seconds (22.161 MB/sec).

BACKUP LOG MyDb TO MyDb_log
GO
Processed 1 pages for database 'MyDb', file 'MigrationBasisplus_Log' on file 5.
BACKUP LOG successfully processed 1 pages in 0.065 seconds (0.039 MB/sec).

# Truncate the Transaction Log
BACKUP LOG MyDb WITH TRUNCATE_ONLY
GO
The command(s) completed successfully.

# Drop logical Devices
sp_dropdevice 'MyDb_dat'
GO
Device dropped.

sp_dropdevice 'MyDb_log'
GO
Device dropped.

# Get the Name of the Transaction Log
USE MyDb
SELECT name FROM dbo.sysfiles
GO

# Shrink the physical Size of the Transaction Log to 20MB
USE MyDb
DBCC SHRINKFILE (MigrationBasisplus_Log, 20)
GO

# Keep the transaction log from growing unexpectedly
USE [master]
GO
ALTER DATABASE [MyDb] MODIFY FILE
(NAME = N'MyDb_Log_1', SIZE = 772096KB,
MAXSIZE = 921600KB , FILEGROWTH = 10240KB)
GO

Database is in SIMPLE Recovery Mode

# For this example we switch to SIMPLE Mode
USE master
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
GO
The command(s) completed successfully.

# Add logical Device for the Backup (The directories must exist!)
EXEC sp_addumpdevice 'disk', 'MyDb_dat',
'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\MyDb_dat.dat'
GO
(1 row(s) affected)
'Disk' device added.

# Create a Backup before Truncating / Shrinking
BACKUP DATABASE MyDb TO MyDb_dat
GO
Processed 26392 pages for database 'MyDb', file 'MigrationBasisplus_Data' on file 9.
Processed 1 pages for database 'MyDb', file 'MigrationBasisplus_Log' on file 9.
BACKUP DATABASE successfully processed 26393 pages in 9.756 seconds (22.161 MB/sec).

# Truncate the Transaction Log
BACKUP LOG MyDb WITH TRUNCATE_ONLY
GO
The command(s) completed successfully.

# Drop logical Device
sp_dropdevice 'MyDb_dat'
GO
Device dropped.

# Get the Name of the Transaction Log
USE MyDb
SELECT name FROM dbo.sysfiles
GO
The command(s) completed successfully.

# Shrink the physical Size of the Transaction Log to 20MB
USE MyDb
DBCC SHRINKFILE (MigrationBasisplus_Log, 20)
GO

SQL Server Overview
System and User Databases (= Oracle Schema)

Master (Controls other Databases)

Model (Template for new Databases)

Tempdb (Temporary Storage)

Msdb (Scheduling and Job Information)

Distribution (Replication Information)

SQL Server Services

SQL Server includes four services

MSSQLServer (Database Engine)

SQLServerAgent (Job Scheduling)

MS DTC, Distributed Transaction Coordinator (Distributed Queries, Two-Phase Commit)

Microsoft Search (Full Text Engine)

Referring Objects

select * from [server].[database].[owner].object
select * from Northwind..Customers -- owner omitted, defaults to dbo

Metadata (Data Dictionary)

System Stored Procedures ( sp_ )

sp_helpdb [db_name]
Info about a database
sp_help [any object]
Info about tables, procedures, etc.
sp_helpindex [table_name]
Show indexes for table_name
sp_who
Show system activity
SELECT @@spid
Returns your server process ID
select user_name(), db_name(), @@servername
Returns the database user name, current database, and server name

sp_helpdb Northwind
sp_help Employees

System Tables ( sys... )

master..syslogins
Available login accounts
master..sysmessages
System error and warning messages
master..sysdatabases
Databases available on the SQL Server instance
sysusers
Windows 2000 users and SQL Server users known to the database
sysobjects
Objects in the database

use master
select * from sysdatabases

use northwind
select * from sysobjects
where xtype = 'U'

System Functions (see Query Analyzer: Common Objects)

DB_ID(DbName)
Get database ID
USER_NAME(id)
Get user name
GETDATE()
Get system date


Schema Views (system-table-independent views)

select * from information_schema.tables
Tables in a Database
select * from information_schema.columns
Columns in a Database
select * from information_schema.table_privileges
Privileges on Tables

SQL Server Logon and Database Access

Login Authentication (Windows Authentication or Mixed Mode)

Mapping of OS User to Database User Accounts and Roles

All Windows 2000 administrators are automatically allowed to log on. This can be disabled by deleting BUILTIN\Administrators on the Security tab at the SQL Server level.

Windows Authentication is the Default (Trusted Connection)

Database Users

Specific to SQL Server; not the same as the Windows user or login account!

Normally dbo is used; the mapping is done at the database level (Enterprise Manager: Users)

Roles

Fixed Server Roles (e.g. System Administrators = DBA) on SQL-Server Level

Fixed Database Role (e.g. db_owner = Has all permissions in the database)

Fixed server role Description
sysadmin Can perform any activity in SQL Server.
serveradmin Can set serverwide configuration options, shut down the server.
setupadmin Can manage linked servers and startup procedures.
securityadmin Can manage logins and CREATE DATABASE permissions, also read error logs and change passwords.
processadmin Can manage processes running in SQL Server.
dbcreator Can create, alter, and drop databases.
diskadmin Can manage disk files.
bulkadmin Can execute BULK INSERT statements.

You can get a list of the fixed server roles from sp_helpsrvrole, and get the specific permissions for each role from sp_srvrolepermission.
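
For example (the role and login names are illustrative):

EXEC sp_helpsrvrole                        -- list the fixed server roles
EXEC sp_srvrolepermission 'securityadmin'  -- permissions granted by one role
-- Add a login to a fixed server role (hypothetical login name):
EXEC sp_addsrvrolemember 'MyDomain\Alice', 'sysadmin'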

Fixed database role Description
db_owner Has all permissions in the database.
db_accessadmin Can add or remove user IDs.
db_securityadmin Can manage all permissions, object ownerships, roles and role memberships.
db_ddladmin Can issue ALL DDL, but cannot issue GRANT, REVOKE, or DENY statements.
db_backupoperator Can issue DBCC, CHECKPOINT, and BACKUP statements.
db_datareader Can select all data from any user table in the database.
db_datawriter Can modify any data in any user table in the database.
db_denydatareader Cannot select any data from any user table in the database.
db_denydatawriter Cannot modify any data in any user table in the database.
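
The database-level equivalents are sp_helpdbfixedrole and sp_dbfixedrolepermission (the role name is illustrative):

EXEC sp_helpdbfixedrole                      -- list the fixed database roles
EXEC sp_dbfixedrolepermission 'db_ddladmin'  -- permissions granted by one role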

Example

USE Northwind
GO
sp_addlogin @loginame = 'Akadia', @passwd = 'Akadia', @defdb = 'Northwind'
GO
sp_grantdbaccess 'Akadia'
GO
sp_addrole 'Masters'
GO
sp_addrolemember 'Masters', 'Akadia'
GO
GRANT SELECT ON Employees TO Masters
GO

SQL Server Query Designer

Query Designer can be used to graphically build a SQL statement, for example an ANSI OUTER JOIN. Follow these steps:

Open Enterprise Manager

Select a table in the desired Database / Tables

Right-click and select "Open Table / Query"; the Query Designer opens.

Right-click an empty area in the diagram pane, and then click "Add Table".

Choose another table; in the SQL pane you can now see the generated SQL statement.

For an OUTER JOIN, right-click the relation and choose "All rows from ".



SQL Server Batch Utility (osql)

The osql utility is a command-line tool for running batches. For example, you can create the CREDIT database as follows:

osql /E /S /n /i creabase.sql >> credit.log

/*
** CREABASE.SQL
**
** Drop and Recreate the credit database.
*/
PRINT 'Begin CREABASE.SQL'
GO
USE master
SET nocount ON
GO
IF db_id('credit') IS NOT NULL
DROP DATABASE credit
GO
CREATE DATABASE [credit]
ON PRIMARY (NAME = N'credit_Data',
FILENAME = N'E:\MSSQL\Data\credit_Data.MDF',
SIZE = 50,
FILEGROWTH = 10%)
LOG ON (NAME = N'credit_Log',
FILENAME = N'E:\MSSQL\Data\credit_Log.LDF',
SIZE = 1,
FILEGROWTH = 10%)
GO
ALTER DATABASE credit
ADD FILEGROUP CreditTablesFG
GO
ALTER DATABASE credit
ADD FILEGROUP CreditIndexesFG
GO
ALTER DATABASE credit
ADD FILE (
NAME = CreditTables,
FILENAME = 'E:\MSSQL\Data\CreditTables.ndf',
SIZE = 8MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 50MB )
TO FILEGROUP CreditTablesFG
ALTER DATABASE credit
ADD FILE (
NAME = CreditIndexes,
FILENAME = 'E:\MSSQL\Data\CreditIndexes.ndf',
SIZE = 8MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 50MB )
TO FILEGROUP CreditIndexesFG
GO
PRINT ' '
IF db_id('credit') IS NOT NULL
PRINT 'CREATED DATABASE "credit"'
ELSE
PRINT 'CREATE DATABASE "credit" FAILED'
PRINT ' '
GO

osql -S localhost -U zahn -P soladur -n -i Sample_Script2.sql

USE Northwind
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'U' AND name = 'Sample1')
DROP TABLE sample1
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'V' AND name = 'Sample_View')
DROP VIEW Sample_View
GO
CREATE TABLE Sample1
(
cust_no int NOT NULL,
fname char(10) NOT NULL,
lname char(15) NOT NULL
)
GO
CREATE VIEW Sample_View
AS
SELECT cust_no, lname FROM Sample1
GO
INSERT Sample1 VALUES ( 100, 'Adam' , 'Barr' )
INSERT Sample1 VALUES ( 200, 'John' , 'Chen' )
INSERT Sample1 VALUES ( 300, 'Cindy' , 'Durkin' )
INSERT Sample1 VALUES ( 400, 'Roger' , 'Harui' )
INSERT Sample1 VALUES ( 500, 'Ryan' , 'LaBrie' )
SELECT * FROM Sample_View

SQL Server Programming Overview
Local Variables

use northwind
go
declare @lastname varchar(20)
declare @firstname varchar(20)
set @lastname = 'Dodsworth'
select @firstname = FirstName
from employees
where lastname = @lastname
print @firstname + ' ' + @lastname
go

Distributed Queries

Perform a distributed query to retrieve information from the EMP table on Oracle 9.2.0, using MAG1 as the TNSNAMES.ORA connection alias.

1. Create the linked Server

Specify Remote Login/Password (system/manager) in Linked Server Properties.

EXEC sp_addlinkedserver
@server = 'MAG1',
@srvproduct = 'Oracle 9.2.0',
@provider = 'MSDAORA',
@datasrc = 'MAG1'
GO

2. Start the distributed query using the SQL pass-through function OPENQUERY

SELECT * FROM OPENQUERY
(MAG1,'SELECT * FROM scott.emp')
GO

Formatting Dates

Use CONVERT() with a date style number:

select convert(varchar(30), getdate(), 104)
--> 19.10.2002

SET DATEFORMAT

Sets the order of the dateparts (month/day/year) for entering datetime or smalldatetime data.

SET DATEFORMAT mdy
GO
DECLARE @datevar smalldatetime
SET @datevar = '12/31/02 12:30:00'
SELECT @datevar
GO
--> 2002-12-31 12:30:00

CASE function (similar to Oracle DECODE)

Within a SELECT statement, a simple CASE function allows only an equality check; no other comparisons are made. This example uses the CASE function to alter the display of book categories to make them more understandable.

USE pubs
GO
SELECT Category =
CASE type
WHEN 'popular_comp' THEN 'Popular Computing'
WHEN 'mod_cook' THEN 'Modern Cooking'
WHEN 'business' THEN 'Business'
WHEN 'psychology' THEN 'Psychology'
WHEN 'trad_cook' THEN 'Traditional Cooking'
ELSE 'Not yet categorized'
END,
CAST(title AS varchar(25)) AS 'Shortened Title',
price AS Price
FROM titles
WHERE price IS NOT NULL
ORDER BY type, price
COMPUTE AVG(price) BY type
GO

Category Shortened Title Price
------------------- ------------------------- ---------------------
Business You Can Combat Computer S 2.9900
Business Cooking with Computers: S 11.9500
Business The Busy Executive's Data 19.9900
Business Straight Talk About Compu 19.9900
avg
=====================
13.7300

SELECT au_fname, au_lname,
CASE state
WHEN 'CA' THEN 'California'
WHEN 'KS' THEN 'Kansas'
WHEN 'TN' THEN 'Tennessee'
WHEN 'OR' THEN 'Oregon'
WHEN 'MI' THEN 'Michigan'
WHEN 'IN' THEN 'Indiana'
WHEN 'MD' THEN 'Maryland'
WHEN 'UT' THEN 'Utah'
END AS StateName
FROM pubs.dbo.authors
ORDER BY au_lname

SELECT statement with simple and searched CASE function

Within a SELECT statement, the searched CASE function allows values to be replaced in the result set based on comparison values. This example displays the price (a money column) as a text comment based on the price range for a book.

USE pubs
GO
SELECT 'Price Category' =
CASE
WHEN price IS NULL THEN 'Not yet priced'
WHEN price < 10 THEN 'Very Reasonable Title'
WHEN price >= 10 and price < 20 THEN 'Coffee Table Title'
ELSE 'Expensive book!'
END,
CAST(title AS varchar(20)) AS 'Shortened Title'
FROM titles
ORDER BY price
GO

Price Category Shortened Title
--------------------- --------------------
Not yet priced The Psychology of Co
Not yet priced Net Etiquette
Very Reasonable Title The Gourmet Microwav
Very Reasonable Title You Can Combat Compu

Dynamically constructing SQL Statements

Use EXECUTE with Literals and Variables

Generate statements to change the ownership of all user tables in the Northwind database to dbo:

use Northwind
select 'EXECUTE sp_changeobjectowner ''' + name + ''', ''dbo''' from sysobjects
where type = 'U'

Dynamically construct and run a SELECT statement

declare @dbname varchar(30)
declare @tblname varchar(30)
set @dbname = 'Northwind'
set @tblname = 'Products'
EXECUTE
('USE ' + @dbname + ' SELECT * FROM ' + @tblname)

Transactions

Transactions must be enclosed in a BEGIN TRAN ... COMMIT TRAN block. Rows updated inside the block are locked for other sessions as long as the transaction is not committed. Open another Query Analyzer session and try to select; the SELECT waits!

USE Northwind

BEGIN TRAN
-- Lock Rows
UPDATE Customers SET ContactName = 'Howard Snyder_Updated'
WHERE CustomerID ='GREAL'
IF (@@ERROR <> 0)
BEGIN
RAISERROR ('Transaction failed',16,-1)
ROLLBACK TRANSACTION
END
COMMIT TRANSACTION

SELECT ContactName FROM Customers WHERE CustomerID = 'GREAL'

TOP n Queries

The TOP keyword specifies that the first n rows of the result set are returned. If ORDER BY is specified, the rows are selected after the result set is ordered. n is the number of rows to return, unless the PERCENT keyword is specified. PERCENT specifies that n is the percentage of rows in the result set that are returned. For example, this SELECT statement returns the first 10 cities, in alphabetic sequence, from the Orders table:

SELECT DISTINCT TOP 10 ShipCity, ShipRegion
FROM Orders
ORDER BY ShipCity

Show User Tables for specified Database

use northwind
select * from information_schema.tables
where table_type = 'BASE TABLE'

Show Primary- and Foreign Key of a Table

select * from information_schema.key_column_usage
where table_name = 'Orders'

Creating and Managing Databases
Database Properties

SELECT DATABASEPROPERTYEX('Northwind', 'IsAutoShrink')

Unless noted otherwise, the Boolean properties below return 1 = TRUE, 0 = FALSE, or NULL = invalid input.

Collation: Default collation name for the database. Returns the collation name.
IsAnsiNullDefault: Database follows SQL-92 rules for allowing null values.
IsAnsiNullsEnabled: All comparisons to a null evaluate to unknown.
IsAnsiPaddingEnabled: Strings are padded to the same length before comparison or insert.
IsAnsiWarningsEnabled: Error or warning messages are issued when standard error conditions occur.
IsArithmeticAbortEnabled: Queries are terminated when an overflow or divide-by-zero error occurs during query execution.
IsAutoClose: Database shuts down cleanly and frees resources after the last user exits.
IsAutoCreateStatistics: Missing statistics needed by the query optimizer are created automatically.
IsAutoShrink: Database files are candidates for automatic periodic shrinking.
IsAutoUpdateStatistics: Existing statistics are automatically updated when they become out-of-date because the data in the tables has changed.
IsCloseCursorsOnCommitEnabled: Cursors that are open when a transaction is committed are closed.
IsFulltextEnabled: Database is full-text enabled.
IsInStandBy: Database is online as read-only, with restore log allowed.
IsLocalCursorsDefault: Cursor declarations default to LOCAL.
IsMergePublished: The tables of the database can be published for replication, if replication is installed.
IsNullConcat: Concatenating with a null operand yields NULL.
IsNumericRoundAbortEnabled: Errors are generated when loss of precision occurs in expressions.
IsQuotedIdentifiersEnabled: Double quotation marks can be used on identifiers.
IsRecursiveTriggersEnabled: Recursive firing of triggers is enabled.
IsSubscribed: Database can be subscribed for publication.
IsTornPageDetectionEnabled: SQL Server detects incomplete I/O operations caused by power failures or other system outages.
Recovery: Recovery model for the database. Returns FULL, BULK_LOGGED, or SIMPLE.
SQLSortOrder: SQL Server sort order ID supported in previous versions of SQL Server. Returns 0 if the database uses a Windows collation, otherwise the sort order ID.
Status: Database status. Returns ONLINE (available for query), OFFLINE (explicitly taken offline), RESTORING (being restored), RECOVERING (recovering and not yet ready for queries), or SUSPECT (cannot be recovered).
Updateability: Indicates whether data can be modified. Returns READ_ONLY or READ_WRITE.
UserAccess: Indicates which users can access the database. Returns SINGLE_USER (only one db_owner, dbcreator, or sysadmin user at a time), RESTRICTED_USER (only members of the db_owner, dbcreator, and sysadmin roles), or MULTI_USER (all users).
Version: Internal version number of the SQL Server code with which the database was created; for internal use only. Returns the version number if the database is open, NULL if it is closed.

Change a property

USE master
EXEC sp_dboption 'ClassNorthwind', 'auto create statistics', 'TRUE'

Create a Database

USE master

/* Drop the ClassNorthwind Database if it already exists */
IF DB_ID('ClassNorthwind') IS NOT NULL
BEGIN
DROP DATABASE ClassNorthwind
END

/* Create the Database */
CREATE DATABASE ClassNorthwind ON PRIMARY
(
NAME = ClassNorthwind_SYS,
FILENAME = 'C:\ClassNorthwind_SYS.mdf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH=10%
)
LOG ON
(
NAME = ClassNorthwind_LOG,
FILENAME = 'C:\ClassNorthwind_LOG.ldf',
SIZE = 15MB,
MAXSIZE = 40MB,
FILEGROWTH = 10%
)

/* Create additional Filegroups */
ALTER DATABASE ClassNorthwind
ADD FILEGROUP TAB

ALTER DATABASE ClassNorthwind
ADD FILEGROUP IDX

ALTER DATABASE ClassNorthwind
ADD FILE (
NAME = ClassNorthwind_TAB01,
FILENAME = 'C:\ClassNorthwind_TAB01.ndf',
SIZE = 1MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 50MB )
TO FILEGROUP TAB
ALTER DATABASE ClassNorthwind
ADD FILE (
NAME = ClassNorthwind_IDX01,
FILENAME = 'C:\ClassNorthwind_IDX01.ndf',
SIZE = 1MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 50MB )
TO FILEGROUP IDX

/* Alter Default Filegroup */
ALTER DATABASE ClassNorthwind
MODIFY FILEGROUP [TAB] DEFAULT

GO

Information on Databases

USE ClassNorthwind

dbcc sqlperf (logspace)
EXEC sp_helpfilegroup 'TAB'

EXEC sp_spaceused

Data Structures

All databases have a primary data file (.MDF) and one or more transaction log files (.LDF)
A database can have secondary data files (.NDF)
Data is stored in 8KB blocks = Pages
Rows cannot span pages, so the maximum amount of data in a single row is roughly 8KB (8,060 bytes)
Extents are 8 contiguous pages = 8 x 8KB = 64KB
Extents

Mixed Extents = contain data of two or more tables
Uniform Extents = contain data of one single table
Special Pages (in the first extent of each file, a mixed extent)

File Header Page: File attributes
Page Free Space (PFS): Free space in pages
Global Allocation Map (GAM): Tracks which extents are allocated
Secondary Global Allocation Map (SGAM): Tracks mixed extents with free pages
Index Allocation Map (IAM): Information about the extents that a table or index uses
Data Page: Normal row data other than text, ntext, image
Text/Image Page: BLOBs
Index Page: Index structures
Database Recovery Model

SIMPLE: Log space is reclaimed automatically when full; the transaction log cannot be backed up
BULK_LOGGED: Like FULL, but bulk operations are only minimally logged
FULL: The transaction log must be backed up regularly
alter database ClassNorthwind set recovery simple
alter database ClassNorthwind set recovery full

Check Extents, Pages

dbcc traceon(3604) /* Output to Screen */
dbcc extentinfo (ClassNorthwind)
dbcc page (ClassNorthwind,1,75) /* 1=FileId, 75=PageId */

Traceflags

Trace flags are used to customize certain characteristics controlling how Microsoft® SQL Server™ operates. Trace flags remain enabled in the server until disabled by executing a DBCC TRACEOFF statement. New connections into the server do not see any trace flags until a DBCC TRACEON statement is issued. Then, the connection will see all trace flags currently enabled in the server, even those enabled by another connection.
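For example (trace flag 3604, also used in the DBCC section above, redirects DBCC output to the client):

-- Enable trace flag 3604 for this connection
DBCC TRACEON (3604)

-- List all trace flags that are currently active
DBCC TRACESTATUS (-1)

-- Disable the flag again
DBCC TRACEOFF (3604)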

Backup a Database

osql -S<server> -U<user> -P<password> -i backup.sql

USE master
EXEC sp_dropdevice 'MyDb_dat'
EXEC sp_dropdevice 'MyDb_log'
EXEC sp_addumpdevice 'disk', 'MyDb_dat',
'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\MyDb_dat.dat'
EXEC sp_addumpdevice 'disk', 'MyDb_log',
'C:\Program Files\Microsoft SQL Server\MSSQL\BACKUP\MyDb_log.dat'
BACKUP DATABASE MyDb TO MyDb_dat
BACKUP LOG MyDb WITH TRUNCATE_ONLY
GO

Device dropped.
Device dropped.
(1 row affected)
'Disk' device added.
(1 row affected)
'Disk' device added.
Processed 26392 pages for database 'MyDb', file 'MigrationBasisplus_Data'
Processed 1 pages for database 'MyDb', file 'MigrationBasisplus_Log' on file 3.
BACKUP DATABASE successfully processed 26393 pages in 9.719 seconds (22.245 MB/sec).

Restore a Database

osql -S<server> -U<user> -P<password> -i restore.sql

USE master
RESTORE DATABASE Credit
FROM DISK = 'C:\CreditDB.BAK'
WITH REPLACE
GO

Processed 112 pages for database 'Credit', file 'credit_Data' on file 1.
Processed 984 pages for database 'Credit', file 'CreditTables' on file 1.
Processed 144 pages for database 'Credit', file 'CreditIndexes' on file 1.
Processed 1 pages for database 'Credit', file 'credit_Log' on file 1.
RESTORE DATABASE successfully processed 1241 pages in 2.408 seconds (4.220 MB/sec)

Creating Tables
User defined Data Types

User-defined data types should not be used!
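For reference, this is the kind of construct the advice warns against (the ssn type name is hypothetical):

-- Creates a user-defined data type named ssn (better avoided)
EXEC sp_addtype 'ssn', 'varchar(11)', 'NOT NULL'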

BLOBS

Text: CLOB (0-2 GB)

NTEXT: Unicode CLOB (0-2GB)

Image: BLOB (0-2GB)

BLOBs are not stored within the row data; however, this can be accomplished with:

use Northwind
EXEC sp_tableoption N'Employees', 'text in row', 'ON'
EXEC sp_tableoption N'Employees', 'text in row', '1000' /* 1000 Chars in Row */

Computed Columns

A virtual column that is not physically stored in the table; it is based on other columns within the table.

CREATE TABLE mylogintable (
date_in datetime,
user_id int,
remark varchar(20),
remark_upper AS UPPER(RTRIM(remark)),
user_name AS USER_NAME()
)

Generate Column Value with Identity Property

Creates an identity column in a table. This property is used with the CREATE TABLE and ALTER TABLE Transact-SQL statements (similar to Sequence in Oracle).

Use @@IDENTITY to determine the most recent value just after an INSERT.

SCOPE_IDENTITY returns the last IDENTITY value inserted into an identity column in the same scope. A scope is a stored procedure, trigger, function, or batch.

IDENT_CURRENT returns the last IDENTITY value inserted for a specified table in any session and any scope.

Example

USE ClassNorthwind
GO
CREATE TABLE table1(id int IDENTITY)
CREATE TABLE table2(id int IDENTITY(100,1))
GO
CREATE TRIGGER table1ins ON table1 FOR INSERT
AS
BEGIN
INSERT table2 DEFAULT VALUES
END
GO
-- end of trigger definition

SELECT * FROM table1
-- table1 is empty.

SELECT * FROM table2
-- table2 is empty.

-- Do the following in Session 1
INSERT table1 DEFAULT VALUES
SELECT @@IDENTITY
100
-- Returns the value 100, which was inserted by the trigger.

SELECT SCOPE_IDENTITY()
1
-- Returns the value 1, which was inserted by the
-- INSERT stmt 2 statements before this query.

SELECT IDENT_CURRENT('table2')
100
-- Returns value inserted into table2, i.e. in the trigger.

SELECT IDENT_CURRENT('table1')
1
-- Returns value inserted into table1, which was
-- the INSERT statement 4 stmts before this query.

-- Do the following in Session 2
SELECT @@IDENTITY
-- Returns NULL since there has been no INSERT action
-- so far in this session.

SELECT SCOPE_IDENTITY()
-- Returns NULL since there has been no INSERT action
-- so far in this scope in this session.

SELECT IDENT_CURRENT('table2')
100
-- Returns the last value inserted into table2

SET IDENTITY_INSERT

Allows explicit values to be inserted into the identity column of a table.

USE ClassNorthwind
GO
-- Create products table.
CREATE TABLE products (id int IDENTITY(1,1) PRIMARY KEY,
product varchar(40))
GO
-- Inserting values into products table.
INSERT INTO products (product) VALUES ('screwdriver')
INSERT INTO products (product) VALUES ('hammer')
INSERT INTO products (product) VALUES ('saw')
INSERT INTO products (product) VALUES ('shovel')
GO

-- Get last inserted key
SELECT @@identity

-- Create a gap in the identity values.
DELETE products
WHERE product = 'saw'
GO

SELECT *
FROM products
GO

-- Attempt to insert an explicit ID value of 3;
-- this returns an error:
-- Cannot insert explicit value for identity column in table 'products'
-- when IDENTITY_INSERT is set to OFF.

INSERT INTO products (id, product) VALUES(3, 'garden shovel')
GO
-- SET IDENTITY_INSERT to ON.
SET IDENTITY_INSERT products ON
GO

-- Attempt to insert an explicit ID value of 3
-- Successful
INSERT INTO products (id, product) VALUES(3, 'garden shovel')
GO

-- Turn IDENTITY_INSERT off again; only one table per session
-- can have IDENTITY_INSERT set to ON at a time
SET IDENTITY_INSERT products OFF
GO

SELECT *
FROM products
GO

Generate Column Value with NEWID Function

Creates a unique value of type uniqueidentifier.

-- Creating a local variable with DECLARE/SET syntax.
USE ClassNorthwind
DECLARE @myid uniqueidentifier
SET @myid = NEWID()
PRINT 'Value of @myid is: '+ CONVERT(varchar(255), @myid)
GO

-- Create Table using NEWID()
CREATE TABLE cust
(
cust_id uniqueidentifier NOT NULL DEFAULT newid(),
company varchar(30) NOT NULL,
contact_name varchar(60) NOT NULL,
address varchar(30) NOT NULL,
city varchar(30) NOT NULL,
state_province varchar(10) NULL,
postal_code varchar(10) NOT NULL,
country varchar(20) NOT NULL,
telephone varchar(15) NOT NULL,
fax varchar(15) NULL
)
GO
-- Inserting data into cust table.
INSERT cust
(cust_id, company, contact_name, address, city, state_province,
postal_code, country, telephone, fax)
VALUES
(newid(), 'Wartian Herkku', 'Pirkko Koskitalo', 'Torikatu 38', 'Oulu', NULL,
'90110', 'Finland', '981-443655', '981-443655')

Create Table in specified File Group

USE ClassNorthwind

-- Check if Table exists
IF OBJECT_ID('Employees') IS NOT NULL
DROP TABLE dbo.Employees
GO

-- Create Table in TAB Filegroup
CREATE TABLE Employees (
EmployeeID int IDENTITY (1, 1) NOT NULL,
LastName nvarchar (20) NOT NULL,
FirstName nvarchar (10) NOT NULL
) ON [TAB]
GO

Generating Transact-SQL Scripts

Open Enterprise Manager (EM)
Select a database
Right-click, All Tasks, Generate SQL Script

A generated script checks for object existence with queries such as:
select * from dbo.sysobjects
where id = object_id(N'[dbo].[Region]')
and OBJECTPROPERTY(id, N'IsUserTable') = 1

Logged and Nonlogged Bulk Copies

The difference between logged and nonlogged bulk copy operations is how much information is logged. Both logged and nonlogged bulk copy operations can be rolled back, but only a logged bulk copy operation can be rolled forward.

In a logged bulk copy all row insertions are logged, which can generate many log records in a large bulk copy operation. These log records can be used to both roll forward and roll back the logged bulk copy operation. In a nonlogged bulk copy, only the allocations of new pages to hold the bulk copied rows are logged.

USE master
GO
exec sp_dboption 'ClassNorthwind', 'select into/bulkcopy', 'TRUE'
GO
USE ClassNorthwind
SET NOCOUNT ON
GO
-- ... do the bulk insert here
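With the option set, operations such as SELECT INTO are minimally logged. A minimal sketch (the OrdersCopy table name is hypothetical):

USE ClassNorthwind
-- Minimally logged: only the new page allocations are written to the log
SELECT * INTO OrdersCopy FROM Orders
GO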

Data Integrity
DEFAULT Constraint

USE ClassNorthwind

/* Drop the constraint if it already exists */
IF OBJECT_ID('DF_Region') IS NOT NULL
BEGIN
ALTER TABLE Employees DROP CONSTRAINT DF_Region
END
GO

/* Add the constraint */
ALTER TABLE Employees
ADD CONSTRAINT DF_Region DEFAULT 'NY' FOR Region
GO

CHECK Constraint

/*
Adds a CHECK CONSTRAINT to verify that the employee
birth date is less than today's date.
*/
USE ClassNorthwind

ALTER TABLE Employees
ADD CONSTRAINT CK_BirthDate CHECK (BirthDate < GETDATE())
GO

PRIMARY KEY Constraint

A UNIQUE index is automatically created. You can specify a clustered or nonclustered index (clustered is the default). A clustered index is the same as an IOT (Index-Organized Table) in Oracle: the table data is physically sorted. Only one clustered index is possible per table.

/*
Adds a PRIMARY KEY CONSTRAINT to the Customers table.
*/
USE ClassNorthwind

ALTER TABLE Customers
ADD CONSTRAINT PK_Customers PRIMARY KEY NONCLUSTERED (CustomerID)
GO

FOREIGN KEY Constraint

/*
Adds a foreign key constraint to the Orders table in
the ClassNorthwind database.

If this is a rerun (and the constraint already exists), first
drop the constraint.

Use the ClassNorthwind database and set NOCOUNT on to eliminate
the message indicating the number of rows affected.
*/

USE ClassNorthwind
SET NOCOUNT ON
GO

IF EXISTS
(SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_SCHEMA = 'dbo' AND CONSTRAINT_NAME = 'FK_Orders_Customers'
AND CONSTRAINT_TYPE = 'FOREIGN KEY')
ALTER TABLE dbo.Orders DROP CONSTRAINT FK_Orders_Customers
GO

ALTER TABLE dbo.Orders
ADD CONSTRAINT FK_Orders_Customers
FOREIGN KEY(CustomerID) REFERENCES dbo.Customers(CustomerID)
GO

/* Reset NOCOUNT */
SET NOCOUNT OFF
GO

DEFAULT Object

Defined independently of a table; can be bound to any column.

/*
Creates a default for the ClassNorthwind database.
*/
USE ClassNorthwind

/* If the default object already exists, drop it */
IF OBJECT_ID('DF_Country') IS NOT NULL
BEGIN
EXEC sp_unbindefault 'dbo.Suppliers.Country'
DROP DEFAULT DF_Country
END
GO

/* Create the Default Object */
CREATE DEFAULT DF_Country AS 'Singapore'
GO

/* Bind the Default Object to the Suppliers.Country column */
EXEC sp_bindefault DF_Country, 'dbo.Suppliers.Country'
GO

RULE Object

Defined independently of a table; can be bound to any column. A rule uses a variable, because the column name is not known when the rule is created.

/*
Creates the photo path rule for the ClassNorthwind database.
*/
USE ClassNorthwind

-- If the rule already exists, unbind and drop it.
IF OBJECT_ID('R_PhotoPath') IS NOT NULL
BEGIN
EXEC sp_unbindrule 'dbo.Employees.PhotoPath'
DROP RULE R_PhotoPath
END
GO

-- Create and bind the Rule.
CREATE RULE R_PhotoPath
AS @PhotoPath LIKE 'http://www.akadia.%'
GO
EXEC sp_bindrule R_PhotoPath, 'dbo.Employees.PhotoPath'
GO

-- OK
UPDATE Employees
SET PhotoPath = 'http://www.akadia.com'
WHERE LastName = 'Fuller'
GO

-- NOT OK
UPDATE Employees
SET PhotoPath = 'http://www.arkum.com'
WHERE LastName = 'Fuller'
GO

Disabling and Enabling Constraints

Applies to CHECK and FOREIGN KEY Constraints only.

USE ClassNorthwind
GO
ALTER TABLE Orders
NOCHECK CONSTRAINT FK_Orders_Customers
GO
ALTER TABLE Orders
CHECK CONSTRAINT FK_Orders_Customers

Object Oriented Concepts

1. Introduction
I have noticed an increase in the number of articles published in the Architect category on CodeProject during the last few months. The number of readers for most of these articles is also high, though the ratings are not. This indicates that readers are interested in reading articles on architecture, but the quality does not match their expectations. This article is a constructive attempt to group, define, and explain all introductory concepts of software architecture for well-seasoned developers who are looking to take their next step as system architects.
One day I read an article that said that the richest 2 percent own half the world's wealth. It also said that the richest 1 percent of adults owned 40 percent of global assets in the year 2000, and further, that the richest 10 percent of adults accounted for 85 percent of the world's total wealth. So there is an unbalanced distribution of wealth in the physical world. Have you ever thought of an unbalanced distribution of knowledge in the software world? According to my view point, the massive expansion of the software industry is forcing developers to use already implemented libraries, services and frameworks to develop software within ever shorter periods of time. New developers are trained to use (I would say more often than not) already developed software components to complete the development quicker. They just plug in an existing library and somehow manage to achieve the requirements. But the sad part of the story is that they never get training to define, design the architecture for, and implement such components. As the years pass by, these developers become leads and software architects. Their titles change, but the old legacy of not understanding, of not having any architectural experience, continues, creating a vacuum of good architects. The bottom line is that only a small percentage of developers know how to design a truly object oriented system. The solution to this problem is getting harder every day, as the aggressive nature of the software industry does not support an easy adjustment to existing processes, and the related online teaching materials are either complex, or less practical, or sometimes even wrong. Most of them use impractical, irrelevant examples of shapes, animals and many other physical world entities to teach the concepts of software architecture. There are only very few good business-oriented design references. Unfortunately, I myself am no exception and am a result of this very same system. I got the same education that all of you did, and also referred to the same resource set you all read.

Coming back to the initial point, I noticed that there is a knowledge gap, increasing every day, between the architects who know how to architect a system properly and the others who do not. The ones who know, know it right. But the ones who do not know, know nothing. Just like the world's wealth distribution, it is an unbalanced distribution of knowledge.

2. Background
This article began after reading and hearing the questions new developers have on the basics of software architecture. There are some good articles out there, but developers still struggle to understand the basic concepts, and more importantly, the way to apply them correctly.
As I see it, newcomers will always struggle to understand a precise definition of a new concept, because it is always a new and hence unfamiliar idea. The one who has experience understands the meaning, but the one who doesn't struggles to understand the very same definition. It is like the classic problem with experience: employers want experienced employees, so they say you need experience to get a job. But how the hell is one supposed to get that experience if no one is willing to give him a job? The start with software architecture is no exception; it will be difficult. When you start to design your very first system, you will try to apply everything you know or have learned from everywhere. You will feel that an interface needs to be defined for every class, like I did once. You will find it hard to understand when and when not to do something. Just prepare to go through a painful process. Others will criticize you, may laugh at you, and say that the way you have designed it is wrong. Listen to them, and learn continuously. In this process you will also have to read and think a lot. I hope that this article will give you the right start for that long journey.

“The knowledge of the actions of great men, acquired by long experience in contemporary affairs, and a continual study of antiquity” – I came across this phrase while reading the book “The Art of War”; it seems applicable here, doesn't it?

3. Prerequisites
This article is an effort to provide an accurate information pool for new developers on the basics of software architecture, focusing on Object Oriented Programming (OOP). If you are a developer who has a minimum of three or more years of continuous development experience and has that hunger to learn more, to step in to the next level and become a software architect, this article is for you.
4. The Main Content
4.1. What is Software Architecture?
Software Architecture is defined to be the rules, heuristics and patterns governing:
Partitioning the problem and the system to be built into discrete pieces
Techniques used to create interfaces between these pieces
Techniques used to manage overall structure and flow
Techniques used to interface the system to its environment
Appropriate use of development and delivery approaches, techniques and tools.
4.2. Why is Architecture important?


The primary goal of software architecture is to define the non-functional requirements of a system and to define the environment. The detailed design then defines how to deliver the functional behavior within the architectural rules. Architecture is important because it:
Controls complexity
Enforces best practices
Gives consistency and uniformity
Increases predictability
Enables re-use.
4.3. What is OOP?
OOP is a design philosophy. It stands for Object Oriented Programming. Object-Oriented Programming (OOP) uses a different set of programming languages than old procedural programming languages (C, Pascal, etc.). Everything in OOP is grouped as self sustainable "objects". Hence, you gain re-usability by means of four main object-oriented programming concepts.
In order to clearly understand object orientation, let's take your “hand” as an example. The “hand” is a class. Your body has two objects of type hand, named left hand and right hand. Their main functions are controlled/ managed by a set of electrical signals sent through your shoulders (through an interface). So the shoulder is an interface which your body uses to interact with your hands. The hand is a well architected class. The hand is being re-used to create the left hand and the right hand by slightly changing its properties.

4.4. What is an Object?
An object can be considered a "thing" that can perform a set of related activities. The set of activities that the object performs defines the object's behavior. For example, the hand can grip something, or a Student (object) can give its name or address.
In pure OOP terms an object is an instance of a class.

4.5. What is a Class?


A class is simply a representation of a type of object. It is the blueprint/ plan/ template that describes the details of an object. A class is the blueprint from which individual objects are created, and it is composed of three things: a name, attributes, and operations.
public class Student
{
}

According to the sample given below, we can say that the student object, named objectStudent, has been created from the Student class.
Student objectStudent = new Student();

In the real world, you'll often find many individual objects all of the same kind. As an example, there may be thousands of other bicycles in existence, all of the same make and model. Each bicycle was built from the same blueprint. In object-oriented terms, we say that the bicycle is an instance of the class of objects known as bicycles.
In the software world, though you may not have realized it, you have already used classes. For example, the TextBox control, you always used, is made out of the TextBox class, which defines its appearance and capabilities. Each time you drag a TextBox control, you are actually creating a new instance of the TextBox class.

4.6. How to identify and design a Class?
This is an art; each designer uses different techniques to identify classes. However, according to Object Oriented Design Principles, there are five principles that you must follow when designing a class; a short sketch of one of them follows this list.

SRP - The Single Responsibility Principle -
A class should have one, and only one, reason to change.
OCP - The Open Closed Principle -
You should be able to extend a class's behavior without modifying it.
LSP - The Liskov Substitution Principle-
Derived classes must be substitutable for their base classes.
DIP - The Dependency Inversion Principle-
Depend on abstractions, not on concretions.
ISP - The Interface Segregation Principle-
Make fine grained interfaces that are client specific.
For more information on design principles, please refer to Object Mentor.
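As a minimal sketch of one of these, consider the Open Closed Principle; the ReportFormatter classes below are hypothetical, not from any real library. New behavior arrives through new subclasses, with no edits to existing, tested code:

// Hypothetical OCP example: the base class is closed for modification,
// while the hierarchy stays open for extension through new subclasses.
public abstract class ReportFormatter
{
public abstract string Format(string content);
}

public class PlainTextFormatter : ReportFormatter
{
public override string Format(string content)
{
return content;
}
}

// Adding HTML output required no change to the classes above.
public class HtmlFormatter : ReportFormatter
{
public override string Format(string content)
{
return "<html><body>" + content + "</body></html>";
}
}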

Additionally, to identify a class correctly, you need to identify the full list of leaf-level functions/ operations of the system (granular-level use cases of the system). Then you can proceed to group related functions to form classes (a class groups the same types of functions/ operations). However, a well-defined class must be a meaningful grouping of a set of functions and should support re-usability while increasing the expandability/ maintainability of the overall system.

In the software world, the concept of divide and conquer is always recommended. If you start analyzing a full system at the start, you will find it harder to manage. So the better approach is to identify the modules of the system first and then dig deep into each module separately to seek out classes.

A software system may consist of many classes, and when you have many, they need to be managed. Think of a big organization with a work force exceeding several thousand employees (let's take one employee as one class). In order to manage such a work force, you need to have proper management policies in place. The same technique can be applied to manage the classes of your software system. In order to manage the classes of a software system, and to reduce the complexity, system designers use several techniques, which can be grouped under four main concepts named Encapsulation, Abstraction, Inheritance, and Polymorphism. These concepts are the four main gods of the OOP world and, in software terms, they are called the four main Object Oriented Programming (OOP) Concepts.

4.7. What is Encapsulation (or information hiding)?
Encapsulation is the inclusion within a program object of all the resources needed for the object to function - basically, the methods and the data. In OOP, encapsulation is mainly achieved by creating classes; the classes expose public methods and properties. A class is a kind of container or capsule or cell, which encapsulates a set of methods, attributes and properties to provide its intended functionality to other classes. In that sense, encapsulation also allows a class to change its internal implementation without hurting the overall functioning of the system. The idea of encapsulation is to hide how a class does something while allowing others to request what it should do.


In order to modularize/ define the functionality of a class, that class can use functions/ properties exposed by another class in many different ways. According to Object Oriented Programming, there are several techniques classes can use to link with each other, and they are named association, aggregation, and composition.
There are several other ways that encapsulation can be used; as an example, we can take the usage of an interface. An interface can be used to hide the information of an implementing class.

IStudent myStudent = new LocalStudent();
IStudent myStudent = new ForeignStudent();

According to the sample above (let's assume that LocalStudent and ForeignStudent implement the IStudent interface), we can see how LocalStudent and ForeignStudent hide their localized implementation details behind the IStudent interface.
4.8. What is Association?
Association is a (*a*) relationship between two classes. It allows one object instance to cause another to perform an action on its behalf. Association is the more general term that defines the relationship between two classes, whereas aggregation and composition are relatively special.
public class StudentRegistrar
{
public StudentRegistrar()
{
new RecordManager().Initialize();
}
}

In this case we can say that there is an association between StudentRegistrar and RecordManager, or that there is a directional association from StudentRegistrar to RecordManager, or that StudentRegistrar uses a (*use-a*) RecordManager. Since a direction is explicitly specified, in this case the controller class is the StudentRegistrar.


To some beginners, association is a confusing concept. The trouble is created not by association alone, but by its interplay with two other OOP concepts: aggregation and composition. Everyone understands association until aggregation and composition are described; neither aggregation nor composition can be understood separately. If you understand aggregation alone, it will crack the definition given for association, and if you try to understand composition alone, it will always threaten the definition given for aggregation. All three concepts are closely related, hence they must be studied together, by comparing one definition to another. Let's explore all three and see whether we can understand the differences between these useful concepts.
4.9. What is the difference between Association, Aggregation and Composition?
Association is a (*a*) relationship between two classes, where one class uses another. Aggregation describes a special type of association: when an object of one class has an (*has*) object of another, and the second is a part of the first (a containment relationship), then we say there is an aggregation between the two classes. Unlike association, aggregation always insists on a direction.
public class University
{
private Chancellor universityChancellor = new Chancellor();
}



In this case I can say that University aggregates Chancellor, or University has an (*has-a*) Chancellor. Even without a Chancellor, a University can exist. But a University cannot exist without Faculties; the lifetime of a University is attached to the lifetime of its Faculty (or Faculties). If the Faculties are disposed, the University will not exist, and vice versa. In that case we say that the University is composed of Faculties, so composition can be recognized as a special type of aggregation.


In the same way, as another example, you can say that there is a composite relationship between a KeyValuePairCollection and a KeyValuePair. The two mutually depend on each other.

.NET and Java use the composite relation to define their Collections. I have seen composition being used in many other ways too. However, the more important factor that most people forget is the lifetime factor: the lifetimes of two classes bound by a composite relation mutually depend on each other. If you take the .NET Collection to understand this, there you have the Collection element defined inside the Collection (it is an inner part, hence we say the Collection is composed of it), forcing the element to get disposed with the Collection. If, instead, you define the Collection and its element to be independent, then the relationship would be more of the type aggregation than composition. So the point is, if you want to bind two classes with a composite relation, the more accurate way is to have one defined inside the other class (making it a protected or private class). This way you allow the outer class to fulfill its purpose, while tying the lifetime of the inner class to the outer class, as in the sketch below.

So in summary, we can say that aggregation is a special kind of association, and composition is a special kind of aggregation. (Association -> Aggregation -> Composition)
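A minimal sketch of that advice (the Faculty inner class is hypothetical, not from the article): Faculty is defined inside University, so no outside code can create or hold a Faculty on its own, and every Faculty's lifetime is bound to the University that owns it.

using System.Collections.Generic;

public class University
{
// Private inner class: only University can create and hold Faculties.
private class Faculty
{
public string Name { get; set; }
}

private readonly List<Faculty> faculties = new List<Faculty>();

public void AddFaculty(string name)
{
faculties.Add(new Faculty { Name = name });
}
}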



4.10. What is Abstraction and Generalization?
Abstraction is an emphasis on the idea, qualities and properties rather than the particulars (a suppression of detail). The importance of abstraction is derived from its ability to hide irrelevant details and from the use of names to reference objects. Abstraction is essential in the construction of programs. It places the emphasis on what an object is or does rather than how it is represented or how it works. Thus, it is the primary means of managing complexity in large programs.
While abstraction reduces complexity by hiding irrelevant detail, generalization reduces complexity by replacing multiple entities which perform similar functions with a single construct. Generalization is the broadening of application to encompass a larger domain of objects of the same or different type. Programming languages provide generalization through variables, parameterization, generics and polymorphism. It places the emphasis on the similarities between objects. Thus, it helps to manage complexity by collecting individuals into groups and providing a representative which can be used to specify any individual of the group.

Abstraction and generalization are often used together. Abstracts are generalized through parameterization to provide greater utility. In parameterization, one or more parts of an entity are replaced with a name which is new to the entity. The name is used as a parameter. When the parameterized abstract is invoked, it is invoked with a binding of the parameter to an argument.

4.11. What is an Abstract class?
Abstract classes, which are declared with the abstract keyword, cannot be instantiated. They can only be used as super-classes for other classes that extend them. An abstract class is a concept whose implementation gets completed when it is realized by a subclass. In addition, a class can inherit from only one abstract class (but a class may implement many interfaces); a concrete subclass must override all the abstract methods/ properties and may override the virtual methods/ properties.
Abstract classes are ideal when implementing frameworks. As an example, let’s study the abstract class named LoggerBase below. Please carefully read the comments as it will help you to understand the reasoning behind this code.

public abstract class LoggerBase
{
/// <summary>
/// The field is private, so it is intended for use inside this class only
/// </summary>
private log4net.ILog logger = null;

/// <summary>
/// Protected, so it is only visible to inherited classes
/// </summary>
protected LoggerBase()
{
// The private logger object is created inside the constructor
logger = log4net.LogManager.GetLogger(this.LogPrefix);
// The additional initialization is done immediately after
log4net.Config.DOMConfigurator.Configure();
}

/// <summary>
/// When you define the property as abstract, it forces the inherited
/// class to override LogPrefix. With the help of this technique the log
/// can be made inside the abstract class itself, irrespective of its origin.
/// If you study it carefully you will find a reason for not having a "set"
/// accessor here.
/// </summary>
protected abstract System.Type LogPrefix
{
get;
}

/// <summary>
/// Simple log method, which is only visible to inherited classes
/// </summary>
protected void LogError(string message)
{
if (this.logger.IsErrorEnabled)
{
this.logger.Error(message);
}
}

/// <summary>
/// Public property, which is exposed to the inherited class and to all
/// other classes that have access to the inherited class
/// </summary>
public bool IsThisLogError
{
get
{
return this.logger.IsErrorEnabled;
}
}
}
The idea of having this class as abstract is to define a framework for exception logging. This class will allow all subclasses to gain access to a common exception logging module and will make it easy to replace the logging library. By the time you define LoggerBase, you wouldn't have an idea about the other modules of the system. But you do have a concept in mind, and that is: if a class is going to log an exception, it has to inherit LoggerBase. In other words, LoggerBase provides a framework for exception logging.
Let’s try to understand each line of the above code.

Like any other class, an abstract class can contain fields; hence I used a private field named logger to hold a reference to the ILog interface of the famous log4net library. This allows the LoggerBase class to control what to use for logging, and hence allows the underlying logger library to be changed easily.

The access modifier of the constructor of LoggerBase is protected. A public constructor has no use when the class is abstract, since an abstract class cannot be instantiated directly. So I went for the protected constructor.

The abstract property named LogPrefix is an important one. It enforces and guarantees a value for LogPrefix (LogPrefix is used to obtain the detail of the source class in which the exception occurred) for every subclass, before they invoke a method to log an error.

The method named LogError is protected, hence exposed to all subclasses. You cannot usefully make it public, as any class that does not inherit from LoggerBase cannot use it meaningfully.

Let’s find out why the property named IsThisLogError is public. It may be important/ useful for other associated classes of an inherited class to know whether the associated member logs its errors or not.

Apart from these you can also have virtual methods defined in an abstract class. The virtual method may have its default implementation, where a subclass can override it when required.

All in all, the important factor here is that all OOP concepts should be used carefully, with reasons; you should be able to logically explain why you make a property public, a field private, or a class abstract. Additionally, when architecting frameworks, the OOP concepts can be used to forcefully guide the system to be developed in the way the framework architect wanted it to be architected initially.

4.12. What is an Interface?
In summary, the interface separates the implementation from the definition of the structure; this concept is very useful in cases where you need the implementation to be interchangeable. Apart from that, an interface is very useful when the implementation changes frequently. Some say you should define all classes in terms of interfaces, but I think that recommendation is a bit extreme.
An interface can be used to define a generic template, and then one or more abstract classes can define partial implementations of the interface. Interfaces just specify method and property declarations (implicitly public and abstract); unlike classes, they cannot contain fields. An interface definition begins with the keyword interface. An interface, like an abstract class, cannot be instantiated.

If a class that implements an interface does not define all the methods of the interface, then it must be declared abstract, and the method definitions must be provided by the subclass that extends the abstract class (a short sketch of this follows the sample below). In addition, an interface can inherit from other interfaces.

The sample below will provide an interface for our LoggerBase abstract class.

public interface ILogger
{
bool IsThisLogError { get; }
}
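A minimal sketch of the partial-implementation rule mentioned above (the ILoggerEx interface and PartialLogger class are hypothetical, not from the article):

// A hypothetical extended interface with two members.
public interface ILoggerEx
{
bool IsThisLogError { get; }
void LogError(string message);
}

// PartialLogger implements only one member, so it must be declared
// abstract; the remaining member is declared abstract and must be
// supplied by a concrete subclass.
public abstract class PartialLogger : ILoggerEx
{
public bool IsThisLogError
{
get { return false; }
}

public abstract void LogError(string message);
}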

4.13. What is the difference between a Class and an Interface?
In .NET/ C#, a class can be defined to implement one or more interfaces (multiple interface implementation is supported). When a class implements an interface, an object of that class can be referenced through the interface.
If MyLogger is a class which implements ILogger, then we can write

ILogger log = new MyLogger();

A class and an interface are two different types (conceptually). Theoretically, a class emphasizes the idea of encapsulation, while an interface emphasizes the idea of abstraction (by suppressing the details of the implementation). The two pose a clear separation from one another. Therefore it is very difficult, or rather impossible, to have an effective, meaningful comparison between the two, but it is very useful, and also meaningful, to compare an interface with an abstract class.
4.14. What is the difference between an Interface and an Abstract class?
There are quite big differences between an interface and an abstract class, even though both look similar:

An interface definition begins with the keyword interface, so it is of type interface
Abstract classes are declared with the abstract keyword, so they are of type class
An interface has no implementation; it has to be implemented
An abstract class's methods can have implementations, and the class has to be extended
An interface can only have method and property declarations (implicitly public and abstract); it cannot contain fields
An abstract class's methods lack an implementation only when they are declared abstract
An interface can inherit from more than one interface
An abstract class can implement more than one interface, but can inherit from only one class
A concrete subclass of an abstract class must override all abstract methods and may override virtual methods
An interface can be used when the implementation is changing
An abstract class can be used to provide some default behavior for a base class
Interfaces make the implementation interchangeable
Interfaces increase security by hiding the implementation
Abstract classes can be used when implementing a framework
Abstract classes are an excellent way to create planned inheritance hierarchies and also to use as non-leaf classes in class hierarchies.
Abstract classes let you define some behaviors; they force your subclasses to provide others. For example, if you have an application framework, an abstract class can be used to provide the default implementation of the services and of all mandatory modules such as event logging and message handling. This approach allows the developers to develop the application within the guided help provided by the framework.

In practice, however, you will come across application-specific functionality that only your application can perform, such as startup and shutdown tasks. The abstract base class can declare virtual shutdown and startup methods. The base class knows that it needs those methods, but an abstract class lets your class admit that it does not know how to perform those actions; it only knows that it must initiate them. When it is time to start up, the abstract class can call the startup method; when the base class calls this method, it executes the method defined by the child class.

4.15. What are Implicit and Explicit Interface Implementations?
As mentioned before, .NET supports implementing multiple interfaces. The concepts of implicit and explicit implementation provide a safe way to implement methods of multiple interfaces by hiding, exposing, or preserving the identity of each interface method, even when the method signatures are the same.

Let's consider the interface defined below.

interface IDisposable
{
void Dispose();
}
The class Student below implements the method named Dispose() both implicitly (Dispose) and explicitly (IDisposable.Dispose). Note that this sample defines its own IDisposable interface above, rather than using System.IDisposable.

class Student : IDisposable
{
public void Dispose()
{
Console.WriteLine("Student.Dispose");
}

void IDisposable.Dispose()
{
Console.WriteLine("IDisposable.Dispose");
}
}
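Which implementation runs depends on the compile-time type of the reference used for the call:

Student student = new Student();
student.Dispose(); // prints "Student.Dispose" (implicit implementation)
((IDisposable)student).Dispose(); // prints "IDisposable.Dispose" (explicit implementation)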

4.16. What is Inheritance?
The ability to create a new class from an existing class, by extending it, is called inheritance.



public class Exception
{
}


public class IOException : Exception
{
}
According to the above example, the new class (IOException), which is called the derived class or subclass, inherits the members of an existing class (Exception), which is called the base class or super-class. The class IOException can extend the functionality of the class Exception by adding new members and methods and by overriding existing ones.

Just as abstraction is closely related to generalization, inheritance is closely related to specialization. It is important to discuss those two concepts together with generalization to better understand them and to reduce complexity.

One of the most important relationships among objects in the real world is specialization, which can be described as the “is-a” relationship. When we say that a dog is a mammal, we mean that the dog is a specialized kind of mammal. It has all the characteristics of any mammal (it bears live young, nurses with milk, has hair), but it specializes these characteristics to the familiar characteristics of canis domesticus. A cat is also a mammal. As such, we expect it to share certain characteristics with the dog that are generalized in Mammal, but to differ in those characteristics that are specialized in cats.

The specialization and generalization relationships are both reciprocal and hierarchical. Specialization is just the other side of the generalization coin: Mammal generalizes what is common between dogs and cats, and dogs and cats specialize mammals to their own specific subtypes.

Similarly, as an example, you can say that both IOException and SecurityException are of type Exception. They have all the characteristics and behaviors of an Exception; that means IOException is a specialized kind of Exception. A SecurityException is also an Exception. As such, we expect it to share certain characteristics with IOException that are generalized in Exception, but to differ in those characteristics that are specialized in SecurityException. In other words, Exception generalizes the shared characteristics of both IOException and SecurityException, while IOException and SecurityException specialize with their own characteristics and behaviors.

In OOP, the specialization relationship is implemented using the principle called inheritance. This is the most common, most natural, and most widely accepted way of implementing this relationship.

4.17. What is Polymorphism?
Polymorphism is a generic term that means 'many shapes'. More precisely, polymorphism means the ability to request that the same operations be performed by a wide range of different types of things.
At times, I used to think that grouping Object Oriented Programming concepts under four main headings has made them difficult to understand, since each concept is closely related to the others. Hence one has to be extremely careful to correctly understand each concept separately, while also understanding the way each relates to the other concepts.

In OOP, polymorphism is achieved by using several different techniques named method overloading, operator overloading, and method overriding.

4.18. What is Method Overloading?
Method overloading is the ability to define several methods, all with the same name but with different parameter lists.
public class MyLogger
{
public void LogError(Exception e)
{
// Implementation goes here
}

public bool LogError(Exception e, string message)
{
// Implementation goes here
}
}

4.19. What is Operator Overloading?
Operator overloading (less commonly known as ad-hoc polymorphism) is a specific case of polymorphism in which some or all of the operators like +, - or == are treated as polymorphic functions and as such have different behaviors depending on the types of their arguments.
public class Complex
{
private int real;
public int Real
{ get { return real; } }

private int imaginary;
public int Imaginary
{ get { return imaginary; } }

public Complex(int real, int imaginary)
{
this.real = real;
this.imaginary = imaginary;
}

public static Complex operator +(Complex c1, Complex c2)
{
return new Complex(c1.Real + c2.Real, c1.Imaginary + c2.Imaginary);
}
}

In the above example I have overloaded the plus operator for adding two complex numbers. The two properties named Real and Imaginary have been declared exposing only the required “get” accessor, while the class demands mandatory real and imaginary values through its user-defined constructor.
4.20. What is Method Overriding?
Method overriding is a language feature that allows a subclass to override a specific implementation of a method that is already provided by one of its super-classes.
A subclass can give its own definition of a method, but it needs to have the same signature as the method in its super-class. This means that when overriding a method, the subclass's method has to have the same name and parameter list as the super-class's overridden method.

using System;
public class Complex
{
private int real;
public int Real
{ get { return real; } }

private int imaginary;
public int Imaginary
{ get { return imaginary; } }

public Complex(int real, int imaginary)
{
this.real = real;
this.imaginary = imaginary;
}

public static Complex operator +(Complex c1, Complex c2)
{
return new Complex(c1.Real + c2.Real, c1.Imaginary + c2.Imaginary);
}

public override string ToString()
{
return (String.Format("{0} + {1}i", real, imaginary));
}
}

In the above example I have extended the implementation of the sample Complex class given under the operator overloading section. This class has one overridden method named “ToString”, which overrides the default implementation of the standard “ToString” method to support the correct string conversion of a complex number.
Complex num1 = new Complex(5, 7);
Complex num2 = new Complex(3, 8);

// Add two Complex numbers using the
// overloaded plus operator
Complex sum = num1 + num2;

// Print the numbers and the sum
// using the overridden ToString method
Console.WriteLine("({0}) + ({1}) = {2}", num1, num2, sum);
Console.ReadLine();

4.21. What is a Use case?
A use case is a thing an actor perceives from the system. A use case maps actors to functions. Importantly, the actors need not be people. As an example, a system can perform the role of an actor when it communicates with another system.



From another angle, a use case encodes a typical user interaction with the system. In particular, it:
Captures some user-visible function.
Achieves some concrete goal for the user.
A complete set of use cases largely defines the requirements for your system: everything the user can see and would like to do. The diagram below contains a set of use cases that describes a simple login module of a gaming website.


4.22. What is a Class Diagram?
Class diagrams are widely used to describe the types of objects in a system and their relationships. Class diagrams model class structure and contents using design elements such as classes, packages, and objects. Class diagrams describe three different perspectives when designing a system: conceptual, specification, and implementation. These perspectives become evident as the diagram is created and help solidify the design.

Class diagrams and physical data models, along with the system overview diagram, are in my opinion the most important diagrams that suit the current-day rapid application development requirements.

UML Notations:


4.23. What is a Package Diagram?
Package diagrams are used to reflect the organization of packages and their elements. When used to represent class elements, package diagrams provide a visualization of the name-spaces. In my designs, I use package diagrams to organize classes into the different modules of the system.
4.24. What is a Sequence Diagram?
Sequence diagrams model the flow of logic within a system in a visual manner; they enable you both to document and to validate your logic, and they are used for both analysis and design purposes. Sequence diagrams are the most popular UML artifact for dynamic modeling, which focuses on identifying the behavior within your system.
4.25. What is two-tier architecture?
Two-tier architecture refers to client/ server architecture. The term client/ server was first used in the 1980s in reference to personal computers (PCs) on a network. The actual client/ server model started gaining acceptance in the late 1980s, and later it was adapted to World Wide Web programming.
In the modern-day use of two-tier architecture, the user interfaces (or, with ASP.NET, all web pages) run on the client and the database is stored on the server. The actual application logic can run on either the client or the server, so in this case the user interfaces directly access the database. The clients can also be non-interface processing engines, which provide solutions to other remote/ local systems. In either case, today the two-tier model is not as popular as the three-tier model. The advantage of the two-tier design is its simplicity, but the simplicity comes at the cost of scalability. The newer three-tier architecture, which is more widely used, introduces a middle tier for the application logic.



4.26. What is three-tier architecture?
The three-tier software architecture (also known as three-layer architecture) emerged in the 1990s to overcome the limitations of the two-tier architecture. This architecture has been aggressively customized and adopted by modern-day system designers for web systems.
Three-tier is a client-server architecture in which the user interface, the functional process logic, and the data storage and data access are developed and maintained as independent modules, sometimes on separate platforms. The term "three-tier" or "three-layer", as well as the concept of multi-tier architectures, seems to have originated within Rational Software.



The 3-Tier architecture has the following three tiers.
Presentation Tier or Web Server: User Interface, displaying/ accepting data/ input to/ from the user
Application Logic/ Business Logic/ Transaction Tier or Application Server: Data validation, acceptability check before being added to the database and all other business/ application specific operations
Data Tier or Database server: Simple reading and writing method to database or any other storage, connection, command, stored procedures etc
4.27. What is MVC architecture?
The Model-View-Controller (MVC) architecture separates the modeling of the domain, the presentation, and the actions based on user input into three separate classes.
Unfortunately, the popularity of this pattern has resulted in a number of faulty usages; each technology (Java, ASP.NET, etc.) has defined it in its own way, making it difficult to understand. In particular, the term "controller" has been used to mean different things in different contexts. The definitions given below are the closest possible ones I found for the ASP.NET version of MVC.



Model: DataSets and typed DataSets (sometimes business objects, object collections, XML, etc.) are the most common uses of the model.
View: The ASPX and ASCX files generally handle the responsibilities of the view.
Controllers: The handling of events or the controlling is usually done in the code-behind class.
In a complex n-tier distributed system, the MVC architecture plays the vital role of organizing the presentation tier of the system.
4.28. What is SOA?
A service-oriented architecture is essentially a collection of services. These services communicate with each other. The communication can involve either simple data passing or it could involve two or more services coordinating some activity. Some means of connecting services to each other is needed.
The .NET technology introduces SOA by means of web services.



SOA can be used as the concept to connect multiple systems to provide services. It has its great share in the future of the IT world.

According to the imaginary diagram above, we can see how Service Oriented Architecture is used to provide a set of centralized services to the citizens of a country. The citizens are given a unique identifying card, and that card carries all the personal information of each citizen. Each service center, such as a shopping complex, hospital, station, or factory, is equipped with a computer system connected to a central server, which is responsible for providing service to a city. As an example, when a customer enters the shopping complex, the regional computer system reports it to the central server and obtains information about the customer before providing access to the premises. The system welcomes the customer. The customer finishes the shopping, and by the time he leaves the shopping complex, he will be asked to go through a billing process managed by the regional computer system. The payment will be automatically handled with the input details obtained from the customer identifying card.

The regional system reports to the city (the computer system of the city), while the city reports to the country (the computer system of the country).

4.29. What is the Data Access Layer?
The data access layer (DAL), which is a key part of every n-tier system, mainly consists of a simple set of code that does basic interactions with the database or any other storage device. These functionalities are often referred to as CRUD (Create, Retrieve, Update, and Delete).
The data access layer needs to be as generic, simple, quick, and efficient as possible. It should not include complex application/ business logic.

I have seen systems with lengthy, complex stored procedures (SPs), which run through several cases before doing a simple retrieval. They contain not only most of the business logic, but application logic and user interface logic as well. If an SP is getting longer and complicated, then it is a good indication that you are burying your business logic inside the data access layer; a minimal DAL method is sketched below.
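A minimal sketch of a CRUD-style retrieval method (the CustomerDal class, the Customers table, and the connection string are hypothetical; the ADO.NET types are from System.Data):

using System.Data;
using System.Data.SqlClient;

public class CustomerDal
{
private readonly string connectionString;

public CustomerDal(string connectionString)
{
this.connectionString = connectionString;
}

// Plain data access: no business rules, just a parameterized query.
public DataTable GetCustomerById(string customerId)
{
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(
"SELECT CustomerID, CompanyName FROM Customers WHERE CustomerID = @id",
connection))
{
command.Parameters.AddWithValue("@id", customerId);
DataTable table = new DataTable();
new SqlDataAdapter(command).Fill(table);
return table;
}
}
}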

4.30. What is the Business Logic Layer?
I know for a fact that this is a question for most. On the other hand, by reading many articles I have become aware that not everyone agrees on what business logic actually is. In many cases it is just a bridge between the presentation layer and the data access layer, doing nothing much except taking from one and passing to the other. In other cases it has not even been well thought out; the leftovers from the presentation layer and the data access layer are simply put into another layer, which is automatically called the business logic layer. However, there are no god-given rules that cannot be changed in the software world. You can change them as and when you feel comfortable that the method you apply is flexible enough to support the growth of your system. There are many great ways, but be careful when selecting them; they can overcomplicate a simple system. It is a balance one needs to find with experience.
As a general advice, when you define business entities you must decide how to map the data in your tables to correctly defined business entities. The business entities should be meaningfully defined, considering the various types of requirements and the functioning of your system. It is recommended to identify the business entities so that they encapsulate the functional/ UI (User Interface) requirements of your application, rather than defining a separate business entity for each table of your database. For example, if you want to combine data from a couple of tables to build a UI (User Interface) control (Web Control), implement that function in the Business Logic Layer with a business object that uses a couple of data objects to support your complex business requirement.

4.31. What is Gang of Four (GoF) Design Patterns?
The Gang of Four (GoF) patterns are generally considered the foundation for all other patterns. They are categorized into three groups: Creational, Structural, and Behavioral. Here you will find information on these important patterns.
Creational Patterns
Abstract Factory: Creates an instance of several families of classes
Builder: Separates object construction from its representation
Factory Method: Creates an instance of several derived classes
Prototype: A fully initialized instance to be copied or cloned
Singleton: A class of which only a single instance can exist

Structural Patterns
Adapter: Matches interfaces of different classes
Bridge: Separates an object's interface from its implementation
Composite: A tree structure of simple and composite objects
Decorator: Adds responsibilities to objects dynamically
Facade: A single class that represents an entire subsystem
Flyweight: A fine-grained instance used for efficient sharing
Proxy: An object representing another object

Behavioral Patterns
Chain of Responsibility: A way of passing a request between a chain of objects
Command: Encapsulates a command request as an object
Interpreter: A way to include language elements in a program
Iterator: Sequentially accesses the elements of a collection
Mediator: Defines simplified communication between classes
Memento: Captures and restores an object's internal state
Observer: A way of notifying change to a number of classes
State: Alters an object's behavior when its state changes
Strategy: Encapsulates an algorithm inside a class
Template Method: Defers the exact steps of an algorithm to a subclass
Visitor: Defines a new operation on a class without changing the class
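To give one of these one-line summaries a concrete shape, here is a minimal Singleton sketch in Java ("a class of which only a single instance can exist"); the `Configuration` class name is purely illustrative.

```java
// Singleton: only one instance of this class can ever exist.
public final class Configuration {
    // The one and only instance; the class loader guarantees
    // it is created exactly once and in a thread-safe way.
    private static final Configuration INSTANCE = new Configuration();

    // A private constructor prevents instantiation from outside.
    private Configuration() {
    }

    // The single global access point to the instance.
    public static Configuration getInstance() {
        return INSTANCE;
    }
}
```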
4.32. What is the difference between Abstract Factory and Builder design patterns?
The two design patterns are fundamentally different. However, when you learn them for the first time, you will see a confusing similarity, which makes them harder to understand. If you get confused at this early stage, you may become afraid of design patterns altogether. It is like an infant phobia: once you get scared at an early age, the fear stays with you forever, and the result is that you never look at design patterns again. Let me see whether I can solve this brain teaser for you.

In the image below, both design patterns are listed side by side, and I compare the two one to one to identify the similarities. If you observe the figure carefully, you will see an easily understandable color scheme (the same color is used to mark classes of a similar kind).
[Figure: the Abstract Factory and Builder patterns shown side by side, with similar classes marked in the same color and numbered comparison points]
Please follow the numbers in the image while reading the list below.

Mark #1: Both patterns use a generic class as the entry class. The only difference is the name: one pattern calls it "Client", while the other calls it "Director".
Mark #2: Here again the difference is only the class name: "AbstractFactory" for one and "Builder" for the other. In both cases the class is abstract.
Mark #3: Once again, both patterns define two concrete classes (WindowsFactory and ConcreteBuilder), each created by inheriting from its respective abstract class.
Mark #4: Finally, both seem to produce some kind of generic output.

Now, where are we? Don't they look almost identical? Then why do we have two different patterns here?

Let's compare the two side by side one last time, this time focusing on the differences.

Abstract Factory: Emphasizes a family of product objects (either simple or complex)
Builder: Focuses on constructing a complex object step by step
Abstract Factory: Focuses on *what* is made
Builder: Focuses on *how* it is made
Abstract Factory: Focuses on defining many different types of *factories* to build many *products*; it is not one builder for just one product
Builder: Focuses on building one single, complex *product*
Abstract Factory: Defers the choice of what concrete type of object to make until run time
Builder: Hides the logic of how the complex object is assembled
Abstract Factory: *Every* method call creates and returns a different object
Builder: Only the *last* method call returns the object, while the other calls partially build it
Sometimes creational patterns are complementary, so you can combine several of them when designing your system. For example, Builder can use one of the other patterns to decide which components get built, and Abstract Factory, Builder, and Prototype can all use Singleton in their implementations. So the conclusion is that the two design patterns exist to solve two different types of business problems; even though they look similar, they are not the same.
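To make the contrast concrete, here is a minimal, hypothetical sketch of both patterns in Java; all type and method names are illustrative. With the factory, every call returns a finished product; with the builder, the intermediate calls only accumulate state, and only `build()` returns the single complex product.

```java
interface Button {}
interface Window {}

// Abstract Factory: *every* method call creates and returns
// a different finished product from the same family.
interface WidgetFactory {
    Button createButton();
    Window createWindow();
}

// Builder: intermediate calls only partially build the object;
// only the final build() call returns the complex product.
class HouseBuilder {
    private final StringBuilder parts = new StringBuilder();

    HouseBuilder addWalls()   { parts.append("walls "); return this; }
    HouseBuilder addRoof()    { parts.append("roof "); return this; }
    HouseBuilder addWindows() { parts.append("windows "); return this; }

    String build() { return "House with: " + parts.toString().trim(); }
}

// Usage: new HouseBuilder().addWalls().addRoof().addWindows().build();
```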

I hope this sheds some light on the puzzle. If you still don't understand it, then this time it is not you; it must be me, and I simply don't know how to explain it better.

5. What is the Conclusion?
I don't think it is realistic to try to make a programming language be everything to everybody. The language becomes bloated, hard to learn, and hard to read if everything plus the kitchen sink is thrown in. In other words, every language has its limitations. As system architects and designers we should be able to fully and, more importantly, correctly utilize the available tools and features (which also means you shouldn't use a ballistic missile to kill a fly, or hire the FBI to catch it) to build usable, sustainable, maintainable, and, very importantly, expandable software systems that exploit the features of the language to bring a competitively advanced system to their customers.

To do that, the foundation of a system plays a vital role, and the design, or the architecture, of a software system is that foundation. It holds the system together, hence designing a system properly (which never means *over*-designing it) is the key to success. When you talk about designing a software system, the correct handling of OOP concepts is very important. I have tried to make the above article rich in ideas while still keeping it short, so that one can learn or revise all the important concepts at a glance. I hope you will all enjoy reading it.