Friday, April 3, 2020

Drop SQL Server databases in Recovery Mode

Hi Guys,
Today I met with an unusual situation where a couple of drives in the Dev box crashed. Luckily the system databases were not affected, so I cooked up a script to generate restores of the user databases from the last good backup. It should have been a piece of cake, I assumed, with the restore script to my rescue.


Sample restore Script. 
--------------------------------------------------------------------------------
DECLARE @dbname sysname = 'dba'

SELECT 'RESTORE DATABASE [' + @dbname + '] FROM DISK = ' AS 'restore script'
UNION ALL
SELECT 'WITH ' AS 'restore script'
UNION ALL
SELECT 'MOVE ''' + name + ''' TO ''' + physical_name + ''',' AS 'restore script'
FROM sys.master_files
WHERE database_id = DB_ID(@dbname)
UNION ALL
SELECT 'STATS = 1, REPLACE' AS 'restore script'
-----------------------------------------------------------------------------------------
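For a database with one data file and one log file, the generated rows stitch together into something like the following (the backup path and file locations here are purely hypothetical examples; you paste your own backup path after FROM DISK =):

RESTORE DATABASE [dba] FROM DISK = N'D:\Backup\dba_full.bak'
WITH
MOVE 'dba' TO 'D:\Data\dba.mdf',
MOVE 'dba_log' TO 'E:\Log\dba_log.ldf',
STATS = 1, REPLACE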

Well, those of you thinking this was a happily-ever-after story are mostly wrong. Things do not turn good so easily once they mess up in the DB world.

To my surprise I was not able to a. restore with overwrite, b. alter the database to single-user mode, c. take the DB offline, or d. drop the database. And as you might have guessed by now, the reason is that the databases automatically start recovering when SQL Server restarts. I have always loved this feature, where you do not have to start hundreds of databases manually or through a script every time you restart SQL Server. But just for this one time I was aching for a way to actually stop recovery of the databases I did not want. I finally got to the answer through a lot of soul searching and blog posts (Amit Bansal's website).

So here goes the list of my attempts 
1. Start SQL Server in single-user mode and try to take the user databases offline - Did not work
2. Start SQL Server in single-user mode and try to drop the user databases - Did not work
3. Start SQL Server in minimal configuration mode and try to take the user databases offline - Did not work
4. Start SQL Server in minimal configuration mode and try to drop the user databases - Did not work
5. Set trace flag 3607 in the SQL Server startup parameters and try to take the user databases offline - Did not work
6. Set trace flag 3607 in the SQL Server startup parameters and try to drop the user databases - Did not work

Then I finally hit gold: I set trace flag 3608 in the SQL Server startup parameters and started SQL Server.
Most of the user databases showed online, though some were in recovery or recovery pending.
I dropped the databases which were in online status, restarted SQL Server with trace flag 3608 still in the startup parameters to bring the other user databases online, and then dropped those too.
I finally removed trace flag 3608, restarted SQL Server, and all the user databases were gone.
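For reference, a rough sketch of the sequence that finally worked, assuming a default instance and an example database name (the flag is added to the startup parameters via SQL Server Configuration Manager; 3608 skips startup recovery of everything except master):

-- with -T3608 added to the startup parameters and the service restarted,
-- drop each damaged user database ([dba] is just an example name)
DROP DATABASE [dba];
GO
-- repeat for the remaining databases (restarting with -T3608 again if some
-- are still not droppable), then remove -T3608 and restart one last time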

So all I had to do now was use the restore script to generate the restore database command for all the databases. Well, I swung a cursor around the script to get everything in one go, but more on that in the next blog post.


Thanks,
Tushar Kanti

Thursday, February 20, 2020

Modern Day Data Warehouse



"Data is the new oil" is a term often used these days whenever we talk in the context of data analytics.
Data analysis is the process of reducing a large amount of collected data to something that makes sense. The business is not willing to look at a whole spreadsheet or an entire column of numbers; it is an exhausting and tedious job to wade through them.
Exploratory data analysis gives us techniques that act as an aid in this situation. Most of these techniques work in part by hiding certain aspects of the data while making other aspects clear. This is about understanding the key performance indicators, which in turn help the business take the "decision".

The traditional way of doing analytics over a historical data warehouse is now going out of fashion and out of context. People want analysis of live data and want to understand growth trends to take business decisions. The new way of analyzing data is not as straightforward as it used to be, when it depended on traditional sources like ERP, CRM and LOB applications. People now want their decisions to be based on a more complicated approach, one which involves the data generated in and around their products, from social networking to in-house surveys to all the connected devices.

To solve this data warehousing puzzle we need to make some ground-breaking changes. According to a Gartner report, "Data warehousing has reached the most significant tipping point since its inception. The biggest, possibly most elaborate data management system in IT is changing." To achieve this the traditional data warehouse has to evolve and take advantage of big data in real time. So we need a modern data warehouse for our present puzzle.

In theory the traditional data warehouse was designed to be a central repository for all the data in an organization. Data from transactional systems like CRM, ERP and LOB was deduplicated, cleansed and loaded (ETL) into a relational schema. Reports were based on batches that ran on top of that data, and analytics was based on those kinds of reports.
[Diagram: Traditional Data Warehouse System]



New trends in data analytics, such as high data volumes, real-time data, new types of data and new deployment models in cloud and hybrid setups, are putting the traditional warehouse under pressure, so the need for advanced analytics and machine learning is coming into the picture. These trends are forcing enterprises to identify approaches to evolve their existing systems into a modern data warehouse.

High data volume, which now flows in from social media and IoT in addition to the existing business data, has caused traditional data warehouses based on SMP to fail catastrophically. They are not able to help the business predict the right business decisions. The basic architecture of that design is incapable of meeting scale-out or scale-in demand, and the vertical scaling approach is not a viable return on investment.
Real-time data is what business analysts now demand, in contrast to the traditional data warehouse where analysis was done on historical, sanitized data. The velocity of the data captured is ever increasing, and organizations are using this real-time data not only to build and optimize their business but also to transact and engage in dynamic, event-driven processes like market forecasting.

New sources and types of data have come into the analytics picture: mobile and social media channels, scanners, sensors, RFID tags, devices, feeds and sources outside the business. These data types are not easily digestible by a traditional data warehouse and do not fit directly into the business schema model, yet they have a lot of potential to help the business optimize its operations and predictions.

New deployment models in the cloud and in hybrid setups have hit the industry, and analytics is not untouched by them. The trend in organizations is to invest in big data for analytics, and most of the infrastructure for it is placed in the cloud. The cloud gives the infrastructure its cost effectiveness and the scalability to meet the organization's new demands. It also means that a lot of data is "cloud-born", such as clickstreams, videos, social feeds, GPS, market and weather data.
Various organizations are implementing advanced analytics and predictive analysis to understand what may happen in the near future of their business from a varied set of data sources and types. The traditional data warehouse was not designed for these new kinds of analytics, which are inductive, or bottom-up, in nature. Unlike the traditional warehouse, which works through a defined schema and requirement-based data collection, advanced analytics and data science use an experimentation approach to explore answers to vague or not-yet-formed questions. This requires a lot of experimentation with the data sets before a schema can be created that allows the data to make sense to the business.
An organization should be ready to look into the modern data warehouse approach when it sees the phenomena below:
·       The traditional data warehouse is not able to keep up with the surge in data volume
·       To do better business it needs to look into new data sources with real-time data
·       Vertical scaling is not helping
·       Platform cost is not in line with ROI
·       Performance is sluggish


A modern data warehouse lets you choose from a variety of techniques, from business intelligence to advanced analytics, over widely varied data sets that are exploding in volume, while making the insights real time with the ability to deliver the correct data at the precise time.

A modern data warehouse is capable of delivering an exhaustive logical data and analytics platform with a complete set of fully supported solutions and technologies that can cater to the demands of the most sophisticated and demanding modern enterprise: on-premises, in the cloud, or in any hybrid scenario.
[Diagram: Modern Day Data Warehouse System]

Things are changing fast and we need to adapt to the new as soon as we know about it; we need to follow it now. Or, as I like to say, like now now.

Thanks,
Tushar Kanti





Tuesday, February 25, 2014

Contained Databases is GooooD

Hi Guys,
Let's check out the contained database feature today. Contained database users can be either SQL-based or Windows-based. They need not be mapped to any login, and the part I liked best is that the application user has absolutely no idea about the server it's running on.

Creating a Contained Database

To start with contained databases, you first have to enable the contained database authentication option:

exec sp_configure 'contained database authentication', 1
go
reconfigure
go
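Just to be safe, you can verify the option took effect with a quick query against sys.configurations (this check is my own addition, not part of the original steps):

select name, value_in_use
from sys.configurations
where name = 'contained database authentication'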

Now we are ready to create a partially contained database (no clue why they call it partially contained). So let's fire the create database command:

create database Contained
containment = partial

This will create a database for us which will be contained. Now let's go ahead and create an object (i.e. a table) in this database and put in some values.

USE [Contained]
go
create table conTab (name char(100))
insert into conTab values ('tushar'), ('kusum'), ('swati'), ('shashank')
go

Now let's go ahead and create a Windows user for this database.

USE [Contained]
go
CREATE USER [eapac\etuskanadm] WITH DEFAULT_SCHEMA = [dbo]
go
exec sp_addrolemember 'db_owner', 'eapac\etuskanadm'
go
Now this user will be contained in the database itself, as it is not mapped to any login, and the user will have no server-level information. The image below shows what the connection to the server/database looks like. While connecting using SSMS you additionally have to go to the Options button in the connection dialog and type the database name there.

[Screenshot: SSMS Connect to Server dialog with the contained database name entered under Options > Connect to database]
You can also connect through the command line; you have to use the -d parameter in the connection to connect straight to the database.

sqlcmd -E -Slocalhost -dContained
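Since the contained authentication option is on and the database is partially contained, you can also create a SQL-authenticated user that lives entirely inside the database. A quick sketch (the user name and password here are made up, not part of the original demo):

USE [Contained]
go
CREATE USER ConTestUser WITH PASSWORD = 'Str0ng#Passw0rd1'
go
exec sp_addrolemember 'db_datareader', 'ConTestUser'
go

You would then connect with that user name and password, again specifying Contained as the connect-to database.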

Thanks,
Tushar Kanti










 

Thursday, February 20, 2014

Working with command line

Hi Guys,
There have been times when I have heard people calling MS SQL Server a right-click database.
Well, my first answer to them is that we have a GUI provided by the base OS, and the RDBMS client tools are intelligent enough to exploit it. Too bad they don't have a GUI to handle tasks as efficiently.
Coming back to the command line: we can definitely do everything any RDBMS can through the command line.
I started working on the command line about two years ago and have since found that it not only makes your work blazing fast, but you also get to know the underlying commands much better than when you use a GUI. The idea of working on the command line was not new to me, but it was the challenge I faced while working with the heavy GUI, which is pretty cool as far as functionality goes but makes normal administration a little slow, especially when you are geographically distant from your actual server. The command line comes in very handy when connectivity is really slow, but you should be ready and equipped to work on it. So the best thing is to start preparing scripts that will help you in a crunch scenario; besides, it feels really geeky to work on the command line, so it's kind of catchy.

All I am saying, if I am saying anything at all, is that you should prepare your scripts and start using them on a day-to-day basis before you are faced with any crunch situation.

Well to start off you can install SQL Server on a Windows Core to get the real feel of working on command line.

To start with, you can start/stop the SQL Server related services from cmd or PowerShell.
Command line (CMD)
  • Net start mssqlserver (Net start mssql$InstanceName for a named instance)
  • Net stop mssqlserver (Net stop mssql$InstanceName for a named instance)
  • Sc start/stop MSSQLSERVER (the sc utility works just as well)
PowerShell
  • Start-Service MSSQLSERVER (Start-Service 'MSSQL$InstanceName')
  • Stop-Service MSSQLSERVER (Stop-Service 'MSSQL$InstanceName')
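Once the service is back up, you can even check from T-SQL that the services started cleanly; a small sketch using the sys.dm_server_services DMV (available from SQL Server 2008 R2 SP1 onwards, as far as I recall):

select servicename, status_desc, last_startup_time, service_account
from sys.dm_server_services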

to be continued...

Tushar Kanti

SSMS Keyboard shortcut not working!!!

Hi Guys,
I had been facing the issue of keyboard shortcuts not working for a long time, so I decided to run through the options in SSMS today and identified how to fix it. Sharing the same with you.

Tools-> Options -> Environment -> Keyboard -> Reset -> ok

This was in SSMS for SQL Server 2012, but I expect most versions from SQL Server 2005 onwards will be the same.


Thanks,
Tushar Kanti
 

Thursday, February 6, 2014

Harpoon for SQL Server DBA's

Hi Guys,
It's been some time now that I have spent in this RDBMS world, and I know there is still a plethora of things waiting for me here. But today I thought of compiling my tools in one place so that it's a one-stop destination for DBAs working on production issues.

I will try to put down all the counters and values, or the queries required, for looking into production issues. I will also try to document the significance of each parameter, and I will alter/customize the content depending upon the feedback I get from guests.

I will start with the areas where I have the most issues and try to fight my fears before the D-Day so that I don't get cold feet.

Classic question: What do I do if my query/report is responding slowly and taking more time than ever?
Answer: There is never a right/wrong answer here. The only thing that matters is your approach; the answer will come eventually. A doctor first tries to identify the symptoms, then runs some tests, and based on experience judges what is wrong with the patient. If the doctor has not seen the disease before, he will not be in a position to prescribe any medication, nor will he be able to tell what's wrong with the patient.
So the answer is two-fold: the parameters you will use to judge the situation, and the analysis you will do based on the data you have.
Now first the parameters: there are a lot of parameters you can look into, starting with PLE, Page Faults/sec, Disk Writes/sec, Disk Queue Length, Processor Queue Length, etc. The list is almost endless.
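Several of the SQL Server specific counters, PLE for example, can be read straight from T-SQL, which is handy when you cannot get to Perfmon. A small sketch (the counter name is the standard Buffer Manager one):

select object_name, counter_name, cntr_value as page_life_expectancy_sec
from sys.dm_os_performance_counters
where counter_name = 'Page life expectancy'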

Processor Queue Length:
How will it help? This counter gives you an idea of whether the CPU in the system is adequate or whether more CPU needs to be added.
Value: The Processor Queue Length should ideally be 0, or at least very low on a sustained basis.
Significance: The Processor Queue Length, as the name suggests, tells us whether there are any threads queued up waiting for processor time. If the value is 0, there is nothing waiting to be catered to. You need not worry even if CPU utilization is very high, as long as the Processor Queue Length is 0. What I mean to say is that there is no performance hit yet, though it is definitely worth working out why the CPU is going high.
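Perfmon is where this System counter lives, but from inside SQL Server a related (not identical) signal is the number of runnable tasks waiting per scheduler; a hedged sketch using sys.dm_os_schedulers:

select scheduler_id, current_tasks_count, runnable_tasks_count
from sys.dm_os_schedulers
where status = 'VISIBLE ONLINE'
-- sustained non-zero runnable_tasks_count suggests tasks are waiting on CPU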

Disk Queue Length:

OK, first things first: I apologize for not working on this blog for a long, long time. But today I thought I would ink some of the important things I need to revise before the interview calls. :)

Let's start with mirroring:
I know it's old-school technology and we should switch to Always On, like, asap, but hold on to your horses because the questions are not going to stop coming any time soon.

So the first question: which table will show you the redo queue for mirroring?

SELECT db_name(database_id) as DatabaseName
      ,[retention_period]
      ,[time_behind]
      ,[enable_time_behind]
      ,[send_queue]
      ,[enable_send_queue]
      ,[redo_queue]
      ,[enable_redo_queue]
      ,[average_delay]
      ,[enable_average_delay]
FROM [msdb].[dbo].[dbm_monitor_alerts]
ORDER BY [database_id]

So basically the msdb.dbo.dbm_monitor_alerts table will have the key to the performance configuration of your mirroring setup.

SELECT db_name(database_id) as DatabaseName
      ,[role]
      ,[status]
      ,[witness_status]
      ,[log_flush_rate]
      ,[send_queue_size]
      ,[send_rate]
      ,[redo_queue_size]
      ,[redo_rate]
      ,[transaction_delay]
      ,[transactions_per_sec]
      ,[time]
      ,[end_of_log_lsn]
      ,[failover_lsn]
      ,[local_time]
FROM [msdb].[dbo].[dbm_monitor_data]
ORDER BY [database_id], [local_time] desc

Now msdb.dbo.dbm_monitor_data will have the answer to your redo queue size 

Or use:

exec msdb.dbo.sp_dbmmonitorresults database_name, rows_to_return, update_status

which will give you all the information you could seek. :)

There will be other questions related to setup as well, and I would like to address the most interesting one for now, but I promise to update this with more soon.
What is the process of forcing failover to the mirror server in asynchronous mode?

use master;
alter database <database_name> set partner force_service_allow_data_loss

OK, let's now look at the replication questions we might encounter.


Exec sys.sp_replmonitorrefreshjob @iterations = 1;
We need to run this proc to update all the data in the distribution database monitoring tables.

exec sp_replmonitorsubscriptionpendingcmds
     @publisher = 'Enter publisher server name',
     @publisher_db = 'Enter publisher database name',
     @publication = 'Enter name of publication',
     @subscriber = 'Enter subscriber server name',
     @subscriber_db = 'Enter subscriber database name',
     @subscription_type = '0' -- 0 for push and 1 for pull

The MSrepl_transactions table contains one row for each replicated transaction.
The MSrepl_commands table contains rows of replicated commands.
The MSrepl_errors table contains rows with extended Distribution Agent and Merge Agent failure information. 
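A quick way to eyeball those tables when a subscription is lagging; a hedged example assuming the distribution database has the default name 'distribution':

select top (20) time, error_code, error_text
from distribution.dbo.MSrepl_errors
order by time desc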

Okay, everything for indexes is on this page:
https://technet.microsoft.com/en-us/library/ms177443(v=sql.105).aspx

All-time favourite!!! Will post something along the lines of Paul Randal sometime soon. Maybe getting lucky tonight!!


Wednesday, October 19, 2011

Database Refresh

Hi,
Of late I have been doing a lot of DB refreshes on an ad hoc basis, but I tend to make a few mistakes each time. So I thought of tabulating the steps in one central place; I hope some of you can also benefit from it. Below are the logical steps and some scripts to make things easier.


Steps for refresh:-
>take the backup from prod
>zip the backup
>move the zip to the UAT server
>unzip the backup file
>take a backup of the database on UAT and keep it in a separate folder, just in case of a rollback
>run [restore filelistonly from device] and [sp_helpdb] and check that both have matching files
>script out the role access rights on the UAT database
>script out the user access rights on the UAT database
>kill all user connections on the database on UAT (see the sketch after this list)
>restore the database on UAT
>delete the users except dbo, sa, guest and the like
>run the user access script
>use sp_change_users_login 'report' to list the orphaned users
>run sp_changedbowner 'sa'
>change the recovery model of the UAT database back to its original, as prod may use a different recovery model
>send mail to users to verify database access for a couple of logins
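For the kill-all-connections step, rather than killing SPIDs one by one, I usually just flip the database to single user; a small sketch (the database name [UATdb] is only an example):

use master
go
alter database [UATdb] set single_user with rollback immediate
go
-- ... restore the database here ...
alter database [UATdb] set multi_user
go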

Those should be the basic steps. I will soon update the script to get the access list for the users in a database.

Script for users and user access :


set nocount on

-- users created for logins and with their default schema
select 'CREATE USER ['+p.name +'] '+
case when l.name is not null then ' FOR LOGIN ['+l.name+'] ' else ' WITHOUT LOGIN ' end +
case when p.default_schema_name is not null then ' WITH DEFAULT_SCHEMA = '+p.default_schema_name +' 'else ' ' end + CHAR(13)+CHAR(10)+ 'GO '
from sys.syslogins l
right outer join sys.database_principals p on p.SID=l.SID
where (p.type='S' or p.type ='U')

--roles related to users in database
select 'Exec sp_addrolemember ' + ''''+prole.name+''''+','+'''' +puser.name +'''' + CHAR(13)+CHAR(10)+' GO'
from sys.database_role_members r
join sys.database_principals prole on r.role_principal_id= prole.principal_id
join sys.database_principals puser on r.member_principal_id= puser.principal_id
order by 1

set nocount off

-------------------------------------------------

Hope this helps. I am still missing the grants in the script; will work on that today and add it to the script.
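In the meantime, here is one possible sketch for that missing piece: scripting out plain database- and object-level GRANTs from sys.database_permissions (this is just how I would approach it, not the final version of the script):

select 'GRANT ' + dp.permission_name
     + case when dp.class = 1
            then ' ON [' + object_schema_name(dp.major_id) + '].[' + object_name(dp.major_id) + ']'
            else '' end
     + ' TO [' + pr.name + ']' + CHAR(13)+CHAR(10) + 'GO'
from sys.database_permissions dp
join sys.database_principals pr on dp.grantee_principal_id = pr.principal_id
where dp.state = 'G'              -- plain grants only
and dp.class in (0, 1)            -- database- and object-level permissions only
and pr.name not in ('public', 'dbo', 'guest')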






Tushar