
Tuesday, March 20, 2012

Any way to speed up replication creation through scripts?

As part of my previous thread on migrating the publisher to a new server, I
scripted out the drop and create of replication. I have around 100 articles
and a few subscribers, and I want to create the transactional replication
with the "nosync" option, since I have ensured that the publisher and
subscribers are already in sync.
What I noticed is that although the drop script runs in a minute or two,
the create runs for almost 30 minutes.
Looking into the script, I see that for each article it first runs
1) sp_addarticle
and then for every subscriber runs
1) sp_addsubscription
2) sp_addpushsubscription_agent
and it runs all of this serially.
So the question is: can I somehow speed this up? Is there a way to run
sp_addsubscription once for "all" articles?
I also noticed that sp_addpushsubscription_agent is called with the same set
of parameters every time, except that the subscriber name differs, and the
call is repeated for every article.
Can someone please let me know if I can avoid that default scripted SQL and
make this run faster with fewer stored procedure calls?
I am using SQL Server 2005.
Thanks
Replication puts schema locks on tables momentarily as part of the
publication creation process. You will get much faster publication creation
times if you kick all your users out of the database before creating the
publication(s).
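A minimal sketch of one way to do that (the database name PublisherDB is a
placeholder, and ROLLBACK IMMEDIATE will roll back any open transactions):

-- Kick existing users out of the publication database first
ALTER DATABASE PublisherDB SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;

-- ... run the scripted publication / article / subscription calls here ...

-- Let users back in once the publication has been created
ALTER DATABASE PublisherDB SET MULTI_USER;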
http://www.zetainteractive.com - Shift Happens!
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Hassan" <hassan@.hotmail.com> wrote in message
news:eMmCKuUPIHA.4440@.TK2MSFTNGP06.phx.gbl...
|||What I tried yesterday, and would like to validate with the experts here,
is this: I first ran sp_addarticle for each and every article,
and then ran the following two stored procedures once per subscriber:
sp_addsubscription -- I specified all articles here
sp_addpushsubscription_agent
If you script replication out, those two stored procedures are scripted for
every article, so there are on the order of a thousand more calls to them
than with the approach above, where they run only once per subscriber.
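A minimal sketch of that once-per-subscriber pattern (the publication,
database and subscriber names below are placeholders; the exact @sync_type
to use depends on your setup):

-- Once per article (about 100 calls)
EXEC sp_addarticle
    @publication = N'MyPublication',   -- placeholder publication name
    @article = N'MyTable',
    @source_owner = N'dbo',
    @source_object = N'MyTable';

-- Once per subscriber, NOT once per article: @article = N'all'
EXEC sp_addsubscription
    @publication = N'MyPublication',
    @article = N'all',                 -- covers every article in one call
    @subscriber = N'SUBSCRIBER1',      -- placeholder subscriber server
    @destination_db = N'SubscriberDB',
    @subscription_type = N'push',
    @sync_type = N'none';              -- the "nosync" option; 'replication
                                       -- support only' is the equivalent
                                       -- recommended on SQL 2005 when the
                                       -- data is already in sync

-- Also once per subscriber
EXEC sp_addpushsubscription_agent
    @publication = N'MyPublication',
    @subscriber = N'SUBSCRIBER1',
    @subscriber_db = N'SubscriberDB',
    @subscriber_security_mode = 1;     -- Windows authentication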
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:eYEdzKaPIHA.1756@.TK2MSFTNGP06.phx.gbl...

Sunday, March 11, 2012

Any tool to give me some live metrics?

I'd like a nice front end where I can pick the server I want, or even
combine multiple Perfmon sessions for several servers into one look and feel.
I also want to look at event logs, current OS/SQL info, currently running
applications, etc.
Right now, if I have to go to the logs, I have to go to Manage on My
Computer and connect to the server, or open Perfmon and select the server,
or, even worse, log on to the server itself to look at the currently running
processes. I want to make our job easier by having one console where I can
just drop down to the server I want and it gives me most of the live info.
Something with a web interface would be even better.
You could check http://www.imceda.com/Product.htm.
Dejan Sarka, SQL Server MVP
Associate Mentor
Solid Quality Learning
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:eznApwkBFHA.3616@.TK2MSFTNGP11.phx.gbl...
|||Hello Hassan,
Check out http://www.quest.com/Quest_Central_f...agnostics.asp. We have a lot of other SQL Server tools which you can see at http://www.quest.com/Quest_Central_for_SQL_Server.
-Kevin Kline
Quest Software (www.quest.com)
SQL Server MVP
I support PASS, the Professional Association for SQL Server. (www.sqlpass.org)


Thursday, February 23, 2012

Any impact if I create an index and then drop the index?

Hi,
I want to create an index to speed up one operation (a DELETE) and then drop
the index afterwards.
Are there any side effects or impacts on the database?
On Tue, 30 Nov 2004 15:34:47 +0800, Mullin Yu wrote:

Hi Mullin,
Creating the index will take some time and will also lock parts of your
data. Dropping the index doesn't take much time (unless it is a clustered
index).
However, why would you want to drop the index after the delete?
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)
|||Because I want to add an index to a third-party product's database for a
temporary operation. I don't know what impact it would have on that product
if the index existed permanently.
"Hugo Kornelis" <hugo@.pe_NO_rFact.in_SPAM_fo> wrote in message
news:fkdoq0p5sb73abfmke0phv999kjaa2k4p9@.4ax.com...
|||On Wed, 1 Dec 2004 15:47:04 +0800, Mullin Yu wrote:

Hi Mullin,
An extra index will:
* Slightly decrease performance of inserts, deletes and updates that
affect the column(s) used in the index,
* Speed up those queries that can use this index,
* Slightly increase the size of databases and backups,
* Have no effect on any other operation.
All this on the assumption that you're talking about a nonclustered index.
A clustered index has more impact. (But since building and dropping a
clustered index is quite costly, I don't think you'd consider this for
your temp index).
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)
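As a concrete sketch of the pattern being discussed (the table, column and
index names here are made up; on SQL 2005 Enterprise you could also add
WITH (ONLINE = ON) to the CREATE INDEX to reduce blocking):

-- Temporary nonclustered index to support the DELETE's WHERE clause
CREATE NONCLUSTERED INDEX IX_Orders_Status_tmp
    ON dbo.Orders (Status);            -- placeholder table and column

-- The one-off operation the index is meant to speed up
DELETE FROM dbo.Orders
WHERE Status = 'Archived';

-- Drop the index again so the third-party schema is left as it was
DROP INDEX IX_Orders_Status_tmp ON dbo.Orders;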
