
Tuesday, March 20, 2012

any way to speed replication creation through scripts

As part of my previous thread on migrating the publisher to a new server, I
scripted the drop and create replication scripts. I have around 100 articles
and a few subscribers, and what I want to do is create the transactional
replication with the "nosync" option, since I have ensured that the publisher
and subscribers are in sync.
What I noticed is that although the drop script runs in a minute or two,
the create script runs for almost 30 minutes.
Looking into the script, I see that for each article,
it first runs
1) sp_addarticle
and then for every subscriber runs
1) sp_addsubscription
2) sp_addpushsubscription_agent
and it runs all of this serially.
So the question is, can I somehow speed this up? Is there a way to run
sp_addsubscription once for "all" articles?
I also noticed that sp_addpushsubscription_agent runs with the same set of
parameters except that the subscriber name differs, yet it is repeated for
every article.
Can someone please let me know if I can avoid the default scripted SQL and
make all of this finish faster with fewer stored procedure calls?
I am using SQL 2005.
Thanks
Replication puts schema locks on tables momentarily as part of the
publication creation process. You will get much faster publication creation
times if you kick all your users out of the database before creating the
publication(s).
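One way to do that (a sketch only; the database name is a placeholder, not from this thread) is to put the publication database into restricted-user mode before running the creation scripts:

```sql
-- Kick non-privileged users out of the database; open transactions
-- are rolled back immediately so no schema locks are held.
ALTER DATABASE MyPubDB SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;

-- ... run the scripted sp_addarticle / sp_addsubscription calls here ...

-- Re-open the database to all users afterwards.
ALTER DATABASE MyPubDB SET MULTI_USER;
```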
http://www.zetainteractive.com - Shift Happens!
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Hassan" <hassan@.hotmail.com> wrote in message
news:eMmCKuUPIHA.4440@.TK2MSFTNGP06.phx.gbl...
|||So here is what I tried yesterday, and I would like to validate it with the
experts here: I first ran sp_addarticle for each and every article,
and then ran the following two stored procedures once per subscriber:
sp_addsubscription -- I specified all articles here
sp_addpushsubscription_agent
If you let SQL Server script the replication, the above two stored procedures
are run once for every article, which means roughly 1000 more calls to these
procs than running them once per subscriber as I did above.
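For reference, the batched sequence described above looks roughly like this (a sketch; the publication, subscriber, and database names are placeholders, not from the thread):

```sql
-- Add each of the ~100 articles once (one call per article).
EXEC sp_addarticle
    @publication   = N'MyPub',        -- placeholder publication name
    @article       = N'MyTable',
    @source_object = N'MyTable';

-- Then, once per subscriber, subscribe to ALL articles in a single call
-- instead of one sp_addsubscription call per article.
EXEC sp_addsubscription
    @publication    = N'MyPub',
    @article        = N'all',         -- key point: covers every article at once
    @subscriber     = N'SUBSRV1',     -- placeholder subscriber
    @destination_db = N'SubDB',
    @sync_type      = N'replication support only';  -- "nosync"-style: no snapshot applied

-- One push agent per subscriber/publication, not one per article.
EXEC sp_addpushsubscription_agent
    @publication   = N'MyPub',
    @subscriber    = N'SUBSRV1',
    @subscriber_db = N'SubDB';
```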
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:eYEdzKaPIHA.1756@.TK2MSFTNGP06.phx.gbl...

Thursday, February 23, 2012

any impact if create index and then drop index

hi,
I want to create an index to speed up one operation (a DELETE) and then drop
the index afterwards.
Any side effects or impacts on the DB?
On Tue, 30 Nov 2004 15:34:47 +0800, Mullin Yu wrote:

Hi Mullin,
Creating the index will take some time and will also lock parts of your
data. Dropping the index doesn't take much time (unless it is a clustered
index).
However, why would you want to drop the index after the delete?
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)
|||Because I want to add an index on the third-party product's database for a
temporary operation, and I don't know what impact a permanently existing
index would have on that product.
"Hugo Kornelis" <hugo@.pe_NO_rFact.in_SPAM_fo> wrote in message
news:fkdoq0p5sb73abfmke0phv999kjaa2k4p9@.4ax.com...
|||On Wed, 1 Dec 2004 15:47:04 +0800, Mullin Yu wrote:

Hi Mullin,
An extra index will:
* Slightly decrease performance of inserts, deletes and updates that
affect the column(s) used in the index,
* Speed up those queries that can use this index,
* Slightly increase the size of databases and backups,
* Have no effect on any other operation.
All this on the assumption that you're talking about a nonclustered index.
A clustered index has more impact. (But since building and dropping a
clustered index is quite costly, I don't think you'd consider this for
your temp index).
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)
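The temporary-index pattern being discussed can be sketched as follows (table, column, and index names are made up for illustration, not taken from the thread):

```sql
-- Create a nonclustered index to support the WHERE clause of the delete.
CREATE NONCLUSTERED INDEX IX_Orders_Status_Temp    -- hypothetical names
    ON dbo.Orders (Status);

-- The DELETE can now seek on Status instead of scanning the whole table.
DELETE FROM dbo.Orders
WHERE Status = 'archived';

-- Drop the index afterwards so the third-party product sees no lasting
-- change (aside from the brief locking and logging during create/drop).
DROP INDEX IX_Orders_Status_Temp ON dbo.Orders;
```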


Any ideas on how to speed up this sp?

It works, but it takes about 3-4 seconds per execution.

CREATE PROCEDURE isp_ap_calc_apt_totals
@p_comp char(2),
@p_vend char(6),
@p_asofdate char(8)
as

if (@p_asofdate <= '00000000')
begin
set @p_asofdate = '99999999'
end

delete from XAPAPTTOT
where xapt_comp = @p_comp and xapt_vend = @p_vend and xapt_asof_date = @p_asofdate

insert into XAPAPTTOT
select apph_comp, apph_vend, apph_type, apph_id, @p_asofdate,
sum(apph_paymnts),
sum(apph_discts),
sum(apph_adjts),
count(apph_paymnts),
sum(apph_paymnts) + sum(apph_discts) + sum(apph_adjts) + b.apt_gross,
0,
max(str(yy,4) + replace(str(mm,2),' ','0') + replace(str(dd,2),' ','0'))
from APPHISTF a join APTRANF b on b.apt_comp = a.apph_comp and b.apt_vend = a.apph_vend and b.apt_type = a.apph_type and b.apt_id = a.apph_id
where ((a.apph_comp = @p_comp) and (a.apph_vend = @p_vend) and (a.apph_unpost_dt = 0)
and (str(a.yy,4) + replace(str(a.mm,2),' ','0') + replace(str(a.dd,2),' ','0') <= @p_asofdate))
or ((a.apph_unpost_dt > 0 and a.apph_unpost_dt <= @p_asofdate and b.apt_unposted_fg = 1 and b.apt_comp = @p_comp and b.apt_vend = @p_vend and b.apt_type = a.apph_type and b.apt_id = a.apph_id))
or (((str(a.yy,4) + replace(str(a.mm,2),' ','0') + replace(str(a.dd,2),' ','0') <= @p_asofdate) and a.apph_unpost_dt > @p_asofdate and b.apt_comp = @p_comp and b.apt_vend = @p_vend and b.apt_type = a.apph_type and b.apt_id = a.apph_id))
group by apph_comp, apph_vend, apph_type, apph_id

update XAPAPTTOT
set xapt_last_payck =
(select max(apph_payck) from APPHISTF
where apph_comp = xapt_comp and apph_vend = xapt_vend and apph_type = xapt_type
and apph_id = xapt_id
and str(yy,4) + replace(str(mm,2),' ','0') + replace(str(dd,2),' ','0') = xapt_last_paydt)
where xapt_comp = @p_comp and xapt_vend = @p_vend and xapt_asof_date = @p_asofdate
GO
|||Post the DDL and the indexes for the tables. Read the hints sticky at the top of the forum. Sample data might help as well, but what kind of volume are we talking about?
|||The volume varies (multiple customers):
1,000-10,000 APTRANF and APPHISTF records.
The relationship between the tables: for each APTRANF record you can have 0 to 9999 APPHISTF records (generally only 0 or 1); in unusual instances APPHISTF might have 2-5 records.
APPHISTF is the payment history (detail) for APTRANF (master).
We allow unposting a payment (apph_unpost_dt) and reissuing a new payment.
Not sure about DDL?
|||Read this link here

http://www.dbforums.com/t1196943.html
|||Hi,
Since I don't have any idea of your table structures and the indexes on them, I would go with eliminating redundancies in your code to reduce time. Several of the conditions and calculations are repeated and have now been changed to occur once. The code is given below. Hope this helps:

CREATE PROCEDURE isp_ap_calc_apt_totals
@p_comp char(2),
@p_vend char(6),
@p_asofdate char(8)
as

if (@p_asofdate <= '00000000')
set @p_asofdate = '99999999'

delete from XAPAPTTOT
where xapt_comp = @p_comp
and xapt_vend = @p_vend
and xapt_asof_date = @p_asofdate

insert into XAPAPTTOT
select apph_comp
,apph_vend
,apph_type
,apph_id
,@p_asofdate
,sum(apph_paymnts)
,sum(apph_discts)
,sum(apph_adjts)
,count(apph_paymnts)
,sum(apph_paymnts) + sum(apph_discts) + sum(apph_adjts) + b.apt_gross
,0
,max(str_1)
from (select apph_comp
,apph_vend
,apph_type
,apph_id
,apph_paymnts
,apph_discts
,apph_adjts
,apph_unpost_dt
,str(yy,4) + replace(str(mm,2),' ','0') + replace(str(dd,2),' ','0') str_1
from APPHISTF
where apph_comp = @p_comp
and apph_vend = @p_vend) a

inner join

(select apt_gross
,apt_type
,apt_id
,apt_unposted_fg
from APTRANF
where apt_comp = @p_comp
and apt_vend = @p_vend) b

on (b.apt_type = a.apph_type
and b.apt_id = a.apph_id)

where (a.apph_unpost_dt = 0
and a.str_1 <= @p_asofdate)
or (a.apph_unpost_dt > 0
and a.apph_unpost_dt <= @p_asofdate
and b.apt_unposted_fg = 1)
or (a.str_1 <= @p_asofdate
and a.apph_unpost_dt > @p_asofdate)

group by apph_comp, apph_vend, apph_type, apph_id

update XAPAPTTOT
set xapt_last_payck = (select max(apph_payck) from APPHISTF
where apph_comp = xapt_comp
and apph_vend = xapt_vend
and apph_type = xapt_type
and apph_id = xapt_id
and str(yy,4) + replace(str(mm,2),' ','0') + replace(str(dd,2),' ','0') = xapt_last_paydt)
where xapt_comp = @p_comp and xapt_vend = @p_vend and xapt_asof_date = @p_asofdate
GO


--Scalability Experts.
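A further option worth noting (a sketch, not from the thread; only the table and column names come from the procedure above, the computed-column and index names are made up): the expression str(yy,4) + replace(str(mm,2),' ','0') + replace(str(dd,2),' ','0') is recomputed for every row and cannot use an index. A persisted computed column with an index on it would let the date filter seek instead of scan:

```sql
-- Materialize the yy/mm/dd concatenation once per row, then index it.
-- STR and REPLACE are deterministic, so the column can be PERSISTED.
ALTER TABLE APPHISTF ADD apph_datekey AS
    (str(yy,4) + replace(str(mm,2),' ','0') + replace(str(dd,2),' ','0'))
    PERSISTED;

CREATE NONCLUSTERED INDEX IX_APPHISTF_datekey
    ON APPHISTF (apph_comp, apph_vend, apph_datekey);

-- The procedure's filters could then be written as:
--     a.apph_datekey <= @p_asofdate
```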

Saturday, February 11, 2012

Anti-virus Exclusion list

Currently, I have an anti-virus solution installed on my MS SQL Server machine; it protects the Windows OS. To keep SQL Server performance up, I don't want the anti-virus to scan my SQL database files. I want to know what the exclusion list for the anti-virus application should contain. Thanks,

Hi,

all file extensions that are used for SQL Server. By default these are mdf, ndf and ldf, but they can be changed by the user, so you will need to have a look at your database. The log files (.log) can be dismissed, as there is no heavy load on them.

The information about the data files is stored in the sysfiles table, so you will have to query this table in each database to get the extensions. I prepared something for you which should help you to identify the needed extensions.

CREATE TABLE #Extensions
(
Extension VARCHAR(10)
)

INSERT INTO #Extensions
EXEC sp_msforeachdb 'SELECT REVERSE(LEFT(REVERSE(filename), CHARINDEX(CHAR(46), REVERSE(filename)) - 1)) FROM sysfiles'

SELECT DISTINCT Extension FROM #Extensions

If your data is stored in specific folders, you could also exclude those folders instead of using the file-extension filter (if your anti-virus application is able to do this).

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de

|||

Hi,

All anti-virus products have an exclusion list, meaning they won't scan the file extensions listed there. You have to add .mdf, .ldf and .ndf to the extension exclusion list, and then it won't scan your database files. See http://www.nus.edu.sg/comcen/antivirus/faq.htm#10 for an example.

Hemantgiri S. Goswami