
Thursday, March 22, 2012

anyone ?

I've populated reporting DBs in the past using different techniques:

1. Have Transactions in Data Tier Write to OLTP AND Reporting DB Simultaneously

2. Log Shipping

3. Restore Backups Nightly

4. Replication

5. DTS

My question is: what is the BEST practice in SQL2k5?

I'm all gung ho about using Mirroring but am not certain that this will be the best way to go.

My thoughts are something along the lines of this

1. Have OLTP mirrored to what I'll call a "Staging" DB. This is NOT to be used for failover, just as a means of accessing read-only data for the Reporting DB.

2. Have 2 DBs on another box that are flattened (Denormalized) for reporting (ReportA and ReportB)

3. Use SSIS to populate the Report DBs in alternating sequence every XXX minutes (let's say 60)

4. Use logic to let the reporting application figure out which of the two reporting DBs is "live". The currently loading DB will be offline while it's being loaded, and we'll alternate between the two (exact method TBD, but this won't be rocket science)
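One hypothetical way to implement the "which DB is live" switch in step 4 is a small pointer table that the loader flips after each successful load; the reporting app reads it first. All names below are made up for illustration:

```sql
-- Hypothetical control table tracking which reporting DB is currently live.
CREATE TABLE dbo.ReportDbPointer
(
    LiveDb sysname NOT NULL  -- 'ReportA' or 'ReportB'
);
INSERT INTO dbo.ReportDbPointer (LiveDb) VALUES ('ReportA');
GO

-- After SSIS finishes loading the offline copy, flip the pointer:
UPDATE dbo.ReportDbPointer
SET LiveDb = CASE LiveDb WHEN 'ReportA' THEN 'ReportB' ELSE 'ReportA' END;
```

The reporting application then reads LiveDb at connect time and targets that database, so the swap is a single-row update rather than anything rocket-science.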

Now, I know this will "Work" but is this the best way to go about this ?

I do NOT want to use SSIS to populate the Reporting system directly from our OLTP system due to contention issues etc.

thoughts ?

Hello Gregory,

I've thought of using Database Mirroring for reporting as well, but there were a couple of things holding me back. First off, you can't use the mirrored database directly; you have to create database snapshots of the mirrored database and query those instead. Each snapshot is also independent of the others, so any time you create a new one, your applications/SSIS packages have to be updated to reference the new snapshot. Second, disk space: unless you manage this well, you can eat up a lot of drive space.

Currently, I am using #3 from your list, restoring nightly backups. The problem I am having with this option is the length of the database restore, during which the reporting database is offline. I'm up to a 21GB db backup file that I am restoring to another server, and it takes about 3 hours. Another problem I have is that my users would like the data to be more up-to-date.

Eventually, I think, I will be moving to asynchronous replication.

Hope this helps.

Jarret

|||

Jarret,

I guarantee you can speed up your backup "Dramatically" by backing up to multiple files (4 is probably a good # to start with).

Then restore from those 4 files on the target box.
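A striped backup and restore along those lines might look like this (database name and file paths are illustrative):

```sql
-- Stripe the backup across four files; SQL Server writes to them in parallel.
BACKUP DATABASE ReportSource
TO DISK = 'D:\Backup\ReportSource_1.bak',
   DISK = 'D:\Backup\ReportSource_2.bak',
   DISK = 'E:\Backup\ReportSource_3.bak',
   DISK = 'E:\Backup\ReportSource_4.bak'
WITH INIT;

-- On the target box, restore from all four stripes together:
RESTORE DATABASE ReportSource
FROM DISK = 'D:\Backup\ReportSource_1.bak',
     DISK = 'D:\Backup\ReportSource_2.bak',
     DISK = 'E:\Backup\ReportSource_3.bak',
     DISK = 'E:\Backup\ReportSource_4.bak'
WITH REPLACE;
```

Putting the stripes on separate spindles is what buys the speedup, since each file gets its own I/O stream.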

I bet your backup and restore run in about 10 minutes (depending on the disk IO subsystem you're using)

try it out and see what happens.

thanks for the post.

GAJ

|||

Can anyone else comment on this thread/question?

Monday, March 19, 2012

any way to check the duplicated rows in destination before loading data?

Hi. As the title says, I am trying to figure out how to write a script to prevent duplicated rows before loading data from a couple of CSV files into an OLE DB destination table.
Another quick question: when I use Data Conversion to convert data from string to datetime or decimal types, it always returns an error about potential data loss.

For your first question, probably an easier approach is to use a SORT transform to remove duplicate records before loading data into your destination.

For the other question, I think it's a matter of which format you used in your source strings. First, please be aware that locale information is used when converting strings to date or decimal types. Second, when converting a string to a date type, you have two options: normal conversion and fast-parse conversion. Normal conversion supports standard OLE DB formats, while fast parse supports ISO 8601. (The FastParse option is on the Data Conversion output columns.)

You'll need to get more detailed help on this from SQL Server Books Online. E.g., for fast parse, please refer to ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/extran9/html/bed6e2c1-791a-4fa1-b29f-cbfdd1fa8d39.htm

thanks

wenyang

|||Thanks for your fast response. My first question is about loading data from CSV files into the table without inserting duplicated rows that already exist in the table.|||

I see. You want to avoid inserting rows that would duplicate rows in your existing destination table. In that case, you can do a Lookup first, then route only the "failing" (non-matching) rows to the destination. Remember to set the Lookup's error flow handling to Redirect.
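The equivalent logic in T-SQL, if you first staged the CSV rows into a working table, is a NOT EXISTS insert (table and column names here are hypothetical):

```sql
-- Insert only staged rows whose business key is not already in the destination.
INSERT INTO dbo.Destination (BusinessKey, Col1, Col2)
SELECT s.BusinessKey, s.Col1, s.Col2
FROM dbo.Staging AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Destination AS d
                  WHERE d.BusinessKey = s.BusinessKey);
```

This does in one set-based statement what the Lookup-with-Redirect pattern does row by row in the data flow.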

thanks

wenyang

Thursday, March 8, 2012

Any SELECT Statement Gurus Out There?

Hi all,

I'm having real trouble wrapping my newbie brain around this problem. Can
someone please tell me the most efficient (or any!) way to write a SELECT
statement to return a set of rows from 5 tables:

These tables are part of our medical database. For example, a person seeking
healthcare may want to know the names and contact information for all
doctors in Reno, NV, who do cardiology. Sounds simple, but it isn't.

A medical group can have many doctors.
A doctor may be a member of more than one medical group.
A doctor or group can have multiple specialties.
A group can have multiple facilities (physical locations).

So the tables look like this...

Table: Doctors
-----
DoctorName
DoctorID

Table: Groups
-----
GroupName
GroupID

Table: Docs2Groups (provides many-to-many relationship between Doctors and
Groups)
-------
DoctorID
GroupID

Table: Specialties (provides many-to-many relationship between Doctors,
Groups, and Specialties)
------
SpecialtyID
SpecialtyDesc
DoctorID
GroupID

Facilities
---
FacilityName (provides one-to-many relationship between Groups and
Facilities)
FacilityID
GroupID
Address
City
State
Zip
Phone
E-mail

Any help would be GREATLY appreciated.

--Eric Robinson

|||

this table confuses me:

Table: Specialties (provides many-to-many relationship between Doctors,
Groups, and Specialties)
------
SpecialtyID
SpecialtyDesc
DoctorID
GroupID

Are you saying that a doctor can only specialize in something within
the context of a certain group? In other words Dr.Smith is a pediatric
oncologist, but can only work in pediatrics for Group A and oncology
for group B? Does that happen?

It would be easier if a Group was a collection of doctors who all had
specialties.

SELECT d.DoctorName
FROM Doctors d JOIN Docs2Groups d2g on d.DoctorID = d2g.DoctorID
JOIN Specialties s ON d.DoctorID = s.DoctorID
JOIN Facilities f ON f.GroupID = d2g.GroupID
WHERE s.SpecialtyDesc = 'cardiology'
AND f.City = 'Reno' AND f.State = 'NV'

But, if a Doctor can only specialize in something in the context of a
group, then you could do this:

SELECT d.DoctorName
FROM Doctors d JOIN Docs2Groups d2g on d.DoctorID = d2g.DoctorID
JOIN Specialties s ON d2g.DoctorID = s.DoctorID
AND d2g.GroupID = s.GroupID
JOIN Facilities f ON f.GroupID = d2g.GroupID
WHERE s.SpecialtyDesc = 'cardiology'
AND f.City = 'Reno' AND f.State = 'NV'

Untested.

HTH,
Stu|||Stu,

Each doctor has his or her own primary specialty and 0 or more additional
specialties. These specialties are associated with the doctor, not the group.
A group's specialty is a function of its participating doctors. If all the
docs for Group A are cardiologists, then Group A is considered a cardiology
group. If the docs are of different specialties, then the group is
considered "multispecialty." In that sense, a group is, as you say, a
collection of doctors.

HOWEVER, there are unusual cases where the group consists of docs with
different specialties, but it still wants to be known (for the purposes
of our directory) as one certain kind of group. In these cases, the group
itself gets an "overriding" specialty associated with it to keep from being
listed as "multispecialty."

Therefore the Specialties table does double-duty, but there are only a few
records with GroupID <> 0.

Does that clear things up?

--Eric

"Stu" <stuart.ainsworth@.gmail.com> wrote in message
news:1126550993.851782.190410@.g43g2000cwa.googlegr oups.com...
> this table confuses me:
> Table: Specialties (provides many-to-many relationshop between Doctors,
> Groups, and Specialties)
> ------
> SpecialtyID
> SpecialtyDesc
> DoctorID
> GroupID
> Are you saying that a doctor can only specialize in something within
> the context of a certain group? In other words Dr.Smith is a pediatric
> oncologist, but can only work in pediatrics for Group A and oncology
> for group B? Does that happen?
> It would be easier if a Group was a collection of doctors who all had
> specialties.
> SELECT d.DoctorName
> FROM Doctors d JOIN Docs2Groups d2g on d.DoctorID = d2g.DoctorID
> JOIN Specialties s ON d.DoctorID = s.DoctorID
> JOIN Facilities f ON f.GroupID = d2g.GroupID
> WHERE s.SpecialityDesc = 'cardiology'
> AND f.City = 'Reno' AND f.State = 'NV'
> But, if a Doctor can only specialize in something in the context of a
> group, then you could do this:
> SELECT d.DoctorName
> FROM Doctors d JOIN Docs2Groups d2g on d.DoctorID = d2g.DoctorID
> JOIN Specialties s ON d2g.DoctorID = s.DoctorID
> AND d2g.GroupID = s.GroupID
> JOIN Facilities f ON f.GroupID = d2g.GroupID
> WHERE s.SpecialityDesc = 'cardiology'
> AND f.City = 'Reno' AND f.State = 'NV'
> Untested.
> HTH,
> Stu|||Oops, I misstated the Specialties table. It is actually three tables:

Specialty
---
SpecialtyID
SpecialtyDesc

Doc2Specialty
------
SpecialtyID
DoctorID

Group2Specialty
------
SpecialtyID
GroupID

"Stu" <stuart.ainsworth@.gmail.com> wrote in message
news:1126550993.851782.190410@.g43g2000cwa.googlegr oups.com...
> this table confuses me:
> Table: Specialties (provides many-to-many relationshop between Doctors,
> Groups, and Specialties)
> ------
> SpecialtyID
> SpecialtyDesc
> DoctorID
> GroupID
> Are you saying that a doctor can only specialize in something within
> the context of a certain group? In other words Dr.Smith is a pediatric
> oncologist, but can only work in pediatrics for Group A and oncology
> for group B? Does that happen?
> It would be easier if a Group was a collection of doctors who all had
> specialties.
> SELECT d.DoctorName
> FROM Doctors d JOIN Docs2Groups d2g on d.DoctorID = d2g.DoctorID
> JOIN Specialties s ON d.DoctorID = s.DoctorID
> JOIN Facilities f ON f.GroupID = d2g.GroupID
> WHERE s.SpecialityDesc = 'cardiology'
> AND f.City = 'Reno' AND f.State = 'NV'
> But, if a Doctor can only specialize in something in the context of a
> group, then you could do this:
> SELECT d.DoctorName
> FROM Doctors d JOIN Docs2Groups d2g on d.DoctorID = d2g.DoctorID
> JOIN Specialties s ON d2g.DoctorID = s.DoctorID
> AND d2g.GroupID = s.GroupID
> JOIN Facilities f ON f.GroupID = d2g.GroupID
> WHERE s.SpecialityDesc = 'cardiology'
> AND f.City = 'Reno' AND f.State = 'NV'
> Untested.
> HTH,
> Stu|||Eric Robinson (eric @. pmcipa..{com}) writes:
> These tables are part of our medical database. For example, a person
> seeking healthcare may want to know the names and contact information
> for all doctors in Reno, NV, who do cardiology. Sounds simple, but it
> isn't.
> A medical group can have many doctors.
> A doctor may be a member of more than one medical group.
> A doctor or group can have multiple specialties.
> A group can have multiple facilities (physical locations).
> So the tables look like this...

It is always preferable to post CREATE TABLE scripts for the tables.
That, and test data in INSERT statements, and the desired result for
the test data. That will give you a tested solution.

So this is an untested solution:

SELECT D.DoctorName
FROM Doctors D
WHERE EXISTS (SELECT *
              FROM Docs2Groups DG
              JOIN Facilities F ON F.GroupID = DG.GroupID
              WHERE DG.DoctorID = D.DoctorID
                AND F.State = 'NV'
                AND F.City = 'Reno')
  AND EXISTS (SELECT *
              FROM Doc2Specialty DS
              JOIN Specialty S ON DS.SpecialtyID = S.SpecialtyID
              WHERE DS.DoctorID = D.DoctorID
                AND S.SpecialtyDesc = 'Cardiology')
UNION
SELECT D.DoctorName
FROM Doctors D
WHERE EXISTS (SELECT *
              FROM Docs2Groups DG
              JOIN Facilities F ON F.GroupID = DG.GroupID
              JOIN Group2Specialty GS ON GS.GroupID = DG.GroupID
              JOIN Specialty S ON GS.SpecialtyID = S.SpecialtyID
              WHERE DG.DoctorID = D.DoctorID
                AND F.State = 'NV'
                AND F.City = 'Reno'
                AND S.SpecialtyDesc = 'Cardiology')

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||On Mon, 12 Sep 2005 12:36:02 -0700, Eric Robinson wrote:

>Oops, I mistated the Specialties table. It is actually three tables:
(snip)

Hi Eric,

I'm glad you posted that, since I was just preparing to tell you how
you should change your design and all <grin>.

Anyway, since we now have doctors who do cardiology, groups that
specialize in cardiology and doctors that belong to groups that do
cardiology, it's not exactly clear what you want your query to return.
I'll assume you want to know what I would be interested in if my heart
starts acting funny while I'm in Reno - I'd like to know where to go
(and quick!).

The query below will return the name of the group and the details of the
location for each group with a location in Reno, NV that either has
cardiology as group specialization, or hosts at least one doctor who
specializes in cardiology.

SELECT g.GroupName, f.FacilityName, f.Address, f.Phone
FROM Groups AS g
INNER JOIN Facilities AS f
ON f.GroupID = g.GroupID
LEFT JOIN Group2Specialty AS g2s
ON g2s.GroupID = g.GroupID
CROSS JOIN (SELECT SpecialtyID
FROM Specialties
WHERE SpecialtyDesc = 'Cardiology') AS s(SpecialtyID)
WHERE f.City = 'Reno'
AND f.State = 'NV'
AND
( g2s.SpecialtyID = s.SpecialtyID
OR EXISTS
(SELECT *
FROM Docs2Groups AS d2g
INNER JOIN Doc2Specialty AS d2s
ON d2s.DoctorID = d2g.DoctorID
WHERE d2g.GroupID = g.GroupID
AND d2s.SpecialtyID = s.SpecialtyID))

(untested - see www.aspfaq.com/5006 if you prefer tested suggestions).

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)|||I was offline for a few hours and I see that I have some very thoughtful
replies. I will take a closer look at them. In the meantime, based on Stu's
initial suggestion, I came up with the following, which SEEMS to work:

For simplicity's sake, I lied in my original message about the true names of
the tables, so the following query looks slightly different because it
contains real table names. (In the future I'll take Erland's suggestion and
post CREATE TABLE scripts instead.)

SELECT s.Specialty, p.LastName, p.FirstName, p.Degree, g.GroupName,
f.PhysAddr1, f.PhysCity, f.PhysAddr2, f.PhysState, f.Phone1
FROM tblProviders p JOIN tblBindProviderGroup p2g on p.ProviderID =
p2g.ProviderID
JOIN tblGroups g on g.GroupID=p2g.GroupID
JOIN tblFacilities f on f.GroupID=p2g.GroupID
JOIN tblBindProviderSpecialty p2s on p2s.ProviderID=p.ProviderID
JOIN tblSpecialties s on s.SpecialtyID=p2s.SpecialtyID
WHERE f.PhysCity='Reno'
ORDER BY p.LastName

I've run this against the actual data and it SEEMS to return correct
results.

I'm guessing that some of the other approaches people have posted are better
in ways I have not yet thought of.

--Eric

"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns96CFEF79E2BE1Yazorman@.127.0.0.1...
> Eric Robinson (eric @. pmcipa..{com}) writes:
>> These tables are part of our medical database. For example, a person
>> seeking healthcare may want to know the names and contact information
>> for all doctors in Reno, NV, who do cardiology. Sounds simple, but it
>> isn't.
>>
>> A medical group can have many doctors.
>> A doctor may be a member of more than one medical group.
>> A doctor or group can have multiple specialties.
>> A group can have multiple facilities (physical locations).
>>
>> So the tables look like this...
> It is always preferable to post CREATE TABLE scripts for the table.
> That and test data in INSERT statemants, and the desired result of
> the test data. That will give you a tested solution.
> So this is an untested solution:
> SELECT D.DoctorName
> FROM Doctors D
> WHERE EXISTS (SELECT *
> FROM Doc2Groups DG
> JOIN Facility F ON F.GroupID = DG.GroupID
> WHERE DG.DoctorID = D.DoctorID
> AND F.State = 'NV'
> AND F.City = 'Reno')
> AND EXISTS (SELECT *
> FROM Doc2Specialiity DS
> JOIN Specialtities S ON DS.SpecialityID = S.SpecialityID
> WHERE DS.DoctorID = S.DoctorID
> AND S.SpecialityDesc = 'Cardiology')
> UNION
> SELECT D.DoctorName
> FROM Doctors D
> WHERE EXISTS (SELECT *
> FROM Doc2Groups DG
> JOIN Facility F ON F.GroupID = DG.GroupID
> JOIN Group2Specialiity GS ON GS.GroupID = DS.GroupID
> JOIN Specialtities S ON GS.SpecialityID =
> S.SpecialityID
> WHERE DG.DoctorID = D.DoctorID
> AND F.State = 'NV'
> AND F.City = 'Reno'
> AND S.SpecialityDesc = 'Cardiology')
>
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp

Tuesday, March 6, 2012

Any other ways to write this Query

Hello All,
For my own edification, I was wondering if this query can be written in some
other way (such as using joins, etc.).
This query generates the daily cumulative totals.
Many Thanks,
Gopi
Create table DailyOrders (OrderDate Datetime, OrderQty Int)
Insert DailyOrders values (getdate() - 8, 12)
Insert DailyOrders values (getdate() - 7, 22)
Insert DailyOrders values (getdate() - 6, 32)
Insert DailyOrders values (getdate() - 5, 42)
Insert DailyOrders values (getdate() - 4, 52)
Insert DailyOrders values (getdate() - 3, 62)
Insert DailyOrders values (getdate() - 2, 72)
Insert DailyOrders values (getdate() - 1, 82)
Insert DailyOrders values (getdate() - 0, 92)
select * from DailyOrders
OrderDate               OrderQty
----------------------- --------
2005-03-16 00:16:57.920 12
2005-03-17 00:16:57.967 22
2005-03-18 00:16:57.967 32
2005-03-19 00:16:57.967 42
2005-03-20 00:16:57.967 52
2005-03-21 00:16:57.967 62
2005-03-22 00:16:57.967 72
2005-03-23 00:16:57.967 82
2005-03-24 00:16:57.967 92
select D.OrderDate,
D.OrderQty,
(select SUM(o.OrderQty) from DailyOrders O where o.OrderDate <=
D.OrderDate) CumulativeDailyQty
from DailyOrders D
order by d.OrderDate
OrderDate               OrderQty CumulativeDailyQty
----------------------- -------- ------------------
2005-03-16 00:16:57.920 12       12
2005-03-17 00:16:57.967 22       34
2005-03-18 00:16:57.967 32       66
2005-03-19 00:16:57.967 42       108
2005-03-20 00:16:57.967 52       160
2005-03-21 00:16:57.967 62       222
2005-03-22 00:16:57.967 72       294
2005-03-23 00:16:57.967 82       376
2005-03-24 00:16:57.967 92       468

|||

Try,
select
d1.OrderDate,
d1.OrderQty,
sum(d2.OrderQty) as CumulativeDailyQty
from
DailyOrders d1
inner join
DailyOrders d2
on d1.OrderDate >= d2.OrderDate
group by
d1.OrderDate,
d1.OrderQty
order by
d1.OrderDate
go
AMB
"gopi" wrote:

> Hello All,
> For my own edification, I was wondering if this query can be written in some
> other way(such as using Joins ... etc).
> This query generates the daily cumulative totals.
> Many Thanks,
> Gopi
> Create table DailyOrders (OrderDate Datetime, OrderQty Int)
> Insert DailyOrders values (getdate() - 8, 12)
> Insert DailyOrders values (getdate() - 7, 22)
> Insert DailyOrders values (getdate() - 6, 32)
> Insert DailyOrders values (getdate() - 5, 42)
> Insert DailyOrders values (getdate() - 4, 52)
> Insert DailyOrders values (getdate() - 3, 62)
> Insert DailyOrders values (getdate() - 2, 72)
> Insert DailyOrders values (getdate() - 1, 82)
> Insert DailyOrders values (getdate() - 0, 92)
> select * from DailyOrders
> OrderDate
> OrderQty
> --- --
> 2005-03-16 00:16:57.920 12
> 2005-03-17 00:16:57.967 22
> 2005-03-18 00:16:57.967 32
> 2005-03-19 00:16:57.967 42
> 2005-03-20 00:16:57.967 52
> 2005-03-21 00:16:57.967 62
> 2005-03-22 00:16:57.967 72
> 2005-03-23 00:16:57.967 82
> 2005-03-24 00:16:57.967 92
> select D.OrderDate,
> D.OrderQty,
> (select SUM(o.OrderQty) from DailyOrders O where o.OrderDate <=
> D.OrderDate) CumulativeDailyQty
> from DailyOrders D
> order by d.OrderDate
> OrderDate
> OrderQty CumulativeDailyQty
> --- -- --
--
> 2005-03-16 00:16:57.920 12
> 12
> 2005-03-17 00:16:57.967 22
> 34
> 2005-03-18 00:16:57.967 32
> 66
> 2005-03-19 00:16:57.967 42
> 108
> 2005-03-20 00:16:57.967 52
> 160
> 2005-03-21 00:16:57.967 62
> 222
> 2005-03-22 00:16:57.967 72
> 294
> 2005-03-23 00:16:57.967 82
> 376
> 2005-03-24 00:16:57.967 92
> 468
>
>|||gopi,
this will work, and i think that it might be slightly more efficient
than the other two methods as it only has to do one table scan:
select D.OrderDate,
D.OrderQty,
SUM(d.OrderQty) CumulativeDailyQty
from DailyOrders D
group by D.OrderDate, D.OrderQty
having D.OrderDate <= D.OrderDate
order by d.OrderDate|||"gopi" <rgopinath@.hotmail.com> wrote in message
news:%236JYer9LFHA.3844@.TK2MSFTNGP14.phx.gbl...
> Hello All,
> For my own edification, I was wondering if this query can be written in
> some other way(such as using Joins ... etc).
> This query generates the daily cumulative totals.
The real answer is no. The culprit, whether in a subquery or join, is the
inequality operator (o.OrderDate <= D.OrderDate), which requires
way too many comparisons, especially on a large table. There are no
ANSI SQL constructs in S2k to allow writing this query without that
terrible comparison:). SQL:1999 and SQL:2003 address this kind of problem
in a much more efficient way by eliminating the need for the inequality comparison.
For example, check out the thread:
http://tinyurl.com/5j95y
and see the DB2 solution to the problem. Oracle also supports what is
referred to as analytic/olap/windowing constructs.
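For reference, on platforms that support windowed aggregates (and, as it later turned out, on SQL Server 2012 and later), the running total the thread is wishing for collapses to a single scan with no inequality join:

```sql
-- Windowed running total over gopi's DailyOrders table (SQL Server 2012+).
SELECT OrderDate,
       OrderQty,
       SUM(OrderQty) OVER (ORDER BY OrderDate
                           ROWS UNBOUNDED PRECEDING) AS CumulativeDailyQty
FROM DailyOrders
ORDER BY OrderDate;
```

The ROWS UNBOUNDED PRECEDING frame is what replaces the `o.OrderDate <= D.OrderDate` comparison: each row's sum is accumulated in order rather than recomputed by rejoining the table.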
Apparently you can wait a year to see that this functionality is still
missing
from Yukon:( Redmonds short shrift to sql in comparion to its major
competition is puzzling at the very least. (Does anyone know who is
ultimately
responsible for t-sql there?:) On the other hand many people eat by
inventing
workarounds:).
Do I hear anyone thinking about Cursors/Client programming?:)
What of portability:)
You might be interested in the RAC solution posted in the above thread.
No inequality comparisons, no cursors, no functions:).
RAC and QALite @.
www.rac4sql.net|||Think about the predicate "D.OrderDate <= D.OrderDate"; it is always
TRUE.
SELECT D1.order_date , D1.order_qty,
SUM (D2.order_qty) AS cum_daily_qty
FROM DailyOrders AS D1, DailyOrders AS D2
WHERE D2.order_date <= D1.order_date
GROUP BY D1.OrderDate, D1.OrderQty;|||Talk about magnetic tape drives! :)
"--CELKO--" <jcelko212@.earthlink.net> wrote in message
news:1111621247.818572.277230@.z14g2000cwz.googlegroups.com...
> Think about the predicate "D.OrderDate <= D.OrderDate"; it is always
> TRUE.
> SELECT D1.order_date , D1.order_qty,
> SUM (D2.order_qty) AS cum_daily_qty
> FROM DailyOrders AS D1, DailyOrders AS D2
> WHERE D2.order_date <= D1.order_date
> GROUP BY D1.OrderDate, D1.OrderQty;
>|||Hello AMB,
I tested these two queries, and it looks like the first query is more efficient
than the second one, as it takes 45.10% compared to 54.90%
when I ran these two together. I created the following index :
CREATE INDEX [x] ON [dbo].[DailyOrders] ([OrderDate]) ON [PRIMARY]
[1] First Query
select D.OrderDate,
D.OrderQty,
(select SUM(o.OrderQty) from DailyOrders O where o.OrderDate <=
D.OrderDate) CumulativeDailyQty
from DailyOrders D
order by d.OrderDate
GO
StmtText
----
----
-
|--Compute Scalar(DEFINE:([Expr1002]=[Expr1002]))
     |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[OrderDate]))
          |--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([Pubs].[dbo].[DailyOrders] AS [D]))
          |    |--Index Scan(OBJECT:([Pubs].[dbo].[DailyOrders].[x] AS [D]), ORDERED FORWARD)
          |--Hash Match(Cache, HASH:([D].[OrderDate]), RESIDUAL:([D].[OrderDate]=[D].[OrderDate]))
               |--Compute Scalar(DEFINE:([Expr1002]=If ([Expr1011]=0) then NULL else [Expr1012]))
                    |--Stream Aggregate(DEFINE:([Expr1011]=COUNT_BIG([O].[OrderQty]), [Expr1012]=SUM([O].[OrderQty])))
                         |--Bookmark Lookup(BOOKMARK:([Bmk1001]), OBJECT:([Pubs].[dbo].[DailyOrders] AS [O]))
                              |--Index Seek(OBJECT:([Pubs].[dbo].[DailyOrders].[x] AS [O]), SEEK:([O].[OrderDate] <= [D].[OrderDate]) ORDERED FORWARD)
Application Profile Statistics
Timer resolution (milliseconds) 0 0
Number of INSERT, UPDATE, DELETE statements 0 0
Rows effected by INSERT, UPDATE, DELETE statements 0 0
Number of SELECT statements 2 2.66667
Rows effected by SELECT statements 19 24.6667
Number of user transactions 5 6
Average fetch time 0 0
Cumulative fetch time 0 0
Number of fetches 0 0
Number of open statement handles 0 0
Max number of opened statement handles 0 0
Cumulative number of statement handles 0 0
Network Statistics
Number of server roundtrips 3 3.33333
Number of TDS packets sent 3 3.33333
Number of TDS packets received 4 4.66667
Number of bytes sent 504 659.333
Number of bytes received 6141 7969.33
Time Statistics
Cumulative client processing time 0 0.666667
Cumulative wait time on server replies 0 0
[2] Second Query
select d1.OrderDate,
d1.OrderQty,
sum(d2.OrderQty) as CumulativeDailyQty
from DailyOrders d1
inner join DailyOrders d2 on d1.OrderDate >= d2.OrderDate
group by d1.OrderDate,
d1.OrderQty
order by d1.OrderDate
go
StmtText
----
----
-
|--Compute Scalar(DEFINE:([Expr1002]=If ([Expr1009]=0) then NULL else [Expr1010]))
     |--Stream Aggregate(GROUP BY:([d1].[OrderDate], [d1].[OrderQty]) DEFINE:([Expr1009]=COUNT_BIG([d2].[OrderQty]), [Expr1010]=SUM([d2].[OrderQty])))
          |--Bookmark Lookup(BOOKMARK:([Bmk1001]), OBJECT:([Pubs].[dbo].[DailyOrders] AS [d2]))
               |--Nested Loops(Inner Join, OUTER REFERENCES:([d1].[OrderDate]))
                    |--Sort(ORDER BY:([d1].[OrderDate] ASC, [d1].[OrderQty] ASC))
                    |    |--Table Scan(OBJECT:([Pubs].[dbo].[DailyOrders] AS [d1]))
                    |--Index Seek(OBJECT:([Pubs].[dbo].[DailyOrders].[x] AS [d2]), SEEK:([d2].[OrderDate] <= [d1].[OrderDate]) ORDERED FORWARD)
Application Profile Statistics
Timer resolution (milliseconds) 0 0
Number of INSERT, UPDATE, DELETE statements 0 0
Rows effected by INSERT, UPDATE, DELETE statements 0 0
Number of SELECT statements 1 1
Rows effected by SELECT statements 9 9
Number of user transactions 1 1
Average fetch time 0 0
Cumulative fetch time 0 0
Number of fetches 0 0
Number of open statement handles 0 0
Max number of opened statement handles 0 0
Cumulative number of statement handles 0 0
Network Statistics
Number of server roundtrips 1 1
Number of TDS packets sent 1 1
Number of TDS packets received 1 1
Number of bytes sent 466 466
Number of bytes received 296 296
Time Statistics
Cumulative client processing time 0 0.333333
Cumulative wait time on server replies 0 0
Many Thanks,
Gopi
"Alejandro Mesa" <AlejandroMesa@.discussions.microsoft.com> wrote in message
news:772A5932-7BC3-482D-B874-221E1881C253@.microsoft.com...
> Try,
> select
> d1.OrderDate,
> d1.OrderQty,
> sum(d2.OrderQty) as CumulativeDailyQty
> from
> DailyOrders d1
> inner join
> DailyOrders d2
> on d1.OrderDate >= d2.OrderDate
> group by
> d1.OrderDate,
> d1.OrderQty
> order by
> d1.OrderDate
> go
>
> AMB
>
> "gopi" wrote:
>|||Actually, I added D.OrderQty to the first query and now it takes the same
amount of time as the second.
I think it is because of the Clustered Index on OrderDate and the addition
of the OrderQty column forced
the Query Optimizer to reorder.
select D.OrderDate,
D.OrderQty,
(select SUM(o.OrderQty) from DailyOrders O where o.OrderDate <=
D.OrderDate) CumulativeDailyQty
from DailyOrders D
order by d.OrderDate, D.OrderQty
GO
Gopi
"rgn" <gopinathr@.healthasyst.com> wrote in message
news:e$K$4kEMFHA.1308@.TK2MSFTNGP15.phx.gbl...
> Hello AMB,
> I tested these two queries and looks like the first query is efficient
> than the second one as it takes 45.10% when compared to 54.90%
> when I ran these two together. I created the following index :
> CREATE INDEX [x] ON [dbo].[DailyOrders] ([OrderDate]) ON [PRIMARY]
> [1] First Query
> select D.OrderDate,
> D.OrderQty,
> (select SUM(o.OrderQty) from DailyOrders O where o.OrderDate <=
> D.OrderDate) CumulativeDailyQty
> from DailyOrders D
> order by d.OrderDate
> GO
> StmtText
> ----
----
--
> |--Compute Scalar(DEFINE:([Expr1002]=[Expr1002]))
> |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[OrderDate]))
> |--Bookmark Lookup(BOOKMARK:([Bmk1000]),
> OBJECT:([Pubs].[dbo].[DailyOrders] AS [D]))
> | |--Index Scan(OBJECT:([Pubs].[dbo].[DailyOrders].[x] AS
> [D]), ORDERED FORWARD)
> |--Hash Match(Cache, HASH:([D].[OrderDate]),
> RESIDUAL:([D].[OrderDate]=[D].[OrderDate]))
> |--Compute Scalar(DEFINE:([Expr1002]=If ([Expr1011]=0)
> then NULL else [Expr1012]))
> |--Stream
> Aggregate(DEFINE:([Expr1011]=COUNT_BIG([
O].[OrderQty]),
> [Expr1012]=SUM([O].[OrderQty])))
> |--Bookmark Lookup(BOOKMARK:([Bmk1001]),
> OBJECT:([Pubs].[dbo].[DailyOrders] AS [O]))
> |--Index
> S(OBJECT:([Pubs].[dbo].[DailyOrders].[x] AS [O]), SEEK:([O].[OrderDate]
> <= [D].[OrderDate]) ORDERED FORWARD)
>
> Application Profile Statistics
> Timer resolution (milliseconds) 0 0
> Number of INSERT, UPDATE, DELETE statements 0 0
> Rows effected by INSERT, UPDATE, DELETE statements 0 0
> Number of SELECT statements 2 2.66667
> Rows effected by SELECT statements 19 24.6667
> Number of user transactions 5 6
> Average fetch time 0 0
> Cumulative fetch time 0 0
> Number of fetches 0 0
> Number of open statement handles 0 0
> Max number of opened statement handles 0 0
> Cumulative number of statement handles 0 0
> Network Statistics
> Number of server roundtrips 3 3.33333
> Number of TDS packets sent 3 3.33333
> Number of TDS packets received 4 4.66667
> Number of bytes sent 504 659.333
> Number of bytes received 6141 7969.33
> Time Statistics
> Cumulative client processing time 0 0.666667
> Cumulative wait time on server replies 0 0
>
> [2] Second Query
> select d1.OrderDate,
> d1.OrderQty,
> sum(d2.OrderQty) as CumulativeDailyQty
> from DailyOrders d1
> inner join DailyOrders d2 on d1.OrderDate >= d2.OrderDate
> group by d1.OrderDate,
> d1.OrderQty
> order by d1.OrderDate
> go
> StmtText
> ----
----
--
> |--Compute Scalar(DEFINE:([Expr1002]=If ([Expr1009]=0) then NULL else
> [Expr1010]))
> |--Stream Aggregate(GROUP BY:([d1].[OrderDate], [d1].[OrderQty])
> DEFINE:([Expr1009]=COUNT_BIG([d2].[OrderQty]),
> [Expr1010]=SUM([d2].[OrderQty])))
> |--Bookmark Lookup(BOOKMARK:([Bmk1001]),
> OBJECT:([Pubs].[dbo].[DailyOrders] AS [d2]))
> |--Nested Loops(Inner Join, OUTER
> REFERENCES:([d1].[OrderDate]))
> |--Sort(ORDER BY:([d1].[OrderDate] ASC,
> [d1].[OrderQty] ASC))
> | |--Table Scan(OBJECT:([Pubs].[dbo].[DailyOrders]
> AS [d1]))
> |--Index S(OBJECT:([Pubs].[dbo].[DailyOrders].[x]
> AS [d2]), SEEK:([d2].[OrderDate] <= [d1].[OrderDate]) ORDERED FORWARD)
> Application Profile Statistics
> Timer resolution (milliseconds) 0 0
> Number of INSERT, UPDATE, DELETE statements 0 0
> Rows effected by INSERT, UPDATE, DELETE statements 0 0
> Number of SELECT statements 1 1
> Rows effected by SELECT statements 9 9
> Number of user transactions 1 1
> Average fetch time 0 0
> Cumulative fetch time 0 0
> Number of fetches 0 0
> Number of open statement handles 0 0
> Max number of opened statement handles 0 0
> Cumulative number of statement handles 0 0
> Network Statistics
> Number of server roundtrips 1 1
> Number of TDS packets sent 1 1
> Number of TDS packets received 1 1
> Number of bytes sent 466 466
> Number of bytes received 296 296
> Time Statistics
> Cumulative client processing time 0 0.333333
> Cumulative wait time on server replies 0 0
>
> Many Thanks,
> Gopi
> "Alejandro Mesa" <AlejandroMesa@.discussions.microsoft.com> wrote in
> message news:772A5932-7BC3-482D-B874-221E1881C253@.microsoft.com...
>

Thursday, February 23, 2012

Any idea on this!

How can I execute a SQL script stored in a folder,
i.e. D:\SQLSCRIPT\cs_posting_exc.PRC?
What can I write in Query Analyzer so that I can run that script in that
folder against a specified server and database?|||Have you looked into using osql.exe with xp_cmdshell? See SQL Server Books
Online for more details & post back if you have questions.
Anith
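As a rough sketch of Anith's suggestion, runnable from Query Analyzer (the server and database names are placeholders; -E assumes a trusted connection, and the file path is the one from the question):

```sql
-- Shell out to osql to run the script file against a specified server/database.
-- Replace MYSERVER and MyDatabase with your own names.
EXEC master.dbo.xp_cmdshell
    'osql -S MYSERVER -d MyDatabase -E -i "D:\SQLSCRIPT\cs_posting_exc.PRC" -n';
```

xp_cmdshell requires sysadmin (or proxy) rights, so this is best wrapped in a stored procedure if ordinary users need to invoke it.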

Saturday, February 18, 2012

Any help on application that works like a profiler

Hi
I'd like to write an application that works kind of like Profiler.
I want to get some sort of event notification that something has happened on
the database server, to let me know I should check for data and then log it
into some table.
I tried the system stored procedures, but they don't let me trace directly
into a table; I have to trace into a file and then load it from there into
a table.
I want to write a Profiler-type application that is continuously
listening to server events.
Are there any COM objects available for that, or some other way through
which I can do this?
Thanks in Advance
Pushkar|||In SQL Server Profiler, you can filter events based on application name or
user, and output the event trace to a table or text file. Once this is in
place, your application can simply query for specific event conditions. Just
in case, if your intention is to implement a complex data constraint, then
use a trigger. Also, if you are wanting to block specific types of user
access to tables or stored procedures, then implement this using appropriate
logins and object level permissions.
"Pushkar" <tiwaripushkar@.yahoo.co.in> wrote in message
news:eFI1SsgZFHA.2520@.TK2MSFTNGP09.phx.gbl...
> Hi
> I like to write an application that works kind of like Profiler.
> I want to get some sort of event notification that something has happened
on
> the database server to let me know I should check for data and then log
into
> some table.
> I tried through system stored procedure, but they don't let me directly
> trace into the table, for that I have trace into file then from there to a
> table.
> I want to write something like profiler type application, that should be
> continously listening to server events.
> Are there any COM objects available for that or some other where through
> which I can do this.
> Thanks in Advance
> Pushkar
>
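For a trace that runs without the Profiler GUI, the server-side sp_trace_* procedures can be scripted directly; this sketch (the file path, chosen events, and columns are illustrative) writes completed batches to a file that can later be loaded into a table:

```sql
-- Create a server-side trace writing to a rollover file (option 2).
DECLARE @traceid int, @maxsize bigint;
SET @maxsize = 50;  -- max MB per trace file

EXEC sp_trace_create @traceid OUTPUT, 2, N'D:\Traces\mytrace', @maxsize, NULL;

-- Capture SQL:BatchCompleted (event 12): TextData (col 1), SPID (12), Duration (13).
DECLARE @on bit;
SET @on = 1;
EXEC sp_trace_setevent @traceid, 12, 1,  @on;
EXEC sp_trace_setevent @traceid, 12, 12, @on;
EXEC sp_trace_setevent @traceid, 12, 13, @on;

-- Start the trace.
EXEC sp_trace_setstatus @traceid, 1;

-- Later, load the captured events into a table:
-- SELECT * INTO dbo.TraceLog
-- FROM ::fn_trace_gettable(N'D:\Traces\mytrace.trc', DEFAULT);
```

This is still file-then-table, as the original poster noted, but it needs no GUI and the fn_trace_gettable load can be scheduled, which gets close to the "continuously listening" behavior without a COM API.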

Thursday, February 16, 2012

Any downloadable tool to chart my perfmon data ?

I am collecting perfmon data in a SQL database but I don't know how to write
any web code, etc. to display the contents in a chart so I can look at the
data over a specified time range.
Are there any tools out there that I can download and just point at the
SQL database without writing any code?
Thanks|||Have you looked at SQL Server Reporting Services?
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Hassan" <hassan@.test.com> wrote in message
news:uSKfHsfSIHA.2000@.TK2MSFTNGP05.phx.gbl...
>I am collecting perfmon data in a SQL database but I dont know how to write
>any web code,etc. to display the contents in a chart so I can look at the
>data over a specified time range.
> Are there any tools out there that I can download and just point it to the
> SQL database without writing any code ?
> Thanks|||Hassan wrote:
> I am collecting perfmon data in a SQL database but I dont know how to
> write any web code,etc. to display the contents in a chart so I can look
> at the data over a specified time range.
> Are there any tools out there that I can download and just point it to
> the SQL database without writing any code ?
This may be one of those things that's not as hard as you think. Go
into Excel, connect to your external data source, then use the chart wizard.
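If the data was logged with Perfmon's SQL Server log format, the backing tables are CounterData and CounterDetails, so the external-data query Excel charts from might look like this (counter name and date range are examples; adjust to your collector's schema):

```sql
-- Pull one counter over a time window from the standard Perfmon SQL-log tables.
SELECT cd.CounterDateTime,
       det.ObjectName,
       det.CounterName,
       cd.CounterValue
FROM CounterData AS cd
JOIN CounterDetails AS det ON det.CounterID = cd.CounterID
WHERE det.CounterName = '% Processor Time'
  AND cd.CounterDateTime BETWEEN '2008-01-01 00:00' AND '2008-01-02 00:00'
ORDER BY cd.CounterDateTime;
```

CounterDateTime is stored as a character column in that schema, so the BETWEEN comparison works only because the values are in sortable ISO format.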
Hal Rottenberg <hal@.halr9000.com>
Author, TechProsaic (http://halr9000.com)
Webmaster, Psi (http://psi-im.org)
Co-host, PowerScripting Podcast (http://powerscripting.net)