
Sunday, March 25, 2012

Anyone got SqlExec v2.01?

We got our own key, but I lost this software...
My company doesn't want to upgrade it.
Can anyone give me a hand?
The simplest way would be to contact "SnoopSoft", the company that produces
SqlExec: http://www.snoopsoft.com/sqlexec/index.html
Cristian Lefter, SQL Server MVP
"Agnes" <agnes@.dynamictech.com.hk> wrote in message
news:utLx4mGVFHA.3544@.TK2MSFTNGP10.phx.gbl...
> We got our own key. but i lost this software...
> My company dont' want to upgrade it.
> Any people can give a help ?
>
|||You might mean SQLExecMS from laplas-soft. If this is the case, then an
upgrade within any major version is free for that product; just ask for a
new key.
"Agnes" <agnes@.dynamictech.com.hk> wrote in message
news:utLx4mGVFHA.3544@.TK2MSFTNGP10.phx.gbl...
> We got our own key. but i lost this software...
> My company dont' want to upgrade it.
> Any people can give a help ?
>

Tuesday, March 6, 2012

Any pros/cons to having multiple Publications

sql2k sp3
I'm going to start replicating 40 or so tables. Is there
anything good or bad about each table going into its own Publication?
TIA, ChrisR
Ideally you will want to group your publications into logical units for
administrative, functional, and performance reasons.
To make life simpler for yourself you should have a single publication. This
eases the administrative burden. However, sometimes you will want to
replicate different tables on different schedules, i.e. some transactions are
required to be replicated in real time, while others need only be replicated once per
day.
You will get better performance if you group the articles that have DRI
relationships into the same publication, and then create multiple
publications and use the independent_agent option on each publication. This
will create multiple distribution agents replicating to the same subscriber
db.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"ChrisR" <anonymous@.discussions.microsoft.com> wrote in message
news:01ef01c496a0$ed8c86c0$a401280a@.phx.gbl...
> sql2k sp3
> Im going to start Replicating 40 or so tables. Is there
> any good/ bad to each table going into its own Publication?
> TIA, ChrisR
|||> You will get better performance if you group your articles that have dri
> relationships into the same publications, and then create multiple
> publications and use the independent_agent option on each publication. This
> will create multiple distribution agents replicating to the same subscriber
> db.
I'm going to be replicating to a denormalized db for reporting purposes only.
(Thanks to you ;-) ) Therefore, I'm not going to include the DRI as it will
be enforced on the Publisher. That being the case, there's really no logical
grouping I can use. What about just putting the really big tables into
their own Publications and using the independent_agent option you
mentioned?
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:OvJMuQtlEHA.896@.TK2MSFTNGP12.phx.gbl...
> ideally you will want to group your publications into logical units for
> administrative, functional, and performance reasons.
> To make life simpler for yourself you should have a single publication. This
> eases the administrative burden. However sometimes you will want to
> replicate different tables on different schedules, ie some transactions are
> required to be replicated real time, others must only be replicated once per
> day.
> You will get better performance if you group your articles that have dri
> relationships into the same publications, and then create multiple
> publications and use the independent_agent option on each publication. This
> will create multiple distribution agents replicating to the same subscriber
> db.
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
>
> "ChrisR" <anonymous@.discussions.microsoft.com> wrote in message
> news:01ef01c496a0$ed8c86c0$a401280a@.phx.gbl...
>
|||Separate them according to activity. I.e., if you are publishing 10 really
volatile tables, 20 that are modified a couple of times an hour, and 50 tables
which are fairly static, you could do 5 or 10 separate publications, each with
one or two of the volatile tables, two to four of the less volatile tables, and
one or two of the static tables.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"ChrisR" <chris@.noemail.com> wrote in message
news:uj$UMhulEHA.2020@.TK2MSFTNGP09.phx.gbl...
> I'm going to be replicating to a denormalized db for reporting purposes only.
> (Thanks to you ;-) ) Therefore, I'm not going to include the DRI as it will
> be enforced on the Publisher. That being the case, there's really no logical
> grouping I can use. What about just putting the really big tables into
> their own Publications and using the independent_agent option you
> mentioned?
>
> "Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
> news:OvJMuQtlEHA.896@.TK2MSFTNGP12.phx.gbl...
>
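A minimal T-SQL sketch of the approach discussed in this thread: articles split into a couple of publications, each marked with independent_agent so each gets its own distribution agent. The database, publication, and table names are hypothetical, and the many sp_addpublication/sp_addarticle parameters not shown are left at their defaults (a real setup also needs snapshot configuration and subscriptions).
-- Hypothetical sketch: volatile and static tables in separate publications,
-- each with its own (independent) distribution agent.
USE MyPublishedDb
GO
EXEC sp_addpublication
    @publication = N'Pub_Volatile',
    @independent_agent = N'true'    -- dedicated distribution agent
EXEC sp_addarticle
    @publication = N'Pub_Volatile',
    @article = N'Orders',
    @source_object = N'Orders'
GO
EXEC sp_addpublication
    @publication = N'Pub_Static',
    @independent_agent = N'true'
EXEC sp_addarticle
    @publication = N'Pub_Static',
    @article = N'ProductCatalog',
    @source_object = N'ProductCatalog'
GO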

Any possibility?

Any possibility?
Right now I have 600 MB of free space on my system, and I have a 6 GB backup
file that I want to restore on my own system...
I don't have any network location where I can place this file and restore it
from there, and no DVD which supports this 6 GB... Is there any
possibility that, as I restore this file, the backup file is also removed
simultaneously so that the space would be freed... I am
sure it's not possible, but you guys might have some ideas...
thanks
Joh wrote:
> Any possibility?
> Right now I have a 600 MB space free and on my system I have a 6 GB
> backup file and I want to restore it on my own system...
> I don't have any network where I have place this file and restore it
> from there and no DVD which support this 6 GB... any
> possibility like when I restore this file the backup file also remove
> simultaneously so that space would be there... I am
> sure it's not possible but might be some idea from you guys...
> thanks
Why not purchase a small USB hard drive? You can get one for about $129
that will give you the needed space for these types of operations. Or a
second internal hard drive, which might be cheaper. Or a friend's
networked PC or laptop.
David G.
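For what it's worth, once the backup file has been copied to an external or second drive, the restore itself is an ordinary RESTORE DATABASE. The paths and logical file names below are hypothetical; RESTORE FILELISTONLY reports the real logical names inside the backup.
-- Hypothetical paths: the 6 GB backup sits on an attached USB drive (E:).
RESTORE FILELISTONLY FROM DISK = N'E:\Backups\MyDb.bak'   -- shows the logical file names
RESTORE DATABASE MyDb
FROM DISK = N'E:\Backups\MyDb.bak'
WITH MOVE N'MyDb_Data' TO N'C:\Data\MyDb.mdf',
     MOVE N'MyDb_Log'  TO N'C:\Data\MyDb_log.ldf',
     STATS = 10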

Any other ways to write this Query

Hello All,
For my own edification, I was wondering if this query can be written in some
other way (such as using joins, etc.).
This query generates the daily cumulative totals.
Many Thanks,
Gopi
Create table DailyOrders (OrderDate Datetime, OrderQty Int)
Insert DailyOrders values (getdate() - 8, 12)
Insert DailyOrders values (getdate() - 7, 22)
Insert DailyOrders values (getdate() - 6, 32)
Insert DailyOrders values (getdate() - 5, 42)
Insert DailyOrders values (getdate() - 4, 52)
Insert DailyOrders values (getdate() - 3, 62)
Insert DailyOrders values (getdate() - 2, 72)
Insert DailyOrders values (getdate() - 1, 82)
Insert DailyOrders values (getdate() - 0, 92)
select * from DailyOrders
OrderDate               OrderQty
----------------------- -----------
2005-03-16 00:16:57.920 12
2005-03-17 00:16:57.967 22
2005-03-18 00:16:57.967 32
2005-03-19 00:16:57.967 42
2005-03-20 00:16:57.967 52
2005-03-21 00:16:57.967 62
2005-03-22 00:16:57.967 72
2005-03-23 00:16:57.967 82
2005-03-24 00:16:57.967 92
select D.OrderDate,
D.OrderQty,
(select SUM(o.OrderQty) from DailyOrders O where o.OrderDate <=
D.OrderDate) CumulativeDailyQty
from DailyOrders D
order by d.OrderDate
OrderDate               OrderQty    CumulativeDailyQty
----------------------- ----------- ------------------
2005-03-16 00:16:57.920 12          12
2005-03-17 00:16:57.967 22          34
2005-03-18 00:16:57.967 32          66
2005-03-19 00:16:57.967 42          108
2005-03-20 00:16:57.967 52          160
2005-03-21 00:16:57.967 62          222
2005-03-22 00:16:57.967 72          294
2005-03-23 00:16:57.967 82          376
2005-03-24 00:16:57.967 92          468
|||Try,
select
d1.OrderDate,
d1.OrderQty,
sum(d2.OrderQty) as CumulativeDailyQty
from
DailyOrders d1
inner join
DailyOrders d2
on d1.OrderDate >= d2.OrderDate
group by
d1.OrderDate,
d1.OrderQty
order by
d1.OrderDate
go
AMB
"gopi" wrote:

> Hello All,
> For my own edification, I was wondering if this query can be written in so
me
> other way(such as using Joins ... etc).
> This query generates the daily cumulative totals.
> Many Thanks,
> Gopi
> Create table DailyOrders (OrderDate Datetime, OrderQty Int)
> Insert DailyOrders values (getdate() - 8, 12)
> Insert DailyOrders values (getdate() - 7, 22)
> Insert DailyOrders values (getdate() - 6, 32)
> Insert DailyOrders values (getdate() - 5, 42)
> Insert DailyOrders values (getdate() - 4, 52)
> Insert DailyOrders values (getdate() - 3, 62)
> Insert DailyOrders values (getdate() - 2, 72)
> Insert DailyOrders values (getdate() - 1, 82)
> Insert DailyOrders values (getdate() - 0, 92)
> select * from DailyOrders
> OrderDate
> OrderQty
> --- --
> 2005-03-16 00:16:57.920 12
> 2005-03-17 00:16:57.967 22
> 2005-03-18 00:16:57.967 32
> 2005-03-19 00:16:57.967 42
> 2005-03-20 00:16:57.967 52
> 2005-03-21 00:16:57.967 62
> 2005-03-22 00:16:57.967 72
> 2005-03-23 00:16:57.967 82
> 2005-03-24 00:16:57.967 92
> select D.OrderDate,
> D.OrderQty,
> (select SUM(o.OrderQty) from DailyOrders O where o.OrderDate <=
> D.OrderDate) CumulativeDailyQty
> from DailyOrders D
> order by d.OrderDate
> OrderDate
> OrderQty CumulativeDailyQty
> --- -- --
--
> 2005-03-16 00:16:57.920 12
> 12
> 2005-03-17 00:16:57.967 22
> 34
> 2005-03-18 00:16:57.967 32
> 66
> 2005-03-19 00:16:57.967 42
> 108
> 2005-03-20 00:16:57.967 52
> 160
> 2005-03-21 00:16:57.967 62
> 222
> 2005-03-22 00:16:57.967 72
> 294
> 2005-03-23 00:16:57.967 82
> 376
> 2005-03-24 00:16:57.967 92
> 468
>
|||gopi,
This will work, and I think that it might be slightly more efficient
than the other two methods as it only has to do one table scan:
select D.OrderDate,
D.OrderQty,
SUM(d.OrderQty) CumulativeDailyQty
from DailyOrders D
group by D.OrderDate, D.OrderQty
having D.OrderDate <= D.OrderDate
order by d.OrderDate
|||"gopi" <rgopinath@.hotmail.com> wrote in message
news:%236JYer9LFHA.3844@.TK2MSFTNGP14.phx.gbl...
> Hello All,
> For my own edification, I was wondering if this query can be written in
> some other way(such as using Joins ... etc).
> This query generates the daily cumulative totals.
The real answer is no. The culprit, whether in a subquery or join, is
the inequality operator (o.OrderDate <= D.OrderDate), which requires
way too many comparisons, especially on a large table. There are no
ANSI SQL constructs in S2k to allow writing this query without that
terrible comparison :). SQL99 and SQL:2003 address this kind of problem in a
much more efficient way by eliminating the need for the inequality comparison.
For example, check out the thread:
http://tinyurl.com/5j95y
and see the DB2 solution to the problem. Oracle also supports what is
referred to as analytic/OLAP/windowing constructs.
Apparently you can wait a year to see that this functionality is still missing
from Yukon :( Redmond's short shrift to SQL in comparison to its major
competition is puzzling at the very least. (Does anyone know who is ultimately
responsible for T-SQL there? :) On the other hand, many people eat by inventing
workarounds :).
Do I hear anyone thinking about cursors/client programming? :)
What of portability? :)
You might be interested in the RAC solution posted in the above thread.
No inequality comparisons, no cursors, no functions :).
RAC and QALite @.
www.rac4sql.net
|||Think about the predicate "D.OrderDate <= D.OrderDate"; it is always
TRUE.
SELECT D1.OrderDate, D1.OrderQty,
       SUM(D2.OrderQty) AS CumulativeDailyQty
FROM DailyOrders AS D1, DailyOrders AS D2
WHERE D2.OrderDate <= D1.OrderDate
GROUP BY D1.OrderDate, D1.OrderQty;
|||Talk about magnetic tape drives! :)
"--CELKO--" <jcelko212@.earthlink.net> wrote in message
news:1111621247.818572.277230@.z14g2000cwz.googlegroups.com...
> Think about the predicate "D.OrderDate <= D.OrderDate"; it is always
> TRUE.
> SELECT D1.order_date , D1.order_qty,
> SUM (D2.order_qty) AS cum_daily_qty
> FROM DailyOrders AS D1, DailyOrders AS D2
> WHERE D2.order_date <= D1.order_date
> GROUP BY D1.OrderDate, D1.OrderQty;
>
|||Hello AMB,
I tested these two queries, and it looks like the first query is more efficient
than the second one, as it takes 45.10% compared to 54.90%
when I ran the two together. I created the following index:
CREATE INDEX [x] ON [dbo].[DailyOrders] ([OrderDate]) ON [PRIMARY]
[1] First Query
select D.OrderDate,
D.OrderQty,
(select SUM(o.OrderQty) from DailyOrders O where o.OrderDate <=
D.OrderDate) CumulativeDailyQty
from DailyOrders D
order by d.OrderDate
GO
StmtText
--------------------------------------------------------------------------------
|--Compute Scalar(DEFINE:([Expr1002]=[Expr1002]))
   |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[OrderDate]))
      |--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([Pubs].[dbo].[DailyOrders] AS [D]))
      |  |--Index Scan(OBJECT:([Pubs].[dbo].[DailyOrders].[x] AS [D]), ORDERED FORWARD)
      |--Hash Match(Cache, HASH:([D].[OrderDate]), RESIDUAL:([D].[OrderDate]=[D].[OrderDate]))
         |--Compute Scalar(DEFINE:([Expr1002]=If ([Expr1011]=0) then NULL else [Expr1012]))
            |--Stream Aggregate(DEFINE:([Expr1011]=COUNT_BIG([O].[OrderQty]), [Expr1012]=SUM([O].[OrderQty])))
               |--Bookmark Lookup(BOOKMARK:([Bmk1001]), OBJECT:([Pubs].[dbo].[DailyOrders] AS [O]))
                  |--Index Seek(OBJECT:([Pubs].[dbo].[DailyOrders].[x] AS [O]), SEEK:([O].[OrderDate] <= [D].[OrderDate]) ORDERED FORWARD)
Application Profile Statistics
Timer resolution (milliseconds) 0 0
Number of INSERT, UPDATE, DELETE statements 0 0
Rows effected by INSERT, UPDATE, DELETE statements 0 0
Number of SELECT statements 2 2.66667
Rows effected by SELECT statements 19 24.6667
Number of user transactions 5 6
Average fetch time 0 0
Cumulative fetch time 0 0
Number of fetches 0 0
Number of open statement handles 0 0
Max number of opened statement handles 0 0
Cumulative number of statement handles 0 0
Network Statistics
Number of server roundtrips 3 3.33333
Number of TDS packets sent 3 3.33333
Number of TDS packets received 4 4.66667
Number of bytes sent 504 659.333
Number of bytes received 6141 7969.33
Time Statistics
Cumulative client processing time 0 0.666667
Cumulative wait time on server replies 0 0
[2] Second Query
select d1.OrderDate,
d1.OrderQty,
sum(d2.OrderQty) as CumulativeDailyQty
from DailyOrders d1
inner join DailyOrders d2 on d1.OrderDate >= d2.OrderDate
group by d1.OrderDate,
d1.OrderQty
order by d1.OrderDate
go
StmtText
--------------------------------------------------------------------------------
|--Compute Scalar(DEFINE:([Expr1002]=If ([Expr1009]=0) then NULL else [Expr1010]))
   |--Stream Aggregate(GROUP BY:([d1].[OrderDate], [d1].[OrderQty]) DEFINE:([Expr1009]=COUNT_BIG([d2].[OrderQty]), [Expr1010]=SUM([d2].[OrderQty])))
      |--Bookmark Lookup(BOOKMARK:([Bmk1001]), OBJECT:([Pubs].[dbo].[DailyOrders] AS [d2]))
         |--Nested Loops(Inner Join, OUTER REFERENCES:([d1].[OrderDate]))
            |--Sort(ORDER BY:([d1].[OrderDate] ASC, [d1].[OrderQty] ASC))
            |  |--Table Scan(OBJECT:([Pubs].[dbo].[DailyOrders] AS [d1]))
            |--Index Seek(OBJECT:([Pubs].[dbo].[DailyOrders].[x] AS [d2]), SEEK:([d2].[OrderDate] <= [d1].[OrderDate]) ORDERED FORWARD)
Application Profile Statistics
Timer resolution (milliseconds) 0 0
Number of INSERT, UPDATE, DELETE statements 0 0
Rows effected by INSERT, UPDATE, DELETE statements 0 0
Number of SELECT statements 1 1
Rows effected by SELECT statements 9 9
Number of user transactions 1 1
Average fetch time 0 0
Cumulative fetch time 0 0
Number of fetches 0 0
Number of open statement handles 0 0
Max number of opened statement handles 0 0
Cumulative number of statement handles 0 0
Network Statistics
Number of server roundtrips 1 1
Number of TDS packets sent 1 1
Number of TDS packets received 1 1
Number of bytes sent 466 466
Number of bytes received 296 296
Time Statistics
Cumulative client processing time 0 0.333333
Cumulative wait time on server replies 0 0
Many Thanks,
Gopi
"Alejandro Mesa" <AlejandroMesa@.discussions.microsoft.com> wrote in message
news:772A5932-7BC3-482D-B874-221E1881C253@.microsoft.com...
> Try,
> select
> d1.OrderDate,
> d1.OrderQty,
> sum(d2.OrderQty) as CumulativeDailyQty
> from
> DailyOrders d1
> inner join
> DailyOrders d2
> on d1.OrderDate >= d2.OrderDate
> group by
> d1.OrderDate,
> d1.OrderQty
> order by
> d1.OrderDate
> go
>
> AMB
>
> "gopi" wrote:
|||Actually, I added D.OrderQty to the first query and now it takes the same
amount of time as the second. I think it is because the clustered index on
OrderDate and the addition of the OrderQty column forced
the Query Optimizer to reorder.
select D.OrderDate,
D.OrderQty,
(select SUM(o.OrderQty) from DailyOrders O where o.OrderDate <=
D.OrderDate) CumulativeDailyQty
from DailyOrders D
order by d.OrderDate, D.OrderQty
GO
Gopi
"rgn" <gopinathr@.healthasyst.com> wrote in message
news:e$K$4kEMFHA.1308@.TK2MSFTNGP15.phx.gbl...
> Hello AMB,
> I tested these two queries and looks like the first query is efficient
> than the second one as it takes 45.10% when compared to 54.90%
> when I ran these two together. I created the following index :
> CREATE INDEX [x] ON [dbo].[DailyOrders] ([OrderDate]) ON [PRIMARY]
> [quoted queries, execution plans, and statistics snipped]
>
> Many Thanks,
> Gopi
> "Alejandro Mesa" <AlejandroMesa@.discussions.microsoft.com> wrote in
> message news:772A5932-7BC3-482D-B874-221E1881C253@.microsoft.com...
>
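As a footnote to the windowing discussion earlier in the thread: the windowed aggregate that SQL 2000 lacks did eventually arrive in SQL Server (ORDER BY inside OVER() for aggregates from SQL Server 2012 onward), and it computes the running total without a triangular join or correlated subquery. A sketch against the same DailyOrders table:
-- Running total via a windowed SUM; requires support for ORDER BY in the
-- OVER() clause of an aggregate (SQL Server 2012 or later).
SELECT OrderDate,
       OrderQty,
       SUM(OrderQty) OVER (ORDER BY OrderDate
                           ROWS UNBOUNDED PRECEDING) AS CumulativeDailyQty
FROM DailyOrders
ORDER BY OrderDate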

Saturday, February 25, 2012

Any MySQL book recommendations for a newbie?

I just bought the book, Build Your Own Database Driven Website, by
Kevin Yank, and I'll start working on a database soon. Any other
books, etc., you can think of that a know-nothing newbie should read?
Is the book above a good one?

Thanks!
Steve H
This is a Microsoft SQL Server group so if you are looking for
recommendations on MySQL books you will probably get more help elsewhere.

Some book lists and other resources for SQLServer (not MySQL):

http://vyaskn.tripod.com/sqlbooks.htm
http://www.aspfaq.com/2423

--
David Portas
SQL Server MVP
--

Saturday, February 11, 2012

ANSI's stance on SmallInt vs. Int

I decided to push this out to its own thread. Here's the question: if you are
deciding to abide purely by ANSI SQL 92 standards and are only thinking in terms
of the logical model, how does the modeler decide when to use a SmallInt vs. an
Integer if the allowed range is considered a physical implementation?
A secondary question relates to boolean values. Is the official ANSI solution to
storing boolean values (0 or 1) to use either a full blown Int (of course, since
we are only talking about ANSI, we have no idea how big that is ;-> ) or a
Char(1)?
Thomas
ANSI / ISO SQL doesn't define absolute numeric precision so the
decisions on which datatype to use can only be made in the context of
an actual implementation. Those physical details are just outside the
scope of the standard. Does it matter?
SQL99 defines a Boolean datatype but SQL Server doesn't support it.
David Portas
SQL Server MVP
--
|||> ANSI / ISO SQL doesn't define absolute numeric precision so the
> decisions on which datatype to use can only be made in the context of
> an actual implementation. Those physical details are just outside the
> scope of the standard. Does it matter?
By specifying the concept of a SmallInt but providing no information that would
help the modeler choose one over the other, it sounds like the ANSI team made an
outright error in including SmallInt in the first place. Said another way, since
designers are already required to accommodate the given DBMS for things like the
range of values for a SmallInt, where is the problem in using DBMS-specific
features like TinyInt? After all, if portability is the holy grail of standards
use, it is already the case that porting a database from one DBMS to another
might break because one vendor used 2 bytes for their SmallInt while another
used 1 byte for their SmallInt. Thus, in terms of data types certainly, it makes
sense to use things like TinyInt to accomplish your goal.
In case you were wondering, I bring this up because of a comment Mr. Celko made
about not using SQL's TinyInt datatype.

> SQL99 defines a Boolean datatype but SQL Server doesn't support it.
Is that true of SQL 2005 as well?
Thomas
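For readers mapping this discussion onto SQL Server specifically (these sizes are SQL Server facts, not ANSI ones): TINYINT is 1 byte (0 to 255), SMALLINT is 2 bytes, INT is 4 bytes, BIGINT is 8 bytes, and the usual stand-in for a boolean column is BIT. A small illustrative table, name hypothetical:
-- Hypothetical table showing the SQL Server integer sizes and the BIT
-- workaround for booleans (SQL Server exposes no BOOLEAN column type).
CREATE TABLE dbo.TypeDemo
(
    TinyVal  TINYINT  NOT NULL,            -- 1 byte, 0 to 255 (SQL Server specific)
    SmallVal SMALLINT NOT NULL,            -- 2 bytes, -32768 to 32767
    IntVal   INT      NOT NULL,            -- 4 bytes
    IsActive BIT      NOT NULL DEFAULT 0   -- 0/1 in place of a true boolean
)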