Tuesday, March 27, 2012
anyone use a 3rd party product to load balance SQL Server?
I'm working on a project needing a high-end large mission critical database
solution. The data is in 1 database. So SQL 2000 doesn't support load
balancing, and neither will 2005. That sucks. Oracle supposedly does load
balancing and I can't understand why even SQL Server 2005 won't do it. So
either I go with an 8+ CPU SQL Server or consider a 3rd party load balancer
that will somehow (?) make it work so that multiple SQL brains can balance
the load of 1 single database that is presumably shared on 1 set of disks.
I'm finding very little information on load balancing SQL Server when
searching the net. Is there a product like Legato, for example, (I haven't
tried it) that will somehow give me load balancing with SQL Server?

HK wrote:
> I'm working on a project needing a high-end large mission critical
> database solution. The data is in 1 database. So SQL 2000 doesn't
> support load balancing, and neither will 2005. That sucks. Oracle
> supposedly does load balancing and I can't understand why even SQL
> Server 2005 won't do it. So either I go with an 8+ CPU SQL Server or
> consider a 3rd party load balancer that will somehow (?) make it work
> so that multiple SQL brains can balance the load of 1 single database
> that is presumably shared on 1 set of disks.
> I'm finding very little information on load balancing SQL Server when
> searching the net. Is there a product like Legato, for example, (I
> haven't tried it) that will somehow give me load balancing with SQL
> Server?
You may want to take a look at Federated Database Servers on SQL Server
2000/2005 if that architecture will work for you.
--
David Gugick
Quest Software
www.imceda.com
www.quest.com

"David Gugick" <david.gugick-nospam@.quest.com> wrote in message
news:uzLN8u5wFHA.3180@.TK2MSFTNGP14.phx.gbl...
> [quoted text snipped]
Creating the partitioned views sounds very challenging for an already-in-use
DB that needs to be cut over in the middle of the night. Plus ongoing
maintenance. Maybe I'm missing the boat, but that's why I'd want to stay
away from federated databases.
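For readers weighing the suggestion above, here is a minimal sketch of the partitioned-view pattern being discussed (table and column names are hypothetical):

-- Each member table holds one key range, enforced by a CHECK constraint:
CREATE TABLE Orders_1 (
    OrderID   INT PRIMARY KEY CHECK (OrderID BETWEEN 1 AND 999999),
    OrderDate DATETIME NOT NULL
)
CREATE TABLE Orders_2 (
    OrderID   INT PRIMARY KEY CHECK (OrderID BETWEEN 1000000 AND 1999999),
    OrderDate DATETIME NOT NULL
)
GO
-- The view presents the members as one logical table; the optimizer uses
-- the CHECK constraints to touch only the member(s) a query actually needs:
CREATE VIEW AllOrders AS
SELECT OrderID, OrderDate FROM Orders_1
UNION ALL
SELECT OrderID, OrderDate FROM Orders_2

In the federated (distributed) form the member tables live on different servers and are reached through linked servers, which is what lets multiple SQL Server instances share the load of one logical database.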
Anyone know why this isn't working...?

I forgot to include the XML yesterday...
I am using the following C#...
SQLXMLBULKLOADLib.SQLXMLBulkLoad3Class loader = new SQLXMLBULKLOADLib.SQLXMLBulkLoad3Class();
loader.ConnectionString = "PROVIDER=SQLOLEDB; Server=(local); database=TestDB; user id=test; password=password";
loader.ErrorLogFile = @"C:\log.txt";  // verbatim string so the backslash is not treated as an escape
loader.SchemaGen = true;
loader.SGDropTables = true;
loader.CheckConstraints = true;
loader.Execute( @"C:\xsd.xsd" /* this file below */, @"C:\xml.xml" );
To generate tables in the database using the following xml schema... (file: xsd.xsd...)
<?xml version="1.0" standalone="yes"?>
<xs:schema id="PartnerData" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
<xs:element name="PartnerData">
<xs:complexType>
<xs:choice minOccurs="0" maxOccurs="unbounded">
<xs:element name="MessageHeader">
<xs:complexType>
<xs:sequence>
<xs:element name="SystemName" type="xs:string" minOccurs="0" />
<xs:element name="CreationDate" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="Year" type="xs:string" minOccurs="0" />
<xs:element name="Month" type="xs:string" minOccurs="0" />
<xs:element name="Day" type="xs:string" minOccurs="0" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="MessageBody">
<xs:complexType>
<xs:sequence>
<xs:element name="Product" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="Manufacturer" type="xs:string" minOccurs="0" />
<xs:element name="Brand" type="xs:string" minOccurs="0" />
<xs:element name="LongDescription" type="xs:string" minOccurs="0" />
<xs:element name="ShortDescription" type="xs:string" minOccurs="0" />
<xs:element name="Category" type="xs:string" minOccurs="0" />
<xs:element name="Family" type="xs:string" minOccurs="0" />
<xs:element name="ProductCode" type="xs:string" minOccurs="0" />
<xs:element name="InformationsUrl" type="xs:string" minOccurs="0" />
<xs:element name="PictureUrl" type="xs:string" minOccurs="0" />
<xs:element name="PublicationDate" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="Year" type="xs:string" minOccurs="0" />
<xs:element name="Month" type="xs:string" minOccurs="0" />
<xs:element name="Day" type="xs:string" minOccurs="0" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="RecommendedRetailPrice" nillable="true" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:simpleContent>
<xs:extension base="xs:string">
<xs:attribute name="currency" type="xs:string" />
<xs:attribute name="numberOfDecimal" type="xs:string" />
</xs:extension>
</xs:simpleContent>
</xs:complexType>
</xs:element>
<xs:element name="Characteristics" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="Characteristic" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="Label" type="xs:string" minOccurs="0" />
<xs:element name="Value" type="xs:string" minOccurs="0" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="Arguments" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="Argument" type="xs:string" minOccurs="0" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
<xs:attribute name="codeEan" type="xs:string" />
<xs:attribute name="localisation" type="xs:string" />
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:choice>
</xs:complexType>
</xs:element>
</xs:schema>
My problem is that it doesn't generate the tables, and I am following the SQLXML 2.0 documentation as closely as possible.
Does anyone know what I have to do to get it to generate the tables? Running it doesn't even produce an error.
Cheers,
Matt.
Sunday, March 25, 2012
Anyone ever tried this one?
1. Schedule the batch so that it runs on its own based on a schedule.
2. Somehow get SQL or Access (whichever is better) to talk to the "Tag.exe" program that does the actual conversion but on the mp3 server.
3. Completely automate the process and be able to convert multiple mp3 files based on schedule, batch file and software.
Has anyone ever done anything like this and if so which database program did you use and how did you get it to work? Details please as I am still learning SQL Server 2005 and am not a .Net guru either.
Any help is much appreciated. I may be reached at: erico4@.u.washington.edu

I wouldn't automate that from SQL Server. Write an application which can be scheduled and uploads the data to SQL Server. SQL Server is for storing data, not external processing.
HTH, Jens K. Suessmeyer.
http://www.sqlserver2005.de

Jens - thanks but what exactly am I storing in SQL? Am I storing the tags themselves?
Example:
Artist: Company Name
Title: Test Title
Album: News & Information
Year: Copyright 2006 Company Name
Genre: news
Comment: John Smith etc etc
This was suggested by a coworker via email. But I'm not clear on what they mean. How would you interpret this?
If you end up using
Access .... you might be able to get the SQL script out of the Access wizard
http://office.microsoft.com/en-us/access/HP051876301033.aspx?pid=CH063648341033
I'm trying to piece together how all this flows from step one and so on. I need help thinking it through the process so I can have a better understanding of that part of it.
That's a bit hard for us to tell, as we don't know the specifics of the TAG program and the process itself. A possible solution for this could be:
1. Grab the Wav file from the folder
2. Extract the Tag information from the Wav file and store it in a flat file
3. Call the MP3 converter
4. Call the tag application using the flat file.
5. Delete the flat file.
This can be all done without any interaction with a SQL Server unless you want to persist the data on the server.
HTH, Jens K. Suessmeyer.
http://www.sqlserver2005.de
You would use Access or SQL Server to store some data related to the process and/or the files that are being processed. It sounds like tag.exe is applying the contents of the data file (Artist, Title, Album, Year, etc.) to the metadata of the mp3 file. If that's correct, you can replace everything that deals with these data files (create the file, parse the file, etc.) with a database and a corresponding query. At its simplest, you could simply map the data file to a table in the database. If you wanted to start doing more interesting processing (say, all Titles with "Hendrix"), you would want to create separate tables for your query dimensions (e.g. Artist, Genre, Year)...
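As a concrete sketch of the table mapping just described (all names here are hypothetical, not from the thread):

CREATE TABLE Mp3Tags (
    FileName VARCHAR(260) NOT NULL PRIMARY KEY,
    Artist   VARCHAR(100) NULL,
    Title    VARCHAR(100) NULL,
    Album    VARCHAR(100) NULL,
    TagYear  VARCHAR(50)  NULL,
    Genre    VARCHAR(50)  NULL,
    Comment  VARCHAR(255) NULL
)

-- The "more interesting processing" mentioned above, e.g. all Titles with "Hendrix":
SELECT FileName, Title
FROM Mp3Tags
WHERE Title LIKE '%Hendrix%'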
There are lots of options here for what you can do. There is a good starter link on SQL Server at http://msdn2.microsoft.com/en-us/sql/aa336347.aspx (Look in the section titled "How do I learn more about SQL Server 2005"...specifically the Architecture and Design link and the Programmability link).
Access is a great introductory tool to help get your head wrapped around how you can use a database in this scenario. SQL Server 2005 Express Edition when used in conjunction with Visual Studio 2005 would be another good option.
Hope that helps and good luck!
Monday, March 19, 2012
Any way to bold certain words in data field?
Thanks!

We used to have a system that fed us the data in separate fields of
History:
Technique:
Comparison:
Findings:
Conclusion:
but now we have a system that is unable to split the data elements up before feeding it into our system. We can't add any formatting on the upstream system either, unfortunately. Now our report looks like garbage lol. Hopefully one of you gurus will be able to help.
Thursday, March 8, 2012
Any SQL experts?
I've added a section of the SP below for reference. Here is an overview of
my table structure (simplified).
EMPLOYEE TABLE
employeeid
name
IsActive
EMPLOYEE HISTORY TABLE
employeeid
name
IsActive
FromDate -- this signifies the start date the current employee record info
was valid.
ToDate -- this signifies the end date the current employee record info was
valid.
PROCESS TABLE
processid
name
startdate
enddate
Basically, this is what the SP is meant to do...... when the sp runs, it
will only carry out processes where the current date is between startdate
and enddate of the process record. And it should only process employees who
were active between these dates. I can find out backdated info about an
employee in the history table. However, if there have been no changes to an
employee (ie: new joiner), then there will be no records in the history
table so I have to read the current status from the master employee table.
Hope all this makes sense.
OLD CODE:
-- this works fairly quickly, but is not correct as it's
not looking at historic employee data
(employees.IsActive = 1)
NEW CODE:
(
    -- if a history record exists for the current employee, then read the info
    (
        (select top 1 IsActive
         from employeehistory eh
         where eh.todate > process.enddate
           and employeeid = employee.employeeid
         order by eh.todate) = 1
    )
    OR
    -- if no history record exists for the current employee, then read current
    -- employee data in the master employee table
    (
        not exists
            (select *
             from employeehistory
             where employeeid = employee.employeeid)
        AND
        (employees.IsActive = 1)
    )
)
The changed section has increased the process time massively. Furthermore,
it is doing a lot of work on tempdb and hence using up all the harddisk
space (and eventually fails). The SP is processing approx 500,000 records.
Does anyone know why this would happen, and what I can do to improve the
performance? What is it writing to tempdb? I've tried adding indexes to the
date/employee columns in the history table, but it doesn't help.

Well, iterating over 500000 records will take a long time. You don't
provide the code of your SP, so we don't know how the iteration is taking
place, which makes it hard to give you any relevant help; however, using a
UNION instead of a Cursor for example might be a solution in your case.
Finally, I really don't understand why you have put an Order By in a
subquery.
Sylvain Lafontaine, ing.
MVP - Technologies Virtual-PC
E-mail: http://cerbermail.com/?QugbLEWINF
"amy" <amy@.nospam.com> wrote in message
news:uyBhOeCUGHA.4452@.TK2MSFTNGP12.phx.gbl...
> [original post snipped]

Thanks for your response. There are no cursors being used, just
straightforward selects/joins. The full SP is very long, I have only included the
part that has changed and is making the tempdb grow massively.
The order by is required because I need to find the 1st instance of the
employee history record after the process date.
Eg:
If the process date is 3rd feb and the employee history is:
empid name IsActive fromdate todate
2 tom 0 20th feb 20th march
2 tom 1 1st feb 20th feb
2 tom 0 29th jan 1st feb
2 tom 1 1st jan 29th jan
then I need to find the state of the employee as at 3rd feb. I do this by
finding the first record where the todate > 3rd feb.
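Written out against those sample rows, the lookup amy describes would be something like this (a sketch only, using the fromdate/todate columns from her table description):

SELECT TOP 1 IsActive
FROM employeehistory
WHERE employeeid = 2
  AND todate > '20060203'  -- the process date (3rd feb)
ORDER BY todate            -- earliest history row that ends after that date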
"Sylvain Lafontaine" <sylvain aei ca (fill the blanks, no spam please)>
wrote in message news:OIqBtGEUGHA.5468@.TK2MSFTNGP14.phx.gbl...
> [quoted text snipped]

amy,
It's hard to suggest without seeing the whole code and understanding all
business requirements.
I'd start by looking at the execution plan, to see whether or not the optimizer is
able to use indexes.
Aaron has a great article at his web site
http://www.aspfaq.com/show.asp?id=2446
"amy" <amy@.nospam.com> wrote in message
news:O46x64GUGHA.3192@.TK2MSFTNGP09.phx.gbl...
> [quoted text snipped]

[Reposted, as posts from outside msnews.microsoft.com do not seem to make
it in.]
amy (amy@.nospam.com) writes:
> The order by is required because I need to find the 1st instance of the
> employee history record after the process date.
> Eg:
> If the process date is 3rd feb and the employee history is:
> empid name IsActive fromdate todate
> 2 tom 0 20th feb 20th march
> 2 tom 1 1st feb 20th feb
> 2 tom 0 29th jan 1st feb
> 2 tom 1 1st jan 29th jan
> then i need to find the state of the employee as at 3rd feb. i do this by
> finding the first record where the enddate > 3rd feb.
The standard idiom is something like:
SELECT eh.isactive
FROM (SELECT * FROM process WHERE processid = @processid) AS p
CROSS JOIN (employees e
            JOIN employeehistory eh
              ON e.employeeid = eh.employeeid
             AND eh.fromdate = (SELECT MAX(eh2.fromdate)
                                FROM employeehistory eh2
                                WHERE eh2.employeeid = eh.employeeid
                                  AND eh2.fromdate <= p.enddate))
Unfortunately, this is not going to perform very well. I guess it is
not possible for you to change the tables, but for this sort of operation,
it can be far more efficient to have one row per employee and day, even
if it takes up a lot more disk space.
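A sketch of the one-row-per-employee-and-day layout Erland mentions (hypothetical names):

CREATE TABLE employeedaystatus (
    employeeid INT      NOT NULL,
    statusdate DATETIME NOT NULL,
    isactive   BIT      NOT NULL,
    PRIMARY KEY (employeeid, statusdate)
)

-- The as-of lookup then collapses to a single indexed equality seek:
SELECT isactive
FROM employeedaystatus
WHERE employeeid = @employeeid
  AND statusdate = @processdate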
It helps if you include CREATE TABLE and CREATE INDEX statements for your
tables. Also sample data as INSERT statements with sample data is good,
as that helps to test the logic of a query.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx

Hi Amy
Check out http://www.aspfaq.com/etiquette.asp?id=5006 on how to post useful
DDL and example data. You don't say what should happen if the employee was
only active for part of the period. This will show whether the employee was active
at the start of the period, or is currently active (but was not necessarily an
employee during the period), which may be a starting place.
SELECT
    e.employeeid,
    e.name
FROM [EMPLOYEE TABLE] e
JOIN [EMPLOYEE HISTORY TABLE] h ON h.employeeid = e.employeeid AND
    h.IsActive = 1 AND (h.FromDate <= @FromDate AND h.ToDate >= @FromDate)
UNION ALL
SELECT
    e.employeeid,
    e.name
FROM [EMPLOYEE TABLE] e
WHERE NOT EXISTS ( SELECT * FROM [EMPLOYEE HISTORY TABLE] h WHERE
    h.employeeid = e.employeeid )
AND e.IsActive = 1
John
"amy" wrote:
> [original post snipped]

Depending on the circumstances, joining on a sub-query can result in
performance issues.
Instead of doing this:
not exists (select * from employeehistory where employeeid =
employee.employeeid)
Consider doing this:
left join employeehistory on employeehistory.employeeid =
employee.employeeid
Also consider inserting relevant transactions from employeehistory into a
temporary table and joining with that.
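Spelled out, that rewrite would look roughly like this (a sketch based on the column names used earlier in the thread):

SELECT e.employeeid, e.name
FROM employees e
LEFT JOIN employeehistory eh ON eh.employeeid = e.employeeid
WHERE eh.employeeid IS NULL  -- no history row exists; same test as NOT EXISTS
  AND e.IsActive = 1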
You can use the Display Estimated Execution Plan feature of Query Analyzer
to determine what lookups and indexes are used and compare different
versions of a SQL statement:
http://msdn.microsoft.com/library/d...
"amy" <amy@.nospam.com> wrote in message
news:uyBhOeCUGHA.4452@.TK2MSFTNGP12.phx.gbl...
> [original post snipped]
Any response?
Am I in the wrong newsgroup? I thought I read that there should be some
response from Microsoft within one working day?
The question below was posted over a week ago....
Hi
I have SQL2000 dev edition installed and am trying to install a new named
instance of MSDE2000A (to replicate what users will be doing).
However when I run Setup I get "The system administrator has set policies to
prevent this installation." But I am the system administrator!
Setup ini files is as follows:
[Options]
INSTANCENAME="System5"
TARGETDIR="C:\Program Files\Silvertree Engineering\IceSpy System5\"
DATADIR="C:\Program Files\Silvertree Engineering\IceSpy System5\Data\"
DISABLENETWORKPROTOCOLS=0
SECURITYMODE=SQL
and I am adding an SAPWD to the command line.
I have the same problem with trying to re-install MS-Office 2003 on a
client's PC. I also logged-in as the admin. I tried editing the registry by
deleting all MS-Office-related entries; somehow, the system still detects
that a previous version had been installed (the program folders were deleted,
including those in "docs & settings").
If you get a decent answer to this problem, please forward it to me.
Thanx!
"quilkin" wrote:
> [original post snipped]

Microsoft may monitor these newsgroups but there is no guarantee that
someone from MS will respond.
Have you checked to see if there has been any changes the security settings
of your system?
Jim
"quilkin" <quilkin@.discussions.microsoft.com> wrote in message
news:BCEA697C-6EFB-4C68-89C0-D86D2B240254@.microsoft.com...
> [original post snipped]
Any Reporting Service Log experts out there?
We are working on developing some statistical reports to help us to better
understand how our users are (or are not) leveraging our Reporting Services
implementation. We have been writing execution log data to a database based
on the Msft provided SSIS package that pulls data from the Report Server
database and I've discovered a hole in the information that I'm hoping
someone can help me fix.
We use data drive subscriptions pretty heavily which of course require
cached data credentials. In the "ExecutionLogs" table, these entries appear
as being requested by "System" and the user shows our proxy account. If you
review the "ReportServerService_..." log on the report server, you can see
the actual detail surrounding the processing of the subscription, but I can't
find a way to correlate these entries back to the ExecutionLog table. The
"ExecutionLogId" in the ExecutionLog table doesn't reference any of the
uniqueidentifiers that you see in the text ReportServerService log.
My end goal is to be able to update the User column in the ExecutionLog
table with the user who was the actual recipient of the report as identified
in the ReportServerService log file.
Anyone tackle this yet or have any ideas as to how it might be accomplished?

I'll give you a big hint.. :)
Remember, you are not limited to just select statements in your datasets for
your reports.
I have a dataset that does INSERTS for my data driven reports... (since it
is data driven, you KNOW who the users are that are going to get the
reports)
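To make the hint concrete, here is a sketch of a data-driven subscription query that logs its own recipients before returning them (the audit table and column names are hypothetical):

-- Record who is about to receive the report...
INSERT INTO dbo.SubscriptionAudit (ReportName, Recipient, QueuedAt)
SELECT 'SalesSummary', EmailAddress, GETDATE()
FROM dbo.ReportRecipients
WHERE IsActive = 1

-- ...then return the recipient rows the subscription expects:
SELECT EmailAddress, 'PDF' AS RenderFormat
FROM dbo.ReportRecipients
WHERE IsActive = 1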
cheers!
=-Chris
"KS" <ks@.community.nospam> wrote in message
news:C44736CF-304A-4478-98E9-01BDA5A7C0B2@.microsoft.com...
> [original post snipped]

Thanks for the hint, Chris.
Unless I'm missing something, however, that still doesn't provide you with
the ability to cross reference the actual report delivery with the
ExecutionLog record to be able to access the other metrics that are being
captured (TimeDataRetrieval, TimeProcessing, TimeRendering, ByteCount,
RowCount...)
Need to find a way to accurately identify a specific data driven
subscription execution with the corresponding ExecutionLog entry.
Any other ideas? Like I said, all the necessary info is in
ReportServerServices_xxx.log, but I don't see how I can accurately tie that
back to the appropriate ExecutionLogId in the ExecutionLog table. The text
file includes a number of uniqueidentifiers, but none of them match up to
the ExecutionLog. Can't go by exact time either as the text file is not
precise enough in the event of batch data driven subscription processing.
"Chris Conner" wrote:
> [quoted text snipped]

Hmm... this is a nice challenge.
Let's try this - forget the log for the moment - I know I could look this
up - but I'm not at a report server at the moment - does the report
Globals!ExecutionTime match the time stored in the ExectionLog? I mean, you
know the report name ...
Here is what I was thinking:
select c.name, b.executiontime
from reportserver.dbo.catalog c
inner join reportserver.dbo.executionlog ex
    on (c.ItemID = ex.ReportID)
inner join BobTable b on (c.name = b.ReportName)
where b.executiontime between ex.TimeStart and ex.TimeEnd
Where "BobTable" is your table that you store the report name and report
execution time when the report runs.
=-Chris
"KS" <ks@.community.nospam> wrote in message
news:2E0EE652-8A08-4D89-9DB3-50EB8B367FF6@.microsoft.com...
> [quoted text snipped]
Saturday, February 25, 2012
Any MySQL book recommendations for newbie?
I've just picked up a book by Kevin Yank, and I'll start working on a database soon. Any other
books, etc., you can think of that a know-nothing newbie should read?
Is the book above a good one?
Thanks!
Steve H

This is a Microsoft SQL Server group so if you are looking for
recommendations on MySQL books you will probably get more help elsewhere.
Some book lists and other resources for SQLServer (not MySQL):
http://vyaskn.tripod.com/sqlbooks.htm
http://www.aspfaq.com/2423
--
David Portas
SQL Server MVP
--
Thursday, February 23, 2012
any ideas?
The system is for enforcement officers where they are creating and working
on case files.
When they create a new case, the insert statement has logic that will go out
and find the last case number used and then increment that number by 1 and
assign this number as the case number.
I am looking to extend this application to Tablet PC's so they can take the
application out into the field. I have the database setup for merge
replication and I am using the Windows Syncrozination Manager to syncronize
the database when they logged into the network.
I would like to have the ability for them to create a new case while in the
field and disconnected from the network.
that is where the problem comes in, while they are disconnected they won't
know what the last case number was, so I need to assign a temporary case
number till they syncronize, during the syncrozination process assign
permannet case numbers.
Anybody had a situation like this or have any ideas how I could do this?
Thanks

Hi Mike
I have not dealt with your situation, but I could envisage that a solution may
be to sync with a set of "holding" tables rather than the main ones, and then
a batch process would update the main tables at some other point.
John
"Mike Read" wrote:
> [original post snipped]
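For what it's worth, one common pattern for Mike's problem is to give each row a replication-stable key plus a device-local temporary number, and assign the permanent case number at the publisher after synchronization. A sketch in SQL Server 2005 syntax (all names hypothetical):

CREATE TABLE Cases (
    CaseGuid       UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
    TempCaseNumber VARCHAR(20)      NOT NULL,  -- e.g. 'TABLET07-00045', assigned offline
    CaseNumber     INT              NULL,      -- permanent number, assigned at the publisher
    CreatedOn      DATETIME         NOT NULL DEFAULT GETDATE()
)

-- Run at the publisher after each synchronization:
DECLARE @max INT
SELECT @max = ISNULL(MAX(CaseNumber), 0) FROM Cases

UPDATE c
SET CaseNumber = @max + c.rn
FROM (SELECT CaseNumber,
             ROW_NUMBER() OVER (ORDER BY CreatedOn) AS rn
      FROM Cases
      WHERE CaseNumber IS NULL) AS c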
Any ideas on how to speed up this sp?
CREATE PROCEDURE isp_ap_calc_apt_totals
    @p_comp char(2),
    @p_vend char(6),
    @p_asofdate char(8)
as
if (@p_asofdate <= '00000000')
begin
    set @p_asofdate = '99999999'
end

delete from XAPAPTTOT
where xapt_comp = @p_comp and xapt_vend = @p_vend and xapt_asof_date = @p_asofdate

insert into XAPAPTTOT
select apph_comp, apph_vend, apph_type, apph_id, @p_asofdate,
    sum(apph_paymnts),
    sum(apph_discts),
    sum(apph_adjts),
    count(apph_paymnts),
    sum(apph_paymnts) + sum(apph_discts) + sum(apph_adjts) + b.apt_gross,
    0,
    max(str(yy,4) + replace(str(mm,2),' ','0') + replace(str(dd,2),' ','0'))
from APPHISTF a join APTRANF b on b.apt_comp = a.apph_comp and b.apt_vend = a.apph_vend and b.apt_type = a.apph_type and b.apt_id = a.apph_id
where ((a.apph_comp = @p_comp) and (a.apph_vend = @p_vend) and (a.apph_unpost_dt = 0)
    and (str(a.yy,4) + replace(str(a.mm,2),' ','0') + replace(str(a.dd,2),' ','0') <= @p_asofdate))
or ((a.apph_unpost_dt > 0 and a.apph_unpost_dt <= @p_asofdate and b.apt_unposted_fg = 1 and b.apt_comp = @p_comp and b.apt_vend = @p_vend and b.apt_type = a.apph_type and b.apt_id = a.apph_id))
or (((str(a.yy,4) + replace(str(a.mm,2),' ','0') + replace(str(a.dd,2),' ','0') <= @p_asofdate) and a.apph_unpost_dt > @p_asofdate and b.apt_comp = @p_comp and b.apt_vend = @p_vend and b.apt_type = a.apph_type and b.apt_id = a.apph_id))
group by apph_comp, apph_vend, apph_type, apph_id

update XAPAPTTOT
set xapt_last_payck =
    (select max(apph_payck) from APPHISTF
     where apph_comp = xapt_comp and apph_vend = xapt_vend and apph_type = xapt_type
       and apph_id = xapt_id
       and str(yy,4) + replace(str(mm,2),' ','0') + replace(str(dd,2),' ','0') = xapt_last_paydt)
where xapt_comp = @p_comp and xapt_vend = @p_vend and xapt_asof_date = @p_asofdate
GO

Post the DDL and the indexes for the tables. Read the hint sticky at the top of the forum. Sample data might help as well, but what kind of volume are we talking about?

The volume varies (multiple customers):
1,000-10,000 APTRANF and APPHISTF records.
The relationship between the tables is that for each APTRANF record
you can have 0 to 9999 APPHISTF records (generally only 0 or 1);
in unusual instances the APPHISTF might have 2-5 records.
The APPHISTF is a payment history (detail) to the APTRANF (master).
We allow unposting of a payment (apph_unpost_dt) and reissuing a new payment.
Not sure about DDL?

Read this link here:
http://www.dbforums.com/t1196943.html

Hi,
Since I don't have any idea of your table structures and indexes thereon, I would go with eliminating redundancies in your code to reduce time. Several of the conditions and calculations are repeated and have now been changed to occur once. The code is given below. Hope this helps:
CREATE PROCEDURE isp_ap_calc_apt_totals
    @p_comp char(2),
    @p_vend char(6),
    @p_asofdate char(8)
as
if (@p_asofdate <= '00000000')
    set @p_asofdate = '99999999'
delete from XAPAPTTOT
where xapt_comp = @p_comp
  and xapt_vend = @p_vend
  and xapt_asof_date = @p_asofdate
insert into XAPAPTTOT
select apph_comp
,apph_vend
,apph_type
,apph_id
,@p_asofdate
,sum(apph_paymnts)
,sum(apph_discts)
,sum(apph_adjts)
,count(apph_paymnts)
,sum(apph_paymnts)+ sum(apph_discts) + sum(apph_adjts) + b.apt_gross
,0
,max(str_1)
from (select apph_comp
            ,apph_vend
            ,apph_type
            ,apph_id
            ,apph_paymnts
            ,apph_discts
            ,apph_adjts
            ,apph_unpost_dt
            ,str(yy,4) + replace(str(mm,2),' ','0') + replace(str(dd,2),' ','0') as str_1
      from APPHISTF
      where apph_comp = @p_comp
        and apph_vend = @p_vend) a
inner join
(select apt_gross
       ,apt_type
       ,apt_id
       ,apt_unposted_fg
 from APTRANF
 where apt_comp = @p_comp
   and apt_vend = @p_vend) b
on ( b.apt_type = a.apph_type
and b.apt_id = a.apph_id)
where (a.apph_unpost_dt = 0
and a.str_1 <= @.p_asofdate)
or (a.apph_unpost_dt > 0
and a.apph_unpost_dt <= @.p_asofdate
and b.apt_unposted_fg = 1)
or (a.str_1 <= @.p_asofdate
and a.apph_unpost_dt > @.p_asofdate)
group by apph_comp, apph_vend, apph_type, apph_id
update XAPAPTTOT
set xapt_last_payck = (select max(apph_payck) from APPHISTF
where apph_comp = xapt_comp
and apph_vend = xapt_vend
and apph_type = xapt_type
and apph_id = xapt_id
and str(yy,4) + replace(str(mm,2),' ','0') + replace(str(dd,2),' ','0') = xapt_last_paydt )
where xapt_comp = @p_comp and xapt_vend = @p_vend and xapt_asof_date = @p_asofdate
GO
--Scalability Experts.
Saturday, February 18, 2012
Any help with an update/join/count?
I'm trying to come up with an update statement for this
dataset... it's a simplified version of the data
that I'm working with, so it might not make
a lot of sense as to why it's structured as it is...
Given these tables:
create table t1 (theKey1 int,
theKey2 int,
needToPutTheDataHere varchar(1),
tooFewError int default 0,
tooManyError int default 0)
insert into t1 values(1,1,'',0,0)
insert into t1 values(2,2,'',0,0)
insert into t1 values(3,3,'',0,0)
create table t2 (theKey1 int,
theKey2 int,
theData varchar(1))
insert into t2 values(1,1,'A')
insert into t2 values(1,1,'B')
insert into t2 values(2,2,'C')
I need to update so that I get this:
t1:
thekey1  thekey2  needToPutTheDataHere  tooFewError  tooManyError
1        1        NULL                  0            1
2        2        C                     0            0
3        3        NULL                  1            0
- link t1 and t2 on thekey1 and thekey2
- put t2.theData in t1.needToPutTheDataHere if there is exactly one
matching row in t2
- set t1.tooFewError or t1.tooManyError to 1 if there is less
than one or more than 1 matching row in t2.
I could also eliminate the tooFew/tooMany columns and just use a
count (0,1,...x) if that would increase performance for large datasets.
Any help on how to write this update would be much appreciated!

I think that you might want to use a VIEW that will always be right
instead of updating constantly.
While I am glad to get any DDL, you have no keys and a VARCHAR(1)
column -- think about that for two seconds. Also, we do not have links
in SQL -- that is assembly language; we have references and joins.
Let's use a tally instead of assembly language style flags:
CREATE TABLE Foobar
(thekey1 INTEGER NOT NULL,
 thekey2 INTEGER NOT NULL,
 needtoputthedatahere CHAR(1) DEFAULT '' NOT NULL,
 tally INTEGER DEFAULT 0 NOT NULL);

INSERT INTO Foobar VALUES (1, 1, '', 0);
INSERT INTO Foobar VALUES (2, 2, '', 0);
INSERT INTO Foobar VALUES (3, 3, '', 0);
CREATE TABLE NewFoobar
(thekey1 INTEGER NOT NULL,
thekey2 INTEGER NOT NULL,
thedata CHAR(1) NOT NULL,
PRIMARY KEY (thekey1, thekey2, thedata)); -- needs a key
INSERT INTO NewFoobar VALUES (1, 1, 'a');
INSERT INTO NewFoobar VALUES (1, 1, 'b');
INSERT INTO NewFoobar VALUES (2, 2, 'c');
Try this:
UPDATE Foobar
SET needtoputthedatahere
    = (SELECT MAX(F1.thedata)
       FROM NewFoobar AS F1
       WHERE Foobar.thekey1 = F1.thekey1
         AND Foobar.thekey2 = F1.thekey2
       HAVING COUNT(*) = 1),  -- empty scalar subqueries become NULL
    tally = (SELECT COUNT(*)
             FROM NewFoobar AS F1
             WHERE Foobar.thekey1 = F1.thekey1
               AND Foobar.thekey2 = F1.thekey2)  -- empty table expressions count zero
WHERE EXISTS
      (SELECT *
       FROM NewFoobar
       WHERE Foobar.thekey1 = NewFoobar.thekey1
         AND Foobar.thekey2 = NewFoobar.thekey2);
You can drop the WHERE clause on the UPDATE if you wish.
Monday, February 13, 2012
Any advice on testing Data Warehouse solution?
We are looking at deploying a data warehouse solution soon that has been
being developed for a few months. The development team has been working on
developing this for the company. Soon they will be turning this over to the
DBA group to do some level of integration testing with the rest of our
production data.
What guidelines are there for testing data warehouse solutions?
What types of things should I be looking for other than user sign off that
the data looks good?

Function test of Backup/restore of full data size
Performance/Load testing of full data size
Kevin Connell, MCDBA
----
The views expressed here are my own
and not of my employer.
----
"Doug Needham" <dneedham@.cfsloans.com> wrote in message
news:eqTaIH3xDHA.2148@.TK2MSFTNGP12.phx.gbl...
> [original post snipped]
Saturday, February 11, 2012
ANSI-92 inner join vs. where clause syntax
My query, working with an intersection table:
SELECT Description, EmailAddress
FROM Accounts_Roles r, Accounts_Users u, Accounts_UserRoles ur
WHERE
r.RoleID = ur.RoleID
AND
u.UserID = ur.UserID
This works fine, but I want to write it using 'inner join' style, so I tried:
SELECT Description, EmailAddress
FROM Accounts_Roles r, Accounts_Users u
INNER JOIN Accounts_UserRoles ur
ON
r.RoleID = ur.RoleID
AND
u.UserID = ur.UserID
which gives me an error (The column prefix 'r' does not match with a table name or alias name used in the query.)
Any ideas as to how I'm screwing this up would be appreciated.
Thanks,
Gordon Z

I'd use:

SELECT Description, EmailAddress
FROM Accounts_UserRoles ur
INNER JOIN Accounts_Roles r
ON (r.RoleID = ur.RoleID)
INNER JOIN Accounts_Users u
ON (u.UserID = ur.UserID)

-PatP

Someone here (at Rackspace) has already come up with a cliché based on talking to me: Take the JOIN out of your WHERE clause (and then adding something like "...and take your head out of your @.$$"), which is actually based on what Rudy (come out and play here!!!) would enumerate for you very accurately, but I just say that WHERE is processed after JOIN. So I'll let you come up to your own conclusion ;)

i really admire people who recognize that there is a difference between JOIN syntax and the older table list method, and are trying to make the change and learn the better way
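For the record, my reading of the error Gordon hit: when comma-list syntax and INNER JOIN are mixed, the JOIN binds only to the table written immediately before it, so the alias r from the comma list is not in scope inside the ON clause. Keeping every table in JOIN syntax avoids the problem entirely:

SELECT Description, EmailAddress
FROM Accounts_Roles r
INNER JOIN Accounts_UserRoles ur ON r.RoleID = ur.RoleID
INNER JOIN Accounts_Users u ON u.UserID = ur.UserID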
ANSI-89 DB JOINS
The database I'm working with doesn't support ANSI-92 JOIN SYNTAX... I'm not sure how much this limits my ability to
deal with the following situation, so I'm soliciting the help of a
guru...I apologize for the lack of scripted table structure, but this
database is embedded in an application that I have no true schema for.
I have a crude diagram of the tables and some of the relationships, but
I've managed to have manually mapped some of the fields in the tables
I'm working with.
What I have is a table(A) that I need to join with 10 other
tables....I'm joining on an identifier in the (A) that may exist many
times in any of the other 10 tables...and may not be in ANY of the
tables.
When I run this query:
SELECT
    SAMPLES.PK_SampleUID, UDFSAMPLEDATA02.AlphaData, UDFSAMPLEDATA01.AlphaData, UDFSAMPLEDATA03.AlphaData,
    UDFSAMPLEDATA05.AlphaData, UDFSAMPLEDATA06.AlphaData,
    UDFSAMPLEDATA07.AlphaData, UDFSAMPLEDATA08.AlphaData,
    UDFSAMPLEDATA09.AlphaData, UDFSAMPLEDATA10.AlphaData
FROM SAMPLES, UDFSAMPLEDATA01, UDFSAMPLEDATA02, UDFSAMPLEDATA03, UDFSAMPLEDATA05,
    UDFSAMPLEDATA06, UDFSAMPLEDATA07, UDFSAMPLEDATA08, UDFSAMPLEDATA09, UDFSAMPLEDATA10
WHERE UDFSAMPLEDATA02.AlphaData <> ' '
  AND UDFSAMPLEDATA01.FK_SampleUID = SAMPLES.PK_SampleUID
  AND UDFSAMPLEDATA02.FK_SampleUID = SAMPLES.PK_SampleUID
  AND UDFSAMPLEDATA03.FK_SampleUID = SAMPLES.PK_SampleUID
  AND UDFSAMPLEDATA05.FK_SampleUID = SAMPLES.PK_SampleUID
  AND UDFSAMPLEDATA06.FK_SampleUID = SAMPLES.PK_SampleUID
  AND UDFSAMPLEDATA07.FK_SampleUID = SAMPLES.PK_SampleUID
  AND UDFSAMPLEDATA08.FK_SampleUID = SAMPLES.PK_SampleUID
  AND UDFSAMPLEDATA09.FK_SampleUID = SAMPLES.PK_SampleUID
  AND UDFSAMPLEDATA10.FK_SampleUID = SAMPLES.PK_SampleUID
I return what appears to be the gazillion COMBINATIONS of all the
fields in all the tables... the query doesn't even finish before the
ODBC driver I'm working with crashes my VBscript...
Is there some way to take the multiple returned rows from a join and
work them all into ONE row per identifier?
Any help I can garner would just make my week!
TIA!
J

A database that doesn't support the ANSI-92 syntax may have its own
proprietary syntax for outer joins. Otherwise, you can use UNION in
place of an outer join, assuming UNION and EXISTS are supported:
CREATE TABLE T1 (x INTEGER PRIMARY KEY)
CREATE TABLE T2 (x INTEGER PRIMARY KEY)
INSERT INTO T1 (x) VALUES (1)
INSERT INTO T1 (x) VALUES (2)
INSERT INTO T2 (x) VALUES (1)
SELECT T1.x, T2.x
FROM T1, T2
WHERE T1.x = T2.x
UNION ALL
SELECT T1.x, NULL
FROM T1
WHERE NOT EXISTS
(SELECT *
FROM T2
WHERE T2.x = T1.x) ;
--
David Portas
SQL Server MVP
--

Thanks for the direction David - I'll start working with this...I'm
sure I'll have more questions as I go.
j
David Portas wrote:
> [quoted text snipped]

You're welcome, but you'll probably find better help in a group or
forum dedicated to the database you are using. This is a Microsoft SQL
Server group.
--
David Portas
SQL Server MVP
--
Thursday, February 9, 2012
ANSI PADDING OFF not working
Hi,
I have set ANSI PADDING off by default for the database but even then when I create any table ANSI PADDING is on.
During creation of a table I am able to set it off before I create it, and the table is then created with the correct settings. But when I am altering the table the setting does not take effect, even when I set it specifically before altering the table.
Any ideas how to make this work?
Thanks.
The SQL Server ODBC driver and OLEDB provider set several SET options ON by default, and ANSI_PADDING is one of those. So it doesn't matter if you switch it OFF at the database level: any connection made by one of the data access APIs will automatically have ANSI_PADDING ON. I am curious as to why you want to set it to OFF. It is recommended to use the ANSI settings by default since a lot of features depend on them (like computed column indexes, indexed view matching etc) and they are also compatible with other databases like DB2/Oracle. See the topics below for more details:
SET ANSI_PADDING topic
http://msdn2.microsoft.com/en-us/library/ms187403.aspx
SET ANSI_DEFAULTS topic
http://msdn2.microsoft.com/en-us/library/ms188340.aspx
Btw, the only way is to modify your application code to set ANSI_PADDING off after making the connection. This also means that if you need to work with the schema in OSQL or ISQLW or SSMS for example you need to do the same because they all use OLEDB/ODBC/ADO.NET to make connection to SQL Server. So it is actually easier to get into trouble not using the defaults. Note that in the future we may remove the SET options and consider making the ANSI settings default.
If ANSI Padding is on, the data is stored with trailing spaces, and we do not want that.

I am not sure if there is a question in your response. If you want to use ANSI_PADDING OFF on the server-side then you need to enable it explicitly after connection. This is the only way, due to the reasons I described before. Best is to trim values before they reach the database server, on the client-side.
Additionally, you need to be aware of recompilation of SPs/statements that access tables with ANSI_PADDING off and the session setting is different. So please be aware of all these limitations in addition to other missing enterprise features (computed column matching, indexed views matching etc). See the link below for the how batch compilation, recompilation etc works and how the SET options are used:
http://www.microsoft.com/technet/prodtechnol/sql/2005/recomp.mspx
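A quick way to see the per-column behavior described above (a small sketch you can run in any query tool):

SET ANSI_PADDING OFF
CREATE TABLE PadTest (c VARCHAR(10))
SET ANSI_PADDING ON  -- later session settings do not change the column's stored behavior
INSERT INTO PadTest VALUES ('abc   ')
SELECT DATALENGTH(c) FROM PadTest  -- returns 3: the trailing blanks were trimmed on insert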
I can give you reasons NEVER to have padding on. Any tables you create will have a hidden padding-on setting so that you can no longer control padding with SET, ODBC, ADO or even database-level switches. But more importantly, features such as the LIKE clause with the underscore pattern match will return incorrect results on any of these "damaged" tables.
Eg. SELECT fred FROM mytable WHERE fred LIKE '123__'
Will return results of:
123
123A
123AB
It should only return 123AB. <null> should not be a character. Time-consuming bugs like these are the reason I would recommend never setting padding on, and I hope you seriously consider defaulting to padding off in future. It's by far the more useful setting.
ANSI Equivalent for IsNumeric()?
I'm working with Teradata and need to find some way to identify whether a field is numeric or not. I could not find anything in the Teradata documentation I have.
Thank you,
Ken E.

There is no ANSI equivalent to IsNumeric(); you may have to code your own function.
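For what it's worth, a rough substitute is a pattern test; the sketch below is Transact-SQL (Teradata would need its own predicate syntax, so treat this only as the shape of the idea):

SELECT col
FROM mytable
WHERE col NOT LIKE '%[^0-9]%'  -- digits only: rejects signs, decimal points, and other characters
  AND col <> ''                -- the empty string would otherwise pass the pattern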