Friday, March 30, 2012

One Node One Machine Cluster?

Hi,
I am in the process of building a SQL Server "farm," but having to tone down
the pace due to $$. I have just purchased a RAID1 (2 disc/10K rpm/146 GB)
Channel 0, RAID5 (3 disc/10K rpm/146 GB) Channel 1 server with 2 CPU's. This
summer, I plan to buy an identical machine. They will both run Win 2003 EE,
with SQL Server 2000 EE. Rather than install SQL Server locally on my
existing machine, is there a problem with installing MSCS and a virtual
server, even though I only have the one machine now? Of course, it won't
give me failover protection, but it would give me the configuration I want
when I get my second box. Does this make any sense? Any caveats or words of
wisdom?
PS. The servers will be accessed in OLTP mode by an ADO.NET app running on
Terminal Server (Win 2003), and by BizTalk Server 2004 - also on a Win 2003 box.
Most of the ADO.NET traffic will occur during business hours, with BTS
getting busy after hours.
Thanks.
John
Hi
Why not build a single-node cluster instead? Build a normal cluster
node, but don't add another node. Once you have bought your other machine, add
it to the cluster.
Just be aware, it is very difficult to get the exact same hardware months
apart. If you are going to use clustering, make sure all the hardware is on
the Windows Hardware Compatibility List (HCL).
A cluster "on the cheap" will just cause you hassles later. I've seen enough
"let's do it with as little as we can" setups, and the cluster ends up with worse
uptime than a single machine.
Regards
Mike

One node Cluster for SQL 2005

Hi, does anyone know if you can cluster SQL Server on a one-node cluster in Windows 2003?
Regards Craig
Yes you can. You can even cluster SQL 2005 in Virtual Server with two
nodes, but it takes a bit of work.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
|||Hi, thanks for the info Geoff. Just now I am having a problem with DTC and
Windows 2003 SP1. I think there is a COM+ rollup package for the problem
(KB 897667), but I can't find it anywhere. MS support is closed here and I
would really like to install it this evening. Any ideas?
Regards Craig
|||Here is what I usually have to do with DTC on a Windows 2003 Server.
HOWTO: Enable DTC Between Web Servers and SQL Servers Running Windows Server
2003
http://support.microsoft.com/kb/555017/en-us
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP

One MS SQL Server 2000 instance blocks the other (through a DTS job)

Hello,

we use two instances of MS SQL Server 2000 (version 8.00.760) on a
4-processor Windows 2000 (SP4) machine.
A DTS package on one instance completely blocks the work on the other
instance. Both instances have 2 dedicated processors. The two
processors of the blocked instance are idle.
How is this possible? Has anyone seen the same behavior of SQL
Server and found a solution for this problem?

Thanks in advance for answers

Jürgen Simonsen (juergen.simonsen@.web.de) writes:
> we use two instances of the ms sql server 2000 (Version: 8.00.760) on a
> 4 processor windows 2000 (sp 4) machine.
> A dts-packet on one instance completly blocks the work on the other
> instance. Both instances have 2 dedicated processors. The two
> processors of the blocked instance are in idle states.
> How is this possible? Has someone had the same behavior of the sql
> server and found a solution for this problem?

Have you tried to access that blocked instance, to see if it is possible
to run queries on it? Have you checked the SQL Server error log for the
blocked instance?

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

one more time

I'm sorry for my bad English.

I have a DTS package in SQL Server 2005.

For example, in SQL Server 2000 I can do this: right-click the DTS package and choose the option "Schedule Package". This creates a schedule; what I mean is that it produces a job.

What I need to know is how to create a schedule for DTS packages in SQL Server 2005.

Please, can anyone provide a solution for this?

Okay, I completely misunderstood what you wanted before. But, I think I can help you now.

Once you have the SSIS package (DTS packages are something else: packages from SQL Server 2000 DTS), you can use SQL Server Agent to schedule the package to run at a certain time.

Use the SQL Server Agent to schedule your package.

|||

sorry man:

What I do in SQL Server 2005 is: right-click SQL Server Agent, choose the option "New", then "Schedule", but that isn't what I want; maybe I am not being specific.

When I right-click a DTS package in SQL Server 2000 and choose the option "Schedule Package", it produces a schedule and I can see it under Jobs.

Now, how can I do that in SQL Server 2005? I need to see the jobs I created for my DTS packages. You told me I can do it through SQL Server Agent in SQL Server 2005, but that doesn't do what I want.

Please, can anyone provide a solution for this?
Thanks :)

|||

limon26,

I merged this post into your earlier thread. If you want to reply to another reply, you should use the reply button and not the new thread button. This will keep the discussion in one thread, and others who can help will be able to follow it more easily.

Thanks.

|||

I apologize for that.

So what can I do about my DTS packages in SQL Server 2005?

|||

I am not that experienced with SSIS for 2005; hopefully swells or somebody else will be able to help you further...

|||

In SQL Server 2000, you could create a schedule by right-clicking on a package in Enterprise Manager and selecting "Schedule Package." SQL Server 2000 created the SQL Server Agent job for you. You could find the jobs in Enterprise Manager under the Management folder, in the Jobs tree.

In SQL Server 2005 there are some changes ...

1. SQL Server Agent is not automatically enabled. Make sure the service has been started.

2. You do not automatically get scheduled jobs for SSIS. Right-click the SQL Server Agent icon in the SQL Server Object Explorer in Management Studio and select to create a new Agent job. When you do that, you can add a job step that executes an SSIS package. You can also add a schedule.

The end result is similar to 2000 - jobs are listed under the SQL Server Agent node; jobs have steps - one step executes a package; and jobs have schedules.
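As a rough illustration of what that UI produces, the same job can be created in T-SQL with the msdb Agent procedures (the job name, package path, and schedule below are all hypothetical):

```sql
USE msdb;

-- Create the job itself (name is hypothetical).
EXEC sp_add_job @job_name = N'Run MyPackage nightly';

-- One step that runs an SSIS package via the SSIS Agent subsystem.
-- The /FILE path is an assumption; adjust it to where the package lives.
EXEC sp_add_jobstep
    @job_name = N'Run MyPackage nightly',
    @step_name = N'Execute package',
    @subsystem = N'SSIS',
    @command = N'/FILE "C:\Packages\MyPackage.dtsx"';

-- A daily schedule starting at 01:00:00.
EXEC sp_add_jobschedule
    @job_name = N'Run MyPackage nightly',
    @name = N'Nightly',
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @active_start_time = 10000;  -- HHMMSS

-- Target the local server so the job actually runs.
EXEC sp_add_jobserver @job_name = N'Run MyPackage nightly';
```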

An important difference is that SSIS now has an Agent subsystem for better-integrated execution.

hth

Donald

|||

After you create your SSIS package, here are the steps to make the job run. My SSIS package runs every 10 minutes; here is what I do to make it work:

Create a proxy account, have SQL Server Agent run under your NT account, have the job step run under the "SQL Agent Service Account", and make your NT account the owner of the job. It will work perfectly.

If you don't know how to create a proxy account, here is how:

USE master;

CREATE CREDENTIAL [MyCredential]
    WITH IDENTITY = 'yourdomain\myWindowAccount',
    SECRET = 'WindowLoginPassword';

USE msdb;

EXEC sp_add_proxy @proxy_name = 'MyProxy', @credential_name = 'MyCredential';

EXEC sp_grant_login_to_proxy @login_name = 'devlogin', @proxy_name = 'MyProxy';

EXEC sp_grant_proxy_to_subsystem @proxy_name = 'MyProxy', @subsystem_name = 'SSIS';

kenny

|||Similar problems, but (so far) the credentials/proxy doesn't seem to resolve it.

We have two people who have developed some SSIS packages: myself and another developer. The other developer's SSIS package runs fine from BIDS, DTexec and DTexecUI, but not from SQL Server Agent. My SSIS package runs fine from BIDS, DTexec, DTexecUI and the local SQL Server Agent. I haven't had the opportunity to have it deployed into another environment to check whether I have problems from the server.

What was interesting to me, in attempting to test the credentials, was that the working SQL Server Agent job, if I edit it to use the proxy, fails. If I edit the job back to using the SQL Agent Service Account (on the Run As tab on the task), then it works successfully.

Now this may be because I've set up the credential incorrectly, since it was my first time (be gentle :-)). I'm going to try to modify the script above and see if that works any better. The credential I set up used a local user I created that was part of the Power Users group. The package does file operations locally (based on the current configuration file). The ProtectionLevel was set to EncryptSensitiveWithPassword, and this specific test job is using SQL Server as the package source.

Any help? The KB http://support.microsoft.com/kb/918760 states that "This behavior is by design" - but surely it should also be easy to use?|||Using the script (after removing the duplicate and adding EXECs), I generated a proxy and a credential. The user being used is the same one I created earlier (which is a member of the Power Users group on my laptop). I get the following error:

-

Date: 25/07/2006 14:21:59
Log: Job History (Test Excel Orchestration)
Step ID: 1
Server: MyLaptop\SQL2005_SP1_TEST
Job Name: Test Excel Orchestration
Step Name: Invoke Excel Orchestration
Duration: 00:00:01
Sql Severity: 0
Sql Message ID: 0
Operator Emailed:
Operator Net sent:
Operator Paged:
Retries Attempted: 0
Message: Executed as user: MyLaptop\SSISAgentJobCredUser. The package could not be loaded. The step failed.

-

This proves the credential at least worked: it did try to run with the correct user, but still no joy.

... even less joy now, since I now get the following, trying to use this in the way that worked before:

Date: 25/07/2006 14:27:45
Log: Job History (Test Excel Orchestration)
Step ID: 1
Server: MyLaptop\RGALBRAITH\SQL2005_SP1_TEST
Job Name: Test Excel Orchestration
Step Name: Invoke Excel Orchestration
Duration: 00:00:00
Sql Severity: 0
Sql Message ID: 0
Operator Emailed:
Operator Net sent:
Operator Paged:
Retries Attempted: 0
Message: Non-SysAdmins have been denied permission to run DTS Execution job steps without a proxy account. The step failed.|||

Hey,

Today I got the same error as you. To fix it, I didn't create the credentials; I changed the owner of the scheduled jobs instead, and it works great!

|||

I fixed this by adding the owner of the package to the owned schemas of the Agent roles in msdb.

take a look at this link:

http://www.microsoft.com/technet/prodtechnol/sql/2005/newsqlagent.mspx

One more solution for pagination

Let me know what you think about the following code:
DECLARE @MaxIdValue int
DECLARE @MaxSortFieldValue nvarchar(50)

SELECT TOP 1 @MaxIdValue = [id], @MaxSortFieldValue = [SortField]
FROM (
SELECT TOP PageNumber*RowsPerPage [id], [SortField]
FROM MyTable
WHERE (FilterCondition) ORDER BY [SortField], [id]
) T
ORDER BY [SortField] DESC, [id] DESC

SELECT TOP RowsPerPage * FROM MyTable
WHERE ([SortField] >= @MaxSortFieldValue) AND (([id] > @MaxIdValue) OR
([SortField] <> @MaxSortFieldValue)) AND (FilterCondition)
ORDER BY [SortField], [id]

This is a dynamic SQL template, so it should be easy to fix up.
PageNumber, RowsPerPage, FilterCondition and SortField will be
variables, based on the user's search condition/criteria.
----------
Thanks for your attention.

"Kurzman" <max@.virtuman.com> wrote in message
news:d8f86969.0408051335.56e477a2@.posting.google.com...

What I think is that your syntax is wrong :-)

If you want a pagination solution, then the usual answer is "do it on the
client side" - this article may be useful:

http://www.aspfaq.com/show.asp?id=2120

Simon|||> What I think is that your syntax is wrong :-)
> If you want a pagination solution, then the usual answer is "do it on the
> client side" - this article may be useful:
> http://www.aspfaq.com/show.asp?id=2120
> Simon

The client side isn't the primary issue. The bottleneck of pagination is the server
side, especially for big tables. From this point of view, the following
code will work slowly. Also, it will work correctly only if ArtistName
+ '~' + Title is unique:

ArtistName + '~' + Title >= @aname + '~' + @title

The following code doesn't have this problem:

([SortField] >= @MaxSortFieldValue) AND (([id] > @MaxIdValue) OR
([SortField] <> @MaxSortFieldValue)) AND (FilterCondition)
ORDER BY [SortField], [id]

If an index on [SortField], [id] exists, it works fast.

P.S. Thanks for the link to an interesting article.
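For readers trying this pattern, here is a self-contained sketch of the approach for SQL Server 2000, using SET ROWCOUNT so that TOP does not need dynamic SQL (the table, column names, and page values are hypothetical):

```sql
-- Keyset-style pagination sketch for SQL Server 2000.
-- Assumes MyTable(id int, SortField nvarchar(50)) with an index on (SortField, id).
DECLARE @PageNumber int, @RowsPerPage int
DECLARE @MaxIdValue int, @MaxSortFieldValue nvarchar(50)
SET @PageNumber = 3
SET @RowsPerPage = 25

-- Step 1: scan past the rows of the preceding pages and remember the last key.
-- (With ORDER BY, the variables keep the values from the last row assigned.)
DECLARE @Skip int
SET @Skip = @PageNumber * @RowsPerPage
SET ROWCOUNT @Skip
SELECT @MaxIdValue = [id], @MaxSortFieldValue = [SortField]
FROM MyTable
ORDER BY [SortField], [id]
SET ROWCOUNT 0

-- Step 2: fetch the next page strictly after that key.
SET ROWCOUNT @RowsPerPage
SELECT *
FROM MyTable
WHERE [SortField] >= @MaxSortFieldValue
  AND ([id] > @MaxIdValue OR [SortField] <> @MaxSortFieldValue)
ORDER BY [SortField], [id]
SET ROWCOUNT 0
```

The poster's FilterCondition would simply be ANDed into both WHERE clauses.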

One more question regarding Reporting action

All the reporting action samples I have read in AdventureWorks, when passing a link and parameter to a report in Reporting Services, link to reports that use an Analysis Services data source.

I am wondering if a reporting action can pass a parameter to a Reporting Services report based on a regular relational database, not on an Analysis Services data source.

If yes, could you share a simple sample?

I can't see why the data source of a report would matter, when using a reporting action. The Adventure Works examples may all be using an Analysis Services data source just for convenience. Have you tried editing an existing Adventure Works reporting action, to point to a relational report instead?|||

Hi Deepak, thank you so much for your reply! Yes, I did try the following using AW AS and AW sample reports RS projects:

Here is my reporting action:

Name: Sales Order Detail

Target Type: Dimension members

Target object: Internet Sales Order Detail

ReportServer:localhost

ReportPath:ReportServer?/AdventureWorks Sample Reports/Sales Order Detail

Parameter Name: SalesOrderNumber (matches the one on the Sales Order Detail report)

Parameter Value: [Internet Sales Order Details].[Sales Order Number].[Sales Order Number].CurrentMember.Name

And I made sure the Sales Order Detail report runs correctly and that the parameter matches the parameter name I defined in the cube's reporting action. Then I deployed the cube; however, when I drop the Sales Order Number dimension and right-click, no Reporting action shows up.

Could you give me some pointers?

Thanks

|||

Did you configure the "Caption", under "Additional Properties" - if not, you can refer to the Adventure Works "Sales Reason Comparisons" action, whose Caption is defined as an MDX expression:

"Sales Reason Comparisons for " +

[Product].[Category].CurrentMember.Member_Caption + "..."

|||

Thanks again. Just did that and deployed.

"Sales Reason Comparisons for " + [Internet Sales Order Details].[Sales Order Number].[Sales Order Number].CurrentMember.Member_Caption + "..."

However reporting action is still not showing. Deepak, anything else I miss?

Thanks

|||Can't think of anything else - are you able to see the standard Reporting action which comes with Adventure Works, ie. "Sales Reason Comparisons"? If so, I would suggest making a test copy of Adventure Works, and progressively modifying this action to incorporate your report in small steps, till it breaks - or maybe it doesn't break!

One More Question On Running Parallel Queries

An Execute SQL task takes 1 min to run a statement "insert into Mytable select * from view_using_joins"

Output: 10,225 rows affected.

But a Dataflow task configured to fetch data from the same view_using_joins into MyTable takes hours to do the same.

Could you please explain why is it so ?

Thanks

Subhash Subramanyam

I am not surprised that the Execute SQL Task is quicker. When you're inserting from one table to another in the same database then SSIS isn't going to outperform the database engine.

I don't know why it is hours quicker. There isn't really enough information here to say. What destination adapter are you using? Is the package running on the same machine as the database? Are you doing transformations on the way?

-Jamie

|||

Hi Jamie,

Thanks for your reply.

1) I am running packages on a different server.

2) Using OLE DB adapters for source and destination. Here the database as well as the server are the same for source and destination.

3) No transformations in between.

Wonder if we have to do some extra settings here to achieve the same duration as the Execute SQL task?

Thanks

Subhash Subramanyam

|||

Have you chosen Fast Load on your destination?

Thanks.

|||

Subhash512525 wrote:

Hi Jamie,

Thanks for your reply.

1) I am running packages on a different server.

2) Using OLEDB adapters for Source and Destination . Here Database as well as the server are same for Source and Destination

3) No transformations in between

Wonder if we have to do some extra settings to here to achieve the same duration as that of execute sql task ?

Thanks

Subhash Subramanyam

You're running the package on a different server? I'd suggest that's your problem right there. The data is going to have to go over the network; obviously this is going to take time.

Also, in your destination are you inserting with Fast Load?

I say again, in this scenario the data-flow isn't going to outperform the Execute SQL Task.

-Jamie

|||

Great Jamie, you figured it out.

If you don't mind spending a few minutes here, I am coming back to my actual scenario.

I would really welcome specific views from the experts here on each of the questions below: Phil B, Rafael S, Darren G, JWelch, JayH, Scott B, Ashwin S, Brian, Bob, Donald F and many others I am still not aware of.

Scenario:

1) My SSIS packages run on a US server, scheduled during the night.

2) Each package runs 6-8 queries, each having joins, pulling data in parallel from an Oracle database source (UNIX) in Europe; the total data extracted does not exceed 5 million rows.

3) The destination database is in the US.

4) Network bandwidth: 2 Mbps.

Problem is that it takes ages to execute these packages (ranging from 25 to 30 hours).

Questions are:

1) Where should I run the SSIS packages to get better performance?

2) How can I perform only an incremental load (using a Data Flow task), taking performance into consideration? (Any links would help.)

3) Does overlap of the SSIS package schedules affect performance?

4) Are there any limits on the number of queries running in parallel to pull data from an Oracle source?

5) Would it be best to spool the query results into flat files on a local system where the source (Oracle database) runs in Europe, and then FTP them to a shared server in the US to import into the destination table?

Waiting for your reply,

Many Thanks and Regards

Subhash Subramanyam

|||Thanks Bob. Please give your answers to my questions below if you don't mind.|||The more work you can do to avoid keeping the data transmission "pipe" open, the better.

Perform your source query in Europe, export the result to a file, compress it, and then FTP it to the US. Then uncompress it and load it with SSIS.

The idea is to keep your transmissions across "the pond" as short as possible.|||

From your question #2, I'm assuming you are pulling all rows every night. As Phil mentioned, you want to minimize how much data you are actually moving, so I'd definitely make this incremental. A common way to implement that is by checking modified dates on the source system via a WHERE clause in your source SELECT statements. Store the range of modified dates that you retrieve, and when the package runs the next night, start from the end of the previous range.

If you don't have modified dates in the source system, consider adding them. Alternatives are using triggers to track changes, or using a change data capture tool; I believe Oracle has one, and SQL Server will have one in SQL Server 2008.
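A sketch of the watermark approach described above (the table, column, and source names are all hypothetical):

```sql
-- Incremental extract: pull only rows modified since the last successful run.
DECLARE @LastRunEnd datetime, @ThisRunEnd datetime

-- Read the watermark stored by the previous night's run.
SELECT @LastRunEnd = LastModifiedDate
FROM dbo.EtlWatermark
WHERE SourceName = 'Orders'

SET @ThisRunEnd = GETDATE()

-- Source query for the data flow: a half-open range avoids gaps and overlaps.
SELECT *
FROM dbo.Orders
WHERE ModifiedDate >  @LastRunEnd
  AND ModifiedDate <= @ThisRunEnd

-- After a successful load, advance the watermark.
UPDATE dbo.EtlWatermark
SET LastModifiedDate = @ThisRunEnd
WHERE SourceName = 'Orders'
```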

|||

One more question:

6) If I have 6-8 queries running in parallel, does having a common connection manager (for an Oracle source) for all of them perform better, or do distinct connection managers perform better?

Still expecting suggestions and the views of rest of the experts for six questions listed here.

Regards

Subhash Subramanyam

|||

Subhash512525 wrote:

6) If I have 6-8 queries running in parallel, Whether having a common connection Manager (for an Oracle source) for all performs better or having Distinct Connection Manager performs better ?

It depends. :) Using a single one should result in the same performance as having several, assuming you are not using RetainSameConnection on them. Having a single connection manager doesn't mean that SSIS won't open multiple connections to the database. A connection manager manages multiple connections to the database, unless you force it to use only a single connection with RetainSameConnection.

A related note - in your scenario, have you tested whether performance is better if you run all queries sequentially or in parallel (by using precedence constraints on the data flow tasks)?

|||

jwelch wrote:

A related note - in your scenario, have you tested whether performance is better if you run all queries sequentially or in parallel (by using precedence constraints on the data flow tasks)?

JWelch, this seems more practical. I'll test it and let you know.

Thanks

Subhash

One more question on best-practice for Backup Plan

I am not a SQL Server DBA, but have a temporary need to play that role. I am
new to the Maintenance Plan Wizard, but have thus far implemented the
following:
1. Daily System Full Backup Job (Selected System Databases)
1.1. Check Database Integrity
1.2. Rebuild Index
1.3. Update Statistics
1.4. Clean Up History
1.5. Backup Database (Full)
1.6. Runs daily at 6:15AM
2. Weekly User Full Backup Job (Selected All User Databases)
2.1. Check Database Integrity
2.2. Rebuild Index
2.3. Update Statistics
2.4. Clean Up History
2.5. Backup Database (Full)
2.6. Runs every Saturday at 6:15AM
3. Daily User Differential Backup Job (Selected All User Databases)
3.1. Check Database Integrity
3.2. Backup Database (Differential)
3.3. Runs Sun thru Fri at 6:15AM
4. Hourly User Log Backup Job (Selected All User Databases)
4.1. Backup Database (Transaction Log)
4.2. Runs Hourly Sun thru Sat at 6:00AM
This doesn't seem quite right to me. I am mostly confused by the additional
maintenance options in the Wizard (Check Database Integrity, Shrink Database,
Reorganize Index, Rebuild Index, Update Statistics, Clean Up History, and
Execute SQL Server Agent Job). I am wondering if these should be included in
the full, differential, and log backup jobs as I have done above, or if they
should be run separately as one or more standalone jobs on some frequency. Is
there any info on best practices and/or examples on how to handle these
other job tasks?
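For reference, the backup steps in the plan above boil down to plain T-SQL like this (the database name and file paths are hypothetical):

```sql
-- Weekly full backup (INIT overwrites the previous backup set in this sketch).
BACKUP DATABASE [MyUserDb]
TO DISK = N'D:\Backups\MyUserDb_full.bak' WITH INIT;

-- Daily differential: everything changed since the last full backup.
BACKUP DATABASE [MyUserDb]
TO DISK = N'D:\Backups\MyUserDb_diff.bak' WITH DIFFERENTIAL, INIT;

-- Hourly transaction log backup (requires the FULL recovery model).
BACKUP LOG [MyUserDb]
TO DISK = N'D:\Backups\MyUserDb_log.trn';
```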
It depends on the size of your databases and your backup window.
For example, if you have several databases of only a few gigabytes each,
perhaps you can do full backups daily. If your databases are hundreds of
gigabytes, the backup procedure you show makes more sense.
Hope this helps,
Ben Nevarez
|||What I am not sure of is the other options, such as index maintenance, etc.
Should they be run as standalone jobs... and how do I select which to run.
For example, reorganize index and rebuild index do not seem to be candidates
to run together in a single job, and I have heard that it is probably not a
good idea to run Shrink Database at all.
"Ben Nevarez" <BenNevarez@.discussions.microsoft.com> wrote in message
news:FE7E7F0F-7605-4724-8AC7-5A6BB7B4F3F4@.microsoft.com...
|||Everything is "it depends".
Do not shrink your databases in a job, and avoid it even manually.
Do not run reorganize index and rebuild index together. Most important: do
you really need to reindex? Do you have a fragmentation problem? How big are
your databases? How big are your tables? Spend some time learning about DBCC
SHOWCONTIG.
Hope this helps,
Ben Nevarez
|||Ok, tweaked it some more and the plans are looking cleaner.
Now I am wondering about History Cleanup... does that clean all outdated
backup files from system and user?
"Ben Nevarez" <BenNevarez@.discussions.microsoft.com> wrote in message
news:9D0D7882-AE82-4D60-8524-B9A55F9F5C95@.microsoft.com...
|||This cleans up information from the msdb database, such as backup and restore
history, job history and maintenance plan history.
It is up to you how much data you want to keep.
Hope this helps,
Ben Nevarez
|||It seems rather pointless to keep history data on backup files that have
long since been purged. Around here, we keep our customers' backups on disk
as required to restore from the last full backup, and everything is on
tape for 31 days. Anything older than that is gone, and purged in our jobs
using sp_delete_backuphistory.
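A minimal sketch of that purge (the 31-day cutoff mirrors the retention described above):

```sql
USE msdb;

-- Delete backup/restore history older than the retention window.
DECLARE @cutoff datetime
SET @cutoff = DATEADD(day, -31, GETDATE())
EXEC sp_delete_backuphistory @oldest_date = @cutoff;
```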
Kevin3NF
SQL Server dude
You want fries with that?
http://kevin3nf.blogspot.com/
I only check the newsgroups during work hours, M-F.
Hit my blog and the contact links if necessary...I may be available.
"Ben Nevarez" <BenNevarez@.discussions.microsoft.com> wrote in message
news:D532E1BC-E544-46F1-BBBD-FF789A528EBC@.microsoft.com...
One more question on best-practice for Backup Plan

I am not a SQL Server DBA, but have a temporary need to play that role. I am
new to the Maintenance Plan Wizard, but have thus far implemented the
following:
1. Daily System Full Backup Job (Selected System Databases)
1.1. Check Database Integrity
1.2. Rebuild Index
1.3. Update Statistics
1.4. Clean Up History
1.5. Backup Database (Full)
1.6. Runs daily at 6:15AM
2. Weekly User Full Backup Job (Selected All User Databases)
2.1. Check Database Integrity
2.2. Rebuild Index
2.3. Update Statistics
2.4. Clean Up History
2.5. Backup Database (Full)
2.6. Runs every Saturday at 6:15AM
3. Daily User Differential Backup Job (Selected All User Databases)
3.1. Check Database Integrity
3.2. Backup Database (Differential)
3.3. Runs Sun thru Fri at 6:15AM
4. Hourly User Log Backup Job (Selected All User Databases)
4.1. Backup Database (Transaction Log)
4.2. Runs hourly, Sun thru Sat, starting at 6:00AM
This doesn't seem quite right to me. I am mostly confused by the additional
maintenance options in the Wizard (Check Database Integrity, Shrink Database,
Reorganize Index, Rebuild Index, Update Statistics, Clean Up History, and
Execute SQL Server Agent Job). I am wondering if these should be included in
the full, differential, and log backup jobs as I have done above, or if they
should be run separately as one or more standalone jobs on some frequency. Is
there any info on best practices and/or examples on how to handle these
other job tasks?
It depends on the size of your databases and your backup window.
For example, if you have several databases of only a few gigabytes each,
perhaps you can do full backups daily. If your databases are hundreds of
gigabytes, the backup procedure you show makes more sense.
Hope this helps,
Ben Nevarez
"Bill Fuller" wrote:
> I am not a SQL Server DBA, but have a temporary need to play that role. I am
> new to the Maintenance Plan Wizard, but have thus far implemented the
> following:
> 1. Daily System Full Backup Job (Selected System Databases)
> 1.1. Check Database Integrity
> 1.2. Rebuild Index
> 1.3. Update Statistics
> 1.4. Clean Up History
> 1.5. Backup Database (Full)
> 1.6. Runs daily at 6:15AM
> 2. Weekly User Full Backup Job (Selected All User Databases)
> 2.1. Check Database Integrity
> 2.2. Rebuild Index
> 3.3. Update Statistics
> 3.4. Clean Up History
> 3.5. Backup Database (Full)
> 3.6. Runs every Saturday at 6:15AM
> 3. Daily User Differential Backup Job (Selected All User Databases)
> 3.1. Check Database Integrity
> 3.2. Backup Database (Differential)
> 3.3. Runs Sun thru Fri at 6:15AM
> 4. Hourly User Log Backup Job (Selected All User Databases)
> 4.1. Backup Database (Transaction Log)
> 4.1. Runs Hourly Sun thru Sat at 6:00AM
> This doesn't seem quite right to me. I am mostly confused by the additional
> maintance options in the Wizard (Check Database Integrity, Shrink Database,
> Reorganize Index, Rebuild Index, Update Statistics, Clean Up History, and
> Execute SQL Server Agent Job). I am wondering if these should be included in
> the full, differential, log backup jobs as I have done above or if they
> should be run seperatly as one or more standalone jobs on some frequency. Is
> there any info on best practices and/or examples on how to handle these
> other job tasks?
>
|||What I am not sure of is the other options, such as index maintenance, etc.
Should they be run as standalone jobs, and how do I select which to run?
For example, reorganize index and rebuild index do not seem to be candidates
to run together in a single job, and I have heard that it is probably not a
good idea to run Shrink Database at all.
"Ben Nevarez" <BenNevarez@.discussions.microsoft.com> wrote in message
news:FE7E7F0F-7605-4724-8AC7-5A6BB7B4F3F4@.microsoft.com...
> It depends on the size of your databases and your backup window.
> For example if you have several databases of only a few gigabytes each one
> perhaps you can do full backups daily. If your databases are hundreds of
> gigabytes the backup procedure you show make more sense.
> Hope this helps,
> Ben Nevarez
|||Everything is "it depends".
Do not shrink your databases in a job and avoid it even manually.
Do not run reorganize index and rebuild index together. Most important, do
you really need to reindex? Do you have a fragmentation problem? How big are
your databases? How big are your tables? Spend some time learning about DBCC
SHOWCONTIG.
Hope this helps,
Ben Nevarez
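As a rough sketch of the kind of check Ben is describing (the table and index names here are hypothetical, and the 30% threshold is only a common rule of thumb, not a fixed rule):

```sql
-- Measure fragmentation for one table (SQL Server 2000 syntax).
DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS

-- If Logical Scan Fragmentation is high (say, over 30%), rebuild:
DBCC DBREINDEX ('dbo.Orders')

-- For lighter fragmentation, defragmenting one index online may be enough:
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'PK_Orders')
```

DBREINDEX rebuilds indexes offline but completely; INDEXDEFRAG is an online operation that only compacts and reorders leaf pages.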
"Bill Fuller" wrote:
> What I am not sure of is the other options, such as index maintenance, etc.
> Should they be run as standalone jobs... and how do I select which to run.
> For example, reorganize index and rebuild index do not seem to be candidates
> to run together in a single job and I have heard that it is probably ot a
> good idea to run Shrink Database at all.
|||Ok, tweaked it some more and the plans are looking cleaner.
Now I am wondering about History Cleanup... does that clean all outdated
backup files from system and user?
"Ben Nevarez" <BenNevarez@.discussions.microsoft.com> wrote in message
news:9D0D7882-AE82-4D60-8524-B9A55F9F5C95@.microsoft.com...
> Everything is "it depends".
> Do not shrink your databases in a job and avoid it even manually.
> Do not run reorganize index and rebuild index together. Most important, do
> you really need to reindex? Do you have a fragmentation problem? How big
> are
> your databases? How big are your tables? Spent some time learning about
> dbcc
> showcontig.
> Hope this helps,
> Ben Nevarez
|||This cleans up information from the msdb database, like backup and restore
history, job history, and maintenance plan history.
It is up to you how much data you want to keep.
Hope this helps,
Ben Nevarez
"Bill Fuller" wrote:
> Ok, tweaked it some more and the plans are looking cleaner.
> Now I am wondering about History Cleanup... does that clean all outdated
> backup files from system and user?
|||It seems rather pointless to keep history data on backup files that have
long since been purged. Around here, we keep our customers' backups on disk
as required to restore from the last full backup, and everything is on
tape for 31 days. Anything older than that is gone, and it is purged in our jobs
using sp_delete_backuphistory.
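For reference, a minimal sketch of the cleanup Kevin mentions; the 31-day cutoff matches his retention window, and the variable name is mine:

```sql
-- Delete msdb backup/restore history older than 31 days.
DECLARE @cutoff datetime
SET @cutoff = DATEADD(day, -31, GETDATE())
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff
```

Running this regularly also keeps the msdb backup-history tables from growing without bound.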
--
Kevin3NF
SQL Server dude
You want fries with that?
http://kevin3nf.blogspot.com/
I only check the newsgroups during work hours, M-F.
Hit my blog and the contact links if necessary...I may be available.
"Ben Nevarez" <BenNevarez@.discussions.microsoft.com> wrote in message
news:D532E1BC-E544-46F1-BBBD-FF789A528EBC@.microsoft.com...
> This cleans up information from the msdb database like backup and restore
> history, jobs history and maintenace plan history.
> It is up to you how much data you want to keep.
> Hope this helps,
> Ben Nevarez

One more problem with importing csv files

Hi,

I got one more problem with importing csv files using .net. The problem is that the csv file contains double-quotation marks (""). For example, the record looks like:

...,Bearing Double "D" Flange,...

And the result is: ... | Bearing Double | null (all following columns are null)

The code is as following:

string strCsvConn = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\;Extended Properties='text;HDR=Yes;FMT=Delimited(,)';";

using (OleDbConnection cn = new OleDbConnection(strCsvConn))
{
    string strSQL = "SELECT * FROM " + strFileName;
    OleDbCommand cmd = new OleDbCommand(strSQL, cn);
    cn.Open();
    using (OleDbDataReader dr = cmd.ExecuteReader())
    {
        while (dr.Read())
        {
            string str = Convert.ToString(dr[8]);
        }

        // Bulk Copy to SQL Server
        //using (SqlBulkCopy bulkCopy = new SqlBulkCopy(strSqlConn))
        //{
        //    bulkCopy.DestinationTableName = strSqlTable;
        //    bulkCopy.WriteToServer(dr);
        //}
    }
}

Any idea is highly appreciated.

shz

Hello shz,

I would recommend using a TextReader for processing a .csv file.

Hope this helps

regards,

G

|||

Thanks Gonzo. However, TextReader should be the last thing I'd like to use.

shz


one more import issue

I am importing a text file with '~' as the column delimiter, and the text in the file has some formatting (line breaks). If I import the data into the SQL table, all the text appears in one line, losing the format.

(here is the sample data.
empid~empname~emp_txt*
1~jjjj~kkkgkhfdhg
hhhghg
hoghodshgohdsoiahgfhoghojhqgh*
2~3333~gdskhkgjh
ghfdgh
fkjhbdskjhkghdghah*

)

If I import the above data into a table with emp_txt as the text data type,
I am getting
kkkgkhfdhg
hhhghg
hoghodshgohdsoiahgfhoghojhqgh
all in one row, like: kkkgkhfdhg hhhghg hoghodshgohdsoiahgfhoghojhqgh. But I should get it the same as in the notepad file.
For that, I have changed the data type of the emp_text column to image in my database.
But when I change the data type, my DTS gives the following error:

Error at Destination for Row number 1. Errors encountered so far in this task: 1.
Insert error,Column3('emp_text',DBTYPE_BYTES),status 2: Error converting value.
Invalid character value for cast specification.
|||In the format file, also specify the asterisk (*) as a row separator.

One more easy one.

What's the use of Filegroup backup?
Let's say we have around 60 tables: 30 in one filegroup,
the rest in another filegroup.
What's the application of backing up only a filegroup
instead of the whole database?
Because in any situation, if we have to restore the WHOLE
database, we can't just restore 30 tables (in one
filegroup); we have to restore the WHOLE database.
How would the integrity of the database be maintained if we
restore filegroup1 from a filegroup backup taken at 2 pm
and restore filegroup2 from a filegroup backup taken at
3 pm?
The main advantage of filegroup backup is for very large databases. You can
back up only the required parts. Together with log backups, you can then
restore the filegroup backup and reapply the logs to bring everything back
to a consistent point.
So very useful if the size gets really big. In most circumstances full
backups and appropriate log backups are probably fine.
Mike John
<anonymous@.discussions.microsoft.com> wrote in message
news:00bf01c49fff$e77d9360$a401280a@.phx.gbl...
> What's the use of Filegroup backup?
> Let's say we have around 60 tables. 30 in one filegroup
> rest are in another filegroup.
> What's the application of backing up only filegroup
> instead of whole databse.
> Because in any situation if we have to restore WHOLE
> database. We can't just restore just 30 tables (in 1
> filegroup) we have to restore the WHOLE database.
> How would the integrity of databse is mainained if we
> restore from filegroup datbase of filegroup1 taken at 2 pm
> and restore from filgroup datbaase of filegroup2 taken at
> 3 pm.
>
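To make Mike's point concrete, here is a hedged sketch of a filegroup backup and restore; the database, filegroup, and file names are made up, and in practice you must apply every log backup taken since the filegroup backup, not just one:

```sql
-- Back up one filegroup, plus the log.
BACKUP DATABASE Sales FILEGROUP = 'FG2' TO DISK = 'D:\Backup\Sales_FG2.bak'
BACKUP LOG Sales TO DISK = 'D:\Backup\Sales_log.trn'

-- Restore just that filegroup, then roll the log forward so the
-- whole database reaches a single consistent point.
RESTORE DATABASE Sales FILEGROUP = 'FG2'
    FROM DISK = 'D:\Backup\Sales_FG2.bak' WITH NORECOVERY
RESTORE LOG Sales FROM DISK = 'D:\Backup\Sales_log.trn' WITH RECOVERY
```

This is why the 2 pm / 3 pm scenario in the question is safe: the log reapplied on top of each filegroup backup brings both filegroups to the same point in time.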


One more connection problem-Security Issue

My company gave me their old Windows 2000 server. I installed my
database on it, and the ODBC connection works perfectly on the server.
When I go to my client machine, I go to the ODBC Connections in it, and
it fails. It refuses connections and will not reply to pings. It is not
a network issue, as I can telnet to the server just fine. It's strictly
a security issue. I just have no clue where to look. Any suggestions
would be appreciated. Thanks
Curt (myemail@.nowehere.com) writes:

Quote:

Originally Posted by

My company gave me their old Windows 2000 server. I installed my
database on it and the ODBC connection works perfectly on the server.
When I go to my client machine I go the the ODBC Connections in it and
it fails. It refuses connections and will not reply to pings. It is not
a network issue, as I can telnet to the server just fine. It's strictly
a security issue. I just have no clue where to look. Any suggestions
would be appreciated. Thanks


Which version of SQL Server?

If you are using SQL 2005, you need to enable remote connections, as by
default Express and Developer Edition only accept local connections. You
can use the Surface Area Configuration tool for this.

Also, if you are using SQL Express, make sure that SQL Browser is running,
so that the named instance can be found.

If you are using SQL 2000, I would look in the Server Network Utility
to see which protocols are installed.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

One more Beginner Problem

I need to restore multiple tables in a sequence. I am writing a separate data flow for each table restore.

If there is any error in any of the table restore data flow, I want to write that error in a log file.

I am writing a script component in every data flow, which gets the error code and error description that I am writing to the log file.

Is there a way to create a public, reusable error-logging script file which we can call from every data flow, logging only the errors we want?

Also, I want to confirm whether the way I am doing it is correct, or whether there is an easier/better way to achieve this.

It goes something like this:

Try
'create sqlcommand specifying connection and query
'create reader

reader = sqlcommand.ExecuteReader() <== this throws an "Object reference not set to an instance of an object" exception

'... other stuff here...
Catch ex as Exception
Row.DirectToErrorOutput()
Finally
'dispose sqlcommand and reader here...
End Try


Despite the Catch block, the Script Component still fails, and doesn't get redirected to the ErrorOutput path.

Any ideas on how to solve this?

One more basic question

Hi
How can we audit sysadmin accounts in SQL server?
Rgds
Wilson
Can you explain "audit" as you are using it?

-PatP
|||We are not using it. I am just exploring whether it's possible in SQL Server to enable auditing at the login level.
|||What meaning do you intend the word 'audit' to convey?
|||SQL Server supports C2-level auditing. No one in their right mind would go there, but the product can do it.

What I meant was: what does the term "audit" mean to you?

-PatP
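If by "audit" you do mean C2 auditing, enabling it is roughly the following sketch; it is an advanced sp_configure option, the trace files go to the default data directory, and the instance must be restarted before it takes effect:

```sql
-- Enable C2 audit mode (use with caution; it logs heavily).
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'c2 audit mode', 1
RECONFIGURE
```

Be aware that if the audit trace files cannot be written (e.g. the disk fills up), the instance shuts down, which is part of why PatP warns against it.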

One more - SQL query

Hello group
My English is not very good.
I'm a beginner in SQL and I have a problem:
how do I get only the first and last record per idPerson from a table?
example:
Tables in SQL
EventLog
idPerson LogTime IdKomp
1 9:00 1
2 9:10 2
2 9:15 4
3 10:00 1
4 11:00 3
2 13:00 2
1 15:00 3
KompTable
IdKomp KompName
1 uranus
2 jupiter
3 mars
4 saturn
PersonTable
IdPerson PersonName
1 Jon
2 Bob
3 Tom
Select ..... ('?) order by idPerson
result:
PersonName MinTime KompName MaxTime KompName
Jon 9:00 uranus 15:00 mars
Bob 9:10 jupiter 13:00 jupiter
Tom 10:00 uranus (null) (null)
...
How do I join these tables? Maybe create a temporary table?
Please help.
Here is one solution.
Create an in-line view / derived table which has the values you are after; call this E_List.
Then join the EventLog table to it (twice) to get the details.
Then join the other tables to get the displayable results.
You could create the in-line view as an actual view.
SELECT P.PersonName, E1.LogTime, K1.KompName, E2.LogTime, K2.KompName
FROM (SELECT EventLog.idPerson,
             MIN(EventLog.logtime) AS min_time,
             MAX(EventLog.logtime) AS max_time
      FROM EventLog
      GROUP BY EventLog.idPerson) AS E_List
JOIN EventLog AS E1 ON (E_List.idPerson = E1.idPerson AND E_List.min_time = E1.LogTime)
JOIN EventLog AS E2 ON (E_List.idPerson = E2.idPerson AND E_List.max_time = E2.LogTime)
JOIN PersonTable P ON (P.idPerson = E1.idPerson)
JOIN KompTable K1 ON (E1.idKomp = K1.IdKomp)
JOIN KompTable K2 ON (E2.idKomp = K2.IdKomp)
|||Thanks LUV SQL
This is simple and works very well.
Good for you.
PawelR

One more - SQL query

Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
Hello group
My english is very poor.
I'm beginner in sql and I've problem?
How get from table only first and last record with idPerson.
example:
Tables in SQL
EventLog
idPerson LogTime IdKomp
1 9:00 1
2 9:10 2
2 9:15 4
3 10:00 1
4 11:00 3
2 13:00 2
1 15:00 3
KompTable
IdKomp KompName
1 uranus
2 jupiter
3 mars
4 saturn
PersonTable
IdPerson PersonName
1 Jon
2 Bob
3 Tom
Select ..... ('?) order by idPerson
result:
PersonName MinTime KompName MaxTime KompName
Jon 9:00 uranus 15:00 mars
Bob 9:10 jupiter 13:00 jupiter
Tom 10:00 uranus (null) (null)
...
How join this table, meybe create temporary table.
Please help.Thanks LUV SQL
This is simple and work very good.
Good for You
PawelR
LUV SQL wrote:
> Here is one solution.
> Create an inline view / derived table which has the values you are after; call this E_List.
> Then join the EventLog table to it (twice) to get the details.
> Then join the other tables on to get the displayable results.
> You could create the inline view as an actual view.
> SELECT P.PersonName, E1.LogTime, K1.KompName, E2.LogTime, K2.KompName
> FROM ( SELECT EventLog.idPerson, MIN(EventLog.logtime) AS min_time, MAX(EventLog.logtime) AS max_time
> FROM EventLog
> GROUP BY EventLog.idPerson) AS E_List
> JOIN EventLog AS E1 ON (E_List.idPerson = E1.idPerson AND E_List.Min_Time = E1.LogTime)
> JOIN EventLog AS E2 ON (E_List.idPerson = E2.idPerson AND E_List.Max_Time = E2.LogTime)
> JOIN PersonTable P ON (P.idPerson = E1.idPerson)
> JOIN KompTable K1 ON (E1.idKomp = K1.IdKomp)
> JOIN KompTable K2 ON (E2.idKomp = K2.IdKomp)
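The technique above translates directly to other engines. Here is a runnable sketch in Python with SQLite standing in for SQL Server (an assumption; in SQL Server the SELECT itself works unchanged), using the sample data from the question:

```python
import sqlite3

# Build the three tables from the question. Times are zero-padded ("09:00")
# so that MIN/MAX on the text column sorts chronologically -- with "9:00"
# style values, string comparison would put "13:00" before "9:00".
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE EventLog    (idPerson INT, LogTime TEXT, idKomp INT);
    CREATE TABLE KompTable   (IdKomp INT, KompName TEXT);
    CREATE TABLE PersonTable (IdPerson INT, PersonName TEXT);
    INSERT INTO EventLog VALUES (1,'09:00',1),(2,'09:10',2),(2,'09:15',4),
        (3,'10:00',1),(4,'11:00',3),(2,'13:00',2),(1,'15:00',3);
    INSERT INTO KompTable VALUES (1,'uranus'),(2,'jupiter'),(3,'mars'),(4,'saturn');
    INSERT INTO PersonTable VALUES (1,'Jon'),(2,'Bob'),(3,'Tom');
""")

# The E_List derived table gives each person's first and last time;
# joining EventLog back twice recovers which machine each one was on.
rows = con.execute("""
    SELECT P.PersonName, E1.LogTime, K1.KompName, E2.LogTime, K2.KompName
    FROM (SELECT idPerson, MIN(LogTime) AS min_time, MAX(LogTime) AS max_time
          FROM EventLog GROUP BY idPerson) AS E_List
    JOIN EventLog E1 ON E_List.idPerson = E1.idPerson AND E_List.min_time = E1.LogTime
    JOIN EventLog E2 ON E_List.idPerson = E2.idPerson AND E_List.max_time = E2.LogTime
    JOIN PersonTable P ON P.idPerson = E1.idPerson
    JOIN KompTable K1 ON E1.idKomp = K1.IdKomp
    JOIN KompTable K2 ON E2.idKomp = K2.IdKomp
    ORDER BY P.IdPerson
""").fetchall()
for r in rows:
    print(r)
```

Two details worth noting: a person with a single event (Tom) comes back with the same time and machine twice rather than the NULLs in the asked-for output, and person 4 disappears entirely because the inner join to PersonTable drops him.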

One minute SQL Server 7 date time synchronization

Hello,
How to force SQL Server 7 synchronize its date&time from the system ?
It looks like it is polling once a minute.
Thanks,
Ivan
That's the way SQL Server works. In SQL2K, SQL Server doesn't keep its own counter anymore (AFAIK).
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
"Ivan Rubin" <ivan.rubin@.gmx.net> wrote in message news:1bc4e8d1.0405040706.30ff025b@.posting.google.com...
> Hello,
> How to force SQL Server 7 synchronize its date&time from the system ?
> It looks like it is polling once a minute.
> Thanks,
> Ivan
Is there no way to trigger a synchronization event in SQL Server 7?
In our system, after receiving date & time data from the host and
setting it on the NT box through SetLocalTime Win32API, some stored
procedures are triggered for calculating expiration dates upon the
current (SQL Server) date & time.
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in message news:<OZKZpqfMEHA.3012@.tk2msftngp13.phx.gbl>...
> That's the way that SQL Server work. In SQL2K, SQL Server doesn't keep it's own counter anymore (AFAIK).
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
>
> "Ivan Rubin" <ivan.rubin@.gmx.net> wrote in message news:1bc4e8d1.0405040706.30ff025b@.posting.google.c om...
> There is no way for triggering a synchronization event in SQL Server 7?
Not AFAIK...
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
"Ivan Rubin" <ivan.rubin@.gmx.net> wrote in message news:1bc4e8d1.0405041325.2cea1da3@.posting.google.com...
> There is no way for triggering a synchronization event in the SQL
> Server 7?
> In our system, after receiving date & time data from the host and
> setting it on the NT box through SetLocalTime Win32API, some stored
> procedures are triggered for calculating expiration dates upon the
> current (SQL Server) date & time.
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in message
news:<OZKZpqfMEHA.3012@.tk2msftngp13.phx.gbl>...

one matrix above another doesn't export properly to Excel

A newb question: I've got a report with a table leftmost in the body,
with a matrix immediately to the right of the table. Immediately
below the table are 2 textboxes displaying totals from the table.
Immediately below the first matrix is another matrix displaying totals
from the first matrix:
Table Matrix1
Textboxes Matrix2
All is well in preview, HTML view, and PDF export.
After Excel export, the sheet is arranged thus:
Table Matrix1
Matrix2 Textboxes
I've tried putting the 2 matrices in a rectangle, and even inside
nested rectangles, with no effect on the incorrect Excel layout.
The report is designed this way because the users want empty space
between the detail data (in the Table and Matrix1) and the totals.
The number of columns in Matrix 1 is dynamic and so I need Matrix2 to
expand in step with Matrix1.
I think. Help? RDL available if anybody wants it.
Thanks
MatrixLover

I am running into a similar problem. I have one matrix over another, and report
logic hides / unhides a matrix. Everything works like a charm in reports and PDF export,
but I'm having trouble with Excel export. When exporting to Excel, the hidden matrix
also appears just below the active matrix. Any help appreciated.
"MatrixLover" wrote:
> A newb question: I've got a report with a table leftmost in the body,
> with a matrix immediately to the right of the table. Immediately
> below the table are 2 textboxes displaying totals from the table.
> Immediately below the first matrix is another matrix displaying totals
> from the first matrix:
> Table Matrix1
> Textboxes Matrix2
> All is well in preview, HTML view, and PDF export.
> After Excel export, the sheet is arranged thus:
> Table Matrix1
> Matrix2 Textboxes
> I've tried putting the 2 matrices in a rectangle, and even inside
> nested rectangles, with no effect on the incorrect Excel layout.
> The report is designed this way because the users want empty space
> between the detail data (in the Table and Matrix1) and the totals.
> The number of columns in Matrix 1 is dynamic and so I need Matrix2 to
> expand in step with Matrix1.
> I think. Help? RDL available if anybody wants it.
> Thanks
> MatrixLover
>

one line of results

I am trying to do a select statement where the results show up in one line like a,b,c,d,e...

I think I'm on the right track with the following code, but I have no idea what direction to go in. (I think my notes are not totally correct, but heck they're my notes. :-))

@UM is getting wiped out after each FETCH ... INTO @UM; how do I make it append to @UM?

declare @UM varchar(3000) --declares the variable
DECLARE abc CURSOR FOR --declares a cursor over the result set
SELECT CONDCD FROM IBACOSTOCK GROUP BY CONDCD ORDER BY CONDCD
OPEN abc --opens the cursor
FETCH NEXT FROM abc --grabs one row from the result set
INTO @UM --stores the row in the variable
WHILE (@@FETCH_STATUS = 0) --as long as there are rows left
begin
FETCH NEXT FROM abc --grabs the next row
INTO @UM --stores the new row in the variable, overwriting the old value
end
SELECT @um
CLOSE abc --closes the cursor
DEALLOCATE abc --releases the cursor
GO

How about...

DECLARE @UM varchar(3000)

SET @UM = ''

SELECT @UM = @UM + CONDCD
FROM IBACOSTOCK
GROUP BY CONDCD ORDER BY CONDCD

Yeah. I'm slow. I looked at my code afterwards and summed it up, and that's what I got. Thanks!
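The accepted trick is T-SQL-specific (aggregating into a variable inside a SELECT). For comparison, here is a sketch of the same one-line result in Python, with SQLite standing in for the real server; the table name is from the thread, the sample codes are made up:

```python
import sqlite3

# Sample table with duplicate condition codes (values invented for the demo).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IBACOSTOCK (CONDCD TEXT)")
con.executemany("INSERT INTO IBACOSTOCK VALUES (?)",
                [("EA",), ("BX",), ("EA",), ("CS",)])

# GROUP BY gives the distinct codes in order; joining them client-side
# produces the single "a,b,c,..." line the question asks for.
codes = [r[0] for r in con.execute(
    "SELECT CONDCD FROM IBACOSTOCK GROUP BY CONDCD ORDER BY CONDCD")]
um = ",".join(codes)
print(um)  # BX,CS,EA
```

The client-side join sidesteps the undefined-order caveats that variable concatenation has in T-SQL when combined with ORDER BY.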

One last xp_cmdshell Q (sorry)

I run the below on the box that has psExec installed as sa (dbo) in QA.
The MSSQLSERVER service is set to Local Account.
Domain\Administrator will run c:\psexec \\10.2.27.230 -I cmdmgr_Resume.bat
from a CmdExec job successfully. I run the same from a command prompt successfully.
Do I have to set the MSSQLSERVER agent to start as admin?
Or can I explicitly say exec master.Domain\Administrator.xp_cmdshell or
similar?
Even our DBA is stumped!!
EXEC xp_cmdshell 'c:\psexec \\10.2.27.230 -I cmdmgr_Resume.bat' ;
output
---
NULL
PsExec v1.60 - Execute processes remotely
Copyright (C) 2001-2005 Mark Russinovich
Sysinternals - www.sysinternals.com
NULL
Access is denied.
Connecting to 10.2.27.230...
Couldn't access 10.2.27.230:
NULL
It looks like psExec doesn't have permission, but it is SQL QA that is the
source of the problem.

Local Account? or Local System? The SQL Server service must execute with
the credentials needed to execute commands on the other computer. It cannot
execute as Local System. It must execute as a domain user account.
WARNING!!! It is extremely dangerous to use xp_cmdshell in this way. It is
even more dangerous to run as a domain admin. If you're a consultant, be
sure you have a battalion of lawyers in your pocket and review your
professional insurance contract, because it would be a simple thing for a
smart lawyer to prove gross negligence on your part.
"marcmc" <marcmc@.discussions.microsoft.com> wrote in message
news:642C7D5B-DA48-427D-9535-0BF66E90ADA4@.microsoft.com...
> I run the below on the box that has psExec installed as sa (dbo) in QA.
> The MSSQLSERVER service is set to Local Account.
> Domain\Administrator will run c:\psexec \\10.2.27.230 -I cmdmgr_Resume.bat
> from a CmdExec job sucessfully. I run same from command prompt
successfully.
> Do I have to set MSSQLSERVER agent to start as admin.?
> Or can I explicitly say exec master.Domain\Administrator.cp_cmdshell or
> similar?
> Even our DBA is stumped!!
> EXEC xp_cmdshell 'c:\psexec \\10.2.27.230 -I cmdmgr_Resume.bat' ;
> output
> ---
> NULL
> PsExec v1.60 - Execute processes remotely
> Copyright (C) 2001-2005 Mark Russinovich
> Sysinternals - www.sysinternals.com
> NULL
> Access is denied.
> Connecting to 10.2.27.230...
>
> Couldn't access 10.2.27.230:
> NULL
> It looks like psExec doesn't have permission but it is SQL QA that is the
> sourcer of the problem.

thx again, what are the alternatives? create an account and lock it down,
maybe? or...
"Brian Selzer" wrote:

> Local Account? or Local System? The SQL Server service must execute with
> the credentials needed to execute commands on the other computer. It cannot
> execute as Local System. It must execute as a domain user account.
> WARNING!!! It is extremely dangerous to use xp_cmdshell in this way. It is
> even more dangerous to run as a domain admin. If you're a consultant, be
> sure you have a battalion of lawyers in your pocket and review your
> professional insurance contract, because it would be a simple thing for a
> smart lawyer to prove gross negligence on your part.
> "marcmc" <marcmc@.discussions.microsoft.com> wrote in message
> news:642C7D5B-DA48-427D-9535-0BF66E90ADA4@.microsoft.com...
> successfully.
>

Create an external process to do whatever it is you need to do. Then you
don't have to worry about the xp_cmdshell security hole. Another solution
is to use replication, but if you can't, then you can queue up the
information in a table and create a dequeue process to move it. Both of
these solutions are more robust, because you can still collect information if
the other system is down.
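Brian's last suggestion, queue the information in a table and have a separate dequeue process move it, can be sketched like this (all table and column names are hypothetical, and SQLite stands in for SQL Server):

```python
import sqlite3

# Hypothetical work queue: the database application enqueues rows, and a
# separate process dequeues them for delivery to the remote system. If the
# remote box is down, rows simply accumulate until it comes back.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE CommandQueue (
    id INTEGER PRIMARY KEY,
    payload TEXT,
    processed INTEGER DEFAULT 0)""")
con.executemany("INSERT INTO CommandQueue (payload) VALUES (?)",
                [("resume job 1",), ("resume job 2",)])

def dequeue(con):
    """Fetch the oldest unprocessed row and mark it done; None when empty."""
    row = con.execute(
        "SELECT id, payload FROM CommandQueue WHERE processed = 0 "
        "ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None
    con.execute("UPDATE CommandQueue SET processed = 1 WHERE id = ?", (row[0],))
    return row[1]

delivered = []
while (item := dequeue(con)) is not None:
    delivered.append(item)  # the real process would act on the remote box here
print(delivered)  # ['resume job 1', 'resume job 2']
```

The key property is that nothing in the database server ever shells out: the external process holds the credentials, so xp_cmdshell and the service-account privileges it drags in are not needed at all.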
"marcmc" <marcmc@.discussions.microsoft.com> wrote in message
news:97BD8606-7F5A-4904-B13C-533FE167A77D@.microsoft.com...
> thx again, what are the alternatives, create an account and lock it down,
> maybe? or
> "Brian Selzer" wrote:
>
On Wed, 7 Sep 2005 05:46:10 -0700, marcmc wrote:

>I run the below on the box that has psExec installed as sa (dbo) in QA.
>The MSSQLSERVER service is set to Local Account.
>Domain\Administrator will run c:\psexec \\10.2.27.230 -I cmdmgr_Resume.bat
>from a CmdExec job sucessfully. I run same from command prompt successfully.
>Do I have to set MSSQLSERVER agent to start as admin.?
>Or can I explicitly say exec master.Domain\Administrator.cp_cmdshell or
>similar?
>Even our DBA is stumped!!
Hi marcmc,
(S)he should check out what BOL says about service accounts.
If you really want to do it like this, then the least unsafe way is to:
a) Have your network administrator set up a domain account for use by
the SQL Server service. Make sure to change the password policy to not
automatically age the password.
b) Give this account all privileges required for SQL Server operation,
plus all privileges required to run your batch file. Don't give any
other privileges to this account. If your version of Windows allows it,
disable the possibility to log on using this account as a normal user
(i.e. make it an account that can only be used by services)
c) Change the startup parameters for your SQL Server instance to log in
using this account instead of logging in as Local System.
d) Keep your fingers crossed :-)
But I agree with Brian that there are other ways to do this, and that
they are probably better!
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)

One last before the week end

How do you translate into T-Sql :
TO_CHAR(Value,'99099')
Thanks for your help.

Anybody for help?

What is Value, a date? Have a look at CAST and CONVERT (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_ca-co_2f3o.asp)

The value is a number.

TO_CHAR(12345,'99099')

I've had a look at CAST and CONVERT but I can't find an equivalent style anywhere.
So maybe there's another way to translate it...
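If I read the Oracle mask correctly, '99099' right-justifies the number in five positions and the '0' forces zero-padding from the hundreds digit rightward; there is no single CONVERT style for this in T-SQL, but the padding is easy to reproduce. A sketch in Python of what that reproduction has to do (ignoring Oracle's extra leading sign column, which is an assumption worth checking against your actual output):

```python
def to_char_99099(n):
    # Emulate Oracle TO_CHAR(n, '99099') for non-negative integers:
    # the '0' in the mask forces zeros from the hundreds digit rightward,
    # so pad to at least 3 digits with zeros, then right-justify in the
    # 5-character mask width.
    return str(n).zfill(3).rjust(5)

print(repr(to_char_99099(12345)))  # '12345'
print(repr(to_char_99099(45)))     # '  045'
```

The same idea in T-SQL would combine RIGHT with a string of zeros for the fill, wrapped in STR or space-padding for the right-justification.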

One large database or many small....

Hi Experts

We are debating what is best:

1. To combine all the company's data in one large database, and use schemas and file groups to create logical and physical distribution on drives and namespaces

or

2. Distribute the data into smaller databases with related data - eg. products and product description in one db, Customers in another and orders and orderlines in a third db.

Just what are the pros and cons?

regards

Jens Chr

In almost any case I would suggest one database. The ability to use foreign key constraints is enough right there. The only reason I would suggest many databases is if you might need to separate them onto different servers, but that is very seldom.

It is just so much easier to work with data in the same database versus multiple databases. You get one log, which is easier to manage (unless you have tons of disk drive channels to have multiple log channels, it will be faster to have the one log). If you separate stuff into filegroups instead of databases, you can still back up and restore them independently if you need to, as well as position the tables in a filegroup on different drives as needed.

|||I echo Louis' sentiments - listen to him. He's a wise and experienced SQL person :)

Check out my SQL Server 2005 Video Tutorials: http://www.learnsqlserver.com/

One large database or a few smaller databases?

Does it make sense to break up data into a few smaller (identical)
databases to keep from having hundreds of millions of rows in a few
tables in one single database?
I ask because I've got a website that's pumping about 500k new records
into 3 or 4 tables per day. In several months I'll probably be near 400
or 500 million rows in some of them.
Users to the website query the data, and so far, it's working well
(with only a few million rows in each). My indexes look good and my
queries only join 2 or 3 tables at most, but I am mostly doing
aggregations on lots of data (AVGs, SUMs, etc.) so I only return a few
rows at a time.
My app is set up to handle the data being in separate identical
databases, but I'd like to keep it all in the same place to keep things
more simple (for backups & upgrades, etc..)
But my fear is when I start to reach half a billion rows my query times
are going to suck, even if I cram more memory into my database server
(dual xeon 3.2 with 2GB of memory, room to grow to 8GB).
Anyone have any experience with this much data?
Thanks!!
You should not split up your solution into more than one database. A
good approach would be to use Analysis Services from SQL Server, as
you are requiring a lot of aggregated data which could be
precalculated. These cubes will provide a much faster
response than querying just an OLTP database.
HTH, Jens Suessmeyer.

Jens, thanks for the quick reply. I'm using SQL 2005 Standard... Does
that mean I need to upgrade to Enterprise?
Forgive my lack of knowledge on AS and data cubes, but are they kept in
real-time? Ideally the queries would be doing in "near-time" to when
the data has been inserted.
Thanks for your help.

No, AS is already included in SQL Server 2k5, but has to be installed
if it is not.
You can certainly design the cube so that it can fall back on non-aggregated
real-time data, but perhaps the Analysis Services newsgroup can help
you further; they should know what is possible in your case.
HTH, Jens Suessmeyer.

On 28 Jan 2006 11:29:29 -0800, Jeff Turner wrote:

>Does it make sense to break up data into a few smaller (identical)
>databases to keep from having hundreds of millions of rows in a few
>tables in one single database?
>I ask because I've got a website that's pumping about 500k new records
>into 3 or 4 tables per day. In several months I'll probably be near 400
>or 500 million rows in some of them.
>Users to the website query the data, and so far, it's working well
>(with only a few million rows in each). My indexes look good and my
>queries only join 2 or 3 tables at most, but I am mostly doing
>aggregations on lots of data (AVGs, SUMs, etc.) so I only return a few
>rows at a time.
>My app is set up to handle the data being in separate identical
>databases, but I'd like to keep it all in the same place to keep things
>more simple (for backups & upgrades, etc..)
>But my fear is when I start to reach half a billion rows my query times
>are going to suck, even if I cram more memory into my database server
>(dual xeon 3.2 with 2GB of memory, room to grow to 8GB).
>Anyone have any experience with this much data?
>Thanks!!
Hi Jeff,
I agree with Jens - keep it in one database.
In addition to Jens' suggestion to use analysis server, here are some
other ideas that might help you gain speed as the amount of data grows.
1. Use indexed views. If your queries are usually for aggregates, they
could prove to be an excellent way to speed up the process, since they
are precalculated. Because SQL Server has to modify each indexed view as
the base data changes, indexed views do cause a slowdown for data
modifications, though. Don't use indexed views if you need millisecond
performance on insert and update operations.
2. Consider the use of partitioned views (SQL Server 2000) or table
partitioning (SQL Server 2005).
Hugo Kornelis, SQL Server MVP
|||Hi Jeff,
Some extra thoughts:
1. The optimizer is going to do better work for you if it's in one database.
2. Backups/restores become much more complicated if you have to keep
multiple databases in sync. Consider what you might have to do to restore to
a point in time. There are methods but it just starts to complicate things.
HTH,
Greg
"Jeff Turner" <zigjst@.gmail.com> wrote in message
news:1138476569.362520.240210@.f14g2000cwb.googlegroups.com...
> Does it make sense to break up data into a few smaller (identical)
> databases to keep from having hundreds of millions of rows in a few
> tables in one single database?
> I ask because I've got a website that's pumping about 500k new records
> into 3 or 4 tables per day. In several months I'll probably be near 400
> or 500 million rows in some of them.
> Users to the website query the data, and so far, it's working well
> (with only a few million rows in each). My indexes look good and my
> queries only join 2 or 3 tables at most, but I am mostly doing
> aggregations on lots of data (AVGs, SUMs, etc.) so I only return a few
> rows at a time.
> My app is set up to handle the data being in separate identical
> databases, but I'd like to keep it all in the same place to keep things
> more simple (for backups & upgrades, etc..)
> But my fear is when I start to reach half a billion rows my query times
> are going to suck, even if I cram more memory into my database server
> (dual xeon 3.2 with 2GB of memory, room to grow to 8GB).
> Anyone have any experience with this much data?
> Thanks!!
>
|||Thanks to everyone for replying.
One thing I should point out is the data in these tables is broken down
by users to the website.
For example, if I did split this up by database, I could have one
database that holds all of the records for users with last names of
A-G. They would never have a need to query against the data in the
other databases.
So the practical minded side of me says, why bother running queries for
an A-G user in tables that hold everyone's data, when I could
physically have it narrowed down already.
Does that make sense? Or is SQL just smart & efficient enough with
indexing that the performance gain would be negligible compared to my
backup/maintenance costs.
Thanks again!
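Jeff's A-G idea is essentially application-side range partitioning. For anyone weighing it, the routing layer itself is tiny; a hypothetical sketch (the database names and ranges are invented):

```python
# Hypothetical last-name range routing: each range of initials maps to its
# own database. The application consults this before opening a connection.
SHARDS = [
    ("A", "G", "userdata_a_g"),
    ("H", "N", "userdata_h_n"),
    ("O", "Z", "userdata_o_z"),
]

def shard_for(last_name):
    """Return the name of the database holding this user's rows."""
    initial = last_name[:1].upper()
    for lo, hi, db in SHARDS:
        if lo <= initial <= hi:
            return db
    raise ValueError(f"no shard for initial {initial!r}")

print(shard_for("Turner"))  # userdata_o_z
print(shard_for("Baker"))   # userdata_a_g
```

That said, the partitioned views and table partitioning the earlier replies mention give the same physical narrowing inside one database, so the optimizer, backups, and point-in-time restores stay simple, which is why the thread's consensus leans that way.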