Monday, December 18, 2006

Work in Progress: Ira's Home Theater

As it says in my profile, I'm an audiophile and an equipment junkie, so I had to post(/brag) about my home theater system. It's not the most expensive setup around, but it sounds pretty damn good for anything from Autechre to zydeco, and The Matrix shakes the floor.

2x JBL LSR32s
2x Bose 301 (my roommate's old speakers, being used as surrounds)
1x B&W CC6 center channel
1x Aguilar GS212

1x TEAC POS receiver
1x Harman Kardon AVR 330
1x Crown K1
1x SWR 750x

1x Monster Power HTS 3600

Tara Labs Prism Omni speaker cables, Canare L-2T2S interconnects, Vampire, Homegrown, and Canare connectors, all hand-terminated with Cardas or Kester solder

1x Westinghouse LVM37-w1
1x Samsung LN-T5265F

1x XBox 360
1x HTPC running Windows Vista Ultimate (upgraded from Windows XP)
1x Scientific Atlanta 8300 HDC

1x Logitech Harmony 890 universal remote
2x nameless black wood speaker stands
2x American Recording Technologies studio monitor stands

UPDATE (1/15/07):
Picked up a Harman Kardon AVR 330. The 645 was overkill for what I wanted, since I don't use HDMI at this point, and the 330 was a steal. I also got a pair of JBL E30s with it, which I'll either use as surrounds or give to my parents.

UPDATE 2 (2/13/07):
Found a mint B&W CC6 center channel. It's not quite as high-end as I was planning to go, but there's not enough space for a big center speaker, so this one will do nicely until I buy a warehouse. And then I'll just outfit the whole place with Turbosound.

UPDATE 3 (3/12/08):
Well, the Westinghouse TV blew up, but my roommate got his bonus and replaced it with a 52" Samsung that looks quite good on the wall of our new place. I picked up an SWR 750x and an Aguilar GS212 to play my bass through, and that stack is pulling double-duty as a massively overpowered subwoofer. Also got a Monster HTS 3600, mainly so that I could switch off the Crown using a remote. And finally, we got a new couch that sits in the way of the audio rack, so I just HAD to get a Logitech Harmony 890 to control the whole system via RF (and the one-button system configuration is nice, too). Converted all the display cables, too - the 360 now displays via VGA, the Scientific Atlanta PVR via HDMI, and the HTPC via DVI->HDMI.

Lists vs. temp tables

Dynamic SQL is in heavy use at my new office, and I'm not sure how I feel about a lot of it. For example, one of my coworkers showed me a neat trick by which one can append to a string with each successive row returned in a select statement, like so:

select @list = @list + cast(foo as varchar(10)) + ',' from #test2
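
To see what the trick actually produces, here's a minimal, self-contained demo (the #demo table and its values are mine, not from our codebase):

```sql
create table #demo (foo int)
insert into #demo values (1)
insert into #demo values (2)
insert into #demo values (3)

declare @list varchar(100)
set @list = ''

-- each row returned by the select appends its value and a comma to the variable
select @list = @list + cast(foo as varchar(10)) + ',' from #demo

select @list -- typically '1,2,3,' but the concatenation order isn't guaranteed

drop table #demo
```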

But I noticed that this technique was used in many pieces of code in our system, and it bothered me, because I was pretty sure that creating and joining to temp tables would be faster than assembling long strings and then using them with "in" clauses. I wasn't positive, however, so I assembled this test script:


if OBJECT_ID('tempdb..#timing') is not null
drop table #timing

if OBJECT_ID('tempdb..#test1') is not null
drop table #test1

if OBJECT_ID('tempdb..#test2') is not null
drop table #test2

declare @i int
, @items int

set @items = 1022

create table #timing (
testrun varchar(100)
, setupStartTime datetime
, setupEndTime datetime
, setupElapsed as datediff(ms, setupStartTime, setupEndTime)
, queryStartTime datetime
, queryEndTime datetime
, queryElapsed as datediff(ms, queryStartTime, queryEndTime)
, totalElapsed as datediff(ms, setupStartTime, setupEndTime) + datediff(ms, queryStartTime, queryEndTime)
)

select
identity(int, 10403, 96) as foo
, cast('some text' as varchar(20)) as bar
into #test1

set @i = 0

while @i < @items
begin
insert into #test1 select 'cowabunga'
set @i = @i + 1
end

insert into #timing (testRun, setupStartTime) values ('Temp Table', getDate())

select foo into #test2 from #test1
create clustered index ix on #test2 (foo)

update #timing
set setupEndTime = getDate()
where testRun = 'Temp Table'

create clustered index ix on #test1 (foo)

insert into #timing (testRun, setupStartTime) values ('List', getDate())

declare @list varchar(7000), @curID int
set @list = '('

select @curID = min(foo) from #test2
--while @curID is not null
--begin
--	select @list = @list + cast(@curId as varchar(10)) + ','
--	select @curId = min(foo) from #test2 where foo > @curID
--end

select @list = @list + cast(foo as varchar(10)) + ',' from #test2

set @list = substring(@list, 1, len(@list) - 1) + ')'

declare @sql varchar(8000)
set @sql = 'select * from #test1 where foo in ' + @list

update #timing
set setupEndTime = getDate()
where testRun = 'List'


update #timing
set queryStartTime = getDate()
where testRun = 'List'

exec (@sql)

update #timing
set queryEndTime = getDate()
where testRun = 'List'

update #timing
set queryStartTime = getDate()
where testRun = 'Temp Table'

select t1.* from #test1 t1
inner join #test2 t2
on t1.foo = t2.foo

update #timing
set queryEndTime = getDate()
where testRun = 'Temp Table'

select * from #timing

The results of this test vary somewhat, so it would be best to run the whole thing in a loop to assemble a statistically valid set of data, but the difference between the two methods is so pronounced that the standard deviation doesn't really matter. Ready for the (approximate) difference?

Temp tables are 3x faster than strings.

Here's a typical result of this script running on our QA server, an 8-CPU 20-GB box running SQL 2000 build 2171:

Running it repeatedly yielded similar results with some higher outliers, but very few lower ones. The next problem will be how to propose changing our coding practices and re-writing a lot of stored procs. Stay tuned for that one, and possibly an entry about my subsequent de-hiring.

Tuesday, December 12, 2006

Shrinking log files of formerly replicated DBs

To start, I should note that a) there's probably a better way to do this, and b) I'm sure this is already out on the net somewhere, possibly even on my old blog as a post from Mike Forman. But this way works, it's fast, and I couldn't Google it when I needed it, so I'm posting it again.

In our QA Static and Dev environments, we have static copies of a DB that is a replication subscriber in Production. This DB gets refreshed (restored) from Production every few weeks, or when I break everything and have to restore a fresh copy. When restored, it still has transactions marked as pending replication in the log, and so the log cannot be shrunk.

Naturally, we don't have as much space as we'd like in QA and Dev, and since the DB in question is static, it doesn't need the 50 GB transaction log that it has in Production. So I wanted to shrink the transaction log, but was prevented from doing so by this error:

Cannot shrink log file 4 (Test_Log1) because all logical log files are in use.
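
Before hacking anything, you can confirm that the log really is pinned by looking at the virtual log files (a diagnostic sketch; DBCC LOGINFO is undocumented but present in SQL 2000):

```sql
use Test

-- each row is a virtual log file; status = 2 means it's still in use,
-- so a log whose VLFs are all status 2 can't be shrunk
dbcc loginfo
```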

So how do you fix this?

My solution was to fool (read: hack) the system tables to make SQL think this DB was still a replication subscriber, then use sp_repldone to set all the transactions to done. This short script will do it, although I recommend you BE CAREFUL, because this is the master database you're messing with.

sp_configure 'allow updates', 1
reconfigure with override

update master.dbo.sysdatabases set category = 1 -- mark the DB as replicated so sp_repldone will run
where dbid = 10 -- replace with appropriate dbid for your DB

use Test

EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1

update master.dbo.sysdatabases set category = 0
where dbid = 10 -- ditto

sp_configure 'allow updates', 0
reconfigure with override

After the sp_repldone, you'll be able to shrink the log file to your heart's content. There may be another way to do this, such as attaching the DB without its log file, but I'm not sure that works, and besides, I like hacking system tables.
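
For reference, the shrink itself is just a DBCC SHRINKFILE call. This is a sketch using the file name from the error above and SQL 2000-era syntax; the 500 MB target is an arbitrary example, so pick your own:

```sql
use Test

-- on SQL 2000, truncate the now-inactive log before shrinking
backup log Test with truncate_only

-- shrink the log file down to roughly 500 MB
dbcc shrinkfile (Test_Log1, 500)
```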

Thursday, December 7, 2006

Retrieve queries when SMS crashes

I can never remember where the .sql temp files are stored by SQL Server Management Studio so that I can retrieve queries-in-progress when it crashes. So I'm blogging this so I can find it later.

This time, at least, the files were in the following path:

C:\Documents and Settings\[user name]\My Documents\SQL Server Management Studio\Backup Files\Solution1
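
If they aren't there, a recursive search for recently modified .sql files usually turns them up. A hypothetical one-liner for cmd.exe (adjust the root path to taste):

```
dir /s /b /o:-d "C:\Documents and Settings\%USERNAME%\My Documents\*.sql"
```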

Monday, December 4, 2006

Start me up

I'm firing up another DBA blog, one that I'll hopefully be able to hang onto this time. I used to be the primary (or at least most prolific) contributor to DBA IQ, but I've moved on from that blog for two reasons. First, Google forced me to sign up for the Blogger beta, which removed my account from that blog, and second, I've changed jobs, and DBA IQ was set up as a sounding board for the stellar DBA team led by Rob Liander at Capital IQ.

I'm now at Bank of America on the Global Credit Products team, where I'll be working on many of the same SQL development and management challenges that I did at Capital IQ and hopefully writing about the interesting ones. I won't go into the details of why I changed positions, but I will say that I knocked 30 minutes off my daily commute! Seriously, though, CapIQ is a great company (ahem, division of McGraw-Hill) and is full of cool and very bright people. Everyone should read Dan McKinley's development-focused blog and DBA IQ, which will hopefully continue to be updated by Rob, Mike Forman, and the other senior DBAs there.

I've mostly been working on getting up to speed here so far, although I've authored and implemented a new DR strategy for my team and I've been working on performance tuning. Stay tuned for more interesting SQL Server posts soon.