March 2001














SETI TIPS, FAQs, et al.





















Mad props go out to IronBits for hosting the site, and to all the others who have kept this site going for the past couple of years!

March 26, 2001

It’s Update Time!
First up is the long awaited return of the weekly stats! Yay! There is a double dose of the weekly stats, seeing that I totally forgot to do them last week. You can check out this week’s weekly stats here, and the previous week’s weeklies here.

Caching Updates
There is a new beta for Seti Queue out now…it is now up to the Beta4e iteration and can be downloaded from the Seti Queue pages. With the revamping of the entire Seti Queue program, the Seti Queue guide that was listed in the tips section here is now outdated. Former TLC member Geordie has written a new guide for installation and setup of the improved Seti Queue. It is a good read if you want to set it up and find out about all the new and nifty features!

Keeping in line with new addon programs, there is a new version of Seti Monitor out. Please make note of the new URL for Seti Monitor!


March 18, 2001

Some New Menus
Been messing around with the menus, and there is a new look for them. They are drop down menus which allow you to navigate through the stats from nearly any page. They render slightly differently in Netscape and IE, but the functionality is the same. Let me know if there are any problems with them. Hopefully I will have some more news sometime later tonight 🙂

One of the things you may notice has been added is a category for Overall Charts. These charts are some stats on the top 200 overall teams in the S@H project. There are different categories for the stats and each category is sortable. You can view the pages either by choosing one from the “Overall Charts” drop down menu on the left, or just by clicking on a column heading (the heading is underlined and can be clicked to take you to that page). One stat on the pages to make note of in the next couple of weeks is the Overtake column. Right now TLC is set to pass Sun Microsystems for the #1 spot overall in less than a month! Kick Ass!

The Young Step-Sister
I received an email from SabreWulf pointing out one of the caching programs that was missing from the links section of the site here. It is the Step-Sister of caching programs, SetiGate. It doesn’t get the “press” that either SETI Driver or SetiQueue does, but nonetheless is a worthy option for your caching needs. I will let SabreWulf tell you a bit about it:

For the past week I have been trying a WU caching program, called SetiGate. I don’t know if either of you has come across it. The url to the site is and the link to the download is I know that it might not be my place to promote some else’s package. But from what I seen so far, it seem rock solid. The only problems I had with it have been user related (RTFM) stuff. It will do all the regular stuff that we got used to, hiding and starting the cmd line dos box (local machine only); on the remotes they still need to be run manually (I think)…

It will act as a proxy client / portal for your seti. You must have a static IP address for the server PC, which for most home/work users should be quite easy to set up. The work situation might be a bit more tricky; they’re going to have to sweet talk their network admin department. It also helps if you only have a single connection to the internet, as the units are in the one place and the results are returned to the same machine. Then it’s a simple matter of clicking on connect and the send and receive, sending and downloading the work units at once. You’re even luckier if you have a permanent connection because it will do this without you having to do a thing at all.

In my case I was running Setidriver with various amounts of cache on numerous machines in the office, which, you might appreciate, could be a pain in the arse, never quite knowing whether they’re running or not. But with SetiGate this doesn’t seem to be a problem. It will monitor every client that it is set up to feed units to.

One thing I nearly forgot to mention was that it will feed multiple users, so you could have one cache for everyone at work. I haven’t seen any limits on the number of users and clients that it can monitor.

I have tried various other client server programs; this one seems the easiest to set up.

Sent Work Units….How They Do That?
There had been some stuff going round alt.sci.seti about people wondering how many times they send out work units and how long the work units stay around until they are deleted from their servers. Eric Korpela responded with some information (to straighten things up) on how they deal with the handing out of work units, and how many results they get returned for each work unit:

>But now on to the final question: The S@h homepage states “We send out
>each work unit multiple times in order to make sure that the data is
>processed correctly”. How often is multiple?

It depends. We mark workunits for deletion from the send queue after the second result is received. The actual deletion happens in chunks of 100,000 units (assuming that many are ready to be deleted, fewer if fewer are ready) when the splitters have gotten to the point of nearly filling the disks. So it’s possible for a workunit to be done only twice, and it’s possible for it to be done 5 times if the demand for workunits is exceptionally high or if it doesn’t take very long to compute because of RFI. Right now we average about 3.3 results per workunit (based upon workunits #81000000 to #82000000).
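For the curious, the policy Eric describes can be sketched in a few lines of code. This is purely a hypothetical illustration of the logic (mark a workunit once its second result arrives, then actually delete in big batches when the disks fill up), not the actual S@H server code, and all the names here are made up:

```python
# Hypothetical sketch of the send-queue deletion policy described above.
DELETE_BATCH = 100_000  # deletions happen in chunks of up to 100,000 units

class SendQueue:
    def __init__(self):
        self.results = {}        # workunit id -> number of results received
        self.deletable = set()   # workunits marked for deletion (2+ results)

    def receive_result(self, wu_id):
        """Record a returned result; mark the workunit on its second result."""
        self.results[wu_id] = self.results.get(wu_id, 0) + 1
        if self.results[wu_id] == 2:
            self.deletable.add(wu_id)

    def purge(self):
        """Delete up to one batch of marked workunits (run when disks are
        nearly full); returns how many were actually deleted."""
        batch = list(self.deletable)[:DELETE_BATCH]
        for wu_id in batch:
            self.deletable.discard(wu_id)
            del self.results[wu_id]
        return len(batch)
```

A workunit can still be sent out (and crunched) a 3rd, 4th, or 5th time between being marked and being purged, which is how the 3.3 results/workunit average comes about.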


CGI Page Downtime and the Future
About a week ago there were problems with the download of the stats from the cgi pages on the Berkeley servers. There may have been a problem with the databases, but that seems to be straightened out at the moment. There also may have been another reason why the cgi pages were down….and it may be explained by a post Matt Lebofsky had on alt.sci.seti a couple of days ago:

As a team we are focused on cleaning up the science database regime for the most part. The next big milestone will be when we have a clean database for swift data analysis as well as an on-line science database for quick queries (and, yes, that’s when we’ll turn the last 10 workunit pages back on, as well as many other web goodies).

So…take what you want from that…the cgi pages may have been quite slow because the science database was pretty clogged, or they may have taken the cgi pages offline for routine maintenance which had nothing to do with the slowness. One of the “web goodies” seems to have shown up already. They have redesigned the front page of the site, but the other pages on the site seem to be the same. I’m not sure if I like the new front page…it seems a bit cluttered for my liking. The question that Matt responded to was actually about the possibility of a 3.04 version…but Matt stated that talk of a 3.04 version hasn’t come up in their meetings yet.


March 14, 2001

Partial Stats For Today
The download of the stats yesterday took significantly longer (over 2x the time) than normal. That was the main reason for the stats being up late yesterday. Today’s stats are:

  • a bit late also

  • incomplete (wasn’t able to download the stats for the top 200 teams)

  • a bit inaccurate for the time downloaded.

Why is this? Yesterday’s download of the stats from the Berkeley cgi pages was quite slow. Usually this means a problem with their cgi server and/or their stats/science database(s). I emailed Eric Korpela, letting him know that there may be something up with their servers (if they didn’t already know), to maybe nip a problem in the bud. He replied that they would take a look. Today when I was pulling the stats, I noticed that they weren’t doing anything too fast, and then found out the problem. The cgi pages over on the S@H site are down. They are down now (~10pm Eastern) and were down when I started to pull the stats at 4pm Eastern also. I can only assume that there *is* a problem with their server/database and they are working on it.

This means that I could not pull the stats for the top 200 teams….including TLC. But I had to have *something* up…so I did the stats tonight from the normal TLC .html page on their servers. The problem with this is that the .html pages aren’t updated in real time, and they also depend on the cgi/database… even though I pulled the data somewhere around 9:00 PM, the data from those pages was from before 4pm Eastern….so they aren’t quite accurate for the time that is listed on the page.

Stats, Scripting, Incompatibilities, Oh My!
If you have read the March 9 news you will know that I was working on a way to automate the download and upload of the stats for the site here. I DO have something working, and it worked well for Monday, when the S@H servers weren’t having any problems. I was working towards some more “enhancements” for the pages also, and that led me to trying to incorporate cascading style sheets in the pages. First off I would like to say, with the way I do stats and save the .html pages in Excel, it is not a trivial thing to set up css for the stats pages. I found a way to do it through Front Page, AND IT WORKED! Sorta. I was quickly notified that the stats pages were crashing Netscape machines, and if it didn’t do that, there was some major hard drive thrashing going on. I do have to admit that I don’t know exactly how the css pages work, but I did quickly figure out what was causing the problems.

Using css made the stats pages significantly smaller; each chart page was reduced from a whopping 180+kb down to a relatively svelte 80kb. The way I had the css pages set up, several different chart pages were linked to a single css page. Well, that one css page was around 300kb in size. I fired up system monitor and tried looking at a chart page using IE5.5 and also Netscape 4.75. Using IE5.5, I had no problems loading, and the available RAM drop was quite small. On the other hand, while Netscape was trying to load the 300kb css, the available RAM dropped like a rock. We’re talking about a drop of 50-60MB. This was probably why the Netscape machines were having problems. Now don’t even ask how a 300kb file can lay waste to 50MB of RAM. Needless to say, I changed the pages back to the way they were before. I may have a way to alleviate the css problem, but that is going to have to wait till this weekend at least.


March 9, 2001

I am going to be out of town tomorrow, and tomorrow’s stats update (Saturday the 10th) will be an attempt at a totally automated stats download, calculation, and upload. I am not sure if it is going to crash in the middle or anything, so there is a possibility that it may not work. That would mean no stats for tomorrow :/. I am going to test this in a couple of minutes, so the stats in about an hour or so may look a bit funny. One thing to note about the automatic upload is that I will not be able to use the MS Office HTML filter to filter out some of the MS code bloat, so the files may be a bit larger and take a bit longer to download (especially the member charts). Hopefully all will go well and the stats will be posted normally tomorrow :) UPDATE!!! It looks like the test run went ok, with one small error that won’t affect the automation for tomorrow :). Sorry for the site having an “intermediate” update…things should be ok at the next stats update later today (Saturday).

Milestone? No, a 3 Millstone!
Team Lamb Chop has done it again….this time with *clean* results. Sometime yesterday TLC passed the 3 million work unit mark! WOOHOO! Taking a look at the team numbers, TLC should be passing another milestone within a week….that would be the 4000 year mark in CPU time contributed to the project. Cool Beans. Each TLC member should turn to their brothers and sisters in arms and congratulate each other for being the most kick ass team out there 🙂

More Wallpaper Linkage
While on IRC tonight, ColinT pointed me to some more wallpaper quality pics for your PC. Check out the pics from the National Space Science Data Center Mars Photo Gallery. The first three photos on the page are a mosaic of several different views of Mars. If you scroll down a bit more you will see some pictures of the famous “face” on Mars. Make sure you also check out the view of the “face” taken by the Mars Global Surveyor….hrmmmmmm, doesn’t seem like it looks like a “face” anymore, eh? Or is it just a big government cover-up! 😉

You may want to check out some of their other photo galleries to find some things of interest for you also.� I haven’t checked the entire site out yet…but will do so soon.

More Stuff
In addition, ColinT pointed me to this “hands on” sort of distributed project. It is the NASA Ames Clickworkers study. The study uses you (if you choose to accept the mission) to help mark craters on the Mars surface, and also to classify the different types of craters that have been found. One thing that is cool about the study is that you can take a look at many different high resolution pictures of Mars taken by the Mars Global Surveyor. After looking at some of the pictures, I have come to the conclusion that life must have existed on Mars at one time, in one form or another. One word of warning if you want to click, click, click away…it can be taxing on your eyes and your clicking fingers…make sure you take regular breaks! Cratering isn’t just for Quake3 Arena anymore!

Random Stats Stuff…
I was taking a look at the numbers that the top 200 teams have put up tonight…and noticed a couple of random things. TLC has contributed the second highest amount of CPU time of any team. Art Bell is tops in that regard (by a large margin). Of course the number that AB has put up is due to their over 13,000 members, compared to TLC’s ~4000 members. AB has an average time per work unit of nearly 23 hours/WU…compared to TLC’s average of 11.75. Hypothetically, if AB had the average time of TLC, they would currently have over 4 million work units processed and would be sitting at #1 right now.

In the top 200 teams, there are 48 other teams who have average work unit times faster than TLC. Why is that? I think it can be explained in two ways: 1) about 10 or so of those teams are of the corporate type, with powerful machines….Sun has over 3 million work units, but only a 7.5 hour average. And 2) the GUI effect. When I first started keeping stats I believe the average time for TLC was up around the 20.5 hour/WU mark…TLC had quite a few work units amassed at that time, mostly through the GUI client. The average WU time rapidly decreased since then and bottomed out around the 11 hour mark (before the 3.03 client showed up)…at that time our daily averages were in the 7-8 hour range! The majority of teams with better time/WU averages than TLC right now are teams who came onto the scene pretty late and rode the tide of faster processors and faster clients to fast averages.
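The Art Bell hypothetical above is just arithmetic: hold their total CPU time fixed and divide by TLC’s faster average. Their exact workunit total isn’t quoted here, so the figure below is an assumed round number; the two averages are the ones from the stats:

```python
# Back-of-the-envelope check of the Art Bell hypothetical.
AB_AVG_HOURS = 23.0       # Art Bell average hours per workunit (quoted above)
TLC_AVG_HOURS = 11.75     # TLC average hours per workunit (quoted above)
AB_WORKUNITS = 2_100_000  # ASSUMED round figure for AB's current WU total

# Same total CPU time, redistributed at TLC's per-unit pace.
ab_cpu_hours = AB_WORKUNITS * AB_AVG_HOURS
hypothetical_wu = ab_cpu_hours / TLC_AVG_HOURS

print(f"{hypothetical_wu:,.0f} workunits")  # roughly 4.1 million
```

With those inputs the speedup factor is 23 / 11.75 ≈ 1.96, so AB’s total would nearly double, which is how they would clear the 4 million mark.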

So what does that all mean? Nothing I guess….just writing something for the page. Sorry for rambling, it is only 3:30am 🙂


March 8, 2001

Link ‘O the Day
I was watching the Discovery Channel last night, and they had a show about the Hubble Telescope. I had seen parts of this show before, but watching it this time and the last time led me to scrounge up some pics from the telescope. I went back to my favorites folder and clicked on the shortcut for Hubble Space Telescope Public Pictures. I know that there are other sites which may look snazzier, but they all link back to these pages where you can download the pictures.

What can I say? Some of the pictures from the HST are just absolutely beautiful. Below are a few of my favorites from the site:

Ghostly Reflections in the Pleiades

Close-up View of a Reflection Nebula in Orion

Embryonic Stars Emerge from Interstellar “EGGs”

Super-Sharp View Of The Doomed Star Eta Carinae

Light and Shadow in the Carina Nebula

The last link above is the picture that I decided to make the background on my computer here. Many of the pictures on the HST site come in different formats and sizes that you can download. Some of the pics are pretty large, so you can easily convert them to wallpaper pics for your machine. Shown below on the left is the full pic (albeit a bit smaller than on the site) of my current wallpaper. Right before I started writing this, I noticed my favorite part of the picture. That is shown on the bottom right.

Yes we have the FIRST confirmed signal from extraterrestrial beings.� Yes the cheaters in the S@H project are getting the finger from ET, several million light years away.


March 5, 2001

Cruncher o’ The Week!
If you haven’t noticed, the S@H guys pick out an active user every week to be the “Cruncher of the Week”. This week’s cruncher of the week turns out to be TLC’s own Diesel71! Congrats, and crunch on!

Lingering Problems
It seems that things are nearly back to normal after the down time for the Berkeley network, but I did run into some dropped connections when trying to upload today. They don’t really know why there are dropped connections, but here is what they have to say about it:

We came back on line on Saturday, March 3rd, and rather quickly recovered from the back log of clients trying to send results and get new workunits. We are operating normally now, though we are currently dropping a few connections a second, and are trying to determine why (possibly a disk mounting issue).

While talking about connections…ever wonder how many connections and how much bandwidth the S@H servers handle through a day? I found a couple of links on the alt.sci.seti newsgroup over the weekend which will let you see how much data they push on a regular basis. You can check out graphs showing the data they push and the amounts of packets they push on this page. The graphs of interest are the second set of three down (fastethernet1_1_0: ssl-evans p2p fe link). When the S@H servers went back online, their servers were pegging their bandwidth limit of about 30 million bits/sec. It has settled down to about 20-25 million bits/sec now.
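To put the post-outage rate in perspective, here is a quick conversion (just arithmetic on the figures above, using decimal gigabytes): 25 million bits/sec sustained around the clock works out to roughly 270 GB moved per day.

```python
# Daily data volume at the settled-down rate quoted above.
BITS_PER_SEC = 25_000_000   # ~25 million bits/sec sustained
SECONDS_PER_DAY = 86_400

bytes_per_day = BITS_PER_SEC * SECONDS_PER_DAY / 8  # 8 bits per byte

print(f"{bytes_per_day / 1e9:.0f} GB/day")  # prints "270 GB/day"
```

At the 30 million bits/sec cap the same math gives about 324 GB/day, which is why pegging the limit for weeks (as in the pre-3.03 graphs below) was such a problem.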

Want to see how bad their bandwidth situation was pre-3.03 release? Take a look at this page of graphs. The bandwidth cap for the project is 30M bits/sec. From March to the end of November their bandwidth use steadily increased. By the first part of December, the bandwidth they were pushing was pegging their limit. At least you can see the problems that they were facing, and the reason why they deemed the increased science in version 3.03 necessary to help alleviate their bandwidth problems.

New Science Newsletter
Last week they snuck one in on us….the S@H guys posted Science Newsletter #6. The newsletter outlines some of the ways that they use to help distinguish possible signals produced by ET from random noise and terrestrial radio interference. Go take a look if you are so inclined.

For the past month or so Ken Reneris has been working on a new version of SetiQueue. So far the new version is still in beta, but it does work :). It is in its 4th revision, and if you don’t mind using beta software for caching, I suggest that you give it a look. I have been using it since about the second revision, and it has quite a few features over the old client. You can choose from a GUI version and also a command line version for installation. There are way too many changes and improvements to go over here, so I suggest you take a look at the version history and also the page detailing some of the features.

I forgot to add….there is a good discussion going on about the new version of SetiQ over at the Ars Distributed Forum. It is a good read, and maybe you can add some input on what you would like the new SetiQ to look like. Check it out here.


March 3, 2001

Stats R Up
I pulled the stats somewhere around 10:00 PM (Eastern Time) tonight…I wanted to get some stats up sometime tonight :). I am sure that most people haven’t been able to upload their cached work units yet….hopefully tomorrow’s update will be more robust!

WOO…They’re Back!
Or at least for a while I guess….I was just able to access the S@H site, even though it is a bit tougher trying to upload results right now :/. Here is the message that is on the S@H page right now:

BERKELEY NETWORK OUTAGE NEWS: Around 11:00 GMT (3:00am PST) on Tuesday, February 27, 2001, network fibers were broken, cutting off the entire Space Sciences Laboratory and Lawrence Berkeley Labs from the internet. The SETI@home website and data server were unaccessible for several days during the entire length of the outage. Due to the large bandwidth SETI@home requires, it may be as long as 24-48 hours after the Space Lab comes back on-line before the data server is fully functional and accepting all connections.

Be patient, hopefully things will be back to normal soon.

Is Today the Day?
Somewhere on the West Coast, forces are being mobilized to effect repairs on the offending busted fiber optic cables. The site is still not up yet (3pm Eastern Time), but I wouldn’t expect the site to be up until later in the day anyways. Earlier today there was a post giving a bit more insight into the repair site that they are trying to reach…and I will pass it along to you now:

Susan wrote:
> Interesting situation. I’ve been to both the Berkeley campus and the new
> Space Science Lab up on the hill and although I remember a rather windy road
> up to it I thought the whole area was pretty much built-up and accessible.

You probably went up Centennial Road. It runs through Strawberry Canyon, and the area right around it is pretty developed. But just off the road it gets pretty wooded and there are ravines and canyons (and mountain lions have been spotted off the road near the Botanical Garden). The location of this fiber is in a wooded area near the “Big C.” Friday morning, an LBL staffer drove a truck over to the area to prepare for the arrival of the contractor, and the truck started sliding sideways down the hill into a ditch. So the contractor brought a 4×4 out…and also slid into the ditch.

> Also, why can’t the repair crew walk a block to the site–I suppose they
> need electrical power that the conduit itself cannot supply?

As someone else mentioned, there is some large equipment needed at the site.

> And now, on second thought, I’m really confused. Did ‘they’ pull two new
> cables that now must be spliced where the original damage occurred or are they
> only replacing half the cable to the damaged area–and if so–why is only half
> being replaced?

It took me a while to get the story straight. The fiber was damaged in a manhole vault, so they pulled new bundles through the conduit between the manhole directly above and below the manhole where the break occurred. Now they need to splice it at those two manholes.

> Will the crew work the problem over the weekend in the rain once they
> figure out how to get to it?

I hope so. The weather cleared up in the afternoon and the radio said it’s not going to start raining until the afternoon (and Mike Peckner is never wrong).

I need to go into campus Saturday to do some work, so maybe I’ll try to stop up there and see how it’s going.


When things get back going on the S@H servers I may wait a bit till I pull stats for the site. I haven’t decided on what I will do yet…but check in later tonight to see if they are up 🙂


March 2, 2001

A Comedy of Errors?
OK….you would think that having internet (and whatever other) connections to an important building at a prestigious university would get you priority in repairs. And when, on top of that, your project depends on connections from 2 million plus people, don’t you think that problems would get fixed ASAP no matter what the barriers? I guess the Cal Berkeley grounds crew can’t negotiate some rain dampened terrain. As a result, the S@H servers will not be back online till tomorrow (Saturday) at the earliest. Here is the scoop.

The fiber pull has been completed and the final step is to splice the new fiber back into the main LBL-UCB bundle.


Due to the rains in the Bay Area yesterday and this morning, when the contractor tried to access the site where the splice needs to be made, their trucks (including 4x4s) became stuck. They have been unable to reach the location.

It gets wackier: tomorrow, they’re going to rent ATVs so they can get to the site. Of course, that means we won’t be back until tomorrow.


March 1, 2001

Bah! Date Pushed Back.
OK, this is the latest news….it looks like things are back to Friday…..LATE Friday till things will be back up:

This morning we (CNS) were told that the repairs would be finished sometime today. Unfortunately that’s not the case. The carrier has finished pulling new cable, but now LBNL has to splice the new cable into their own fiber plant. This process will not be completed until late in the day Friday, 2 March.

Oh well..

While The Servers Are Down…
I think it is time to catch up on some space related news.� There has been quite a bit of interesting things going on in the past couple of weeks….and here are some of them.

  • Space Exploration — BushWhacked: The new budget set forth by Bush axes several projects. Gone for now is the Pluto mission, along with a Solar Probe mission. There appears to be increased funding for robotic exploration of Mars though. The X-33 Spaceplane also got the axe; this was a project to develop a single stage plane that could make it to orbit and back. There is also the possibility that some of the modules of the International Space Station may get scaled back, or even axed completely. I think that many of these cuts are pretty sad. For way too long the US has seemed to lose its foresight. I sit at home watching Babylon 5 almost every night thinking about exploration to the outer regions of space, but the stars seem to keep getting farther and farther out of our reach. Unfortunately, with the axing of these projects we also lose technology discovered in the process, and other things that may be passed down to the consumer level. Who knows, maybe one day we shall venture outside of Earth’s gravity.

  • Planetary Society Helps Launch Solar Sail: The Planetary Society along with their sponsor Cosmos Studios are nearing the launch of a Solar Sail. This sail will be 30 meters in diameter. It will be launched from a submarine, in a converted ICBM. The launch is scheduled to take place around April 19, and the mission is planned to start around October-December.

  • Trigger For Mass Extinction: Scientists think they have found the trigger for the largest extinction on Earth. Mind you this is not the one surrounding the extinction of the dinosaurs. The evidence seems to be found within buckyballs. Read the link above for more info.

  • Life on Mars?: Woah….you may have heard about the possibility of life on Mars a year or so ago, but those experiments’ conclusions were put in doubt by many others. There seems to be better evidence from more recent experiments that life had existed on Mars.

  • Recipe for Life Elsewhere?: First there was an experiment that showed that cells could possibly be formed from material found in deep space. Next there was an announcement that scientists found water vapor and carbon molecules surrounding young and dying stars. Then you can add in findings that suggest that donut shaped dust clouds around young stars are evidence that many stars should have Earth sized planets. What do you get? OK, not EVIDENCE that there is life in other solar systems, but you do have increasing evidence that there may (or should?) be life elsewhere in the universe. This should be a very exciting time for space exploration and for increasing our knowledge of things outside the Earth. Unfortunately, with budget cuts the excitement is tempered by a government who seems unwilling to look outside our tiny little planet.

Date Pushed Up.
The message now on the S@H site shows that the expected date for the fixes has been pushed up to today. The message now reads:

Contractors are pulling new cable. We now expect that service to SSL will be restored sometime today, Thursday, 1 March 2001.

The servers aren’t up yet (5:00pm EST)…and of course we’re not sure when they will finally be up. I still have a couple of hours till I am hurting….I have a small handful of work units left in my SetiQ stash :). Let’s hope it will be soon. Then the mad rush for downloading work units will begin!


February 28, 2001

Servers Down Until Friday???
It looks like there was a message on the Berkeley communications and networks site, which now appears when you try to access the main S@H site.� Here it goes:

Fiber cut silences SETI@Home

At about 3:30 AM PST on 27 February an optical fiber cable connecting the U.C. Berkeley campus with the Lawrence Berkeley National Laboratory was cut, apparently by vandals trying to “salvage” copper from other nearby cables.

The broken fiber carries data and voice connections for LBNL and also for the Space Sciences Lab. SSL is where the SETI@Home project is located, so the millions of participants helping to analyze data have been unable to contact the SETI@Home servers for more than a day.

Contractors are pulling new cable now. It’s expected that service to SSL will be restored by Friday, 2 March 2001. We’ll update this page as we learn more about the progress of the repairs.

Ug. Friday eh? Of course, when I installed the new SetiQueue beta, I decided to lower the amount of work units I was caching, seeing that the S@H servers were usually down for several hours at most. I only had a 3 day stock of work units in my queue…I am going to run out of work units sometime tomorrow night. Oh well. It has been a bit over a year since a day has gone by that I had not crunched a work unit on my machines here, and that streak may come to an end.

I guess Michael below was correct, seeing as he said “DO NOT QUOTE!!!”. It wasn’t a construction crew that caused the damage….but more like a “DESTRUCTION” crew. It is kind of sad that the entire project would have to be shut down because of this.

