December 18, 2006

Whatever happened to online etiquette?

Filed under: — 2:56 am

At the New York Times, David Pogue asks Whatever happened to online etiquette? and comes up with a list of reasons for the decline of this etiquette: anonymity, cries for attention, parents failing to teach social skills, young people spending too much time online, and even the current political climate.

Pogue is one of my favorite writers, and I hate to see him become the latest to take up the net’s equivalent of an Andy Rooney “kids these days” rant. I agree with Gina at Lifehacker—David couldn’t be more wrong. Here are the facts as I see them:

1. People are jerks. Not all of them, but many. What Pogue calls “online etiquette” never existed—or if it did, it was just like regular etiquette: something some of us aspire to, and others ignore and mock. People were jerks on Bulletin Board Systems in the 80s and on USENET in the 90s, and people are jerks on web forums now.

2. More people, more jerks. I’ve watched many a USENET newsgroup and web forum grow from a friendly community of 10-20 people to a semi-friendly community of 100 to a cruel, vindictive pile-on of 300 or more. It’s not that large groups can’t work—just that the larger the group, the more controls you need to keep it constructive. When a group outgrows the controls in place, it fails to be a community.

3. Anonymity isn’t the problem. While people have been arguing since the 80s about the lack of face-to-face communication sending common courtesy out the window, people in online communities have proven time and again that it doesn’t have to be that way. I’ve seen anonymous communities that work just fine, and plenty of non-anonymous ones that were overrun by jerks.

4. Maturity matters. One thing Pogue is probably correct about is that sometimes younger people have a greater tendency to be jerks than those a bit more mature. This isn’t an absolute rule, but obviously most of us become more graceful at dealing with society—online or offline—as we gain experience. Sites that have a younger audience need more controls to stay on topic. Needless to say, some younger folks are more mature than some older folks. That’s why I emphasized maturity rather than age.

5. Content inspires community. Quoting Gina’s comment at Lifehacker:

Also, netiquette in public forums has a lot to do with the content around which the community is centered. Lifehacker’s posts set out to help folks, so in kind, our readers want to help us and each other back. Digg is a popularity contest of oneupmanship. Gawker is all about making fun of things, so its readers mock each other and it right back in the comments. Karma’s a boomerang.

The secret to healthy online communities is probably some combination of social responsibility, consequences, and a feeling of community, all of which depend on the size of the site, its content, and how the community is controlled. Are there moderators? Do they deal quickly and fairly with problems? Are there automatic controls to prevent some of the more obvious problems? Or are the moderators so outnumbered that they represent a tiny voice among the thousands? When Pogue looked at digg—an explosively popular, poorly moderated “peanut gallery” where the value is in the links and their rankings, and the discussions rarely add much value—is it any wonder his worst fears were confirmed?

What Pogue has probably noticed is that, as his writing presence grew from a tiny thing read only by techies to a mass-audience phenomenon, he’s getting more and more emails and comments from jerks. It’s easy to look at this and think that people everywhere are losing their manners—as my quotation site grew from zero visitors to hundreds of thousands, I’ve had the same thoughts more than once. But now that my wife and I run several different sites, we’ve learned that the smaller ones have fewer jerks, and different sites attract different sorts of audiences.

Also, as I’ve run my biggest site for 12 years, I’ve seen good and bad behavior come and go in cycles. If I had to make a guess at an overall trend for today, I’d say it’s positive. At 150,000 visitors a day the site still attracts plenty of jerks, but I’ve been surprised at people’s good manners lately. Even most of the people who dislike the site are communicating it with better manners these days.

Don’t take my word for it, or David’s. Find some good communities and stay away from the bad ones. Give humanity the benefit of the doubt. If you run a site, enjoy and encourage the valuable comments from visitors and ignore the jerks. I for one will wait until I’m a bit older before I start ranting about how much nicer people were in “the old days.”

[via Lifehacker]

November 3, 2005

Cool Firefox plug-in: Tab Mix Plus

Filed under: — 3:57 pm

I’ve been using the Tab Browser Preferences plug-in for Firefox ever since I switched to this browser. It (or more accurately, Firefox’s tab support) has only one thing that annoys me: I use Ctrl-W to close tabs, and sometimes I accidentally hit the key an extra time, closing the window. Searching for a solution to that, I found another tab extension, Tab Mix Plus, which replaces TBP and adds some useful features:

  • Undo close tab—It’s amazing how often this comes in handy.
  • Duplicate tabs—This creates a duplicate of the current tab, complete with back-button history. Very useful.
  • Open selected links in tabs—Select a block of text containing links, right-click, and instantly open every link in a tab. I use this every day with some of my site maintenance tasks.
  • Drag and drop reordering of tabs—I don’t need this often, but it’s a cool feature.
  • Display unread tabs in red—I tend to open tabs in the background for later reading. This feature highlights the ones I haven’t read yet. It also changes a tab’s title text to red when the page has updated, great for web applications.

Along with all of those features, it solves my original problem: you can set whether the hotkey closes the window when there is only one tab open.

This is an essential plug-in that completely replaces Tab Browser Preferences. It works well on both my PC and Mac OS X laptop, and I have yet to experience any kind of crash. I know some of these features are going to be included in Firefox 1.5 without an extension, but until then Tab Mix Plus is very handy.

October 7, 2005

SpamAssassin Configuration Generator updated

Filed under: — 11:29 pm

Every now and then I notice a few spam messages creeping into my email inbox, which means it’s probably time to update my spam filter. SpamAssassin saves me from literally thousands of spam messages a day, but I’m still annoyed when I have to deal with a few personally.

I’ve been upgrading my local installation to SpamAssassin 3.1, and in the process I have updated my SpamAssassin Configuration Generator tool to work with version 3.0 and 3.1.

This tool is an easy way to create a SpamAssassin configuration file with some common settings. It has been linked from the documentation page for SpamAssassin for some time, and embarrassingly hasn’t worked with the current versions for the last year or so. As of now it’s finally up to date.
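For reference, the kind of file the generator produces looks something like this. These are standard SpamAssassin 3.x configuration directives, but treat the specific values and the address as illustrative placeholders; the generator’s actual output will differ depending on the options you pick:

```
# Sample SpamAssassin local.cf / user_prefs with common settings
# (illustrative sketch only -- not the generator's literal output)
required_score 5.0            # tag mail scoring 5.0 or higher as spam
rewrite_header Subject *****SPAM*****
report_safe 1                 # deliver spam as an attachment to a report
use_bayes 1                   # enable the Bayesian classifier
bayes_auto_learn 1            # auto-train on obvious spam and ham
skip_rbl_checks 0             # keep network (DNSBL) tests enabled
whitelist_from friend@example.com
```

Lowering required_score catches more spam at the cost of more false positives, which is why a generator that walks you through the trade-offs is handy.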

For those working with SpamAssassin 2.5x, the old version is still available.

July 27, 2005

A message from the real world

Filed under: — 11:33 am

For those of us who spend most of our time online, it’s easy to get into the habit of thinking everybody is like us. That’s why I like the dose of perspective I get from this Pew Internet survey (PDF). Among the findings in this survey of internet users:

  • 10% are not really sure what “spam” means, and 3% have never heard the term. No wonder spam still works!
  • 9% have never heard the term “adware”, and 15% have never heard the term “phishing”.
  • Only 13% have a good idea what “podcasting” is.
  • Only 9% have a good idea what RSS feeds are. (Even less than podcasting!?)

Considering those statistics, this Weblog usability study comes as no surprise. Most participants (typical internet users) had no idea how to distinguish between a weblog and a “normal” site, and none of them had any idea how to subscribe to a weblog or feed.

I’m not saying there’s anything wrong with typical internet users—on the contrary, I think they’re right. Why is there a distinction between a weblog and a normal site, and why do just about all weblogs copy the design and navigation scheme Slashdot was using in 1999? Why do we spend time trading jargon like “trackback” and “podcast” instead of educating people? Something to think about.

Update 7/27/05: Keith Robinson writes about some of the same issues, the RSS problem in particular.

[via SEW and Digital Web]

May 25, 2005

The big problem with ads in RSS feeds

Filed under: — 6:00 am

Matt Haughey writes about why he thinks ads in RSS are a bad idea, and brings up an objection I haven’t heard much of in the endless debates about RSS ads. He divides his visitors into two categories: daily, devoted readers and random searchers, with the random visitors accounting for over 75% of traffic. RSS subscribers tend to be in the first category—devoted readers who don’t want to miss a single post—and he’d rather not annoy this group with ads.

I agree, and this is half the reason I don’t run any ads in RSS feeds. The other half of the reason: as I wrote about in Making Money from Content Sites last month, the devoted readers are far less likely to click on ads than the random searchers. And clicking on ads is all that matters, since the current options for RSS advertising (i.e. AdSense for Feeds) pay strictly by the click.

In short: it seems to me that ads in feeds not only annoy the last people you’d want to annoy, they also make little to no money due to lack of clicks. That last part’s just a theory, so I’d love to hear from anyone who has made money using ads in RSS.

Since I make my living from web advertising, I certainly have nothing ethically against RSS ads, and I personally don’t find them terribly annoying—I just doubt they’re a viable profit source right now, and I’m not sure they’ll ever be.

May 19, 2005

Personalized Google

Filed under: — 8:01 pm

Google has a new personalized home page feature that displays content you choose on a Google search page suitable for use as a home page. You can include things like Google News and BBC news on the page, and your GMail inbox if you have one. A nice JavaScript UI lets you drag the boxes around the page. Here’s the announcement at Google Blog, and a reaction from Jeremy Zawodny at Yahoo, who finds the whole thing eerily familiar.

They seem to be planning full RSS (or Atom?) support, but for now there are only about 10 feeds you can select from. Along with the BBC, Wired News, and Slashdot, I’m very pleased to report that they’ve chosen my Quotes of the Day as one of the feeds. It is using RSS, as you might have guessed, and someone at Google was nice enough to email me to let me know they’re using my feed and warn me that it might bring me some traffic (bring it on!).

As Jeremy pointed out, Google has taken a tiny step toward becoming a “portal” rather than a mere search engine. It will be interesting to see where they go from here.

May 4, 2005

Google Web Accelerator

Filed under: — 3:42 pm

The new and enigmatic Google Web Accelerator has just been announced. It’s an application that “uses the power of Google’s global computer network to make web pages load faster,” according to the FAQ. Reading a bit further, it appears to be a combination of a caching proxy and prefetching. It works for IE and Firefox, though only on Windows.

This is interesting and a bit spooky: the Google toolbar tracks every URL you visit, but this goes one step further by passing everything you view through Google’s servers. When I installed it, I had to agree to some lengthy legal language to that effect.

In brief experiments with the Accelerator turned on, Internet Explorer and Firefox do seem a bit faster, but with a broadband connection it’s nothing to write home about. Beyond the obvious privacy implications and the marginal speed increase, the main reasons I won’t use this long term have to do with prefetching:

  • As a web user, I don’t want my browser filling my bandwidth with requests for pages it hopes I’m going to click on. I don’t always click on the obvious things, and I’d rather keep some bandwidth open for background downloads, other browser sessions, and streaming audio.
  • As a webmaster, I’m concerned about the effect of widespread use of prefetching. For example, if my site is the first result on Google for a term, thousands of browsers are going to be loading my site in the background even when the user clicks on a different result. This costs me bandwidth, confuses my statistics, and could cause trouble with advertisers who are paying for real pageviews, not automated ones.
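For webmasters worried about the second point, one mitigation discussed at the time was to refuse prefetch requests server-side. The sketch below is a minimal Python WSGI middleware; it assumes (as was widely reported, not verified here) that the Accelerator marks prefetches with an “X-moz: prefetch” request header:

```python
# Sketch: refuse prefetch requests at the application level.
# Assumption: prefetches arrive with the header "X-moz: prefetch";
# treat that header name as reported behavior, not a guarantee.
def block_prefetch(app):
    def middleware(environ, start_response):
        if environ.get("HTTP_X_MOZ", "").lower() == "prefetch":
            # Deny the speculative request; a real click will retry normally.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Prefetch requests are not served here.\n"]
        return app(environ, start_response)
    return middleware
```

A 403 costs almost no bandwidth, keeps stats honest, and only affects the speculative background fetches, since a user who actually clicks through issues a fresh request without the header.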

Regardless, this is very interesting and I can’t wait to see what becomes of Google’s latest “beta”. Google is getting dangerously close to becoming the world’s largest ISP.

February 1, 2005

MSN Search officially opens

Filed under: — 4:30 pm

The new MSN search that uses Microsoft’s new search engine rather than Yahoo’s results has been in beta testing for a while, but as of today it’s open for business. Microsoft is launching an advertising campaign for the new search engine that will include ads during the Super Bowl, the Oscars, and the Grammy awards.

Obviously Microsoft is willing to spend some of their billions to get this search engine noticed. I’m not sure how many people will use it over Google, but I’ll be keeping an eye on it. Currently my sites get most of their traffic from Google, about 1/4 of that amount from Yahoo, and MSN runs a distant third. The results at MSN are wildly different, though, so some sites will benefit more than others.

Also, Douglas at Stopdesign is impressed that MSN is using reasonably standard XHTML and CSS for the new site.

January 20, 2005

Followup on rel=nofollow

Filed under: — 1:31 pm

I previously mentioned Google’s announcement about the rel=nofollow attribute. There has been much enthusiasm about this, and much backlash against it, since then. It’s certainly not a miracle cure for anything, but I don’t see how it could hurt.
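For readers who missed the announcement, the mechanics are simple: weblog software adds rel="nofollow" to links in visitor-submitted content, telling search engines not to count those links toward rankings. A simplistic sketch of such a filter follows; this is a hypothetical helper for illustration, not any particular engine’s code, and real software would use a proper HTML parser rather than a regex:

```python
import re

# Add rel="nofollow" to anchor tags in comment HTML so search
# engines won't credit spammed links. (Illustrative sketch only;
# the naive regex ignores edge cases a real HTML parser handles.)
def nofollow_links(html):
    def add_rel(match):
        tag = match.group(0)
        if "rel=" in tag:          # leave existing rel attributes alone
            return tag
        return tag[:-1] + ' rel="nofollow">'
    return re.sub(r"<a\b[^>]*>", add_rel, html)
```

Running a comment like `Nice post! <a href="http://example.com/">my site</a>` through it yields the same markup with ` rel="nofollow"` appended inside the anchor tag.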

Yes, spammers will keep spamming. Eventually, three or four years from now, they’ll notice that their spam links are having less effect, but they’ll still do it—the fact remains that it’s little to no work for the spammer, so if even one person clicks on a link it’s still worthwhile. Just like email spam.

Many people have complained that this is only a good thing for Google—it just helps them sort out different types of links. Of course that’s true, but an advantage for Google is an advantage for all of us. If Google has more information about links, they can improve their search and ranking algorithms. We’ll see less spam in search results, even if we see just as much on our weblogs.

I’m not going to support the attribute on any of my sites. I monitor them daily and quickly remove any spam that gets through, so spammers already get no benefit here, and I want legitimate comments to get the benefit of their links. The biggest benefit for the web at large is that the thousands of abandoned weblogs out there, freely spammed with links that go unremoved for years, will no longer reward that spam with better search rankings.

One more thing: as Scoble said, it’s amazing that Google was able to get their biggest competitors, and several other companies, to agree on this and implement it within a few days. Why can’t we see this kind of cooperation in the world of Instant Messaging?

January 7, 2005

A Bloglines clippings bug, and a hack to fix it

Filed under: — 4:46 pm

I use Bloglines as my RSS aggregator. I rely on it to follow over a hundred weblogs and news sites. I used to rely on its Clippings feature, but for the last few months a bug in their system has made the Clippings system less useful. I used to be able to quickly clip an item and have its title show up in the Clippings window, like this:

More recently, when I clip the same item, it appears like this in the Clippings list:

Needless to say, this isn’t an improvement. I verified that it’s a Bloglines bug by trying it with a different username and a different computer. So I reported it as a bug. I also found another user talking about it in the Bloglines forum, so they know about it. (Did you know they had a forum? Try to find a link to it on their site.)

I’m sure Bloglines will fix the problem soon, but in the meantime I just want the feature to work, so I tried to fix it myself. I noticed that the problem appears first in the “Clip/Blog This” pop-up form. You can edit it there manually, but I want clipping to be a quick two-click process.

To make a long story short, I found a solution. There’s a Firefox extension, Greasemonkey, that lets you attach your own JavaScript to pages to change their behavior. I wrote a quick one-line script, installed it with the extension, and now it works!

I love the idea of being able to fix broken web sites without waiting for their developers to do so. Greasemonkey was inspired by this extension that Adrian Holovaty made to solve problems he was having with a specific site, AllMusic. Now anyone can do it with a simple script rather than writing a whole browser extension. Excellent!

I wrote up a detailed article on how to do this at The JavaScript Weblog. If you have Greasemonkey installed and just want to fix the Bloglines bug, here’s a link to my script: click on it, select Tools | Install User Script, and you’re done.

Update 1/17/2005: This is no longer necessary, as Bloglines has fixed the clippings bug.

January 6, 2005

Trackback spam

Filed under: — 4:22 pm

Yesterday my wife and I dealt with a major trackback spam attack, and apparently we weren’t the only ones. Judging by the forum posts, this was a massive attack that hit hundreds of weblogs at the very least.

If you have a WordPress weblog, there’s a new plug-in that puts trackback pings into the moderation queue. Since trackbacks aren’t scanned by the built-in filtering system like comments, this is definitely needed. Most of us don’t get enough trackbacks to mind manually approving them.

Another possible solution is verifying the IP addresses of sites that send pings. This raises a couple of issues (such as sites like Blogger that might not ping from the same address as the weblog) but I think it’s worth pursuing.
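A sketch of that verification idea: when a ping claims to come from some URL, resolve the URL’s hostname and check that the connecting IP is among its addresses. This is a hypothetical helper for illustration, and as noted it would wrongly reject services that ping from a different machine than the weblog itself:

```python
import socket
from urllib.parse import urlparse

# Accept a trackback ping only if the connecting IP is one of the
# addresses the claimed source URL resolves to. (Illustrative sketch;
# hosted services like Blogger may legitimately ping from elsewhere.)
def ping_ip_matches(source_url, connecting_ip):
    host = urlparse(source_url).hostname
    if not host:
        return False
    try:
        addresses = {info[4][0] for info in socket.getaddrinfo(host, None)}
    except socket.gaierror:
        return False               # unresolvable host: reject the ping
    return connecting_ip in addresses
```

A moderation queue could still catch the legitimate mismatches this check rejects, which is why combining the two approaches seems worth pursuing.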

One by one, the open, insecure systems of the past (email, weblog comments, trackback…) are being made useless by spammers. It’s going to be a messy place until a more secure alternative emerges.

December 10, 2004

The Google ABC

Filed under: — 10:30 pm

Google loves to create beta products, as Jeremy points out. The latest is Google Suggest, a demo of what Google looks like with a dynamic autocomplete feature. I’ll be writing about the JavaScript aspects of this elsewhere, but the first thing I noticed is that you get a suggestion as soon as you type a letter—presumably the most common search term for that letter. So here, for your entertainment, I present The Google ABCs:

If this new feature goes out of beta, I expect some serious competition for some of the letters, and a whole new kind of misspelling-based spam.
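The behavior described above amounts to a prefix lookup over term popularity. A toy version, purely illustrative since Google’s actual ranking is certainly more involved, with the term counts made up for the example:

```python
# For each prefix typed, suggest the most-searched term starting
# with it. (Toy sketch with invented counts; Google's real ranking
# is far more sophisticated.)
def suggest(prefix, term_counts):
    matches = [(count, term) for term, count in term_counts.items()
               if term.startswith(prefix)]
    if not matches:
        return None
    return max(matches)[1]         # highest count wins
```

With made-up counts like `{"amazon": 900, "ebay": 800, "apple": 500}`, typing “a” suggests “amazon” and typing “ap” suggests “apple”, which is exactly the one-letter behavior that makes the ABC game possible.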

December 1, 2004

MSN spaces and standards

Filed under: — 11:58 pm

A good read: Eric Meyer talks about how Microsoft can support standards without breaking the web.

Meanwhile, MSN just launched their weblog service, MSN spaces, today. You can stop making fun of LiveJournal now—there’s a new place for newbies. And while I’m taking cheap shots, I’ll mention that the design of this new site shows Microsoft’s usual attitude toward standards. Here’s a sample weblog, here’s a screenshot of the same with all of the table cells outlined, and here’s the list of over 200 validation errors.

November 29, 2004

More skepticism on blogging for dollars

Filed under: — 4:19 pm

Via ProBlogger I ran across an article at CorporateBlogging: Blogs are Business Support Tools, Not Direct Money Makers. This echoes what Doc Searls said not too long ago, and I’ve heard the same tune from many different places. It’s completely correct, and it’s also completely wrong. Let me explain.

Many people have said the same thing about writing computer books: you can’t make money directly with books, but they’re good for your resumé. People who treat this as an absolute are missing the point: there’s more than one approach. There are writers who write for their resumé or reputation, and there are writers who write to make money.

The writers who write to make money have to do things a certain way:

  • Choose topics that are marketable rather than writing about whatever they please.
  • Perform on a publisher’s schedule rather than on their own.
  • Treat writing like a job, work on it full time, and be a team player.
  • Be willing to abandon a book or an entire subject and start over with something else if it isn’t working out.

The same applies to writing a weblog. If you want to make money, you have to treat it like a real business:

  • Focus on a topic that sponsors are interested in.
  • Write regularly whether you want to or not, or recruit additional writers.
  • Write and edit professionally to create quality content.
  • Consider your audience’s needs at least as important as your own ideas for the site.
  • Spend time doing unappealing things like marketing the site or contacting sponsors.
  • Be willing to change focus, style, or approach if the status quo isn’t working.

A personal weblog (like this one) isn’t likely to make money, and it isn’t intended to. It’s a resumé builder and a personal outlet. A company weblog like Google Blog or a corporate-sponsored weblog like Scoble’s isn’t going to make money directly—they’re business support tools. But a site that follows all of these rules—like PVRBlog or the Weblogs Inc. Network—can certainly make good money.

If you’re really interested in making money with a weblog, don’t listen to those who say it’s impossible. I’ve heard the same thing about writing books and running content-oriented websites for years, and I’ve made money doing both. I fully expect to make money with weblogs—in fact I’m already making some through WIN—and many others will too. Just keep in mind that it will take a different focus and different kinds of work than running a personal site or a business support tool, and it certainly won’t be a free lunch.

Update – 12/1/2004: Steve Rubel posts about yet another skeptical article at EContent. All of this skepticism makes me even more enthusiastic about making money with weblogs…

November 17, 2004

RSS Spam? ZDNet is very confused.

Filed under: — 6:11 pm

All of the media outlets are trying to work “blogging” into their strategies to remain relevant. The trouble is, they don’t have the usual editing and quality control, and things like this post at ZDNet are the result. Here’s a choice quote:

Lately I’ve seen my RSS feeds becoming heavily polluted by RSS spam – entries that are just ads, or sets of links that all lead to purchases (on which the spammer gets a cut). Maybe it’s because I’ve been covering cellular technology a lot. (I’d love to hear your experience.)

This paragraph is wrong on so many levels:

  • What is RSS spam? RSS isn’t anything like email. If you subscribe to a feed, what you get is entirely controlled by the authors of the site. So calling ads in RSS feeds “RSS spam” is like calling ads on Web pages “HTML spam” or ads in magazines “paper spam”.
  • He refers to “the spammer”. Who’s that? The site he’s reading a feed from or some mysterious third party who’s sneaking ads into the RSS feeds?
  • Does he think the fact that he’s “been covering cellular technology a lot” is actually causing him to receive personally-targeted ads in RSS feeds, or is it just bad phrasing?

He continues from there by going off on a bizarre tangent about the RSS specification, open source, and “the commons”. Apparently he thinks that the RSS standard includes “ethics” that aren’t being enforced. Or something.

Question is, who polices what no one owns? How can we maintain the cleanliness of the commons against those who don’t share its ethics? It’s a question that has haunted the Internet for 10 years now. It’s a question that, frankly, haunts every open source technology.

This paragraph sounds like a valid criticism of Wikipedia, but what does it have to do with RSS, or ads in RSS, or open source? And speaking of ethics, apparently lots of people have been leaving comments on this post but only the positive ones are getting through. It doesn’t take a “commons” to lose track of ethics. (via Matt)

November 11, 2004

On ICANN’s new transfer policy

Filed under: — 2:42 pm

ICANN has a new registrar transfer policy for domain names starting this Friday. It’s designed to make it easier for domain owners to switch registrars. Some of us have been waiting a long time for this, as we have domain names being held hostage by certain Australian domain name registrars who won’t be named here.

This was announced back in July with not much reaction from Slashdot and very little discussion anywhere else. Now that the policy is about to be in effect, everyone is spreading panic. Slashdot: new rules make domain hijacking easier. Kottke: ICANN’s Stupidity. Everyone is linking to this article at Netcraft that seems to have started it all.

I think the panic is a bit overblown (and a bit late; where was everyone’s concern in July?). If you read the actual policy it makes a couple of things clear:

  1. This policy is for registrar transfers, not ownership transfers. It doesn’t make it any easier for a domain to be hijacked, except perhaps by a corrupt registrar.
  2. The gaining registrar is still required to confirm the transfer: “A transfer must not be allowed to proceed if no confirmation is received by the Gaining Registrar.”

The big difference here is that the losing registrar has fewer ways to prevent the transfer. Considering the way some registrars have held domains hostage, this seems like a good thing to me, and I’ll avoid panic until I find a shred of evidence outside that Netcraft article. Ars Technica seems to agree.

Update: Jason Kottke has updated his entry to correct this–thanks for quoting me!

Another Update: Ross at Random Bytes has posted a rebuttal to Netcraft’s article that goes into lots of detail about the history behind this and the benefits of the new policy. Ross is also Director, Innovation and Research at Tucows, who assisted in initiating the new policy.

Google’s amazing expanding index

Filed under: — 2:06 pm

The search engine size war in review:

Anyone who works with large databases knows that “number of items” can be a very vague concept. Google’s number of items could vary depending on whether they decide to include or reject things like duplicates, obvious spam, malformed pages, newly crawled pages, and so on. Considering that they must index millions of new pages every day, it amuses me greatly that the “number of pages” at the bottom of the search page only changes when it’s convenient for marketing purposes.

November 4, 2004

Challenge-Response mail filters block themselves

Filed under: — 3:47 pm

Every week I get seven or eight email messages from challenge-response spam-blocking services like MailBlocks or Earthlink’s Spamblocker, asking me to take a few simple extra steps to get my email delivered. For the most part, I delete them.

I don’t believe in services like this because they start with the assumption that the recipient’s time is far more important than mine. Worse, most of the blocked messages are my responses to people’s questions. If you’re going to email me a question, don’t make it hard for me to get the answer to you. I won’t bother.

Now I read in techdirt that people like me aren’t the only thing keeping these systems from working. In an ironic twist, the challenge messages are being blocked by spam filters, so the sender often doesn’t see them at all.

This makes me wonder: what happens if everyone uses a service like this?

  • I send you a message.
  • Your spam blocker holds the message and sends me a challenge message.
  • My spam blocker holds the challenge message and sends you a challenge.
  • Your spam blocker holds the message….

I imagine MailBlocks has a way to prevent loops like this within their system, but if everyone uses different systems, what can they do? If they start whitelisting anything that looks like a challenge message, spam and viruses will start looking like challenge messages to get through. This whole idea just doesn’t scale.
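The ping-pong above can be put in code. Here’s a toy simulation, assuming (hypothetically) that neither blocker recognizes the other’s challenges; the point is that nothing is ever delivered, no matter how long you let it run:

```python
# Toy simulation of two different challenge-response systems mailing
# each other. Each side, not recognizing the sender, answers the held
# message with a challenge of its own instead of delivering it. We cap
# the rounds only to stop the loop; delivery never happens on its own.
def challenge_loop(max_rounds):
    pending = "original message"   # the mail currently held for verification
    delivered = []                 # mail that actually reached a human
    rounds = 0
    while rounds < max_rounds:
        # The receiver's blocker responds with a challenge, which the
        # other side's blocker holds in turn -- and so on, forever.
        pending = "challenge about: " + pending[:40]
        rounds += 1
    return rounds, delivered
```

Run it for ten rounds or ten thousand: `delivered` stays empty, which is the scaling problem in a nutshell.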

October 5, 2004

I won’t pay for *this* content

Filed under: — 10:57 am

A company called Research and Markets has released a report on the US Market for Subscription-based Websites. You can view a description of the report but the real thing will cost you $280. I normally would have no problem with that—really good research is worth money—but how good could it be based on their description?

Days of free web contents are over, when web surfers got a whole lot of something for entirely nothing; from sending free e-greetings to watch live video broadcasts of their favourite football team match without giving a damn to their wallet. These privileges has become history, nevertheless it would be remembered as a pleasant memories.

Now I realize that most of these errors are probably simple translation errors, but I have to assume that if they spent so little time and effort on editing the description, they probably didn’t spend enough time researching and fact-checking the report. On top of that, they just published a press release about this report despite the fact that it appears to have been written in January 2003.

March 9, 2004

RankPulse: Tracking Google Results

Filed under: — 4:03 am

RankPulse has created something I’ve wanted ever since the Google API was released: a site that polls Google on 1000 common search phrases, collects data daily, and provides some nice graphs. It’s good to see my quotations site at the top of one of their lists, and I have a feeling there are some great ideas for new sites buried in their database.

The list of keywords by number of results is particularly interesting. Who could have guessed there were far more results in Google for the words health and hotel than for the word computer? [via Searchblog]

Google’s New Look

Filed under: — 3:50 am

Google has been beta-testing a new design for a month or so. Bookmarklet guru Jesse Ruderman has created a bookmarklet that lets you see the new design. [via anil]

They didn’t change much, but one thing I notice is that the Directory link has disappeared from the front page. Are users really so search-oriented that Google thinks nobody will miss it?

March 4, 2004

Shame on Comcast

Filed under: — 4:08 am

I use Comcast for cable Internet, and I’m generally happy with that. My account there also came with an email address, which I’ve used solely for a few monitoring scripts that run on my server. Nobody knows the address.

A few weeks ago, I received a spam message addressed to my Comcast email address. I assumed it was a fluke, but over the last few weeks I’ve received about twenty more. I just changed my username and email address there.

There’s a remote, remote chance that someone was guessing random email addresses, but I don’t think mine would have been that easy to guess. So it seems like Comcast might have sold a list of customer addresses. I made the new one longer and more random, so if any spam comes to that one, I’ll know for sure.

Wireless hotspot security with HotSpotVPN

Filed under: — 4:02 am

A post at SXSWblog (and another at a.wholelottanothing) reminded me that wireless networking at most hotels and conference centers isn’t encrypted, and the last thing you want to do is attend a conference with a thousand geeks, all of whom know how to run a packet sniffer, without some sort of protection.

The post recommended a service called HotSpotVPN, which is an encrypted VPN you can connect to through the public network. I just signed up – the sign-up process is quick and easy if you have a PayPal account handy, and the first week of service is free. It’s $8.88 a month after that.

They also have some cute Flash movies to demonstrate the VPN setup process, which was very simple. (I recommend watching the whole movie before trying to follow along–the person who recorded them talks and moves the mouse rather quickly.)

I also thought I’d try their affiliate program, so if you sign up through this link I’ll make a small commission.

February 28, 2004

Mozilla Firefox

Filed under: — 4:02 am

I know that a blog entry about Mozilla is nearly as much of a cliché as a pop song about love, but I have to say I’m very impressed with the recently renamed Mozilla Firefox browser.

Firefox is quickly becoming my default browser for routine administration tasks. Its particular advantage is that it doesn’t randomly forget HTTP passwords the way Internet Explorer does, and it remembers usernames and passwords for the various sites I log into more conveniently than IE does.

Also, I previously mentioned some bookmarklets that make development easier in Mozilla. The Web Developer Extensions take this idea one step further with a toolbar that can disable various features, clear visited links, outline page elements and more. The EditCSS extension adds the ability to make changes to a stylesheet and view the results instantly. This is definitely my new favorite browser for web development.

February 20, 2004

Yahoo dumps Google

Filed under: — 1:43 am

Yahoo has stopped using Google results for their searches sooner than anyone expected. Their search engine is apparently based on Inktomi, but they’ve been very quiet about how it works.

I’m not sure if they’re using old data or if their crawler doesn’t understand redirected links like Google’s does, but several outdated URLs for my pages are in Yahoo’s index. The new URLs are also listed.

Currently, 36% of referral traffic to my largest site comes from Google and 13% comes from Yahoo. I haven’t seen any changes in these numbers since Yahoo’s switchover.

February 11, 2004

Down with virus warning messages

Filed under: — 4:16 am

Attention system administrators: Your well-intentioned but misguided attempt to warn me about viruses is driving me insane.

You see, the newest viruses forge the sender’s address. And when I say “newest”, I mean virtually all of them in the last three years. So when you send a “Virus ALERT” message to the sender’s address, you’re not reaching the infected victim. You’re reaching an innocent party–which often turns out to be me–and creating a nuisance almost as significant as the original virus.

I use a number of very old email addresses that are all over the web, meaning I get lots of viruses and that my address is often forged as the sender. I use ClamAV on my server to eliminate the virus messages, and it’s so effective I rarely see a virus message at all. However, I get 10-20 pointless virus warnings per day. Since they’re all different and don’t contain a virus signature, these are almost impossible to filter.
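The best a recipient can do is a crude heuristic filter. Here’s a sketch of one; the subject patterns are hypothetical examples I made up, which is exactly the problem – every antivirus product words its warnings differently, so any pattern list is incomplete.

```python
import re

# Hypothetical subject patterns -- real warnings vary by antivirus
# product, which is why filtering them reliably is nearly impossible.
WARNING_PATTERNS = [
    r"virus (alert|found|detected)",
    r"banned file",
    r"undeliverable.*virus",
]

def looks_like_virus_warning(subject):
    """Return True if a message subject resembles an automated
    antivirus warning, based on the pattern list above."""
    s = subject.lower()
    return any(re.search(p, s) for p in WARNING_PATTERNS)
```

A filter like this will always lag behind whatever wording the next product uses, which is the author’s point: the warnings should simply not be sent.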

So, if you’re an email administrator, or an anti-virus software company, please don’t compound the problem by sending out useless warnings. Thanks.

January 7, 2004

Too many blog search engines

Filed under: — 5:52 am

I was recently thinking of starting a blog search engine and aggregator. Then I ran across Ari Paparo’s big list of blog search engines. Maybe there are already enough.

I just added this site to ten or so of the search engines, so if you’re a new visitor who came from one of them, welcome!

December 31, 2003

RSS to NNTP gateway

Filed under: — 4:46 am

Gary Lawrence Murphy points to a beta RSS gateway that will allow you to read RSS feeds in an old-school NNTP newsreader. He points out that if this caught on, NNTP would solve many of the bandwidth problems of RSS.

My problem with the whole idea is that there isn’t a single Windows NNTP newsreader with a decent user interface. RSS readers, on the other hand, are getting better and better – I’ve recently been very impressed by FeedReader and FeedDemon. (I never actually use either one, but that’s a topic for later.)

December 29, 2003

Avoiding phishing scams

Filed under: — 4:26 am

A particular type of spam/scam called phishing is making the news more and more lately. This is a cute new name for the classic impersonation scheme, where you get an email claiming to be from PayPal, eBay, or your bank asking you to verify your username and password.

Right now these scam emails are pretty obvious to the informed–they make spelling and grammar mistakes the real company never would, include obvious fraudulent links, and ask for information no real company would ask for. But eventually one will be professional and subtle, so I thought I would share my strategy to guarantee these scammers can’t reach you. (More inside)


December 7, 2003

Why RSS isn’t always a good thing

Filed under: — 4:45 pm

Via Doc Searls: Gary Lawrence Murphy’s The End of RSS complains about the excessive bandwidth used by his RSS feed. Dan Sugalski also complains. They don’t know the half of it.

The Quotations Page offers RSS feeds to syndicate daily quotes. My logs show 74,257 requests for these files on a single day last week. Most clients downloaded the entire file, despite the fact that it changes only once every 24 hours. Based on this, the RSS feeds use 157 MB of bandwidth per day. That’s negligible to me (the rest of this busy site uses almost 5 GB per day), but I’ve had to do quite a bit of tweaking over the years to keep the sheer number of RSS requests from overwhelming the server.
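The standard fix for re-downloading an unchanged feed is HTTP conditional requests: the client remembers when it last fetched the file and sends `If-Modified-Since` (and `If-None-Match` if the server gave an ETag), so the server can answer with an empty 304 instead of the full feed. A minimal sketch of the client side – the function name is mine, not any particular aggregator’s:

```python
import email.utils

def conditional_headers(last_fetch_ts, etag=None):
    """Build HTTP request headers that let the server answer
    '304 Not Modified' instead of resending the whole feed.

    last_fetch_ts: Unix timestamp of the previous successful fetch.
    etag: the ETag header the server returned last time, if any.
    """
    headers = {
        # RFC 1123 date in GMT, as HTTP requires
        "If-Modified-Since": email.utils.formatdate(last_fetch_ts, usegmt=True),
    }
    if etag:
        headers["If-None-Match"] = etag
    return headers
```

A client sending these headers downloads the feed body at most once per change; every other poll costs a few hundred bytes instead of the whole file.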

In my case, a large part of the problem is Ximian Evolution, an information manager for the GNOME Linux desktop. My feeds are included by default in every installation, which amounted to a distributed denial-of-service attack against my site until I took measures against it. Thousands of copies of this software poll my site every five minutes.

Nearly 65% of my RSS requests come from Evolution, so I have configured Apache to return a 403 error for those requests. I hate to make the feed useless for these clients, but since my bug reports to the Evolution developers have been consistently ignored, I had little choice – and blocking them will cut my RSS bandwidth in half.
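The post doesn’t show the actual directives, but blocking a user agent like this is a few lines of Apache-era (1.3/2.0) configuration with mod_setenvif. The file pattern and variable name here are illustrative guesses, not the site’s real setup:

```apache
# Sketch only: flag requests whose User-Agent identifies Evolution,
# then deny those requests access to the feed files (Apache sends 403).
SetEnvIf User-Agent "Evolution" bad_rss_client
<Files "*.rss">
    Order Allow,Deny
    Allow from all
    Deny from env=bad_rss_client
</Files>
```

A 403 response is a few hundred bytes, so even though the blocked clients keep polling every five minutes, the bandwidth they consume collapses.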


(c) 2001-2007 Michael Moncur. All rights reserved, but feel free to quote me.