Tuesday, December 4, 2007

Website Traffic Analysis: Is Weblog Analysis Adequate?

Summary:
The Internet bubble may have burst, but Internet marketing is stronger than
ever. Paid search engine advertising is growing at a stratospheric rate of 25%
per quarter (not per year). Internet sales are growing much faster than overall
retail sales, and Amazon's stock has nearly tripled in a year.







What is happening is that organizations of every size are realizing that, while
the Internet hype may have been overdone, the importance of Internet marketing
grows stronger every day. Internet marketing is about bringing traffic to your
website and then converting that traffic toward your strategic goals, be they
selling goods, building awareness, or delivering information.



In this context, understanding the traffic to your website becomes critically
important. Simple metrics such as hit counters or raw page views are inadequate
for marketing purposes. Marketing managers and small business owners need to
know who is visiting, from where, when, why, and what visitors do once they are
on the website. These are the five Ws of Internet marketing.





Currently this type of analysis is conducted in two ways: weblog analysis and
tracking code analysis. Web logs are provided by web hosting providers, and both
online and offline tools exist to analyze them. Tracking code analyses are
provided online by several vendors, such as Hit Box and Web Site Traffic Report.



Although both types of analysis are supposed to count the same thing (visitor
actions on the website), there are technical differences that are subtle in
scope but significant in impact. Tracking code systems are generally more direct
and more accurate. Further, tracking code systems are so inexpensive (usually
starting at about $10/month) that every organization should investigate their
use: the cost of acting on inaccurate statistics far exceeds the minor cost of a
tracking code system.
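Tracking code systems typically work by embedding a small snippet near the bottom of each page. A minimal sketch of the idea follows; the tracker URL, site ID, and parameter names are hypothetical, not those of any particular vendor:

```html
<!-- Hypothetical tracking snippet; a real vendor supplies its own URL and ID. -->
<!-- Placed just before </body>, so it only fires once the page has rendered. -->
<script type="text/javascript">
  // Report one page view by requesting a 1x1 image from the tracking server.
  var img = new Image(1, 1);
  img.src = "http://tracker.example.com/hit?site=YOUR_SITE_ID" +
            "&page=" + encodeURIComponent(location.pathname);
</script>
<noscript>
  <!-- Fallback for browsers without JavaScript. -->
  <img src="http://tracker.example.com/hit?site=YOUR_SITE_ID"
       width="1" height="1" alt="" />
</noscript>
```

Because the snippet sits at the end of the page and runs in the visitor's browser, it naturally skips cached-page blind spots and most spider traffic, which is the source of the accuracy differences discussed below.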




The table below highlights the main technical differences between the two
systems and their impact on the reported numbers.


Feature: HTML frames (a single webpage may contain two, three, four, or more frames)
- Weblog system: often counts each frame as a separate page view.
- Tracking code system: counts the page only once.
- What it means: a serious overcount, and a serious discrepancy, with weblog analysis.

Feature: Cached pages (many ISPs store pages on proxy servers to improve performance)
- Weblog system: cached pages are mostly not counted.
- Tracking code system: properly counts each cached page as a page view.
- What it means: an undercount by weblog analysis; its seriousness depends on how heavily proxy servers are used.

Feature: IP address pools
- Weblog system: many ISPs draw from a pool of IP addresses that change over time, even during a session; weblog systems identify users by IP address and thus may count one user several times.
- Tracking code system: uses an internal session cache that does not rely on the IP address.
- What it means: the overcount by weblog analysis can be significant.

Feature: False page views
- Weblog system: if a user skips past a page before it has fully loaded, the weblog system still counts the skipped page as a page view.
- Tracking code system: the page view is not counted until the tracking code (usually at the bottom of the page) has loaded.
- What it means: the overcount by weblog systems can be significant.

Feature: Artificial traffic
- Weblog system: scores of search engine spiders crawl the web every day, and every spider hit is counted as a page view.
- Tracking code system: spidering is usually too quick to be registered by the tracking code.
- What it means: as the number of search engines grows, the overcount by weblog systems can be serious.

Feature: Placement of tracking code
- Weblog system: no impact.
- Tracking code system: on very slow connections, the tracking code at the end of the page may not load before the user moves on, even when most of the page has been read.
- What it means: a relatively rare problem, and even less significant with faster connections; tracking code systems may yield a minor undercount, if any.





The comparison above clearly shows that tracking code systems provide better
data, and good data is the foundation of any analysis.



Not all tracking code systems are alike. Many produce scores of tables and
charts in which the critical insights get lost. We recommend Web Site Traffic
Report (www.websitetrafficreport.com), where the key data is intuitively
organized in a single report that is emailed to users automatically every day,
while users retain the option to go online at any time and conduct additional
analyses.

10 Web Design Tips

From "101 Tips on Search Engine Optimization" (ebook)
by: Perfect Optimization
(August 2006)



Many web designers pay no attention to coding structure, with the result that
no one can find their websites in search engines. In many cases search engine
spiders cannot crawl 100% of a site because of huge page sizes, HTML coding
errors, or navigation problems caused by scripts. By following these ten tips
while designing and developing your website, you can improve your web presence.



1) Table structure: Nested tables inflate page size and can slow download
times. Use separate tables wherever possible; better still, rather than using
tables, use CSS to format your page layouts.
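As a sketch of the CSS alternative, a two-column layout that would otherwise require nested tables can be built with a few style rules; the id names here are illustrative:

```html
<style type="text/css">
  /* Two columns via floats; no table markup needed. */
  #sidebar { float: left; width: 25%; }
  #content { float: left; width: 75%; }
  #footer  { clear: both; }  /* footer drops below both columns */
</style>
<div id="sidebar">Navigation links...</div>
<div id="content">Main page text...</div>
<div id="footer">Copyright notice...</div>
```

The markup stays small and the spider sees the page text directly, rather than buried several table cells deep.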




2) Page size: Small web pages load faster. Keep each page below 50 KB so that
search engine spiders can crawl your entire website without delay.



3) Frames: Do not use frames. Some search engines do not support frames at
all, and frames can cause display and alignment problems in some browsers, as
well as trouble when bookmarking a page. Remember, whatever you are trying to
accomplish with frames can usually be done with CSS.









4) Alt tags: Minimize the use of graphics and Flash. If your site uses
graphics, describe each one with ALT text. Search engine spiders cannot read
text within images, so provide alternative text to tell the spiders what the
images mean.
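For example, an image tag with descriptive ALT text might look like this (the filename and description are made up for illustration):

```html
<!-- Without the alt attribute the spider sees nothing here; -->
<!-- with it, the image contributes indexable text. -->
<img src="blue-widget.jpg" alt="Blue widget, model 42, side view"
     width="200" height="150" />
```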




5) Navigation: Every page on your website should be easy to reach. JavaScript
links and image maps can keep spiders from crawling your entire site, so add
text links at the bottom of each page to help search engine spiders navigate.
Check your web pages in a text-only browser and make sure you can move easily
from page to page.



6) Title and meta tags: Add title and meta tags to every page on your site,
and make each page's title and meta tags unique.
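A page head with its own unique title and meta tags might look like this (the site name and wording are invented for illustration):

```html
<head>
  <!-- Each page gets its own title and description; never reuse one site-wide. -->
  <title>Blue Widgets - Acme Widget Co.</title>
  <meta name="description"
        content="Specifications and pricing for Acme blue widgets." />
  <meta name="keywords" content="blue widgets, acme, widget pricing" />
</head>
```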



7) Sitemap: Provide a sitemap linked from your homepage. This is essential if
your site has dynamic pages whose URLs contain special characters such as ?, %,
& or #; because such pages are generated automatically, search engine spiders
often cannot follow their links.



8) HTML code: Internet Explorer will display your page even if the HTML
contains minor errors, but those errors can be a major problem for search
engine spiders. I recommend validating your HTML with one of the online tools
available on the net; meeting W3C standards can give your web pages greater
visibility in web searches.




9) Cascading Style Sheets: CSS lets you control the presentation of your web
page independently of its structure. Using CSS brings benefits such as smaller
file sizes, better browser compatibility, and search engine friendliness.



10) External CSS/JavaScript files: Call all your JavaScript and CSS from
external files, so that spiders can easily reach your content without wading
through script and style code.
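For example, the head of each page can pull in shared style and script files; the filenames here are illustrative:

```html
<head>
  <!-- Styles and scripts live in separate files, keeping the page itself
       lean and its visible text near the top of the HTML. -->
  <link rel="stylesheet" type="text/css" href="styles.css" />
  <script type="text/javascript" src="menu.js"></script>
</head>
```

As a bonus, browsers cache the external files, so every page after the first loads faster for visitors as well as for spiders.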