The #1 Biggest Mistake That People Make With Adsense
By Joel Comm
It's very easy to make a lot of money with AdSense. I know it's easy because in a short space of time, I've managed to turn the sort of AdSense revenues that wouldn't keep me in candy into the kind of income that pays the mortgage on a large suburban house, makes the payments on a family car and does a whole lot more besides.

But that doesn't mean there aren't plenty of mistakes you can make when trying to increase your AdSense income - and any one of those mistakes can keep you earning candy money instead of the sort of cash that can pay for your home.

There is one mistake though that will totally destroy your chances of earning a decent AdSense income before you've even started.

That mistake is making your ad look like an ad.

No one wants to click on an ad. Your users don't come to your site looking for advertisements. They come looking for content and their first instinct is to ignore everything else. And they've grown better and better at doing just that. Today's Internet users know exactly what a banner ad looks like. They know what it means, where to expect it - and they know exactly how to ignore it. In fact most Internet users don't even see the banners at the top of the Web pages they're reading or the skyscrapers running up the side.

But when you first open an AdSense account, the format and layout of the ads you receive will have been designed to look just like ads. That's the default setting for AdSense - and that's the setting that you have to work hard to change.

That's where AdSense gets interesting. There are dozens of different strategies that smart AdSense account holders can use to stop their ads from looking like ads - and make them look attractive to users. They include choosing the right formats for your ads, placing them in the most effective spots on the page, putting together the best combination of ad units, enhancing your site with the best keywords, selecting the right colors for the font and the background, and a whole lot more besides.

The biggest AdSense mistake you can make is leaving your AdSense units looking like ads.

The second biggest mistake you can make is to not know the best strategies to change them.

For more Google AdSense tips, visit http://adsense-secrets.com
Copyright © 2005 Joel Comm. All rights reserved

Friday, November 21, 2008

Scraper Site


A scraper site is a website that copies all of its content from other websites using web scraping. No part of a scraper site is original. A search engine is not a scraper site: sites such as Yahoo and Google gather content from other websites and index it so that the index can be searched with keywords. Search engines then display snippets of the original site content in response to a user's search.

In the last few years, largely due to the advent of the Google AdSense web advertising program, scraper sites have proliferated at an amazing rate, mostly for the purpose of spamming search engines. Open content sites such as Wikipedia are a common source of material for scraper sites.


Made for AdSense

Some scraper sites are created to monetize the site using advertising programs such as Google AdSense. In such cases, they are called Made for AdSense sites, or MFA. The term is also used derogatorily to refer to websites that have no redeeming value except getting visitors to the site for the sole purpose of clicking on advertisements.

Made for AdSense sites are considered search engine spam that dilutes search results by serving surfers less-than-satisfactory results. The scraped content is redundant: under normal circumstances the search engine would have shown the original source had no MFA website appeared in the listings.

These types of websites are being eliminated in various search engines and sometimes show up as supplemental results instead of being displayed in the initial search results.

Some sites engage in "AdSense arbitrage": they buy AdWords spots for low-cost search terms and send the visitor to a page that is mostly AdSense. The arbitrager then pockets the difference between the low-value clicks he bought from AdWords and the higher-value clicks this traffic generates on his MFA sites. In 2007, Google cracked down on this business model by closing the accounts of many arbitragers. Another way Google and Yahoo combat arbitrage is through quality scoring systems. In Google's case, for example, AdWords penalizes "low quality" advertiser pages by charging a higher per-click price for campaigns that point to them, which effectively evaporates the arbitrager's profit margin.
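To make the arbitrage arithmetic concrete, here is a rough sketch of the margin calculation; every number in it (click prices, traffic volume, click-through rate) is an invented assumption, not a figure from any real campaign.

# Hypothetical AdSense arbitrage margin calculation.
# All numbers below are illustrative assumptions, not real campaign data.

adwords_cost_per_click = 0.05      # what the arbitrager pays per visitor (USD)
visitors_bought = 10_000           # visitors bought through AdWords

adsense_click_rate = 0.30          # fraction of visitors who click an ad on the MFA page
adsense_earnings_per_click = 0.40  # what the arbitrager earns per AdSense click (USD)

cost = adwords_cost_per_click * visitors_bought
revenue = adsense_click_rate * visitors_bought * adsense_earnings_per_click
margin = revenue - cost

print(f"Cost:    ${cost:,.2f}")
print(f"Revenue: ${revenue:,.2f}")
print(f"Margin:  ${margin:,.2f}")

Raising the arbitrager's per-click cost, as the quality scoring systems described above do, shrinks the gap between cost and revenue until the model stops paying.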

Legality

Scraper sites may violate copyright law. Even taking content from an open content site can be a copyright violation, if done in a way which does not respect the license. For instance, the GNU Free Documentation License (GFDL) and Creative Commons ShareAlike (CC-BY-SA) licenses require that a republisher inform readers of the license conditions, and give credit to the original author.

Techniques


Many scrapers will pull snippets and text from websites that rank high for keywords they have targeted. This way they hope to rank highly in the SERPs (Search Engine Results Pages). RSS feeds are vulnerable to scrapers.
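As an illustration of how little effort feed scraping takes, here is a minimal sketch that pulls titles and links out of an RSS 2.0 feed using only the Python standard library; the feed URL is a placeholder.

# Minimal RSS scraping sketch (standard library only).
# The feed URL is a placeholder; substitute any real RSS 2.0 feed.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/feed.xml"

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 places each entry in <channel><item> with <title> and <link> children.
for item in tree.getroot().iterfind("./channel/item"):
    title = item.findtext("title", default="")
    link = item.findtext("link", default="")
    print(title, "->", link)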

Some scraper sites consist of advertisements and paragraphs of words randomly selected from a dictionary. Often a visitor will click on a pay-per-click advertisement because it is the only comprehensible text on the page. Operators of these scraper sites gain financially from these clicks. Ad networks such as Google AdSense claim to be constantly working to remove these sites from their programs, although there is an ongoing controversy about this, since the networks benefit directly from the clicks generated at this kind of site. From the advertiser's point of view, the networks do not seem to be making enough of an effort to stop the problem.

Scrapers tend to be associated with link farms and are sometimes perceived as the same thing when multiple scrapers link to the same target site. A frequently targeted victim site may even be accused of link-farm participation because of the artificial pattern of incoming links pointing to it from multiple scraper sites.

Web Scraping


Web scraping (sometimes called harvesting) generically describes any of various means to extract content from a website over HTTP for the purpose of transforming that content into another format suitable for use in another context. Those who scrape websites may wish to store the information in their own databases or manipulate the data within a spreadsheet (often a spreadsheet can hold only a fraction of the data scraped). Others use data extraction techniques as a means of obtaining the most recent data possible, particularly when working with information subject to frequent changes. Investors analyzing stock prices, realtors researching home listings, meteorologists studying weather, and insurance salespeople following insurance prices are a few examples of users who rely on frequently updated data.

Access to certain information may also provide users with strategic advantage in business. Attorneys might wish to scrape arrest records from county courthouses in search of potential clients. Businesses that know the locations of competitors can make better decisions about where to focus further growth. Another common, but controversial use of information taken from websites is reposting scraped data to other sites.

Scraper sites

A typical example application for web scraping is a web crawler that copies content from one or more existing websites in order to generate a scraper site. The result can range from fair use excerpts or reproduction of text and content, to plagiarized content. In some instances, plagiarized content may be used as an illicit means to increase traffic and advertising revenue. The typical scraper website generates revenue using Google AdSense, hence the term 'Made for AdSense' or MFA website.

Web scraping differs from screen scraping in the sense that a website is not really a visual screen but live HTML/JavaScript-based content with a graphical interface in front of it. Therefore, web scraping does not involve working at the visual interface as screen scraping does, but rather working on the underlying object structure (Document Object Model) of the HTML and JavaScript.

Web scraping also differs from screen scraping in that screen scraping typically occurs many times from the same dynamic screen "page", whereas web scraping occurs only once per web page over many different static web pages. Recursive web scraping, by following links to other pages over many web sites, is called "web harvesting". Web harvesting is necessarily performed by software called a bot or a "webbot", "crawler", "harvester" or "spider", with similar arachnological analogies used for other creepy-crawly aspects of their functions. Web harvesters are typically demonised, while "webbots" are often typecast as benevolent.
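A minimal sketch of web scraping in this sense, using only the Python standard library: the page is fetched over HTTP and its links are collected by parsing the markup structure rather than reading anything off a rendered screen. The URL is a placeholder.

# Fetch a page and extract its hyperlinks by parsing the markup,
# not by reading anything off a rendered screen.
# The URL is a placeholder; substitute a page you are allowed to scrape.
import urllib.request
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href attribute of every anchor tag encountered.
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

url = "http://example.com/"
with urllib.request.urlopen(url) as response:
    html = response.read().decode("utf-8", errors="replace")

parser = LinkExtractor()
parser.feed(html)
for link in parser.links:
    print(link)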

There are legal web scraping sites that provide free content and are commonly used by webmasters looking to populate a hastily made site with content, often to profit in some way from the traffic the articles hopefully bring. This content does not help the site's ranking in search engine results because the content is not original to that page, and original content is a priority for search engines. Use of free articles usually requires linking back to the free article site, as well as to any links provided by the author. This is not universal, however: some sites that provide free articles also have a clause in their terms of service that does not allow copying content, link back or not. The site Wikipedia.org (particularly the English Wikipedia) is a common target for web scraping.

Legal issues

Although scraping is against the terms of use of some websites, the enforceability of these terms is unclear. While outright duplication of original expression will in many cases be illegal, the courts ruled in Feist Publications v. Rural Telephone Service that duplication of facts is allowable. Also, in a February 2006 ruling, the Danish Maritime and Commercial Court (Copenhagen) found systematic crawling, indexing and deep linking by portal site ofir.dk of real estate site Home.dk not to conflict with Danish law or the database directive of the European Union.

U.S. courts have acknowledged that users of "scrapers" or "robots" may be held liable for committing trespass to chattels, which involves a computer system itself being considered personal property upon which the user of a scraper is trespassing. However, to succeed on a claim of trespass to chattels, the plaintiff must demonstrate that the defendant intentionally and without authorization interfered with the plaintiff's possessory interest in the computer system, and that the defendant's unauthorized use caused damage to the plaintiff. Not all cases of web spidering brought before the courts have been considered trespass to chattels.

In Australia, the 2003 Spam Act outlaws some forms of web harvesting.

Technical measures to stop bots

A web master can use various measures to stop or slow a bot. Some techniques include:

* Blocking an IP address. This will also block all browsing from that address.
* Adding entries to robots.txt. Well-behaved applications adhere to it, so Google and other reputable bots can be stopped this way.
* Blocking bots by the identity they declare. Well-behaved bots announce who they are (for example 'googlebot') and can be blocked on that basis; unfortunately, malicious bots may claim to be a normal browser.
* Monitoring for excess traffic and blocking addresses that generate it (a minimal sketch of this, combined with a user-agent check, follows this list).
* Using tools such as the CAPTCHA project to verify that a real person is accessing the site.
* Blocking bots with carefully crafted JavaScript.
* Locating bots with a honeypot or other method that identifies the IP addresses of automated crawlers.
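Here is a rough sketch of the user-agent and excess-traffic checks combined in one request filter. The blocklist keywords, the rate threshold and the window length are arbitrary assumptions; in practice such filtering usually lives in the web server or a firewall rather than in application code.

# Minimal bot-filtering sketch: refuse known bad user agents and
# throttle any IP address that exceeds a request-rate threshold.
# Thresholds and blocklist entries are arbitrary assumptions.
import time
from collections import defaultdict, deque

BLOCKED_AGENT_KEYWORDS = ("badbot", "scrapy", "curl")  # illustrative only
MAX_REQUESTS = 30       # allowed requests...
WINDOW_SECONDS = 60     # ...per rolling window

_recent = defaultdict(deque)  # ip -> timestamps of recent requests

def allow_request(ip, user_agent, now=None):
    """Return True if the request should be served, False if it should be refused."""
    now = time.time() if now is None else now

    # 1. Reject user agents that admit to being unwanted bots.
    agent = (user_agent or "").lower()
    if any(keyword in agent for keyword in BLOCKED_AGENT_KEYWORDS):
        return False

    # 2. Excess-traffic check: drop timestamps outside the window, then count.
    window = _recent[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False

    window.append(now)
    return True

# Example: the 31st request inside one minute from the same address is refused.
for i in range(31):
    ok = allow_request("203.0.113.7", "Mozilla/5.0", now=1000.0 + i)
print(ok)  # False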



Wednesday, November 5, 2008

Webby Award




The Webby Awards is an international award honoring excellence on the Internet, including websites, interactive advertising, online film and video, and mobile web sites, presented by The International Academy of Digital Arts and Sciences since 1996. There is also a second set of awards called the People's Voice Awards for the same categories which are given by popular vote.


History

The phrase The Webby Awards was used from 1994 to 1996 by the World Wide Web Organization, which was first introduced in 1994 by WebMagic, Cisco Systems and ADX Kentrox. As one of its services, it sponsored "the monthly Webby awards" to spotlight online innovation. Web.org was decommissioned in 1997.

The phrase The Webby Awards has been used since 1996 to describe an annual awards ceremony. It was initially sponsored by The Web magazine which was published by IDG, and produced by Tiffany Shlain. Winners were selected by a group which would officially become the International Academy of Digital Arts and Sciences (IADAS) in 1998. After The Web Magazine closed, the ceremonies continued.

In 2006, The Webby Awards launched three new award programs including categories honoring interactive advertising, mobile content, and the Webby Film and Video Awards, which honors original film and video premiering on the Internet. In 2008, the 12th Annual Webby Awards received nearly 10,000 entries from over 60 countries worldwide.

Awards granted

Categories

The Webby Awards are presented in over 100 categories among all four types of entries. A website can be entered in multiple categories and receive multiple awards.

In each category, two awards are handed out: a Webby Award selected by a panel of judges, and a People's Voice Award selected by the votes of visitors to The Webby Awards site.

Acceptance speeches

The Webbys are famous for limiting recipients to five-word speeches, which are often humorous. For example, in 2005, former Vice President Al Gore's speech was "Please don't recount this vote." He was introduced by Vint Cerf, who used the same format to state, "We all invented the Internet." At the 2007 awards, David Bowie's speech was "I only get five words? Shit, that was five. Four more there. That's three. Two."

Criticism

The Webbys have been criticized for their pay-to-enter and pay-to-attend policies, and for not taking most websites into consideration before distributing their awards.


Social Bookmarking




Social bookmarking is a method for Internet users to store, organize, search, and manage bookmarks of web pages on the Internet with the help of metadata.



In a social bookmarking system, users save links to web pages that they want to remember and/or share. These bookmarks are usually public, and can be saved privately, shared only with specified people or groups, shared only inside certain networks, or another combination of public and private domains. The allowed people can usually view these bookmarks chronologically, by category or tags, or via a search engine.

Most social bookmark services encourage users to organize their bookmarks with informal tags instead of the traditional browser-based system of folders, although some services feature categories/folders or a combination of folders and tags. They also enable viewing bookmarks associated with a chosen tag, and include information about the number of users who have bookmarked them. Some social bookmarking services also draw inferences from the relationship of tags to create clusters of tags or bookmarks.

Many social bookmarking services provide web feeds for their lists of bookmarks, including lists organized by tags. This allows subscribers to become aware of new bookmarks as they are saved, shared, and tagged by other users.

As these services have matured and grown more popular, they have added extra features such as ratings and comments on bookmarks, the ability to import and export bookmarks from browsers, emailing of bookmarks, web annotation, and groups or other social network features.

History

The concept of shared online bookmarks dates back to April 1996 with the launch of itList, the features of which included public and private bookmarks. Within the next three years, online bookmark services became competitive, with venture-backed companies such as Backflip, Blink, Clip2, ClickMarks, HotLinks, and others entering the market. They provided folders for organizing bookmarks, and some services automatically sorted bookmarks into folders (with varying degrees of accuracy). Blink included browser buttons for saving bookmarks; Backflip enabled users to email their bookmarks to others and displayed "Backflip this page" buttons on partner websites. Lacking viable models for making money, this early generation of social bookmarking companies failed as the dot-com bubble burst — Backflip closed citing "economic woes at the start of the 21st century". In 2005, the founder of Blink said, "I don't think it was that we were 'too early' or that we got killed when the bubble burst. I believe it all came down to product design, and to some very slight differences in approach."

Founded in 2003, del.icio.us pioneered tagging and coined the term social bookmarking. In 2004, as del.icio.us began to take off, Furl and Simpy were released, along with Citeulike and Connotea (sometimes called social citation services), and the related recommendation system Stumbleupon. In 2006, Ma.gnolia, Blue Dot, and Diigo entered the bookmarking field, and Connectbeam included a social bookmarking and tagging service aimed at businesses and enterprises. In 2007, IBM released its Lotus Connections product.

Sites such as Digg, reddit, and Newsvine offer a similar system for organization of "social news".

Advantages

With regard to creating a high-quality search engine, a social bookmarking system has several advantages over traditional automated resource location and classification software, such as search engine spiders. All tag-based classification of Internet resources (such as web sites) is done by human beings, who understand the content of the resource, as opposed to software, which algorithmically attempts to determine the meaning of a resource. Also, people tend to find and bookmark web pages that have not yet been noticed or indexed by web spiders. Additionally, a social bookmarking system can rank a resource based on how many times it has been bookmarked by users, which may be a more useful metric for end users than systems that rank resources based on the number of external links pointing to it.
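A toy sketch of the bookmark-count signal described above: tally how many distinct users saved each URL and rank the resources by that count. The bookmark data is invented.

# Toy illustration: rank resources by how many distinct users bookmarked them.
# The bookmark data below is invented.
from collections import defaultdict

bookmarks = [
    ("alice", "http://example.com/css-guide"),
    ("bob",   "http://example.com/css-guide"),
    ("carol", "http://example.com/css-guide"),
    ("alice", "http://example.com/recipes"),
    ("dave",  "http://example.com/recipes"),
    ("erin",  "http://example.com/obscure-gem"),
]

savers = defaultdict(set)  # url -> set of users who saved it
for user, url in bookmarks:
    savers[url].add(user)

# Most-bookmarked resources first: the ranking signal described above.
for url, users in sorted(savers.items(), key=lambda item: len(item[1]), reverse=True):
    print(len(users), url)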

For users, social bookmarking can be useful as a way to access a consolidated set of bookmarks from various computers, organize large numbers of bookmarks, and share bookmarks with contacts. Libraries have found social bookmarking to be useful as an easy way to provide lists of informative links to patrons.

Disadvantages

From the point of view of search data, there are drawbacks to such tag-based systems: no standard set of keywords (a lack of a controlled vocabulary), no standard for the structure of such tags (e.g., singular vs. plural, capitalization, etc.), mistagging due to spelling errors, tags that can have more than one meaning, unclear tags due to synonym/antonym confusion, unorthodox and personalized tag schemata from some users, and no mechanism for users to indicate hierarchical relationships between tags (e.g., a site might be labeled as both cheese and cheddar, with no mechanism that might indicate that cheddar is a refinement or sub-class of cheese).

Social bookmarking can also be susceptible to corruption and collusion. Due to its popularity, some users have started considering it as a tool to use along with search engine optimization to make their website more visible. The more often a web page is submitted and tagged, the better chance it has of being found. Spammers have started bookmarking the same web page multiple times and/or tagging each page of their web site using a lot of popular tags, obliging developers to constantly adjust their security system to overcome abuses.


Saturday, November 1, 2008

Predominant compensation methods in affiliate marketing

The following models are also referred to as performance-based pricing/compensation models, because the advertiser pays only if a visitor performs a desired action or completes a purchase. Advertiser and publisher share the risk of a visitor who does not convert.

Pay-per-sale (PPS) - (revenue share)

Cost-per-sale (CPS). Advertiser pays the publisher a percentage of the order amount (sale) that was created by a customer who was referred by the publisher. This form of compensation is also referred to as revenue sharing.

Pay-per-lead (PPL)/pay-per-action (PPA)

Cost-per-action or cost-per-acquisition (CPA), cost-per-lead (CPL). Advertiser pays publisher a commission for every visitor who is referred by the publisher to the advertiser's web site and performs a desired action, such as filling out a form, creating an account or signing up for a newsletter. This compensation model is very popular with online services from internet service providers, cell phone providers, banks (loans, mortgages, credit cards) and subscription services.
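To make the two models concrete, the sketch below compares a revenue-share (cost-per-sale) payout with a flat per-lead payout for a hypothetical publisher; the percentages, order amounts and lead counts are invented.

# Comparing pay-per-sale (revenue share) with pay-per-lead (flat fee).
# All rates and amounts below are invented for illustration.

# Pay-per-sale: publisher earns a percentage of each referred order.
revenue_share = 0.08            # 8% of the order amount
order_amounts = [120.00, 45.50, 300.00]
pps_earnings = sum(amount * revenue_share for amount in order_amounts)

# Pay-per-lead: publisher earns a flat commission per completed sign-up.
commission_per_lead = 1.50
leads = 40
ppl_earnings = leads * commission_per_lead

print(f"PPS earnings: ${pps_earnings:.2f}")   # $37.24
print(f"PPL earnings: ${ppl_earnings:.2f}")   # $60.00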

Special CPA compensation models

Pay-per-call

Similar to pay per click, pay per call is a business model for ad listings in search engines and directories that allows publishers to charge local advertisers on a per-call basis for each lead (call) they generate (CPA). Advertiser pays publisher a commission for phone calls received from potential prospects in response to a specific publisher ad.

The term "pay per call" is sometimes confused with click-to-call, the technology that enables the “pay-per-call” business model. Call-tracking technology allows to create a bridge between online and offline advertising. Click-to-call is a service which lets users click a button or link and immediately speak with a customer service representative. The call can either be carried over VoIP, or the customer may request an immediate call back by entering their phone number. One significant benefit to click-to-call providers is that it allows companies to monitor when online visitors change from the website to a phone sales channel.

Pay-per-call is not restricted to local advertisers. Many of the pay-per-call search engines allow advertisers with a national presence to create ads with local telephone numbers. Pay-per-call advertising is still new and in its infancy, but according to the Kelsey Group, the pay-per-phone-call market is expected to reach US$3.7 billion by 2010.
Pay-per-install (PPI)

Advertiser pays publisher a commission for every install by a user of an application, usually a free application bundled with adware. Users are first prompted to confirm that they really want to download and install this software. Pay per install is covered by the definition of pay per action (like cost-per-acquisition), but its association with how adware is distributed made this term more popular than pay per action for distinguishing it from other CPA offers that pay for software downloads. The term pay per install is also used beyond the download of adware.

Pricing models in search engine marketing

Pay-per-click (PPC)

Cost-per-click (CPC). Advertiser pays publisher a commission every time a visitor clicks on the advertiser's ad. How often an ad is displayed is irrelevant for the compensation; a commission is due only when the ad is clicked. See also click fraud.

Pay per action (PPA)

Cost-per-action (CPA). Search engines started to experiment with this compensation method in spring 2007.

Pricing modes in display advertising

Pay-per-impression (PPI)

Cost-per-mille (mille = Latin for thousand; M = the Roman numeral for 1,000) impressions. Publisher earns a commission for every 1,000 impressions (page views/displays) of text, banner image or rich media ads.
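A quick sketch of the CPM payout formula, with a CPC payout on the same traffic shown for comparison (see PPC/CPC above); the rates and the click-through rate are invented examples.

# CPM vs CPC earnings for the same traffic (all rates are invented examples).

impressions = 250_000

# Pay-per-impression: commission per 1,000 impressions (CPM).
cpm_rate = 2.00                  # USD per 1,000 impressions
cpm_earnings = impressions / 1000 * cpm_rate

# Pay-per-click on the same traffic, for comparison.
click_through_rate = 0.004       # 0.4% of impressions produce a click
cpc_rate = 0.35                  # USD per click
cpc_earnings = impressions * click_through_rate * cpc_rate

print(f"CPM earnings: ${cpm_earnings:.2f}")  # $500.00
print(f"CPC earnings: ${cpc_earnings:.2f}")  # $350.00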

Pay per action (PPA) or cost per action (CPA)

Cost-per-action (CPA). Used as a pricing mode in display advertising as early as 1998. By mid-2007 the CPA/performance pricing mode (50%) superseded the CPM pricing mode (45%) and became the dominant pricing mode for display advertising.

Shared CPM

Shared Cost-per-mil (CPM) is a pricing model in which two or more advertisers share the same ad space for the duration of a single impression (or page view) in order to save CPM costs. Publishers offering a shared CPM pricing model generally offer a discount to compensate for the reduced exposure received by the advertisers that opt to share online ad space in this way. Inspired by the rotating billboards of outdoor advertising, the shared CPM pricing model can be implemented with either refresh scripts (client-side JavaScript) or specialized rich media ad units. Publishers that opt to offer a shared CPM pricing model with their existing ad management platforms must employ additional tracking methods to ensure accurate impression counting and separate click-through tracking for each advertiser that opts to share a particular ad space with one or more other advertisers.

Compensation methods in contextual advertising

Pay-per-click (PPC)

See PPC/CPC in Search engine marketing.

Pay-per-impression (PPI)

See PPI/CPM in Display advertising.

Google AdSense offers this compensation method for its "Advertise on this site" feature that allows advertisers to target specific publisher sites within the Google content network.

Compensation methods grid

There are different names in use for the same type of compensation method, and some compensation methods are actually special cases of another method. The grid below lists the alternative names for the individual compensation methods, with the "cost per ..." name used as the default.

* Cost-per-sale (CPS) - pay-per-sale (PPS), revenue share
* Cost-per-lead (CPL) - pay-per-lead (PPL); a special case of cost-per-action
* Cost-per-action / cost-per-acquisition (CPA) - pay-per-action (PPA); pay-per-call and pay-per-install are special cases
* Cost-per-click (CPC) - pay-per-click (PPC)
* Cost per 1,000 impressions / cost-per-mille (CPM) - pay-per-impression (PPI)


Site Map and Biositemap

Here is a quick introduction to site maps to help you build a better site ...

A site map (or sitemap) is a representation of the architecture of a web site. It can be either a document in any form used as a planning tool for web design, or a web page that lists the pages on a web site, typically organized in hierarchical fashion. This helps visitors and search engine bots find pages on the site.

While some developers argue that site index is a more appropriately used term to relay page function, web visitors are used to seeing each term and generally associate both as one and the same. However, a site index is often used to mean an A-Z index that provides access to particular content, while a site map provides a general top-down view of the overall site contents.


Benefits of sitemaps

Site maps can improve search engine optimization of a site by making sure that all the pages can be found. This is especially important if a site uses Adobe Flash or JavaScript menus that do not include HTML links.

Most search engines will only follow a finite number of links from a page, so if a site is very large, the site map may be required so that search engines and visitors can access all content on the site.

XML sitemaps

Google introduced Google Sitemaps so web developers can publish lists of links from across their sites. The basic premise is that some sites have a large number of dynamic pages that are only available through the use of forms and user entries. The sitemap files can then be used to indicate to a web crawler how such pages can be found.
Google, MSN, Yahoo and Ask now jointly support the Sitemaps protocol.

Since MSN, Yahoo, Ask, and Google use the same protocol, having a sitemap lets the four biggest search engines have the updated page information. Sitemaps do not guarantee all links will be crawled, and being crawled does not guarantee indexing. However, a sitemap is still the best insurance for getting a search engine to learn about your entire site.

XML sitemaps have replaced the older method of "submitting to search engines" by filling out a form on the search engine's submission page. Now web developers submit a sitemap directly, or wait for search engines to find it.
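Here is a minimal sketch of producing such a sitemap file in Python. The URLs and dates are placeholders; a real sitemap would normally be generated from the site's own database or file tree and then referenced from robots.txt or submitted through each engine's webmaster tools.

# Generate a minimal sitemap.xml following the Sitemaps protocol
# (http://www.sitemaps.org/). The URLs and dates below are placeholders.
import xml.etree.ElementTree as ET

pages = [
    ("http://example.com/", "2008-11-01"),
    ("http://example.com/about.html", "2008-10-15"),
    ("http://example.com/articles/web-design.html", "2008-11-21"),
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for loc, lastmod in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "lastmod").text = lastmod

# Writes the file search engines can fetch, e.g. http://example.com/sitemap.xml
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)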

Biositemap

The Biositemaps Protocol allows scientists, engineers, centers and institutions engaged in modeling, software tool development and analysis of biomedical and informatics data to broadcast and disseminate to the world the information about their latest computational biology resources (data, software tools and web-services). The biositemap concept builds on the ideas behind crawler-friendly web servers, and it integrates the features of Sitemaps and RSS feeds into a decentralized mechanism for announcing updates to existing biomedical data and computing resources and the introduction of new ones. These site-, institution- or investigator-specific biositemap descriptions are posted online in XML format and are searched, parsed, monitored and interpreted by web search engines, human and machine interfaces, custom-designed web crawlers and other outlets interested in discovering updated or novel resources for bioinformatics and biomedical research investigations. The biositemap mechanism separates the providers of biomedical resources (investigators or institutions) from the consumers of resource content (researchers, clinicians, news media, funding agencies, educational and research initiatives).

A Biositemap is an XML file that lists the biomedical and bioinformatics resources for a specific research group or consortium. It allows developers of biomedical resources to completely describe the functionality and usability of each of their software tools, databases or web-services.



Biositemaps are useful in situations:

* when providers and consumers of bioinformatics and biomedical computing resources need to communicate in a scalable, efficient, agile and decentralized fashion. In these cases, a human (graphical) or a machine (computer) interface connects the descriptions of resources and facilitates the search, comparison and utilization of the most relevant resources for specific scientific studies. This infrastructure enables effective and timely matching of services and needs among biomedical investigators and the public in general.
* where meta-resources, computational or digital libraries need to update their contents to reflect the current state of newly developed biomedical materials and resources, using AJAX, JSON or WSDL protocols.

Biositemaps supplement and do not replace the existing frameworks for dissemination of data, tools and services. By broadcasting a relevant and up-to-date Biositemap file on the web, investigators and institutions help different engines' crawlers, machine interfaces and users dynamically acquire, interpret, process and utilize the most accurate information about the state of the resources disseminated by the developing group. Using the biositemap protocol does not guarantee that your resources will be included in search indexes, nor does it influence the way that your tools are ranked or perceived by the community.


About Google Guidelines

Here is what you need to know about the Google guidelines. The Google webmaster guidelines are a list of suggested practices Google has provided as guidance to webmasters. Websites that do not follow some of the guidelines may be removed from the Google index. A website experiencing problems being indexed or ranked well can find direction in the guidelines. Websites that do not follow all of the guidelines often experience a lower ranking in Google's search engine results or complete removal from the Google index. There are currently thirty-one guidelines, split into four categories:

1. Quality: There are five "basic principles" and eight "specific guidelines" in this category. These guidelines are directed toward deceptive behavior and manipulation attempts that may lessen the quality of the Google search engine results. Violations of the quality guidelines are the most common reason for a website being removed from Google's index.
2. Technical: There are five guidelines in this category. These guidelines cover specific issues that may inhibit a web page from being seen by Googlebot, which is Google's search engine crawler.
3. Design and content: There are nine guidelines in this category. These guidelines give practical information to webmasters concerning the way their site is built and represent the most common unintentional mistakes that webmasters make.
4. When your site is ready: There are five guidelines in this category. These guidelines provide specific direction for a webmaster who has created a new site, and are also relevant for older sites which are not yet in the Google index.

Let's get blogging!


Web Design

Web page design is a process of conceptualization, planning, modeling, and execution of electronic media content delivery via the Internet, in the form of technologies (such as markup languages) suitable for interpretation and display by a web browser or other web-based graphical user interfaces (GUIs).

The intent of web design is to create a web site (a collection of electronic files residing on one or more web servers) that presents content (including interactive features or interfaces) to the end user in the form of web pages once requested. Such elements as text, forms, and bit-mapped images (GIFs, JPEGs, PNGs) can be placed on the page using HTML, XHTML, or XML tags. Displaying more complex media (vector graphics, animations, videos, sounds) usually requires plug-ins such as Flash, QuickTime, Java run-time environment, etc. Plug-ins are also embedded into web pages by using HTML or XHTML tags.

Improvements in the various browsers' compliance with W3C standards prompted a widespread acceptance of XHTML and XML in conjunction with Cascading Style Sheets (CSS) to position and manipulate web page elements. The latest standards and proposals aim at leading to the various browsers' ability to deliver a wide variety of media and accessibility options to the client possibly without employing plug-ins.

Typically web pages are classified as static or dynamic.

* Static pages don’t change content and layout with every request unless a human (web master or programmer) manually updates the page.

* Dynamic pages adapt their content and/or appearance depending on the end-user’s input or interaction, or on changes in the computing environment (user, time, database modifications, etc.). Content can be changed on the client side (end-user's computer) by using client-side scripting languages (JavaScript, JScript, ActionScript, media players and PDF reader plug-ins, etc.) to alter DOM elements (DHTML). Dynamic content is often compiled on the server utilizing server-side scripting languages (PHP, ASP, Perl, ColdFusion, JSP, Python, etc.). Both approaches are usually used in complex applications; a minimal server-side sketch follows this list.
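As promised above, here is a minimal sketch of the server-side approach: the page body is assembled at request time from a query parameter, unlike a static file that is identical for every request. The port number and the greeting logic are arbitrary choices for this sketch.

# Minimal server-side dynamic page: the response is assembled per request,
# unlike a static file that is identical for every visitor.
# The port number and greeting logic are arbitrary choices for this sketch.
from html import escape
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        name = escape(query.get("name", ["visitor"])[0])
        body = f"<html><body><h1>Hello, {name}!</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    # Requesting http://localhost:8000/?name=Ana returns a page built for that request.
    HTTPServer(("localhost", 8000), DynamicHandler).serve_forever()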

With growing specialization within communication design and information technology fields, there is a strong tendency to draw a clear line between web design specifically for web pages and web development for the overall logistics of all web-based services.

Web Site Design

A web site is a collection of information about a particular topic or subject. Designing a web site is defined as the arrangement and creation of web pages that in turn make up a web site. A web page consists of information for which the web site is developed. A web site might be compared to a book, where each page of the book is a web page.

There are many aspects (design concerns) in this process, and due to the rapid development of the Internet, new aspects may emerge. For non-commercial web sites, the goals may vary depending on the desired exposure and response. For typical commercial web sites, the basic aspects of design are:

* The content: the substance, and information on the site should be relevant to the site and should target the area of the public that the website is concerned with.
* The usability: the site should be user-friendly, with the interface and navigation simple and reliable.
* The appearance: the graphics and text should include a single style that flows throughout, to show consistency. The style should be professional, appealing and relevant.
* The visibility: the site must also be easy to find via most, if not all, major search engines and advertisement media.

A web site typically consists of text and images. The first page of a web site is known as the Home page or Index. Some web sites use what is commonly called a Splash Page. Splash pages might include a welcome message, language or region selection, or disclaimer. Each web page within a web site is an HTML file which has its own URL. After each web page is created, they are typically linked together using a navigation menu composed of hyperlinks. Faster browsing speeds have led to shorter attention spans and more demanding online visitors and this has resulted in less use of Splash Pages, particularly where commercial web sites are concerned.

Once a web site is completed, it must be published or uploaded in order to be viewable to the public over the internet. This may be done using an FTP client. Once published, the web master may use a variety of techniques to increase the traffic, or hits, that the web site receives. This may include submitting the web site to a search engine such as Google or Yahoo, exchanging links with other web sites, creating affiliations with similar web sites, etc.

Multidisciplinary requirements

Web site design crosses multiple disciplines of information systems, information technology and communication design. The web site is an information system whose components are sometimes classified as front-end and back-end. The observable content (e.g. page layout, user interface, graphics, text, audio) is known as the front-end. The back-end comprises the organization and efficiency of the source code, invisible scripted functions, and the server-side components that process the output from the front-end. Depending on the size of a Web development project, it may be carried out by a multi-skilled individual (sometimes called a web master), or a project manager may oversee collaborative design between group members with specialized skills.

Issues

As in collaborative designs, there are conflicts between differing goals and methods of web site designs. These are a few of the ongoing ones.

Lack of collaboration in design

In the early stages of the web, there wasn't as much collaboration between web designs and larger advertising campaigns, customer transactions, social networking, intranets and extranets as there is now. Web pages were mainly static online brochures disconnected from the larger projects.

Many web pages are still disconnected from larger projects. Special design considerations are necessary for use within these larger projects. These design considerations are often overlooked, especially in cases where there is a lack of leadership, lack of understanding of why and technical knowledge of how to integrate, or lack of concern for the larger project in order to facilitate collaboration. This often results in unhealthy competition or compromise between departments, and less than optimal use of web pages.

Liquid versus fixed layouts

On the web the designer has no control over several factors, including the size of the browser window, the web browser used, the input devices used (mouse, touch screen, voice command, text, cell phone number pad, etc.) and the size and characteristics of available fonts.

Some designers choose to control the appearance of the elements on the screen by using specific width designations. This control may be achieved through the use of a HTML table-based design or a more semantic div-based design through the use of CSS. Whenever the text, images, and layout of a design do not change as the browser changes, this is referred to as a fixed width design. Proponents of fixed width design prefer precise control over the layout of a site and the precision placement of objects on the page. Other designers choose a liquid design. A liquid design is one where the design moves to flow content into the whole screen, or a portion of the screen, no matter what the size of the browser window. Proponents of liquid design prefer greater compatibility and using the screen space available. Liquid design can be achieved through the use of CSS, by avoiding styling the page altogether, or by using HTML tables (or more semantic divs) set to a percentage of the page. Both liquid and fixed design developers must make decisions about how the design should degrade on higher and lower screen resolutions. Sometimes the pragmatic choice is made to flow the design between a minimum and a maximum width. This allows the designer to avoid coding for the browser choices making up The Long Tail, while still using all available screen space. Depending on the purpose of the content, a web designer may decide to use either fixed or liquid layouts on a case-by-case basis.

Similar to liquid layout is the optional fit to window feature with Adobe Flash content. This is a fixed layout that optimally scales the content of the page without changing the arrangement or text wrapping when the browser is resized.

Flash

Adobe Flash (formerly Macromedia Flash) is a proprietary, robust graphics animation or application development program used to create and deliver dynamic content, media (such as sound and video), and interactive applications over the web via the browser.

Flash is not a standard produced by a vendor-neutral standards organization like most of the core protocols and formats on the Internet. Flash is much more restrictive than the open HTML format, though, requiring a proprietary plugin to be seen, and it does not integrate with most web browser UI features like the "Back" button.

According to a study, 98% of US Web users have the Flash Player installed. The percentage has remained fairly constant over the years; for example, a study conducted by NPD Research in 2002 showed that 97.8% of US Web users had the Flash player installed. Numbers vary depending on the detection scheme and research demographics.

Many graphic artists use Flash because it gives them exact control over every part of the design, and anything can be animated and generally "jazzed up". Some application designers enjoy Flash because it lets them create applications that do not have to be refreshed or go to a new web page every time an action occurs. Flash can use embedded fonts instead of the standard fonts installed on most computers. There are many sites which forgo HTML entirely for Flash. Other sites may use Flash content combined with HTML as conservatively as gifs or jpegs would be used, but with smaller vector file sizes and the option of faster loading animations. Flash may also be used to protect content from unauthorized duplication or searching. Alternatively, small, dynamic Flash objects may be used to replace standard HTML elements (such as headers or menu links) with advanced typography not possible via regular HTML or CSS (see Scalable Inman Flash Replacement).

Flash detractors claim that Flash websites tend to be poorly designed and often use confusing, non-standard user interfaces, such as the inability to scale according to the size of the web browser or incompatibility with common browser features such as the back button. Until recently, search engines were unable to index Flash objects, which prevented such sites from having their contents easily found; this is because many search engine crawlers rely on text to index websites. It is possible to specify alternate content to be displayed for browsers that do not support Flash. Using alternate content also helps search engines understand the page, and can result in much better visibility for the page. However, the vast majority of Flash websites are not disability accessible (for screen readers, for example) or Section 508 compliant. An additional issue is that sites which serve different content to search engines than to their human visitors are usually judged to be spamming search engines and are automatically banned.

The most recent incarnation of Flash's scripting language (called "ActionScript", which is an ECMA language similar to JavaScript) incorporates long-awaited usability features, such as respecting the browser's font size and allowing blind users to use screen readers. Actionscript 2.0 is an Object-Oriented language, allowing the use of CSS, XML, and the design of class-based web applications.

CSS versus tables for layout



When Netscape Navigator 4 dominated the browser market, the popular solution available for designers to lay out a Web page was by using tables. Often even simple designs for a page would require dozens of tables nested in each other. Many web templates in Dreamweaver and other WYSIWYG editors still use this technique today. Navigator 4 didn't support CSS to a useful degree, so it simply wasn't used.

After the browser wars subsided, and the dominant browsers such as Internet Explorer became more W3C compliant, designers started turning toward CSS as an alternate means of laying out their pages. CSS proponents say that tables should be used only for tabular data, not for layout. Using CSS instead of tables also returns HTML to a semantic markup, which helps bots and search engines understand what's going on in a web page. All modern Web browsers support CSS with different degrees of limitations.

However, one of the main points against CSS is that by relying on it exclusively, control is essentially relinquished as each browser has its own quirks which result in a slightly different page display. This is especially a problem as not every browser supports the same subset of CSS rules. For designers who are used to table-based layouts, developing Web sites in CSS often becomes a matter of trying to replicate what can be done with tables, leading some to find CSS design rather cumbersome due to lack of familiarity. For example, at one time it was rather difficult to produce certain design elements, such as vertical positioning, and full-length footers in a design using absolute positions. With the abundance of CSS resources available online today, though, designing with reasonable adherence to standards involves little more than applying CSS 2.1 or CSS 3 to properly structured markup.

These days most modern browsers have solved most of these quirks in CSS rendering and this has made many different CSS layouts possible. However, some people continue to use old browsers, and designers need to keep this in mind, and allow for graceful degrading of pages in older browsers. Most notable among these old browsers are Internet Explorer 5 and 5.5, which, according to some web designers, are becoming the new Netscape Navigator 4 — a block that holds the World Wide Web back from converting to CSS design. However, the W3 Consortium has made CSS in combination with XHTML the standard for web design.

Form versus Function

Some web developers have a graphic arts background and may pay more attention to how a page looks than considering other issues such as how visitors are going to find the page via a search engine. Some might rely more on advertising than search engines to attract visitors to the site. On the other side of the issue, search engine optimization consultants (SEOs) are concerned with how well a web site works technically and textually: how much traffic it generates via search engines, and how many sales it makes, assuming looks don't contribute to the sales. As a result, the designers and SEOs often end up in disputes where the designer wants more 'pretty' graphics, and the SEO wants lots of 'ugly' keyword-rich text, bullet lists, and text links. One could argue that this is a false dichotomy due to the possibility that a web design may integrate the two disciplines for a collaborative and synergistic solution. Because some graphics serve communication purposes in addition to aesthetics, how well a site works may depend on the graphic designer's visual communication ideas as well as the SEO considerations.

Another problem when using a lot of graphics on a page is that download times can be greatly lengthened, often irritating the user. This has become less of a problem as the internet has evolved with high-speed internet and the use of vector graphics. This is an engineering challenge to increase bandwidth in addition to an artistic challenge to minimize graphics and graphic file sizes. This is an on-going challenge as increased bandwidth invites increased amounts of content.

Accessible Web design

Web accessibility

To be accessible, web pages and sites must conform to certain accessibility principles. These can be grouped into the following main areas:

* use semantic markup that provides a meaningful structure to the document (i.e. web page)
* Semantic markup also refers to semantically organizing the web page structure and publishing web services descriptions accordingly so that they can be recognized by other web services on different web pages. Standards for the semantic web are set by the W3C.
* use a valid markup language that conforms to a published DTD or Schema
* provide text equivalents for any non-text components (e.g. images, multimedia)
* use hyperlinks that make sense when read out of context. (e.g. avoid "Click Here.")
* don't use frames
* use CSS rather than HTML Tables for layout.
* author the page so that when the source code is read line-by-line by user agents (such as screen readers) it remains intelligible. (Using tables for design will often result in information that is not.)

However, W3C permits an exception where tables for layout either make sense when linearized or an alternate version (perhaps linearized) is made available.

Website accessibility is also changing as it is impacted by content management systems that allow changes to be made to web pages without the need for programming knowledge.

Website Planning

Before creating and uploading a website, it is important to take the time to plan exactly what is needed in the website. Thoroughly considering the audience or target market, as well as defining the purpose and deciding what content will be developed are extremely important.

Purpose

It is essential to define the purpose of the website as one of the first steps in the planning process. A purpose statement should show focus based on what the website will accomplish and what the users will get from it. A clearly defined purpose will help the rest of the planning process as the audience is identified and the content of the site is developed. Setting short and long term goals for the website will help make the purpose clear and plan for the future when expansion, modification, and improvement will take place. Also, goal-setting practices and measurable objectives should be identified to track the progress of the site and determine success.

Audience

Defining the audience is a key step in the website planning process. The audience is the group of people who are expected to visit your website – the market being targeted. These people will be viewing the website for a specific reason and it is important to know exactly what they are looking for when they visit the site. A clearly defined purpose or goal of the site, as well as an understanding of what visitors want to do or feel when they come to your site, will help to identify the target audience. Upon considering who is most likely to need or use the content, a list of characteristics common to those users can be drawn up, such as:

* Audience Characteristics
* Information Preferences
* Computer Specifications
* Web Experience

Taking into account the characteristics of the audience will allow an effective website to be created that will deliver the desired content to the target audience.

Content

Content evaluation and organization requires that the purpose of the website be clearly defined. Collecting a list of the necessary content then organizing it according to the audience's needs is a key step in website planning. In the process of gathering the content being offered, any items that do not support the defined purpose or accomplish target audience objectives should be removed. It is a good idea to test the content and purpose on a focus group and compare the offerings to the audience needs. The next step is to organize the basic information structure by categorizing the content and organizing it according to user needs. Each category should be named with a concise and descriptive title that will become a link on the website. Planning for the site's content ensures that the wants or needs of the target audience and the purpose of the site will be fulfilled.

Compatibility and restrictions

Because of the market share of the dominant browsers (which varies with your target market), the compatibility of your website with viewers is restricted. For instance, a website designed for the majority of websurfers will be limited to the use of valid XHTML 1.0 Strict or older, Cascading Style Sheets Level 1, and 1024x768 display resolution. This is because Internet Explorer is not fully W3C standards compliant with the modularity of XHTML 1.1 and the majority of CSS beyond level 1. A target market of more alternative-browser users (e.g. Firefox and Opera) allows for more W3C compliance and thus a greater range of options for a web designer.

Another restriction on webpage design is the use of different Image file formats. The majority of users can support GIF, JPEG, and PNG (with restrictions). Again Internet Explorer is the major restriction here, not fully supporting PNG's advanced transparency features, resulting in the GIF format still being the most widely used graphic file format for transparent images.

Many website incompatibilities go unnoticed by the designer and unreported by the users. The only way to be certain a website will work on a particular platform is to test it on that platform.

Planning Documentation

Documentation is used to visually plan the site while taking into account the purpose, audience and content, to design the site structure, content and interactions that are most suitable for the website. Documentation may be considered a prototype for the website – a model which allows the website layout to be reviewed, resulting in suggested changes, improvements and/or enhancements. This review process increases the likelihood of success of the website.

First, the content is categorized and the information structure is formulated. The information structure is used to develop a document or visual diagram called a site map. This creates a visual of how the web pages will be interconnected, which helps in deciding what content will be placed on what pages. There are three main ways of diagramming the website structure:

* Linear Website Diagrams will allow the users to move in a predetermined sequence;
* Hierarchical structures (or Tree Design Website Diagrams) provide more than one path for users to take to their destination;
* Branch Design Website Diagrams allow for many interconnections between web pages such as hyperlinks within sentences.

In addition to planning the structure, the layout and interface of individual pages may be planned using a storyboard. In the process of storyboarding, a record is made of the description, purpose and title of each page in the site, and they are linked together according to the most effective and logical diagram type. Depending on the number of pages required for the website, documentation methods may include using pieces of paper and drawing lines to connect them, or creating the storyboard using computer software.

Some or all of the individual pages may be designed in greater detail as a website wireframe, a mock up model or comprehensive layout of what the page will actually look like. This is often done in a graphic program, or layout design program. The wireframe has no working functionality, only planning, though it can be used for selling ideas to other web design companies.
