Domain age | n/a |
Expiration date | n/a |
PR | 3 |
IKS (Yandex quality index) | n/a |
Pages indexed in Google | 14 |
Pages indexed in Yandex | 7 |
DMOZ | No |
Yandex Catalog | No |
Alexa Traffic Rank | 8676767 |
Alexa Country | No data |
Title | Nanaimo BC Canada, Nanaimo Hotels, Nanaimo Motels, Nanaimo Lodging, Nanaimo Accommodation, Nanaimo Acommodation |
Description | n/a |
Keywords | n/a |
Encoding | ISO-8859-1 |
Page size | 19.08 KB |
Word count | 1 396 |
Characters (with spaces) | 9 966 |
Characters (without spaces) | 8 254 |
Data provided by the semrush service
Site | Common keywords | PR | TIC | Alexa Rank | Alexa Country Rank |
---|---|---|---|---|---|
weather.gc.ca | 12 | 8 | 0 | 6338 | 110 |
theweathernetwork.com | 10 | 8 | 0 | 1250 | 28 |
wikipedia.org | 10 | 9 | 0 | 5 | 6 |
viu.ca | 8 | 6 | 50 | 45249 | 1426 |
tripadvisor.com | 8 | 7 | 0 | 264 | 81 |
nanaimo.ca | 8 | 6 | 0 | 357654 | 13062 |
youtube.com | 6 | 9 | 0 | 2 | 2 |
tourismnanaimo.com | 6 | 5 | 0 | 1343895 | 63966 |
wunderground.com | 5 | 8 | 0 | 581 | 153 |
hellobc.com | 5 | 7 | 0 | 220012 | 9535 |
linkpad data ( 15 November 2013 ) | |
Backlinks pointing to the site | 54 |
Domains linking to the site | 4 |
Inbound anchors found | 3 |
Outbound (external) links from the domain | 107 |
Domains the site links to | 79 |
Outbound anchors | 89 |
External links on the home page ( 4 ) | |
maps.google.com/maps?f=q&hl=en&q=nanaimo | Google Maps |
weatheroffice.ec.gc.ca/city/pages/bc-20_metric_e.html | weather |
theweathernetwork.com/weather/stats/pages/C02094.htm | climate statistics |
buccaneerinn.com | <img> |
Internal links on the home page ( 12 ) | |
nanaimobar.htm | Nanaimo Bar Recipe |
#top | ^Top |
index.htm | Home |
gettinghere.htm | Getting Here |
adventures.htm | Adventures |
events.htm | Events |
dining.htm | Dining |
accommodations.htm | Accommodations |
kids.htm | Kids |
links.htm | Links |
sitemap.htm | Site Map |
contact.htm | Contact Us |
################################################################################
# For more examples see http://www.robotstxt.org/wc/exclusion-admin.html
# Default settings below allow all agents access to all objects in the webserver
################################################################################
User-agent: *
################################################################################
# LATEST CHANGES TO THE ROBOTS.TXT STANDARD INCLUDE #
################################################################################
################################################################################
# Crawl-delay:
################################################################################
# Crawl-delay: 60 would tell any robots named in the preceding User-agent:
# section that they must wait 60 seconds between requests
################################################################################
Crawl-delay: 120
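As a sketch of how a polite crawler might honor the directive above, Python's standard `urllib.robotparser` exposes the parsed value via `crawl_delay()`. The 120-second value below mirrors this file; `polite_fetch` and its arguments are illustrative names, not library functions:

```python
import time
from urllib.robotparser import RobotFileParser

# Parse the relevant directives inline (normally you would call read() on the URL).
rfp = RobotFileParser()
rfp.parse([
    "User-agent: *",
    "Crawl-delay: 120",
])

delay = rfp.crawl_delay("*")        # seconds to wait between requests (120 here)

def polite_fetch(urls, fetch):
    """Call fetch(url) for each URL, sleeping `delay` seconds between requests."""
    for i, url in enumerate(urls):
        if i and delay:
            time.sleep(delay)       # honor the Crawl-delay directive
        fetch(url)
```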
################################################################################
# Sitemap:
################################################################################
# Specifying the Sitemap location in your robots.txt file
# You can specify the location of the Sitemap using a robots.txt file. To do
# this, simply add the following line:
# Sitemap: <sitemap_location>
# The <sitemap_location> should be the complete URL to the Sitemap, such as:
# http://www.example.com/sitemap.xml
# This directive is independent of the user-agent line, so it doesn't matter
# where you place it in your file. If you have a Sitemap index file, you can
# include the location of just that file. You don't need to list each individual
# Sitemap listed in the index file.
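To illustrate the paragraph above, `urllib.robotparser` (Python 3.8+) can report the declared Sitemap URLs via `site_maps()`; the example URL is the one from the comment above:

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that declares a Sitemap alongside the usual records.
lines = [
    "User-agent: *",
    "Disallow:",
    "Sitemap: http://www.example.com/sitemap.xml",
]

rfp = RobotFileParser()
rfp.parse(lines)

# site_maps() returns the declared Sitemap URLs, or None if none were listed.
sitemaps = rfp.site_maps()
```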
################################################################################
################################################################################
# Disallow: /cgi-bin/
# Disallow: /images/
# Disallow: basket.aspx
################################################################################
################################################################################
# The Format
################################################################################
# The format and semantics of the "/robots.txt" file are as follows:
#
# The file consists of one or more records separated by one or more blank lines
# (terminated by CR,CR/NL, or NL). Each record contains lines of the form
# <field>:<optionalspace><value><optionalspace>
# The field name is case insensitive.
################################################################################
################################################################################
# Comments
################################################################################
# Comments can be included in file using UNIX bourne shell conventions: the '#'
# character is used to indicate that preceding space (if any) and the remainder
# of the line up to the line termination is discarded. Lines containing only a
# comment are discarded completely, and therefore do not indicate a record
# boundary.
################################################################################
################################################################################
# Records
################################################################################
# The record starts with one or more User-agent lines, followed by one or more
# Disallow lines, as detailed below. Unrecognised headers are ignored.
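A minimal parser following the format, comment, and record rules above (blank-line record separation, case-insensitive field names, comment-only lines discarded without ending a record) might look like this sketch; `parse_robots` is an illustrative name, not a library function:

```python
def parse_robots(text):
    """Split robots.txt text into records: lists of (field, value) pairs."""
    records, current = [], []
    for raw in text.splitlines():
        stripped = raw.strip()
        if stripped.startswith("#"):
            continue                        # comment-only line: discarded entirely,
                                            # does NOT end the current record
        line = stripped.split("#", 1)[0].strip()  # drop trailing inline comments
        if not line:                        # blank line ends the current record
            if current:
                records.append(current)
                current = []
            continue
        if ":" not in line:
            continue                        # malformed line, skip
        field, value = line.split(":", 1)
        # Field names are case insensitive; unrecognized fields are kept here
        # so the caller can choose to ignore them.
        current.append((field.strip().lower(), value.strip()))
    if current:
        records.append(current)
    return records
```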
################################################################################
################################################################################
# User-agent
################################################################################
# The value of this field is the name of the robot the record is describing
# access policy for.
# If more than one User-agent field is present the record describes an identical
# access policy for more than one robot. At least one field needs to be present
# per record.
# The robot should be liberal in interpreting this field. A case insensitive
# substring match of the name without version information is recommended.
# If the value is '*', the record describes the default access policy for any
# robot that has not matched any of the other records. It is not allowed to
# have multiple such records in the "/robots.txt" file.
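The recommended matching rule above (case-insensitive substring match, version information stripped) can be sketched as follows; `agent_matches` is an illustrative helper, not a standard API:

```python
def agent_matches(record_token, robot_name):
    """Return True if a record's User-agent token applies to this robot.

    '*' matches any robot; otherwise do a case-insensitive substring match
    against the robot's name with version information (after '/') removed.
    """
    token = record_token.lower()
    if token == "*":
        return True
    name = robot_name.split("/", 1)[0].lower()   # "FooBot/2.1" -> "foobot"
    return token in name
```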
################################################################################
################################################################################
# Disallow
################################################################################
# The value of this field specifies a partial URL that is not to be visited.
# This can be a full path, or a partial path; any URL that starts with this
# value will not be retrieved. For example,
# Disallow: /help disallows both /help.html and /help/index.html
# Disallow: /help/ would disallow /help/index.html but allow /help.html.
# Any empty value, indicates that all URLs can be retrieved.
# At least one Disallow field needs to be present in a record.
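The /help vs. /help/ distinction described above can be checked directly with Python's standard `urllib.robotparser`, which applies the same prefix-match rule:

```python
from urllib.robotparser import RobotFileParser

# "Disallow: /help" blocks any path that starts with "/help".
broad = RobotFileParser()
broad.parse(["User-agent: *", "Disallow: /help"])

# "Disallow: /help/" blocks only paths under the /help/ directory.
narrow = RobotFileParser()
narrow.parse(["User-agent: *", "Disallow: /help/"])

broad.can_fetch("*", "http://example.com/help.html")         # False
broad.can_fetch("*", "http://example.com/help/index.html")   # False
narrow.can_fetch("*", "http://example.com/help.html")        # True
narrow.can_fetch("*", "http://example.com/help/index.html")  # False
```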
################################################################################
################################################################################
# The presence of an empty "/robots.txt" file has no explicit associated
# semantics, it will be treated as if it was not present, i.e. all robots will
# consider themselves welcome.
################################################################################
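`urllib.robotparser` follows the same convention for an empty file: with no records parsed, every URL is considered fetchable, as this small check illustrates:

```python
from urllib.robotparser import RobotFileParser

rfp = RobotFileParser()
rfp.parse([])   # an empty robots.txt: no records at all

# With no rules present, all robots consider themselves welcome.
rfp.can_fetch("SomeBot/1.0", "http://example.com/anything.html")  # True
```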
Canada - Toronto - 74.200.0.83
T4G Limited
Q9 Networks
HTTP/1.1 200 OK
Date: Wed, 17 Jul 2019 18:48:34 GMT
Server: Apache
Last-Modified: Mon, 26 Nov 2012 05:42:58 GMT
Accept-Ranges: bytes
Content-Length: 19537
Connection: close
Content-Type: text/html