miisieq/robots-txt-bundle

Robots.txt generator for Symfony 2, 3 & 4


The problem

It's a pretty common workflow: we build our projects in a local environment, deploy the code to a preproduction or staging server for our client to approve the work, and finally push it to the production environment.

While we absolutely want crawlers to index our production environment, we don't want to see our test servers in search results.

How does it work?

Depending on the Symfony environment, the application returns a robots.txt file that allows crawlers to index the whole site only when running in the prod environment. In any other environment, the file blocks the entire site from indexing.
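For illustration, the generated responses presumably look like the following (the exact rules may differ slightly from the bundle's output). In the prod environment:

User-agent: *
Disallow:

In any other environment:

User-agent: *
Disallow: /

An empty Disallow directive permits crawling of everything, while Disallow: / forbids crawling the entire site.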

Installation

Step 1: Install the bundle

First, open a command console, enter your project directory and execute the following command to download the latest version of this bundle:

composer require miisieq/robots-txt-bundle

Step 2: Register the bundle in your kernel

Then add the bundle to your kernel:

// app/AppKernel.php
class AppKernel extends Kernel
{
    public function registerBundles()
    {
        $bundles = [
            // ...

            new Miisieq\RobotsTxtBundle\MiisieqRobotsTxtBundle(),
        ];

        // ...

        return $bundles;
    }
}
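If you are on Symfony 4 with Flex, there is no AppKernel; register the bundle in config/bundles.php instead (Flex may even do this automatically when the package is installed). The class name comes from this bundle, the file layout is the standard Symfony 4 one:

// config/bundles.php
return [
    // ...
    Miisieq\RobotsTxtBundle\MiisieqRobotsTxtBundle::class => ['all' => true],
];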

Step 3: Configure the bundle

Add the following to your config file to enable the bundle with its default settings:

# app/config/config.yml

miisieq_robots_txt: ~

You can also set your host and add links to your sitemaps:

# app/config/config.yml

miisieq_robots_txt:
    host: http://example.com
    production_environment: prod
    sitemaps:
        - "/sitemap.xml"
        - "/catalog/sitemap.xml"

Step 4: Register the routes

To make your robots.txt file accessible, register the following route:

# app/config/routing.yml
miisieq_robots_txt:
    resource: "@MiisieqRobotsTxtBundle/Resources/config/routes.yaml"
    prefix:   /
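On Symfony 4, the routing configuration lives in config/ rather than app/config/; the same import should work there (path assumed from the standard Symfony 4 layout):

# config/routes.yaml
miisieq_robots_txt:
    resource: "@MiisieqRobotsTxtBundle/Resources/config/routes.yaml"
    prefix:   /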

Step 5: Remove the static robots.txt file (if it exists)

Web servers typically serve an existing static file before passing the request to Symfony, so an old robots.txt would shadow the new route. Remove it:

rm web/robots.txt
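On Symfony 4, the web root is public/ rather than web/:

rm public/robots.txt

You can then verify the result by requesting /robots.txt in both your dev and prod environments.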