Andy Schaff – Portent
Internet Marketing: SEO, PPC & Social - Seattle, WA

Real Web Devs Do SEO From Day One
October 26, 2016

Search Engine Optimization (SEO) is a bit of a mystery for most developers. So, I’ve put together this checklist of SEO essentials that will make your SEO team and your client very happy.

Inspect the HTML source code

Get picky about HTML source code:

  • Is your HTML structure valid? Valid HTML structure makes the page easier for search engines to parse. You can validate it here.
  • Do all pages include proper head elements, with a single title element and meta description? The title element (most people call it the “title tag,” but we know better) is the single strongest on-page ranking factor. The meta description is what shows up in the search snippet.
  • Can CMS users edit the meta description and title element on every page? They’re going to need to.
  • Does every content page (not paginated lists of multiple pages) have a single H1 tag? See valid HTML structure, above.
  • Are image alt attributes defined for all images? Alt attributes are a strong ranking signal. (A minimal example covering this checklist follows the list.)
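
For reference, here is a minimal sketch of what that looks like on a content page. The URLs and copy are placeholders, not a template to ship as-is:

<!DOCTYPE html>
<html lang="en">
<head>
    <!-- One title element per page: the strongest on-page ranking factor -->
    <title>Blue Widgets for Small Boats | Example Store</title>
    <!-- One meta description per page: this becomes your search snippet copy -->
    <meta name="description" content="Hand-built blue widgets that fit most small boats. Free shipping in the US.">
</head>
<body>
    <!-- A single H1 on each content page -->
    <h1>Blue Widgets for Small Boats</h1>
    <!-- Every image gets a descriptive alt attribute -->
    <img src="/images/blue-widget.jpg" alt="Blue widget mounted on a boat transom">
</body>
</html>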

HTTP header statuses

Your pages must return proper HTTP statuses. We often take this for granted – many applications use the right HTTP status codes out of the box. But we have run into a handful of issues over the years. It’s worth double-checking:

  • Missing pages must return a 404 response, not 200. A 200 response can create duplicate content and waste valuable crawl budget.
  • Permanent redirects must return a 301 response, not 302.

Two easy ways to check the status returned by the hosting server are the Google Chrome console or (my favorite) redbot.org.

In Chrome: Open the console (Ctrl + Shift + J on Windows) and switch to the “Network” tab. Refresh the page you want to test and inspect the first row, which contains the page URL. The “Status” column is your HTTP status.

eigene-homepage-erstellen.net returning HTTP status of 200

For redbot.org, enter the full URL you want to test and analyze the return header:

redbot.org 301 check for eigene-homepage-erstellen.net/insights
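
If you’d rather script the check, a quick PHP sketch does the job too; get_headers() returns the raw response headers, and the first element is the status line (the URL below is just an example):

<?php
    // Fetch only the response headers; index 0 is the status line of the first response
    $headers = get_headers('https://www.example.com/some-old-page');
    echo $headers[0]; // e.g. "HTTP/1.1 301 Moved Permanently" or "HTTP/1.1 404 Not Found"
?>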

Create a base robots.txt file

Help your SEO team out by implementing a solid robots.txt file as a good starting point.

Portent’s resident SEO developer, Matthew Henry, wrote a great article on all things robots.txt. Definitely worth a read! I’ll only touch on a basic approach to robots.txt here.

  • Include a link to an XML sitemap. Verify that the sitemap is being generated, then add the Sitemap directive at the top of the file.
  • Set crawl-delay. While this is a non-standard directive, it doesn’t hurt to throttle the crawler bots that do read and follow it. This can help reduce load on your server.

Here is a basic robots.txt example:

Sitemap: https://www.eigene-homepage-erstellen.net/sitemap_index.xml

User-agent: *
Crawl-Delay: 10

Your SEO team will probably want to add some Disallow directives. Coordinate with them after they have crawled the site and done some analysis.

Trailing Slashes

Oh, trailing slashes… one of the biggest banes of a web developer’s existence. I’ve lost count of how many times I’ve asked, “Who gives a sh*t?!” Turns out, search engines do. I know search engines care, and I understand why, but that doesn’t stop me from hating it a little. Regardless, make sure you handle them.

Here are a few approaches to handling trailing slashes.

301 redirect counterpart URLs

Make sure your URLs follow a consistent pattern with a redirect.

Example Apache .htaccess rewrite rule for adding trailing slash:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^.*[^/]$ /$0/ [L,R=301]

Note: Test thoroughly! This is just an example.

Example nginx config rewrite rule for adding trailing slash.

if ($request_filename !~* \.(gif|html|htm|jpe?g|png|json|ico|js|css|flv|swf|pdf|doc|txt|xml)$ ) {
     rewrite ^(.*[^/])$ $1/ permanent;
}

Note: Test thoroughly! This is just an example.

Example IIS web.config rewrite rule for adding a trailing slash.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        ...
        <rewrite>
            ...
            <rules>
                ...
                <rule name="trailing slash rewrite" stopProcessing="true">
                    <match url="(.*[^/])$" />
                    <conditions>
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                    <add input="{REQUEST_FILENAME}" pattern="(.*?)\.html$" negate="true" />
		    <add input="{REQUEST_FILENAME}" pattern="(.*?)\.htm$" negate="true" />
                    <add input="{REQUEST_FILENAME}" pattern="(.*?)\.aspx$" negate="true" />
                    </conditions>
                    <action type="Redirect" url="{R:1}/" redirectType="Permanent" />
                </rule>
                ...
            </rules>
            ...
        </rewrite>
        ...
    </system.webServer>
</configuration>

Note: Test thoroughly! This is just an example.

Plugins/Extensions/Modules

Many CMS applications can handle this programmatically, either as an option/feature built into the core or via an extension, module, or plugin. I know that WordPress, Drupal, and Joomla all have this available.

Canonical tags

This is a last resort. If you are unable to use one of the two methods above, you can use rel=canonical. If implemented correctly, canonical tags help search engines figure out which URL to index. But canonical tags can cause unforeseen problems if not carefully implemented, and they’re a hack, not a real fix. Read Ian’s post on Search Engine Land for more about this.
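
If you do end up going this route, the canonical link element sits in the head of every URL variant and points at the single version you want indexed (placeholder URLs below):

<!-- On https://www.example.com/widgets (no trailing slash), declare the slashed version as canonical -->
<link rel="canonical" href="https://www.example.com/widgets/">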

Page speed

We’ve written a lot of articles on how page speed is a huge factor for SEO. ← Seriously, check them out. These 4 page speed factors should be at the top of your list to implement:

  • Browser caching
  • Page caching
  • Minification
  • Image compression

If you use WordPress, I wrote a detailed article about how to configure W3 Total Cache to handle most of these caching methods.

301 Redirects

I may have implemented more 301 redirects than anyone else on the planet. They also happen to be another bane of my existence. Many of the 301 redirects I implement are to fix bad inbound links from 3rd party sites. Perfect example:

Redirect 301 /109568960804534353862	https://www.eigene-homepage-erstellen.net/

But, if that link happens to generate a lot of incoming traffic, it is important to send the user to something better than a 404. This is generally the case for all 301 redirects — we want to preserve link authority, but also maximize a visitor’s experience by serving a page that’s relevant to what they are looking for, regardless of whether they found a stale link.

Here are 3 best practices around 301 redirects that all web developers should consider.

Create a 301 Redirect Map

A 301 redirect map is just a spreadsheet defining old URLs and their redirect targets. We always create redirect maps when migrating a client to a new site or platform. But they’re also good if you’re reworking URLs and/or site structure. Often your SEO team will provide this for you, but if you’re dealing with a major site migration, something like Drupal to WordPress, plan to generate a redirect map to guide the process. Trust me: It minimizes risk of catastrophic broken link messes. It also minimizes cleanup necessary after launch.

You can streamline the process.

Say you’ve created a custom script that queries the old site database and inserts old pages into the new application. Either create a mapping db table with old and new URLs, or create an attribute for each page that defines its old URL. You can then use this data to either generate 301 redirects in whatever web server you’re using (IIS, Apache, nginx), or you can create a 404 handler that checks the database for a matching URL and redirects accordingly.

Prefer the first option, though: rather than handling redirects programmatically on every request, use the database or attribute to generate web server configuration directives. There’s less overhead.
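
As a rough sketch of that generation step (and only a sketch): a short PHP script can read a two-column redirect map, old path and new URL, from a CSV and print Apache Redirect directives for you to review before they go anywhere near production. The file name and column order here are assumptions.

<?php
    // Hypothetical redirect map export: column 0 = old path, column 1 = new URL
    $map = fopen('redirect-map.csv', 'r');
    if ($map === false) {
        die("Could not open redirect map\n");
    }
    while (($row = fgetcsv($map)) !== false) {
        list($oldPath, $newUrl) = $row;
        // One directive per row, e.g. "Redirect 301 /old-page https://www.example.com/new-page/"
        echo 'Redirect 301 ' . $oldPath . ' ' . $newUrl . "\n";
    }
    fclose($map);
?>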

Avoid multiple redirects

Redirect hops are not good for link authority, indexing, or site performance. Many times, especially with older sites, you’ll find redirect chains. Consolidate them so there’s a single redirect.
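
For example, if a URL was redirected once and its target later moved again, point every legacy URL straight at the final destination instead of letting the hops stack up (the paths here are made up):

# Before: anyone hitting /services-2012 bounces through two redirects
Redirect 301 /services-2012 /services-2014
Redirect 301 /services-2014 /services

# After: each legacy URL resolves in a single hop
Redirect 301 /services-2012 /services
Redirect 301 /services-2014 /services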

Do not 302 redirect

Unless the URL you are redirecting is truly temporary, do not use a 302 redirect. Be sure you understand the difference between 301 and 302 redirects. Simply put, 301 is permanent and 302 is temporary — and we rarely see genuinely temporary redirects. Use what’s technically correct.
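
In Apache terms the difference is just the status code you hand to the directive, but the consequences for indexing are very different (example paths only):

# Permanent move: search engines transfer indexing and link authority to the new URL
Redirect 301 /old-pricing /pricing

# Temporary move: search engines keep the original URL indexed
Redirect 302 /pricing /pricing-maintenance-notice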

Conclusion

Development that ignores modern SEO best practices is bad for the business. It also means cleanup projects later on. It’s easier to do it right at the beginning.

Take a proactive approach to SEO development. Learn best practices and use them from the start. It will lay the foundation for your client’s or your company’s site. It’s an approach web developers need to try more — as well as simply saying yes more. But that’s a topic for another day.

Feel free to hit me or Portent’s SEO team up with questions in the comments!

Why Page Speed Matters & PageSpeed Insights Fails – A Developer’s Tale
March 18, 2016


Note: Check out our Ultimate Guide to Page Speed for a top-to-bottom look at how you can improve page load times.

At this point, anyone who browses, manages, and/or creates websites on the regular should know that page speed matters. So… everyone. It’s no surprise that we demand our information fast, like right meow. If this is a new concept, you are lagging far behind. I believe the majority of users understand this but are still figuring out how to accomplish it. In this post, I answer why page speed matters to me as a senior developer, why the entire scope of the web application should be addressed to realize its full potential, and why Google PageSpeed Insights is a thorn in my side.

Why page speed matters — a developer’s perspective

A colleague of mine asked me why page speed matters from my perspective. My first reaction was, “Because I want a super-fast user experience on every site I visit, just like everyone else.” For me, and I’m sure most others, it’s an intuitive response. I haven’t run the scientific numbers on this, but I’m pretty sure that if you surveyed 1,000 random people on whether they wanted faster loading websites or slower loading websites, 100% would choose faster loading.

So, why aren’t more sites fast? I’m sure it is any combination of reasons. Lack of budget, resources, knowledge, and/or know-how. Perhaps laziness. Maybe the company they hired isn’t looking out for them like they should. Hmm… maybe. That got me thinking because it makes way too much sense. As a developer, I’ve been through the process countless times. The big site application that takes months and months to plan, develop, test, and release — and when it’s done, there is that feeling of wiping your hands clean and moving on. Was page speed and optimization baked into the plan? Too many times it is an afterthought, and probably still is in much of the web dev world.

Not Portent’s web dev world, however. It’s been an important part of our process for years.

And the more I think about why page speed matters from my perspective, the more I reflect on that process and think, “because it matters to our clients.” Whether or not they’re aware of its importance when engaging our partnership, it is one of the core subjects analyzed and discussed at the foundation level of the marketing stack, and one of my principal responsibilities at Portent.

So, let’s hunt our prey. Hmm, seems a bit much. Sprint to our goals? Corny. I just wanted to tie in a cheetah for a post on page speed (because they are awesome) while we transition to take a closer look.

Look at the entire scope

Getting great page speed isn’t just about implementing a few one-off recommendations you found on Google PageSpeed Insights. In fact, PageSpeed Insights can be misleading and confusing (more on that later). You have to look at the entire scope of your application, from your hosting environment to the front-end output.

Potential is in the combination

Most of the site speed analysis tools out there today are only able to focus on the front-end, or output, of your web application. They analyze the source code, the time it takes to load all assets, resource optimization potential, and other metrics. They aren’t able to see your environment setup and its configuration — for good reason, because that would be a major security hole. Granted, some of the metric results from these tools (like latency and time to first byte) will show issues that are directly affected by the environment, but they can only go so far. Regardless, you will not see the full potential of your application’s loading speeds until you have addressed both environment and application optimizations.

Compare your website to a car. You can make all the cosmetic changes you want to make the vehicle (front-end output) look pretty, but if the engine (server stack) is sluggish, or isn’t getting enough fuel (bandwidth), the overall performance still suffers. The stack technologies carry a heavier importance. If your PHP (or Java, Python, Ruby, etc.) threads are taking 5+ seconds to process the request and output browser code, you’ve lost the battle already.

We strongly advocate looking at the entire scope of the application, but if you have to choose, put your money into the environment — it will go further and still allow for front-end optimizations to be made later and/or with less effort.

A rant on Google PageSpeed Insights

Recently, our dev team was sent some feedback about our site’s grade on Google PageSpeed Insights from a guest at a seminar Ian was presenting at. I did some digging and comparison with other tools and, simply put… got pretty annoyed.

I’ve spent a LOT of time optimizing our environment and application as a forward-facing example of the product we can provide to our clients. I have made great improvements that consistently provide visitors with sub-1-second load times on all of our pages. First-time, first-page visitors may see load times from 1.5 – 2.5 seconds, but after that, it’s all sub 1.[1]

However, if you ask PageSpeed Insights, we’re flunking. I don’t know who at Google came up with the scoring algorithm, but they can bite me.

webP detection

Blake and I recently implemented our lazy loading, responsive image solution on eigene-homepage-erstellen.net, which also includes webP support. webP is an image format developed by Google that employs lossless and lossy compression, reducing image sizes by approximately 25%.
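
Our actual implementation is more involved than this, but the general pattern for serving webP with a fallback is the picture element: browsers that support webP take the first source, everyone else gets the JPEG (file names are placeholders):

<picture>
    <source srcset="/images/hero.webp" type="image/webp">
    <img src="/images/hero.jpg" alt="Hero image">
</picture>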

Ironically, Google’s web-based PageSpeed grading system isn’t recognizing it:

PageSpeed Insights: eigene-homepage-erstellen.net images

Google’s PageSpeed Insights extension for Chrome, however, is — which is funny because it’s technically been deprecated:

PageSpeed Insights Extension: eigene-homepage-erstellen.net images

And if we take a closer look at the HAR (HTTP Archive), webP images are being served where applicable:

eigene-homepage-erstellen.net homepage HAR: webP images

Leverage browser caching

Often, your site is at the mercy of 3rd party analytics software. It’s a balance between using tools that help analyze and provide insights, and keeping bloat to a minimum. Google PageSpeed Insights is docking us for their own software. Perhaps they should consider putting an expiry date much further in the future.

PageSpeed Insights: browser caching on analytics.js

Tiny minification improvements

I’d also like to know how much weight is put into the grading algorithm for 2% reductions in minification:

PageSpeed Insights: Minify for 2% reduction

A combination of tools

It’s unfortunate how much weight gets put on that PageSpeed Insights grade. Clients refer to it. Potential customers look at it. It is a misrepresentation of how their site is really doing. I’m not saying it’s not helpful. Actually, let’s back up a tick — I don’t think the grading system is helpful. Analyzing the recommendations provided can be useful, but take it with a grain of salt and use other tools for your analysis!

We have a great list here.

Look at tools that give a waterfall analysis, like Pingdom and Web Page Test. Use the browser console (ctrl+shift+j) and analyze the ‘Network’ tab. Combine the results and come up with a game plan for tackling the highest priority items, which are usually:

  • leveraging browser caching (see the Apache sketch after this list)
  • optimizing images
  • combining and minifying JavaScript and CSS
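
On Apache, the browser caching and compression pieces can be a handful of .htaccess directives. Treat this as a starting sketch, since the MIME types and lifetimes you need will vary, and test it against your own setup:

<IfModule mod_expires.c>
    ExpiresActive On
    # Long lifetimes for static assets that rarely change
    ExpiresByType image/jpeg "access plus 1 year"
    ExpiresByType image/png "access plus 1 year"
    ExpiresByType text/css "access plus 1 month"
    ExpiresByType application/javascript "access plus 1 month"
</IfModule>

<IfModule mod_deflate.c>
    # Gzip text-based responses
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>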

In Summary

Take site speed seriously. The market is getting more and more competitive — don’t think for a second that a potential customer won’t leave your site in search of a competitor’s faster one. Press your development team to make site speed a priority. If they avoid it or shrug off its importance, hire us, and I will personally analyze your current setup, design and implement your environment stack, and optimize your application so your site can reach its full site speed potential.


  1. Results may vary for visitors outside of North America.

Ultimate Guide to Page Speed: WordPress Optimization
March 4, 2016

Preface: Let’s make a ruckus
Chapter 1: A Guide To This Guide
Chapter 2: Why Page Speed Matters To Digital Marketing
Chapter 3: What Impacts Page Speed?
Chapter 4: Novice – Image Compression And Such
Chapter 5: Intermediate – Server Compression And Geekery
Chapter 6: Advanced – Varnish, Apache and nginx
Chapter 7: Tools
Chapter 8: Glossary
Hidden Track: WordPress Optimization ← You are here

We show unabashed favoritism and profess our love for a piece of software. You lose all confidence in our sanity and wonder why you’re reading this document at all.

WordPress is our best friend. We can (and do) use it to power 90% of the sites we build. With the right technology stack, it’s as fast as any ‘enterprise’ toolset. Kept up to date, it’s secure (we’ve never had an up-to-date WordPress install hacked). The CMS is easy to use.

So yes, we’re biased. That’s why we’ve written an entire chapter just about WordPress. We’ll talk about a few general ways to add some zip, and then dive deep into our favorite caching tool, W3 Total Cache.

W3 Total Cache for WordPress

We work on a lot of WordPress sites and our caching plugin of choice is W3 Total Cache (W3TC). This plugin offers a full suite of caching options as well as integrations with New Relic and popular CDN services. W3TC modules include browser, page, database, and object caching. It also has minification options, user-agent detection and redirection, performance monitoring, and much more.

It is a very powerful plugin that addresses many of our site speed recommendations, including leveraging browser caching with expires headers, resource minification, gzip compression, and CDN-based recommendations such as parallelized downloading across domains.

It is important to configure W3TC to fit the needs of your application, but we have put together a starting guide of recommendations.

General Settings

On the General Settings tab, enable Page Cache, Minify (auto), Object Cache, and Browser Cache. If you have a CDN, enable that as well. For each of these settings, use the best caching method available. If you are on a shared server environment, you may only have the options for Disk: Basic and Disk: Enhanced. If so, use Disk: Enhanced. If you are in a dedicated/virtual environment, ideally you have more advanced opcode options like OpCache/APC or XCache. If any of these are available, be sure to utilize them.

Page Cache

Page cache will keep entire pages of your site in local cache, reducing response time.

Enable these options:
W3 Total Cache Page settings

Minify

Minify is probably the trickiest option to configure and must be tested thoroughly. If you are able to use the auto option for both JavaScript and CSS, consider yourself lucky and go with it. Most of the time it is not that easy and requires manually defining and ordering scripts. In that scenario, you will want to add your theme’s required JavaScript and CSS files under their respective file management areas.

Using the auto option, W3TC tries to combine and minify all of the JavaScript into a single file as smartly as possible. However, without any ordering defined, scripts are often inserted into the combined file before their dependencies. For example, a custom script for your theme may have a jQuery dependency.

If that script is inserted into the combined file before jQuery, it will break. The solution is to use the manual file management option to define and order your scripts. The same goes for CSS, if required. In the worst case, you are unable to use combining and minifying for JS and CSS at all. If so, follow best practices of at least minifying your resources, even if you can’t combine them.

Here are other general options we recommend enabling:
W3 Total Cache Minify general settings

W3 Total Cache Minify HTML settings

Object Cache

Object caching helps further reduce execution time of commonly called operations. Enable this option and use the default values provided.

Browser Cache

We addressed browser caching earlier: utilizing a user’s web browser to store site resources for faster page loads and reduced server load. W3TC can generate these directives for you.

Enable these options:
W3 Total Cache Browser Caching general settings

W3 Total Cache Browser Caching CSS & JS settings

CDN

Configuring your CDN is made easy with W3TC’s built-in options. We (and W3TC) recommend using an Origin Pull CDN type. This means that the CDN will automatically pull/mirror the resources from your site. The CDN should mirror the header information generated by the server. This is ideal because it gives you control of your resources, like enabling cross-origin resource sharing (CORS) for JavaScript.

W3 Total Cache CDN general settings

Configuring your CDN usually involves creating and selecting a pull zone, entering the authorization key, and defining one or more CNAME hostnames. When you define (and set up) multiple CNAME hostnames, you are essentially utilizing parallelized resource downloading. W3TC is equipped to handle this.

Disable XML-RPC

XML-RPC is a remote procedure call protocol…

Let’s try that again, in English.

XML-RPC is a way to make your content available to other servers. It’s a kind of Application Programming Interface (API)…

One more try.

XML-RPC makes it easy for developers to grab stuff off of your site. It lets those developers ‘talk’ to your site to add, edit and delete content, upload files, get and edit comments, etc. That’s not terribly accurate, but it’s the gist.

WordPress comes with XML-RPC enabled by default. Unless you plan to use it or let others use it, you can turn it off. But, since WordPress 3.5, you have to do so manually.
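
If you’d rather handle it yourself, one manual route is WordPress’s xmlrpc_enabled filter (available since 3.5), dropped into a small must-use plugin or your theme’s functions.php:

<?php
    // Disable the XML-RPC methods that require authentication (WordPress 3.5+)
    add_filter('xmlrpc_enabled', '__return_false');
?>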

An easier way is to use this plugin:

Disable XML-RPC.

Why do it?

It’s one more thing

It’s just one more action your server has to take. Your server is busy enough. Simplify its life a little.

Leaving XML-RPC hanging out there is kind of like walking around in an (unintended) hole in your pants. It’s not the end of the world. But it’s a bit… sloppy.

Developers can ping it

Other sites could, in theory, start accessing the API. They (hopefully) can’t do anything without your permission, but they can still make requests that slow your site.

Remove plugins

Some WordPress plugins are absolute performance hogs. Others are performance hogs if you use ’em wrong.

As a matter of course, we remove any plugins we aren’t using. They tend to pile up, so a quick cleanup is always good.

You can get a look at potential problems a couple of ways:

P3 Profiler

We’ve had good luck with P3 (Plugin Performance Profiler).

It is, of course, a plugin. That’s a bit meta. Using a plugin to find slow plugins… Never mind.

Install it, run it and you’ll get a list of plugins and performance data (screen capture directly from the folks at P3):
P3 report

We recommend installing it, running it, tweaking things and then uninstalling it. You can always install it again later, and one less plugin is always better.

Getting geekier

If you want to look impressive at the next PHP/LINUX meetup, you can:

  • Run ‘top’ at the command line, filtered for PHP threads. Get the average. Remove one plugin. Do it again. See the difference. And so on.
  • Profile MySQL

But really, P3 Profiler should get you what you need.

Can and Should I Run WordPress in Parallel with my Existing Site?
October 6, 2015

This is a question I get asked a lot as senior developer at Portent. A client, new or existing, has an established site running on some platform, and wants to add on a blog, a microsite, or the ability to easily control top-of-funnel marketing content. Technically, “in parallel” means WordPress (WP) is set up to run alongside your existing site, hosted from the same server. Ideally, we set up WP to live under our top-level domain, which Portent’s SEO team recommends for better link authority.

What does WordPress do for you?

As one of the most popular content management systems in the open-source market, WordPress allows clients to update their content in real time, avoiding dependency on a developer and the overhead involved. Clients are able to adapt quickly to events, sharing thoughts, ideas, and promotions in a matter of minutes. Additionally, WP has a strong community of developers, enthusiasts, and supporters who help with anything from extended functionality to general support. Let’s say you want to add lead-capturing forms, a photo gallery, or a community forum. WP makes it super simple.

As a client, why do I want this?

I need better blogging capabilities on my site.

WordPress was originally designed as a blogging web application. It has come a long way since its beginnings to offer a much more complete content management system, but its origins lie in blogging. This is probably the most popular use of running WP in parallel, as many sites have WP installed in their “brandname.com/blog” sub-directory. Many times, this makes the blog look and feel different than the rest of the site, and there is nothing wrong with that.

I need to run a digital campaign or microsite.

Your company has a large legacy site that has been around since that mega site redesign back in 2012. It took a year to design and develop, and processes are well in place. It works well, but lacks the flexibility to create an exciting marketing campaign advertising your company’s latest endeavor. You want similar branding, but a sexier grab-your-attention look. Something new. Something fresh. WordPress can offer this capability while leaving the main site alone, allowing a simple and easy-to-use solution and a dynamic platform where you can go “full-marketer”. This is just one example of why a client would want to use WP alongside their current site. Whether it’s flexibility in a site for a specific promotion, the need for a new look and feel, or the combo of the two, there are plenty of valid reasons to have microsites running alongside your corporate site rather than letting them sit fully separately.

I need full control of all my top-of-funnel site content for marketing.

Web applications have been built to function in a very specific way. Online retail, information processing, report generation, wikis, and webmail are just a short list of all the specialized web apps out there. Many of them gave no forethought to marketing content and SEO concerns, which may be a reason you are reading this article now. WordPress can be installed in parallel to solve this issue, giving you the reins on your content and SEO capabilities.

So, can you do this?

Simply put, most of the time, yes you can.

I say this because there are basic requirements that are necessary to host WordPress, but most hosting environments fulfill them. It is highly likely your site is served by Apache or Windows IIS. Ask your server administrator to find out if your hosting environment has these technologies:

  • Apache, IIS, or nginx
  • PHP 5.4 or greater
  • MySQL 5.5 or greater
  • mod_rewrite Apache module or equivalent

Are there any downsides?

Running WordPress in parallel will most likely cause a dip in performance, as it requires more resources from your server. However, if kept in check, it should only be a small hit. Of course, having a second system requires you to manage and maintain both — and it is important to keep WordPress up to date for security reasons. Also, if you mimic a design for the WordPress install and need to make changes, they will have to be done in two places. Maintenance and updates definitely create more overhead with each additional parallel application put into play.

Other Solutions?

If you’re not installing WordPress on the same server as the main site, there is really only one other way to accomplish a tandem setup: configuring the existing environment to serve WP via reverse proxy under a virtual sub-directory. The idea is that you have a sub-directory (like “brandname.com/blog”) serving a WP application that is hosted on a separate server, maintaining the SEO benefits of URLs that live under the top-level domain. This is an advanced solution that requires the aid of smart server administrators, but it is possible. Personally, I have only configured this solution in a dev environment with the main site being hosted on nginx.
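
As a rough sketch of the reverse-proxy idea on nginx, the main site’s server block hands the /blog/ path off to the separate host running WordPress. The upstream address is a placeholder, and a real setup needs more care around headers, SSL, and WordPress’s siteurl/home settings:

location /blog/ {
    # Hypothetical internal host serving the parallel WordPress install
    proxy_pass http://10.0.0.20:8080/blog/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}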

If you’re in a pinch or just roadblocked on setting up WordPress in parallel, a sub-domain microsite is an alternative solution. This would allow you to host the site anywhere, because you can point a sub-domain (or new domain) to any IP you want. From an SEO perspective this is not ideal, as link authority is lost because most search engines, like Google, treat sub-domains as distinct sites. Technically, this doesn’t qualify as running WordPress in parallel.

Conclusion

This may not be an ideal solution, but it can prove to be the best choice for companies and marketers in situations like the ones discussed in this article. When budgets and timelines come into play, it is definitely worth weighing, especially for companies that recognize they need to adapt and catch up with best practices. It is important for companies to work with developers who understand the need to be agile in meeting the needs of marketers. And yes, for the record, we’ve run into a handful of scenarios where it made sense to help clients go this route, rather than waiting years for a full site overhaul.

GitHub Auto-Deploy Setup Guide
June 16, 2015


In an effort to streamline development updates to a code base in a staging or production environment, we have created an auto-deploy setup guide for any GitHub (github.com) repository.

What you will need:

  • A server to host your site. Our example is an Ubuntu (Linux) server running either Apache or Nginx, plus PHP
  • SSH access to your server with sudo/root privileges
  • A GitHub.com account and repository

1 – On the web server

Here we install and set up Git on the server. We also create an SSH key so the server can talk to GitHub without using passwords.

Install git

sudo apt-get update
sudo apt-get install git

If you already had Git installed, make sure it’s a relatively new version – upgrade it to the latest if need be.

git --version

Set up Git

Next we will configure Git. In general these values don’t mean much, but it’s best to make them descriptive.

git config --global user.name "[some user-name]"
git config --global user.email "[your github email]"

Create an SSH directory for the Apache/Nginx user

You can find out what user controls the Apache or Nginx processes by looking in their respective config files. We’re using the www-data user and group in our example.

sudo mkdir /var/www/.ssh
sudo chown -R www-data:www-data /var/www/.ssh/

Generate a deploy key for Apache/Nginx user

Next we want to generate a deploy key that we can add to the GitHub repo. We will be using the ssh-keygen command. The command below instructs the system to create an RSA key that belongs to the www-data user. You don’t need a passphrase, so be sure to leave it blank during the process of generating the key.

sudo -Hu www-data ssh-keygen -t rsa

Once generated, print out the key and copy it to your clipboard.

sudo cat /var/www/.ssh/id_rsa.pub

2 – On your origin (github.com)

Here we add the SSH key to the origin to allow your server to talk without passwords.

Add the SSH key to the repo

  1. https://github.com/[githubname]/[repo]/settings/keys
  2. Create a new key and name it appropriately
  3. Paste the deploy key you generated on the server and save

3 – On the web server

Now that we have the deploy key installed, we are ready to clone the repo on our web server.

Clone the repo

Here we clone the repo into /var/www/[site_dir] after giving the Apache/Nginx user ownership of that folder. Note that we switch to the www-data user before running the git clone command. This is an important step because the deploy key we generated is owned by the www-data user and it will only work for that user, even if you are root.

cd /var/www

sudo chown -R www-data:www-data /var/www/[site_dir]

sudo su www-data

git clone git@github.com:[githubuser]/[gitrepo].git /var/www/[site_dir]
- or for branch -
git clone -b [branch_name] git@github.com:[githubuser]/[gitrepo].git /var/www/[site_dir]

exit

 

4 – Auto-Deployment

If you’ve made it this far, you are almost ready. All the steps up to this point should have your web server properly communicating with your GitHub repo. You should be able to drill into your site directory, switch to your Apache/Nginx user (i.e. www-data), and run Git commands like you normally would. In fact, this is good practice in getting familiar with doing so in case you have to fix any conflicts, etc.

Option 1 – Cron

If the site is password protected, a webhook won’t work. Or, if you don’t want to bother with a webhook (instructions below), you can set up a cron job that will run pull commands as often as you’d like. The crontab entry below switches to the Apache/Nginx user (i.e. www-data), changes to the site directory, and runs the git pull command — every minute.

*/1 * * * * su -s /bin/sh www-data -c 'cd /var/www/[site-dir] && git pull'

Option 2 – Webhook

Another option is setting up a webhook, which is a feature that GitHub allows for every repository. A webhook is simply a URL that GitHub will hit anytime an update is pushed to the origin. We can couple this functionality with a deployment script that will run Git commands to PULL from the origin and update your code base on your web server.

Create the deployment script for your site

Copy/paste the PHP code below and save it as ‘deploy.php’ in your site root. You will be adding this file to your repo, so you can do it in your local dev environment or on the live web server — as long as it gets added and is available at [yoursite].com/deploy.php

<?php
    /**
     * GIT DEPLOYMENT SCRIPT
     *
     * Used for automatically deploying websites via GitHub
     *
     */

    // array of commands
    $commands = array(
        'echo $PWD',
        'whoami',
        'git pull',
        'git status',
        'git submodule sync',
        'git submodule update',
        'git submodule status',
    );

    // exec commands
    $output = '';
    foreach($commands AS $command){
        $tmp = shell_exec($command);
        
        $output .= "<span style=\"color: #6BE234;\">\$</span><span style=\"color: #729FCF;\">{$command}\n</span><br />";
        $output .= htmlentities(trim($tmp)) . "\n<br /><br />";
    }
?>

<!DOCTYPE HTML>
<html lang="en-US">
<head>
    <meta charset="UTF-8">
    <title>GIT DEPLOYMENT SCRIPT</title>
</head>
<body style="background-color: #000000; color: #FFFFFF; font-weight: bold; padding: 0 10px;">
<div style="width:700px">
    <div style="float:left;width:350px;">
    <p style="color:white;">Git Deployment Script</p>
    <?php echo $output; ?>
    </div>
</div>
</body>
</html>

Add, commit, and push this to GitHub

git add deploy.php
git commit -m 'Adding the git deployment script'
git push

Set up service hook

Now, in your GitHub.com repo settings, we will set up the webhook, which will automatically call the deploy URL, triggering a git pull on the server from the origin.

  1. https://github.com/[githubname]/[repo]/settings/hooks
  2. Click Add webhook to add a service hook
  3. Enter the URL to your deployment script as the Payload URL – http://[yoursite].com/deploy.php
  4. Leave the rest of the options as default. Make sure ‘Active’ is checked on.
  5. Click Add webhook

Ready to Go

At this point, you should be all set! You and your team can make code updates and push them to the origin and they will automatically be pulled to your web server’s code base.

Some notes

  • Some servers may require slightly different instructions. Check out this resource: https://help.github.com/articles/generating-ssh-keys
  • Navigate to the deployment script URL to trigger a pull and see the output:
    • http://[yoursite].com/deploy.php (this is useful for debugging)
    • When you push to the origin (GitHub), GitHub will automatically ping the above URL (and your code will be pulled)

eigene-homepage-erstellen.net: They’ve gone to plaid
April 22, 2013

Update! Learn the ins and outs of a faster site in our Ultimate Guide to Site Speed!

As you may already know, we’re a little obsessed with page load speed. We wanted our home page to load in under 1 second, and we were close. But close isn’t good enough.

So, with some guidance from Ian, I started out on my quest for sub-1-second page load times. The journey took me about a month, filled with research, converting and building configurations, trial-and-error, and performance and load testing. In the end, it was all worth it because eigene-homepage-erstellen.net is now screaming fast:

eigene-homepage-erstellen.net site speed

Since re-launching in the new environment, eigene-homepage-erstellen.net averages 0.4 seconds/page (fist bump).

The New Environment

Our old environment was a 2-server setup: 1 dedicated web server with Apache, PHP, and APC and 1 dedicated database server with MySQL. It utilized keep-alives, compression (gzip, image, code), expires headers, a CDN, and caching provided by W3 Total Cache coupled with APC. This setup held its own for quite a while, but it did not accomplish our goal and with big traffic growth in 2012, there was plenty of room for improvement.

Bring in the new players: Varnish, NGINX, PHP-FPM, and APC

We spun up 3 Ubuntu 12.04 servers with help from our new friends at Rackspace:

  • dedicated web server with NGINX, PHP-FPM5 and APC (4 GB RAM)
  • dedicated MySQL database server (4 GB RAM)
  • dedicated Varnish server (4 GB RAM)

NGINX

First, we set up NGINX. NGINX is an HTTP server with a modular architecture that serves static and index files, supporting accelerated reverse proxying with caching, simple load balancing, autoindexing, gzipping, FastCGI caching, and much more. It wins high praise for its performance and scalability.

With some help from Tobias Baldauf’s article, I configured NGINX for our WordPress install. I added gzip compression for common file types in /…/nginx/nginx.config, including the custom fonts our site uses. In our domain-specific configuration (i.e. /…/nginx/conf.d/portent.conf), I implemented pretty heavy caching for static files:

# Defined default caching of 24h
expires 86400s;
add_header Pragma public;
add_header Cache-Control "max-age=86400, public, must-revalidate, proxy-revalidate";
# Aggressive caching for static files
location ~* \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|odb|odc|odf|odg|odp|ods|odt|ogg|ogv|otf|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|t?gz|tif|tiff|ttf|wav|webm|wma|woff|wri|xla|xls|xlsx|xlt|xlw|zip)$ {
   expires 31536000s;
   access_log off;
   log_not_found off;
   add_header Pragma public;
   add_header Cache-Control "max-age=31536000, public";
}

I also added in the necessary directives to utilize PHP-FPM:

set $my_https "off";
if ($http_x_forwarded_proto = "https") {
   set $my_https "on";
}
#Added for php-fpm.
location ~ \.php$ {
   # Customizations for PHP-FPM
   try_files $uri =404;
   fastcgi_split_path_info ^(.+\.php)(.*)$;
   fastcgi_pass php5-fpm-sock;
   fastcgi_index index.php;
   fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
   include /etc/nginx/fastcgi_params;
   fastcgi_intercept_errors on;
   fastcgi_ignore_client_abort off;
   fastcgi_connect_timeout 60;
   fastcgi_send_timeout 180;
   fastcgi_read_timeout 180;
   fastcgi_buffer_size 128k;
   fastcgi_buffers 4 256k;
   fastcgi_busy_buffers_size 256k;
   fastcgi_temp_file_write_size 256k;
   fastcgi_param HTTPS $my_https;
   fastcgi_param REMOTE_ADDR $http_x_cluster_client_ip;
}

In the above code, the last two fastcgi_param lines configure SSL-terminating load balancing. In order to get the proper values for PHP variables like $_SERVER[‘HTTPS’] and $_SERVER[‘REMOTE_ADDR’], these definitions were required with our load balancing setup.

PHP-FPM

Next, I configured PHP-FPM for our environment. FPM stands for FastCGI Process Manager; it is an alternative PHP FastCGI implementation, and its features can be found here. Getting these values right required performance-tuning research. Please note that you should do your own research and testing. Here are some of the main definitions in /…/php*/fpm/pool.d/www.conf:

pm.max_children = 25

; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 8

; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 5

; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 15

; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
pm.process_idle_timeout = 60s;

; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
pm.max_requests = 500

php_flag[display_errors] = off
php_admin_value[error_reporting] = 0
php_admin_value[error_log] = /var/log/php5-fpm.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 128M
php_admin_value[date.timezone] = America/Los_Angeles

APC

Next, I brought APC into the fold. It’s a HUGE performance booster. APC stands for Alternative PHP Cache. And it works wonders. APC heavily optimizes and caches PHP code, storing it in shared memory and reducing the load on the web server. You can read all about its awesomeness here.

Again, you will want to do research and testing for your own environment, but here are the APC settings at the bottom of our PHP.ini file, /…/php*/fpm/php.ini:

[apc]
apc.max_file_size = "2M"
apc.localcache = "1"
apc.localcache.size = "256"
apc.shm_segments = "1"
apc.ttl = "3600"
apc.user_ttl = "7200"
apc.gc_ttl = "3600"
apc.cache_by_default = "1"
apc.filters = ""
apc.write_lock = "1"
apc.num_files_hint= "500"
apc.user_entries_hint="4096"
apc.shm_size = "256M"
apc.mmap_file_mask=/tmp/apc.XXXXXX
apc.include_once_override = "0"
apc.file_update_protection="2"
apc.canonicalize = "1"
apc.report_autofilter="0"
apc.stat_ctime="0"
apc.stat = "1"

You can boost your performance even further by setting apc.stat to “0”, but it will require you to flush the APC opcode every time you upload a new version of a PHP file. Because we are constantly working on our site, this wasn’t a very practical option. When apc.stat is set to “1” (on), it will check the file/code being requested against the cached version and update the cache automatically if there is a difference. A slight performance hit, but in my testing, not enough to warrant the hassle of turning it off.

Varnish

Lastly, we set up Varnish on its dedicated server. Varnish is a reverse proxy HTTP accelerator developed for dynamic, content-heavy web sites. Varnish caches pages in virtual memory, leaving the operating system to decide what gets written to disk or stored in RAM. Varnish becomes the top layer of the web stack. All traffic routes through it. Because Varnish keeps static content stored in RAM for fast access, the web server makes far fewer PHP and MySQL calls.
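
The heart of the configuration is simply telling Varnish where the web server lives; a stripped-down backend definition in default.vcl looks something like this (the address is a placeholder, and our real file carries a lot more logic):

backend default {
    .host = "10.0.0.10";   # internal address of the NGINX web server
    .port = "80";
}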

Challenges

Converting our Apache .htaccess file to NGINX configuration syntax nearly drove me nuts. This was my first time working with NGINX so there was a lot of research and trial-and-error testing, but NGINX config can handle anything that Apache can, so it was a matter of problem solving.

Another challenge was getting PHP to properly define variables in the new load balanced environment — mainly HTTPS and REMOTE_ADDR. The definitions for PHP-FPM found in our site-specific NGINX configuration file did the trick.

The last big challenge was getting the hang of Varnish. After a few days of testing, we came across an issue where some of our pages were being cached with our mobile styles, regardless of whether they were viewed on a desktop, tablet, or mobile device. When a page’s Varnish cache had expired, the next request would get cached. Occasionally, that first post-expiry request came from a mobile device, thus caching the page with the mobile markup. The solution is to configure Varnish to keep your mobile cache separate from your main cache. I added this vcl_hash function to our Varnish config, located in /…/varnish/default.vcl:

sub vcl_hash {
   hash_data(req.url);
   if (req.http.host) {
      hash_data(req.http.host);
   } else {
      hash_data(server.ip);
   }
   # ensure separate cache for mobile clients (WPTouch workaround)
   if (req.http.User-Agent ~ "iP(hone|od)" || req.http.User-Agent ~ "Android" || req.http.User-Agent ~ "SymbianOS" || req.http.User-Agent ~    "^BlackBerry" || req.http.User-Agent ~ "^SonyEricsson" || req.http.User-Agent ~ "^Nokia" || req.http.User-Agent ~ "^SAMSUNG" || req.http.User-Agent ~ "^LG") {
      hash_data("touch");
   }
   return (hash);
}

The function above adds ‘touch’ to the cache hash for any request whose user-agent matches the conditions of the if statement, thus keeping the mobile cache separate.

In Conclusion

It was a lot of work. But our new configuration loads twice as fast, and it doesn’t bog down if we have a big traffic day. We upped our speed to plaid.



UK & European Cookie Law Solution (Free Script)
June 1, 2011

A European cookie law that regulates the use of web browser cookies is now in effect in the UK. That cookie law is based on guidelines set by the European Union. In a nutshell, the law states that websites must get a user’s consent before storing cookies on their device (computer, mobile phone, iPad, etc). There’s a lot of confusion around the UK law, and the EU regulation:

  • Does it apply to companies based outside the UK?
  • It allows an exception for cookies that are ‘strictly necessary’. What counts? Shopping cart cookies do. But what about analytics? Login cookies?

There are no clear answers. The UK did promise to phase in enforcement over time. But if this law succeeds, more cookie regulation is on the horizon. If you own a web site, chances are you’re placing cookies on visitors’ computers, and you need to comply.

So, we’ve built a simple way to comply with the regulation, and it’s free. Read on to get the code and implement it on your own site.

Note: You use this code at your own risk. We’re not responsible if the UK finds you in violation of their law.

The Solution

Browser Cookie Solution Screenshot
Portent has come up with a simple JavaScript solution to help you comply with the new European cookie laws. With the help of a JavaScript location-detection script from GeoBytes.com, the script prompts a user in the EU to consent to the site writing cookies to their device. If the user consents, a cookie is written, set to expire in 90 days, giving the user full access to the site and the cookies it normally writes. If the user does not consent, the script redirects the user to a static cookie consent information page.

Update (8/7/12): We have learned that about 1 in 50 requests to GeoBytes.com may redirect to an undesired page. This is the result of using the free version of the geolocation service by GeoBytes. I never encountered this in my testing, but I never did any stress testing. To avoid this, you will have to sign up for the GeoBytes service, which seems to be somewhere in the range of $10 (USD) per 10,000 geolocation requests.

Browser Cookie Solution Screenshot

The Script: cookieConsent.js

cookieConsent.js code on Pastebin

Copy the script code above and create a file named cookieConsent.js. Then, make sure you include the JavaScript file on every page of your site that writes cookies. This will most likely be all of them, especially if you have analytics tracking throughout your site. Put this include in your header:

<script src="cookieConsent.js" type="text/javascript"></script>

To initiate the script, after page load, put this code snippet just above the end body tag on every page that writes cookies:

<script src="http://gd.geobytes.com/gd?after=-1&variables=GeobytesInternet,sGeobytesCountry,sGeobytesMapReference"></script>
<script language="javascript">cookieConsent(sGeobytesInternet,sGeobytesMapReference);</script>

Static Cookie Consent HTML page: cookie-consent.htm

Browser Cookie Solution Screenshot
This is the page the user gets redirected to when they do NOT consent to allowing cookies on your site. This page should not write any cookies (analytics, etc.), but should provide the user with more information about the cookies used on your site and the choice to accept them again. Here is an example of information for a site that uses cookies for tracking a user’s statistics on the site:

cookie-consent.htm code on Pastebin

You must talk to your attorney before you set up this page. Portent is not a law firm, and we’re not giving legal advice.

More Information

For more information regarding the EU Cookie Laws and suggested updates, see the following:

Demo

For an interactive demo, please visit cookieconsent.eigene-homepage-erstellen.net.

The script will require any user located in Europe or North America (for demo purposes) to consent to cookies being written on the site. If you’re having trouble implementing the script, try viewing the source code of the demo. Also, take a look at the comments below for additional assistance!
