UX is the New SEO

Unlike Orange is the New Black or 60 is the new 40, this adage is true. UX is the new way to optimize sites for search engines because Google said so. Yes, that benign search giant has decided that links are no longer as important as they once thought. Out of deep concern for its users that click on the search ads, Google has decided that user experience (UX) is a much better determinant of relevance. This left the entire SEO community gob-smacked and floundering, at least those who have not already stepped off the ledge.

Google picked UX because, unlike the laser-focused SEO community, the UX community is diffuse, scattered, and at odds with the allied disciplines of Information Architecture (IA) and Interaction Design, with all three lusting after Content Strategy’s current fame and fortune. It’s Game of Thrones without the bad food and bloodshed. Jesse James Garrett, one of the founding fathers of IA, disclaimed his child: “There are no information architects. There are no interaction designers. There are only, and only ever have been, user experience designers.” Yes, those black-turtleneck-wearing, poetry-slam-loving, head-in-the-clouds, persona-loving “designers” are now the dominant influence on ranking in search results.

What’s Wrong with Google Using UX for Ranking?

There are a lot of reasons why it is a bad idea for an autocratic tech giant to determine what constitutes a positive human experience. The two best that I can think of are:

Google’s help is not really helping. As their search engine has gotten smarter with personalization and query revision, we’ve gotten dumber at searching.

  • Searchers do not know “how to search,” with 56% constructing poor queries. So, Google comes to the rescue with the Hummingbird update, where it now rewrites our queries for us.
  • Ads to the right, images mixed in, knowledge graph placards, and the SERP control panel make for a very busy search results page. Searchers now get lost in this data soup, with 33% having problems navigating the search results, let alone finding what they need once they get to an actual website.
  • Most tragic of all, searchers today show a lack of interest in their search results. According to Using the Internet: Skill Related Problems in User Online Behavior by van Deursen & van Dijk, 55% selected irrelevant results 1 or more times, 36% did not go beyond the first 3 results on page one, 91% did not go beyond the first page of search results. No one browses any more. Searching is more about answers than information, more about tidbits than the bigger picture.

Data-driven design and engineered experience are not the same as human-mediated design and experience. Douglas Bowman described it best upon his departure as head of user experience at Google: “When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data… And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions.” True that!

I don’t know about you, but I don’t want to live with a Web where everything looks the same, where algorithms (often conflicting) determine what kind of a site experience I have based on what other people did for reasons that are not measurable by the algorithm. I believe that technologists, user experience designers, information architects, and content strategists must work together to ensure the best possible online environments for information discovery and consumption. And here’s how I think we can do it.

End Discipline-centric Xenophobia

There’s a lot of talk about working across discipline silos, yet it does not translate into action often enough. I hand off my designs to the production team and often find that changes have been made by necessity of the technology stack or platform. Let’s start projects with a shared end goal that consolidates my vision of how users engage with the site with the software constraints Web developers face in building out the site. And to the Web developers: just because it is cool doesn’t mean it is useful.

Let’s stop preaching to the choir and start educating our colleagues. Discipline-centric conferences should have 20% of their presentations on search or user experience issues. In a loose survey:

  • Confab 2013 (a major content strategy conference) had 28 sessions, with 1 on SEO
  • IxDA 14 (the international Interaction Design conference) had 128 sessions, with zero on anything remotely related to search
  • Convey UX 2014 had 40 sessions, also with zero on search-related topics
  • Search Engine Strategies 2013 had 50+ sessions, with 5 on content strategy and zero on UX

I am fortunate in that my Web career has spanned 3 disciplines. I started out as an information architect, migrated to Web producer, and then focused on SEO. Google’s switch from a link-based to a user-experience-based ranking model has been good for my career longevity. I manage to keep my intellectual claws somewhat sharp in the Web production arena with the help of my very smart and very much more technical colleagues here at Portent. I subscribe to blogs on analytics, follow software developers on Twitter, and read news and online journals outside of my core discipline. This helps me to ask the right questions, and that’s where cross-discipline work starts.

I created a 4-hour workshop, UX is the New SEO, to fill this education gap. It debuted at EuroIA 2013, is on the schedule for the IA Summit 2014 and Enterprise Search Summit 2014, and is offered here at the Portent offices on occasion. The goal of the workshop is to educate my user experience colleagues on the inner workings of search engines and how this technology “calculates” experience, my SEO colleagues on how behavior influences engagement, and my content strategy colleagues on how search engines determine what constitutes quality content.

Be Accountable

User Experience is a bit harder to measure for return on investment than keyword ranking. However, it can be done. Google’s Panda updates focus on content quality measured by click-through (does the user select the page from the search results), bounce rate (does the user do anything on the page), and conversion (does the user indicate that their information need has been solved). Relevant user experience data points from most site analytics programs would be: unique visitors, their social actions, the number of pages visited, the average time on page (exclude the bounces), bounce rate, exit rate, top content and top landing pages.
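To make that concrete, here is a minimal sketch in Python of rolling those data points up from a page-level export. The field names (sessions, bounces, total_time_on_page, and so on) are hypothetical; every analytics package exports its own flavor, so treat this as the shape of the calculation, not anyone's actual API.

```python
# Hypothetical page-level rows; real analytics exports use different field names.
rows = [
    {"page": "/pricing", "sessions": 200, "bounces": 90, "exits": 60,
     "total_time_on_page": 9000.0, "conversions": 12},
    {"page": "/blog/ux-new-seo", "sessions": 500, "bounces": 350, "exits": 280,
     "total_time_on_page": 21000.0, "conversions": 5},
]

report = {}
for r in rows:
    engaged = r["sessions"] - r["bounces"]  # exclude bounces from time-on-page, as above
    report[r["page"]] = {
        "bounce_rate": r["bounces"] / r["sessions"],
        "exit_rate": r["exits"] / r["sessions"],
        "avg_time_engaged_sec": r["total_time_on_page"] / engaged if engaged else 0.0,
        "conversion_rate": r["conversions"] / r["sessions"],
    }

for page, stats in report.items():
    print(page, stats)
```

However you compute them, the point is the same: these are measurable, reportable numbers, so "user experience" is not an excuse to stop being accountable.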

Before signing the Declaration of Independence, Ben Franklin is believed to have remarked: “We must all hang together, or assuredly we shall all hang separately.” So it is with us, our clients, and their users. As the thought-processing bipeds of the Web experience, SEO, user experience, information architecture, interaction design, content strategy, and Web development must work together, or capitulate and let Google decide what we find and why.

I’m for taking back relevance. You?

Link Juice est Mort


One of my favorite comedy bits is Lewis Black educating us on the difference between milk and water. Black tells us that there is no such thing as soy milk because soy beans do not have breasts (from which milk comes). Soy milk is a made-up name to sell a product.

“Link juice” – a term made up by Greg Boser, president of BlueGlass Interactive – is a misrepresentation of a small part of what used to be and is no longer a significant component of PageRank. Don’t believe me? Look up “link juice” in Wikipedia and you’re redirected to the SEO page.

[Screencap: “link juice” on Wikipedia redirecting to the SEO page]

Over the years, the term has been used by the SEO community to mislead clients into thinking that Google is able to monitor a flow of authority status between sites – status that diminishes in proportion to the number of links on the page.

Even when the Web was a mere 15 million pages, this would have been quite a feat. At its current trillion-plus page level, however, the ability to monitor anything across the entire Web defies logic. More importantly, we no longer need SEO-binkies like link juice to legitimize our craft. Link juice has to go and here’s why.

Link juice is SEO pixie dust

The SEO community latched on to the term early on as a poor substitute for what turns out to be a complex part of the PageRank algorithm. Search results for “link juice” on any search engine produce reams and reams of self-referential fear, uncertainty, and doubt (FUD) from various SEO practitioners who lack a clear understanding of the Hilltop Algorithm. This algorithm introduced the concept of authority status for certain pages based on links from other authority pages on the same subject.

The closest Google has come to acknowledging something that looks like link juice is from front man Matt Cutts: “Probably the most popular way to envision PageRank is as a flow that happens between documents across outlinks” (Matt Cutts’ blog, June 15, 2009). Even Rand Fishkin at Moz backs away from the juice flow element of links: “the idea of a ‘leak’ of juice through adding additional links to a page may not be accurate (at least, according to the original Google PR formula)…”

Authority and not the juicy kind

From the get-go, PageRank was a flawed model due to Larry Page’s assumption that Web citations would be as altruistically awarded as research citations in academia. Not so for a commercialized Web.

Even the inclusion of a Random Surfer dampening factor could not deter the SEO community from “gaming” the PageRank system early on. This began the Road Runner/Wile E. Coyote relationship between the search engines and the SEO community.

  1. Google launches an update that includes the Hilltop Algorithm and introduces the concept of page authority. SEO community responds with refined link acquisition schemes from search engine-defined “expert” sites.
  2. Google refines Hilltop with Topic Sensitive PageRank that augments PageRank with the capacity to match the topicality between pages, e.g. authority within a certain concept area. SEO community responds with content farms and publishing thin content on trending search topics.
  3. Google launches Caffeine, a complete shift in how Google crawls and indexes the Web. No more monthly dances. Lackluster response from the SEO community, which does not care about indexing – or dancing, for that matter.
  4. February 2011 gets off to a Le Mans start when J.C. Penney and Overstock.com are outed for mind-blowingly egregious link acquisition and parasite hosting schemes. An extremely peeved Google responds with an early release of the Panda update, the first ginormous step away from a link-based relevance model to one that is user experience-based. Evidently they are sensitive when an international publication like the New York Times exposes their lack of complete control over the Web and search results. Who knew? The SEO community FREAKS OUT as websites plummet from ranking for no apparent reason. OK, crappy thin content that searchers don’t like is the reason, but who wants to admit to that? There is much hysteria as some SEOs try to find a user experience professional to talk to. The powers that be down in Mountain View do the happy dance.
  5. Google puts the final stake in the link-driven relevance model with Penguin. The SEO community is gob-smacked, unable to come up with a workaround better than bended-knee pleading to be re-included in Google’s index. Like a phoenix rising from the ashes, a cottage industry of bended-knee re-inclusion experts emerges.
  6. Just to make sure the link-based vampire is really dead, Google shuts off any keyword referral information from Google Analytics (KAPOW!) and launches Hummingbird, a diabolical turn towards query reformulation, semantic mapping, and content strategy. Completely baffled, the SEO community talks among itself about the impact of Hummingbird while still trying to figure out the pizza place/restaurant example that Google used in the announcement. Even if link juice were real (which it isn’t), the now desiccated corpse of links no longer has juice.

Let’s put away our SEO binkies

We don’t need to make up stuff to make ourselves sound legit. SEO is a known and highly desired service. We do need to start reaching across silos and working with our user experience and content strategy brethren because optimizing websites is now a team effort.

There is no such thing as link juice. There is page authority, of which links are just one item on a long list of contributors. And, while we are getting used to a link juice-less world, let’s put keyword optimization on that boat. It is not about keywords any more. It’s about concepts and context.

And let’s ditch the too-many-links-on-the-page-bleeding-link-juice chestnut. The search engines are all over that. In their yearly conference on Adversarial Information Retrieval (IR), they have discussed this and now designate global and footer navigation links as nepotistic links, recognize them as spam, and ignore them.
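As a toy illustration of why sitewide links are so easy to discount, here is a crude heuristic of my own (not any search engine’s actual classifier): a link that appears on nearly every page of a site is navigation, not an editorial endorsement. The threshold and the sample site below are made up.

```python
from collections import Counter

def sitewide_links(pages, threshold=0.8):
    """Flag link targets that appear on more than `threshold` of a site's
    pages -- a crude stand-in for global/footer navigation detection.

    `pages` maps each URL on the site to the set of URLs it links to.
    """
    counts = Counter(t for outlinks in pages.values() for t in set(outlinks))
    cutoff = threshold * len(pages)
    return {target for target, c in counts.items() if c >= cutoff}

site = {
    "/": {"/about", "/privacy", "/post-1"},
    "/post-1": {"/privacy", "/post-2"},
    "/post-2": {"/privacy", "/"},
}
print(sitewide_links(site))  # {'/privacy'} -- almost certainly a footer link
```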

Let’s banish link juice from the lexicon of SEO.  Like the binkies of our childhood, we don’t need it anymore.  Caffeine, Panda, and the most recent Hummingbird infrastructure changes have brought about a more dynamic, contextual, and semantic search landscape.  We have a lot of work ahead of us.

Save Time, Money and Bloodshed with Soft System Discovery [VIDEO WEBINAR]
This webinar was given September 19.

Transcript:

Interviewer: Hello. And welcome, everyone, to the next installment of the Portent Webinar Series. My name is Ariana, and I will be your moderator for today’s webinar, which is breaking down barriers to drive success with Marianne Sweeny, our very own senior search strategist here at Portent.

We would love it if you would join in on this webinar. There are a couple of ways you can do that. You can ask questions within the GoToWebinar questions window, or you can tweet your questions using the hashtag #PortentU. So that’s hashtag P-O-R-T-E-N-T and the letter “U.”

And just so you know, in case you miss out on any of today’s webinar or want to review it later – don’t worry. You will receive a follow-up email, which will contain a link to this recorded webinar, a SlideShare link to the presentation slides, and a Bit.ly link bundle, which will contain all of the links to the resources that Marianne will be referencing in her webinar today.

So without further ado, please join me in welcoming Marianne. Hey, Marianne.

Marianne Sweeny: Hey. Hi, Ariana. Hello, everyone. Thanks for joining us for this somewhat unique, in terms of previous webinars from Portent, presentation. There’s a little schizophrenia going on here. You’ll notice there have been a couple of titles.

We’ve gone back and forth about how best to describe this. And I think both titles do. What we’re going to talk about instead of sort of specific PPC or analytics or SEO, as we’ve discussed in the past – we’re going to talk about the project itself, which encompasses all of those factors.

This is the hashtag that Ariana mentioned. So please tweet your questions in – or comments. And this will be the link bundle that you can download. You’ll find three resources that will give further information about how to do a client discovery using soft system methodology.

And this is me, Marianne. This is where I work at Portent. And they have given me a wonderful home, at which I am able to indulge in the magical thinking that led me to create this presentation.

I think we’ve all been involved in projects that have not gone quite as expected. And so the goal of this presentation and the goal of my introducing you to soft system methodology and client discovery is to make sure that we alleviate as much of the pain from project work as possible.

So let’s get started. In the early days of exploration, when mapmakers came to a part of the world that they did not understand, what they would do is just draw a dragon and say, “Here be dragons.” And that was basically a warning to everyone that they shouldn’t go there. There was danger there – and pain and suffering.

We might find that that is how we encounter many of our projects. There is a similar amount of unknown, and we move forward through that. Most projects have three players. There’s us – us being the delivery team. There’s them – them being the clients. And there’s their target, which is their customer or whomever they want to reach with whatever we are designing for them.

And every project starts, ostensibly, with a plan. But in actuality, what we find is that projects also have unstructured problems. And it’s hard to plan around unstructured problems for the simple reason stated here: they tend to not be static. They tend to have multiple individuals feeding into them, multiple stakeholders.

Oftentimes, these stakeholders have conflicting interests, which are not always revealed or articulated to us at the beginning of the project. There are uncertainties of many types, way too many to articulate in a slide. And on top of all of that are the various intangibles that we pick up along the way.

This is where client discovery comes in handy. It is during these sessions that you are able to sit down with your client and find out as much as possible about not just the project – “We want to build a new website. We want to redesign our homepage” – but the influences and the issues that led up to the decision to take the steps that brought you into the room.

Client discovery today oftentimes is limited to a sales contact, where your sales team will go out and make the initial contact and develop the client, and ultimately land the project. There might be a questionnaire that individuals on your team or in your company contributed questions to, that are submitted to someone at the client end, who will fill it out.

You’re lucky if there’s a client meeting. And this oftentimes takes the form of a kickoff meeting, where the delivery team will articulate what their process is and how the project will progress to completion. And then after that, there are emails and phone calls and more emails.

There might be a couple of internal meetings – standup meetings. There might be a client meeting or two. Always more phone calls, more meetings, and meetings about the meetings. And then there’s email, email, email.

And so client discovery in these situations oftentimes looks like this: everybody sort of standing around in the field, in their own place, doing their own thing – not quite sure where we’re going, but waiting for instructions or moving ahead without instructions.

And the outcome of the kickoff meeting, the discovery meeting, whatever you refer to it as, is the finished questionnaire, a static document that is often placed on a share, that people can refer to throughout the history of the project, even though time may have passed, and the individual who filled out that questionnaire may no longer be involved in the project.

You also might get minutes from the meeting that you had, whether you call it a kickoff meeting or a client discovery meeting. Oftentimes, there’s the PowerPoint that the delivery team put together, that articulated their end of the project and how it would progress – in their minds, how it will progress. And there’s always that wonderful, expensive box lunch, where you get the big cookie.

Roadmap? Not so much. There’s rarely, if ever, a roadmap that comes out of a kickoff meeting, or at least one that provides the level of detail and shared understanding that a true client discovery would produce. This is exactly what I’ve asked myself many a time: “Where did I go wrong? Where can I find this?”

And that’s why a discovery session is so important. It’s important because engineers, those individuals who are directly involved in the project and oftentimes responsible for the platform or the environment, the technology environment in which the end product will reside and through which we have to work – engineers think that user behavior is predictable. And they have to think that way because that is how coding works best – is if you have a set of consistent and predictable scenarios, that you can then develop a solution for that particular situation or scenario.

Engineers believe that processes can be automated, and they also believe that the end result is the discrete and individual transmission of an output of some sort. And that is usually information, as many of us are working in the world of information technology.

But in the real world, we know that information behavior is not consistent; it’s interpretive. I take what I have in my surroundings, in my emotional state, in my past behavior, and I move that forward, gathering information, so that it changes in relation not only to the unique situation, but to what I am building as I go along.

User behavior changes according to the context – where they are, what constraints they have. And the information is not just the discrete output from the system. It is what I had going in that led me to engage with the system, and it is how I process that information, upon having finished with my engagement.

So system thinking sees the world as very linear, sequential – if then, if then, if then. It is object-oriented, meaning it’s usually encapsulated or contained within a certain set of: “This is what we want. These are the requirements. This is the deliverable. And we’re done.”

It assumes that the users are very mechanistic and passive, and so there will be served data or information or experience, and that these will be discrete items in a contained situation. Well, outside of Star Trek, the real world is chaotic and subjective. You do not have a mechanistic, passive user. You have a constructivist, active user. It may not seem that way within the element that you are designing or within the website that you are working on.

And this user has a holistic view that encompasses elements and criteria that are not articulated by the user, or may exist outside of the environment or your project. And there’s also the internal cognition that goes on with users, and the fact that their experience with what you are doing can span many situations.

This is where soft system methodology comes in best. Soft system methodology is an idea that we would develop better systems, and we would develop better projects if we had a better understanding of the scope, the nature, and the impact of the system for us, the user experience professionals.

And the IT end would understand better the users that they are developing for, if they had a closer connection with that mindset. Soft system methodology was designed by Peter Checkland about 30 years ago. And what he noticed, as exemplified here, is that engineers kept struggling with how to incorporate this human element into their design system, and found that the best way to do so would be to integrate the technologists and the user experience people and the client and the stakeholders – to integrate them very, very early in the project.

So the idea behind soft system methodology is that you use this process – and we’re going to get to the specifics of that in a minute – you use that to get information about various system components for various members and teams that are part of the execution. You identify all of the systems that are involved in the environment that you’re working in, and in delivering what will be considered project success.

And that doesn’t just mean technology systems; it means user systems also and client systems. And then you use this information to structure an analysis and design a process that facilitates mapping the information that you’ve gathered to the issues that you’re setting out to solve.

So Stage 1 of soft system methodology would be to build a consensus model. And the way that you do that is to get all parties in a room. This is not always possible, but strongly encouraged, and beyond the scope of an hour-long meeting of the delivery team saying, “This is how we’re going to do it.”

What you want is to get people in a room and really talking about why this project is moving forward, what their role is – and then find out how their role, how their participation is going to integrate with other individuals on the client side and on the project team side. You want to express these issues in a way that everyone can understand and consume.

So that means discussion, hopefully reaching consensus, and then building a model. And it can be those little Lego models that we’re all familiar with, or it can be a sketching model, or it can be a PowerPoint model, or a Visio model – whatever works for everyone, so that everyone has a basic understanding, and can refer back, consume, and change, as the project iterates.

You come to an agreement with regard to: “What is it that we’re going to work on? How is it going to change? And what is it going to look like at the end?” And with that agreement, you now can design action plans that will carry out the ultimate end result transformation.

And this is all done through a process of accommodation in the room with everyone, not the accommodation that comes from when I deliver my user experience designs to the IT department, and the IT department looks at them and says, “We can’t do this” or “Really? Wow. This is going to add another 60 hours of dev work to the project.”

And then you execute. You execute on the action plans, and you develop and then proceed on purposeful activities that everyone is aware of, that map to the same end result.

Soft system methodology uses the acronym CATWOE. And these elements are the customers, who benefit from the system that will be transformed, the actors who are going to facilitate the transformation, and the world views that give the transformation meaning. Those world views come from the individuals on the client side who have initiated the project and have expectations of what the end result is going to look like, from the project team, who are bringing expertise to the project that will make it a success, and also, as best possible, from the end users, those people who are going to experience the system, experience the transformation.

It also involves the owner. There is always that individual who is revealed, oftentimes too late in the process, who is the real owner of the project. We always want to make sure that we know who is the person – who is the go/no-go person for the project. There’s one. And that individual hopefully will be made known to everyone on the project, and will be kept aware of what is going on, so there will be no late surprises.

And, finally, the environment – the constraints, both system- and people-powered, that will influence the outcome and success of the project. “So what,” you say? Hmm. Well, using a soft system approach, using a discovery method early on in a project will take project participation from this – “I am looking at my discrete area of the project” – to this – “I am looking at my discrete area of the project in relation to the project as a whole.”

Ian Lurie, my boss here at Portent, said something that resonated with me and really influenced me setting out to learn more about soft system and adapting it into what became this webinar. And the comment that he made, that resonated with me, was: “Sometimes you have to spend $15.00 to save $45.00.”

I did a version of this presentation for the company – my company here at Portent – earlier in the week. And I started it out by asking: “How many individuals in this room have gone over their time budget due to unexpected consequences involved in the project like: I didn’t have all of the information. Or new information became available?” About 80 percent of the hands went up.

So this is why creating some sort of a client engagement early on, whether it’s two hours, four hours, eight hours, two days – whether it’s an all-in person or some on the client side and some associated through Skype or Google Hangout – this is why it is so important. Because I guarantee you that two hours, four hours, eight hours in the beginning of the project is going to save tens, if not hundreds of hours, both on the client side and on the delivery team side down the line.

So let’s take a closer look at soft system methodology and client discovery. What we want to do is – we want to discover the client world views. We want to learn more about the environment from which this project has emerged.

And, more importantly, we want to learn more about the cultural and political influences that are a part of this project, which may not be revealed in a kickoff meeting. It may not be revealed in a sales call. But it will definitely impact the success of the project.

We want to reveal the interacting systems within the organization – both our systems and the client’s. I’ve had two experiences where it was revealed quite far into the project that – in one situation – the client was upgrading to SharePoint 2010 for their website hosting.

That has a very strong influence on my work. And it certainly had an influence on the work done previously and on the work done moving forward. There was another client who it was revealed, down the line, could not move forward on a lot of the recommendations that we were making because their hosting platform was ancient. Their content management system was so ancient.

So basically in that situation, I have been doing a lot of work that was not useful to the client. Because they couldn’t do any of that. Again, client discovery – had we known all of this moving in, then I could have adjusted my focus on their issues, to accommodate their system.

We also want to define the user purposeful activity, whether that user is the IT professional at the client side or the customer for the website that we’re working on. We want to clearly define the various user purposeful activities, and make sure that everyone is aware of the problems that we’re trying to solve, and is part of designing those solutions.

And, finally, we want to shift our thinking. We want to shift our thinking away from optimizing for the technology, whether that’s the rich media components that we’re putting together – the video stream, the platform, the content management system – and we want to start looking at the end user experience. What is going to happen when somebody looks at that page?

What is going to happen when they click on the link, more importantly than what happens from the system end, when the link is clicked. And last, but not least, we want to understand that nothing can be carved in stone when we are working on technology solutions and digital solutions.

I have some friends who commenced building a house. He’s an engineer at Boeing. They were confident. They were confident, based on the project plan they started with, that they would be in their house four months ago. As you can see, they are not in their house.

And, in point of fact, they are not going to be in their house until mid-November. And it has to do with the fact that because one workman slipped, another workman couldn’t start. And he had other jobs backed up.

So this happens throughout many projects, whether it’s home construction or site construction. We have to be aware. We have to be flexible, so that this can become this, so that people are working together, all of the components are working together, and we have a very clear understanding of environments and constraints.

And what are the soft system discovery outcomes? Well, the client specific CATWOE – the outcomes that we want – are: who are the customers? All of the customers – with the client and then with their customers, who are going to be using the project that we are working on – the outcome, whether it’s a website or a product or an application.

We want to be really clear about who is doing the work. And what are the constraints that they are experiencing? And how can we alleviate them, as a team?

We want to make sure that we understand completely all of the world views that are governing this project – not just the project manager and not just the point of contact within the company, and not just the SEO, and not just the content writer, but all of the world views – from the stakeholders, from the go/no-go guy, from the marketing department, from the IT people, and also, as best possible, from the individuals who are going to be using the site, the app, the project outcome.

We want to make sure that we know who owns the project. In previous occupations, I have worked at length with some very large enterprise clients. And there’s one, in particular, where at another agency, we would assign – I wouldn’t call it a penalty. But we would basically boost the job estimate by 15 percent. We would put a 15 percent contingency in there.

Because inevitably with this client, as you got towards the end of the project, when you were getting ready to deliver, some mysterious uber stakeholder would come out of the woodwork and say, “What? I hate that. And I need you to make all of these changes right now” – changes that were not involved in the original bid, and also changes that were likely not going to be approved in a change order. So we would just add a contingency fee.

Not everyone can do this. The best way would be, as I said, to make sure that you are aware of the entire chain of command for your client, and that those individuals are aware of what you’re doing, and that their world view is incorporated.

We also want to make sure that we are fully aware of all the environmental influences, both at the client’s end and ours, and that they’re aware of the environmental influences and constraints at ours. We want to make sure that we know all of the systems involved – hosting and dev and content and people-powered. And we want to make sure that we are all aware of what inputs are going to be transformed, and what those outputs look like post-transformation.

Once that is achieved, we can then start with purposeful activity that will result in useful outcomes. We will know whether our models of the real world fit or stand out – that the models that we have developed will fit in the real world scenario that we are going to have to work in.

We will have shared amongst us a list of agreed upon milestones and tasks that contain as few surprises as possible. And we’ll have meaningful action plans that we can revisit and iterate and adjust as a unit.

And what that does is – it will save time. And for our clients and for ourselves, time is something that we cannot get more of. We cannot afford to waste it. We’ll save money because we won’t be having to redo work. We won’t be offering suggestions or guidance that is not useful. We will have very satisfied clients.

And, more importantly, they will have very satisfied customers. And, in the end, we’re going to have an end result where everything lines up perfectly, and you don’t have a number of random issues that tend to derail projects, or surprises that tend to absorb time that could be better spent elsewhere.

And that is how I would do a soft system methodology project, and encourage all of you to do the same. So this is, again, the link bundle. As you can see, there’s a little font conflict going on there. Evidently I didn’t fully understand the system that was going to be displaying the PowerPoint.

Interviewer: Thank you, Marianne. That was very educational. So now don’t forget – all the resources that Marianne referenced will be in the bundle, which you saw in the previous slide. And the bundles are case sensitive, so I think it’s all lowercase on the end part.

We are opening it up for questions and answers. So remember to put your questions in the webinar question box or tweet us at #PortentU.

So let’s see what we’ve got. They’re right here. Okay. Any questions? Let’s see. We’ve got one question here from Rebecca. “How do you envision incorporating soft system methodology across a full marketing team? How would you decide which departments are involved? Is it case by case?”

Marianne Sweeny: You know I think it is case by case. And thank you for the question, Rebecca. I think it is case by case. And you will notice that this presentation was somewhat shy of prescriptive guidance like: “Do this. Have this meeting. Have all of these components in the meeting.” It is really up to the project itself.

The initial contact is made through sales, and they would definitely be able to provide someone who you would then want to talk to about setting up a sustained meeting. I suggest anywhere from two to four hours. And segment that, so that individuals at the client end will be able to come and go as needed.

It’s really hard to get IT guys to stay in a room for four hours. But you can certainly get them for part of the meeting. So starting with the point of contact, you would then roll out and say, “In terms of this project –” Let’s say we’re doing a website redesign. “Who is currently involved in maintaining and supporting the website?”

And that would likely be marketing, corporate communication, IT, QA. “And who, at the delivery side, is involved in making this project successful?”

So at the Portent end, it would, again, be content, sales, SEO, analytics, so that we can measure our success. Getting all of those components – taking it from the component level of who is involved – currently involved in the project – who need to be involved. That group then starts to build out and say, “Now who is the individual, the overarching stakeholder for this project? And what are their concerns?”

And start from there, and hopefully get somebody on the client site, or get clients to come to your offices, so that there can be lengthy discussions about their culture, why this project came up. What was behind – what’s the pain point? And why did it get to a certain point, that then made it critical to move forward with the project?

Interviewer: Okay. Thanks so much, Marianne. Question from Sandra. “Are there any companies or organizations that are doing SSM well?”

Marianne Sweeny: I would say that – you know my experience at Microsoft says that there are groups within Microsoft that do soft system methodology very well. It is certainly a commitment. There are other agencies that are offering client discovery, that would be utilizing some of the soft system methodology that we’ve talked about here.

So there are two parts to this. One is there are agencies – the big agencies are usually the ones that have the luxury of it – that can do client discovery meetings that are fairly elaborate and garner this information – the delivery team being able to inform the client, and then being able to extract information from the client.

The soft system methodology part is really, to me, the most critical. Because it incorporates the IT, the systems engineers in, so that you are not only getting a client agency shift in information, but you’re also getting a technologist/non-technologist transfer of information. I don’t know of anyone who’s specifically doing that.

Interviewer: Okay. Next question from Mary. “How does soft system methodology compare or contrast with agile methodology? Seems like it grew out of agile methodology.”

Marianne Sweeny: Soft system, in terms of this, is specific to, as I said, information transfer and gathering. Agile is really a project management technique. So it’s very specific to project management, where the project is divided into discrete, atomistic components that are called “sprints.”

So basically what you say is: “We are going to design the user interface for the homepage.” That’s a two-week sprint, and it involves the SEO. It involves the content people, and it involves the Web devs.

And we’ll have these meetings, and people are going to be working in very close proximity, so that they can participate and influence work as it goes – as the work progresses. So agile would certainly be a part or a follow-on for soft system methodology, and it likely would be used by the dev teams.

Once the soft system discovery reveals sort of the path and the action plans and the purposeful activity, then agile, as a form of project management, would come into play. I mean it’s the preferred way of project management now – the other being waterfall, for those of you who are not familiar with agile, where you have sort of a more sequential: “I finish my work; I hand it off to you. You do your work; you hand it off to someone else.”

Interviewer: Whereas agile is teams working in –

Marianne Sweeny: It is. Everyone is working in a sprint. So there’s no hand-off. You’re basically working together, and you complete that one discrete action.

Interviewer: Yeah. That’s great. Elizabeth asks, “What are the barriers to entry?”

Marianne Sweeny: Probably the barrier to entry for many delivery teams for this would be taking the time to build the soft system discovery infrastructure. And there is some time. Using Ian’s maxim, we’re going to spend $15.00 to save $45.00.

Delivery teams would need to spend some time, whether it’s 4, 6, 8, 16 hours, to develop the collateral that they need to carry out a successful soft system client discovery session. That collateral could be repurposed and tailored to further clients. So you would leverage the time and effort expense in there.

But what you save – going back to my experience yesterday, asking my coworkers, “How many of you have gone over your allotted time because you had to accommodate surprises, or information, or a mysterious stakeholder, or work that wasn’t as useful as it should be?” In the end, you end up saving much more.

Interviewer: That makes sense. It’s a lot of the planning piece. If you spend more time planning, you have fewer hiccups later on. Right?

Marianne Sweeny: Exactly.

Interviewer: Another question from Elizabeth. “What resources do you recommend to find out more for implementation? Are there any classes or things?”

Marianne Sweeny: There’s really not. Interestingly enough, soft system methodology is sort of coming into a Renaissance. As I mentioned before, this was really developed in the sixties, when they were doing more project development engineering and such. And that’s where Checkland came up with soft system.

Because he said, “We’ve spent so much time developing this product. And then we see customers who are banging it on the desk because it’s not working the way that they want it to work.” And certainly anyone who’s used Windows 8 would have appreciated a soft system approach to the development of that. So I think that has created somewhat of a resurgence in soft system methodology.

Most of the information about it is from information systems development – Bell Systems and such. And so this – the magical thinking that Portent has afforded me to do – is one of the few instances I know of, of taking this and mapping it to something outside of an IT or an information systems environment.

And I’m very excited about it, and I’m hoping that you all will then take it, and incorporate it, and use it, and adapt it, and iterate it for your environment. In the link bundle, I have included an article that was written by Checkland in the year 2000, where he basically sort of revisits, after 30 years, soft system methodology, and applies it to this sort of new environment that we were at then – a technology environment and a Web-based environment.

Interviewer: Okay. Another question here: “I have a small business, and my clients have small budgets. Is there a way to scale this down in time to me and price to them? Or is it better for the big guys?”

Marianne Sweeny: No. I think that it absolutely should be adaptable to minor engagements, and one that you could split out and offer as a standalone. Most of the agencies that I’ve worked with have done that.

They’ve treated it as a standalone – an initial touch, an initial contact with the client, knowing that the client, at the end, would get this collateral, which basically defines what it is that they want to do. It’s a very useful exercise for them.

So for you, if you have a very small business, I would say, again, design the collateral that you need for a successful discovery session. And that would be: “What are you bringing to the project?” That sort of: “This is how I work” and whatever.

And then accommodate a discovery session, whether it’s over the phone, if you can’t afford to travel or don’t want to travel, or over Skype, where you have already developed a series of questions, that you can engage in a discovery with the client, and knowing what we know now about soft system – that you are asking questions about the systems and the subsystems, and the various individuals that are going to be part of the actors, that are going to be part of the project, that you can get all that information out early on and in an efficient way.

Interviewer: Are there any other questions out there in the webinar world? Another question popped up. “So after you’ve gathered all this information, where do I start?”

Marianne Sweeny: Well, I would start with the next project that comes across your desk. I mean literally, if you’re with an agency, I would select a client – an upcoming client – that you can use as a beta test for this, on what works and what doesn’t work, and then start there. It’s never too early to start sort of looking at: “How have I dealt with projects in the past? Where have they gone off the track? And how can I prevent that from happening?”

And then applying the techniques that we have talked about here – the consensus, the modeling, the iteration, and, most importantly, the incorporation of world view. So that’s where I would start. I’ve given you a really good article in the link bundle that sort of goes into more detail.

And then: “How can I do this? How can I make my outcomes more meaningful than just this dead, static questionnaire that was done years ago?” And don’t forget the big cookie and a PowerPoint.

It’s all about a continuing sort of group association focus on the end result, and making sure that everybody really understands and keeps in their mind what that end result is, and what role that they are playing, as a brick, in creating what will, in the end result, be a very strong brick wall.

Interviewer: Awesome. If you don’t have any more questions in the next couple of minutes, we’ll go ahead and conclude. But remember, you can put the question in the webinar question window or tweet it to us. Anything else you want to add, Marianne?

Marianne Sweeny: I think that we all want to deliver the best work for our clients’ interests, and do that in an efficient way. And I just believe firmly that this approach, the consolidated approach, the one that includes incorporating everyone’s world view early on, is critical to doing that.

And it’s going to help us deliver better work more efficiently. So I hope that this has been useful for you, and I hope that you all go out and learn more about soft system methodology and encourage your workplace to use it.

Interviewer: Okay. Awesome. Well, thank you, Marianne. Don’t forget – if you have more questions for her, you can tweet them directly at @MSweeny on Twitter. Make sure you use the hashtag #PortentU.

Just a reminder for today’s amazing webinar from Marianne – the presentation slides and all the links will be coming your way in a follow-up email. Join us next month, in October, for a webinar from our director of SEO, Josh Patrice, who will be attending the conversion conference next week or early in October. And he’ll be sharing what he learns from that. So it’ll be very interesting.

Details about the webinar can be found on our webinar tab on our Facebook page, which is Facebook.com/Portent.Marketing. You can also find it on our homepage. Thank you, everyone, and have a great day.

Marianne Sweeny: Thanks, everyone.

[End of Audio]

Get Out Your Hand Sanitizer: The SEO You’ve Come to Love is No More
[Image: hands in plastic gloves on a keyboard next to some hand sanitizer]

The days of sanitary SEO are on the wane. Oh sure, there will always be tools to hide behind and technology barriers to raise. As long as site speed is a factor, we will be able to amuse ourselves in the solitary pursuits of caching, minifying, and optimizing consolidated JS and CSS files and images. However, the serious ranking mojo is found in offering a “good user experience,” whatever the heck that means.

Google knows what it means, but they’re not telling. They are not going to repeat the mistake they made publishing the PageRank paper in 1999. First, they spun the Internet marketing and optimization communities into butter with their super smart algorithms that seem to understand whether two linked pages are really about the same thing. Then Google put a bird on it with Panda, the “user experience” update. Now we have to talk to actual people, stakeholders, and users. No more hiding behind those spreadsheets and keyword tools. Dangnabbit!

It is not all bad news. There is a new way tech-head SEO can cross its own digital divide to collaborate with User Experience without bloodshed or eye-rolling – it is called Soft System Methodology (SSM).  Okay, SSM is actually an old way. Developed by system engineer Peter Checkland in the early 1960s, SSM was intended to better incorporate user behavior at the outset of system design to enhance adoption once the system is in place. Checkland’s SSM is best encapsulated by the following maxims:

  • All systems are made up of subsystems and are themselves part of larger systems
  • Understanding subsystems leads to more efficient system development and operation online
  • Users are a subsystem
  • Stakeholders are a subsystem
  • System design and development is an ongoing process

Now, I know what you’re thinking:

“They legalized marijuana in Washington State and that is where this must be coming from.”

Au contraire, mon ami. This is coming from a brainstorm that I had (thanks to Portent colleague Katie Fetting’s 7.5 Tips for Becoming a Brainstorming Genius webinar). SSM is Google’s Panda update(s). With Panda, Google (a system) incorporates user behavior (a subsystem) into the sort and display of search results (another subsystem) as part of the all-encompassing information retrieval system that is search.

The tools that Checkland gave us are: CATWOE, a checklist of sorts, and the Experience Learning Model. In ensuing blog posts, I will delve more deeply into how the SEO community can apply SSM to form a new, better, and search engine-approved method of optimizing sites. There will be something for the propeller-heads, something for the bespectacled content strategists, something for the black-turtleneck-wearing user experience designers, and something for those digital-Lincoln-log-loving information architects.
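To make CATWOE concrete in the meantime, here is a minimal sketch of the checklist as a data structure you could fill in during discovery. The six fields are Checkland’s; everything in the example instance (the names, the project, the constraints) is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class CATWOE:
    """Checkland's CATWOE checklist, one instance per proposed transformation."""
    customers: list[str]    # who benefits from (or suffers) the transformation
    actors: list[str]       # who carries the transformation out
    transformation: str     # the input -> output change the system performs
    worldviews: list[str]   # the perspectives that give the transformation meaning
    owner: str              # the go/no-go person who can stop the project
    environment: list[str]  # constraints: platform, budget, politics

# Illustrative example for a site redesign:
redesign = CATWOE(
    customers=["site visitors", "client marketing team"],
    actors=["UX designer", "SEO", "web devs", "content strategist"],
    transformation="legacy site -> redesigned site that searchers can actually use",
    worldviews=["client: more leads", "devs: maintainable stack", "users: fast answers"],
    owner="VP of Marketing",
    environment=["ancient CMS", "fixed launch date", "limited dev hours"],
)
```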


The only folks left out are the fly-by-night, SEO-1-hour-a-day carpetbaggers. The sun is setting on that empire and rightfully so.

Stay tuned.

3 Google Algorithms We Know About & 200 We Don’t
When I meet with clients or present at conferences, I am always asked: “How do I rank high on Google for (insert keyword-phrase-du-jour)?” I give the standard answer: “Only the search engineers and Google can tell you and they aren’t talking.”

Inevitably, the questioner looks dejected, mutters a slur on my credentials, and walks away.  I scream silently in my head: “Don’t kill the messenger because we are all hapless Wile E. Coyotes chasing the Larry and Sergey Road Runner with zero chance of catching them, no matter what we order from ACME!”

Thirteen years ago, before the Cone of Silence dropped on Google’s method of operation, we got a glimpse of the method behind their madness. This, combined with the common knowledge of the foundational tenets of all search engines, gives us some idea of what’s going on behind that not-so-simple box on the white page.

In this post, I am going to explore the 3 algorithms that we know for sure Google is using to produce search results, and speculate about the 200+ other algorithms that we suspect they are using based on patent filings, reverse engineering, and the Ouija board.

What is an algorithm (you might ask)?

There are many definitions of algorithm. The National Institute of Standards and Technology defines an algorithm as “a computable set of steps to achieve a desired result.”  Ask a developer and they will tell you that an algorithm is “a set of instructions (procedures or functions) that is used to accomplish a certain task.” My favorite definition, and the one that I’m going with, comes from MIT’s Kevin Slavin’s TED Talk “How Algorithms Shape Our World”: algorithms are “math that computers use to decide stuff.”

3 Google algorithms we know about

PageRank

The most famous Google algorithm is PageRank, a pre-query value that has no relationship to the search query. In its infancy, the PageRank algorithm used links pointing to a page as an indication of its importance. Larry Page, after whom the algorithm is named, borrowed the academic citation model, where papers citing another paper serve as endorsements of its authority. Strangely enough, academia does not have citation rings or citation-buying schemes the way the web does with links. Warning: scary, eye-bleeding computational math ahead.

[Image: the initial PageRank algorithm]
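
The formula itself, from the original Brin and Page paper, is compact enough to reproduce here in LaTeX: d is a damping factor (usually set around 0.85), T1…Tn are the pages linking to page A, and C(T) is the count of outlinks on page T.

  PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)} \right)

In plain English: pages pass their importance along their outlinks, diluted by how many outlinks they have.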

To combat spam, a Random Surfer algorithm was added to PageRank. This algorithm “imagines” a Random Surfer who travels the Web and follows the links on each page. Sometimes, though, much like us thought-processing bipeds, the Random Surfer stops following links and arbitrarily “jumps” to another page instead. The algorithm’s steps (sketched in code after the list) are:

  1. At any time t, surfer is on some page P
  2. At time t+1, the surfer follows an outlink at random
  3. Surfer ends up on some page Q (from page P)
  4. The process repeats indefinitely
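
A minimal Python simulation of those four steps makes the idea concrete. The toy link graph, the 15% jump probability, and the page names are my assumptions for illustration, not anything Google has published:

  import random

  # Toy web: each page lists the pages it links to.
  links = {
      "P": ["Q", "R"],
      "Q": ["R"],
      "R": ["P"],
  }

  def random_surfer(start, steps, jump_prob=0.15):
      """Count visits; pages visited more often earn a higher rank."""
      visits = {page: 0 for page in links}
      page = start
      for _ in range(steps):
          visits[page] += 1
          # Sometimes the surfer gets bored (or hits a dead end) and jumps...
          if random.random() < jump_prob or not links[page]:
              page = random.choice(list(links))
          else:  # ...otherwise they follow an outlink at random.
              page = random.choice(links[page])
      return visits

  print(random_surfer("P", 100_000))

Run it long enough and the visit counts settle toward the pages’ PageRank values.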

That’s the benefit of algorithms: no overtime, and they never get tired or bored.

Hilltop Algorithm

The Surf’s-up-Dude algorithm worked for about 10 minutes before the SEO community found the hole in its wetsuit and manipulated rankings anyway. In the early 2000s, processors caught up with computational mathematics, and Google was able to deploy the Hilltop algorithm (around 2001). This algorithm was the first introduction of semantic influence on search results, inasmuch as a machine can be trained to understand semantics.

Hilltop is like a linguistic Ponzi scheme that attributes quality to a link based on the authority of the document doing the linking. One of Hilltop’s algorithms segments the web into a corpus of broad topics. If a document in a topic area has lots of links from unaffiliated experts within the same topic area, that document must be an authority. Links from authority documents carry more weight. Authority documents tend to link to other authorities on the same subject and to Hubs, pages that have lots of links to documents on the same subject.
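
To make the counting idea concrete, here is a crude Python sketch of scoring by unaffiliated experts. I reduce “affiliation” to sharing a registered domain, which is a big simplification of Hilltop’s actual heuristics:

  def authority_score(expert_hosts):
      """Count linking experts, collapsing hosts that share a domain."""
      domains = {host.split(".", 1)[-1] for host in expert_hosts}
      return len(domains)

  # The two vets.org subdomains collapse into a single affiliated expert:
  print(authority_score(["blog.vets.org", "news.vets.org", "cats.example.com"]))  # 2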

Topic-Sensitive PageRank

Topic-Sensitive PageRank is a set of algorithms that takes the semantic reasoning a few steps further. Ostensibly, it uses the Open Directory ontology (dmoz.org) to sort documents by topic.

Another algorithm calculates a context-sensitive relevance score based on a set of “vectors.” These vectors represent the context of the term’s use in a document, in the history of queries, and in the user’s previous searches as contained in the user profile.

So, I know what you’re thinking: how can they do that for the whole web? They don’t. They use predictive modeling algorithms to perform these operations on a representative subset of the web, collect the vectors, and apply the findings to all of the “nearest neighbors.”
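
The blending step is easy to caricature in Python. Per Haveliwala’s Topic-Sensitive PageRank paper, you precompute one PageRank score per topic and mix them at query time by how likely the query is to belong to each topic; every number below is invented:

  # Precomputed per-topic PageRank scores for one page (invented values).
  topic_pagerank = {"sports": 0.004, "health": 0.020, "finance": 0.001}

  # Probability that the query belongs to each topic (invented values).
  query_topic_prob = {"sports": 0.10, "health": 0.85, "finance": 0.05}

  # Query-time score: blend the per-topic scores by the query's topic mix.
  score = sum(query_topic_prob[t] * topic_pagerank[t] for t in topic_pagerank)
  print(round(score, 5))  # highest when the page's strong topics match the query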

D’oh!

[Added May 16, 2013]
There are a lot of algorithms for indexing, processing and clustering documents that I left out because including them would leave many of you face-first in your cereal from boredom. However, it is NOT OK to leave out the mother of all information retrieval algorithms, TF-IDF, known affectionately to search geeks as Term Frequency-Inverse Document Frequency.

Introduced in the 1970s, this primary ranking algorithm uses the presence, number of occurrences, and locations of occurrence to produce a statistical weight for the importance of a particular term in the document. It includes a normalization feature to prevent long, boring documents from taking up residence in search results due to the sheer nature of their girth. This is my favorite algorithm because it supports Woody Allen’s maxim that 80% of success is showing up.
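
The math is mercifully short. A minimal Python sketch on a toy corpus of my own invention; real engines layer on smoothing and the length normalization mentioned above:

  import math

  docs = [
      "seo is about search and users",
      "search engines rank documents",
      "long boring documents about nothing much",
  ]

  def tf_idf(term, doc, corpus):
      """Weight a term by how often it shows up here vs. everywhere."""
      words = doc.split()
      tf = words.count(term) / len(words)
      df = sum(1 for d in corpus if term in d.split())
      idf = math.log(len(corpus) / df) if df else 0.0
      return tf * idf

  print(tf_idf("search", docs[0], docs))  # present here, rare-ish elsewhere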

The 200+ we don’t know about

All of the search engines closely guard their complete algorithm structure for ranking documents. However, we live in a wonderful country that has patent protection for ideas. These patents provide insight into Google’s thinking, even if you can rarely pinpoint which ones are actually deployed.

Panda, the most famous update, is an evolving set of algorithms combined to determine the quality of the content and the user experience on a particular website. Among them are algorithms that apply decision trees to large data sets of user behavior.

These decision trees evaluate if-this/then-that rules (caricatured in code after the list):

  • If the page crosses a certain threshold ratio of images to text, then it is a poor user experience.
  • If a significant portion of searchers do not engage with anything on the page (links, navigation, interaction points), then the page is not interesting for searchers using that particular query.
  • If the searchers do not select the page from the results set, then it is not relevant to the query.
  • If the searcher returns to the search engine results to select another result or refine the search, then the content was not relevant and not a good user experience.
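
Nobody outside Google has seen these trees, but the if-this/then-that shape is easy to caricature in Python. Every threshold and field name below is an invention of mine, not a leaked signal:

  def looks_low_quality(page):
      """Toy decision rules in the spirit of the list above; not Google's."""
      if page["image_to_text_ratio"] > 0.6:   # invented threshold
          return True
      if page["engagement_rate"] < 0.05:      # nobody clicks anything on the page
          return True
      if page["serp_click_rate"] < 0.01:      # never chosen from the results set
          return True
      if page["return_to_serp_rate"] > 0.5:   # searchers pogo-stick back
          return True
      return False

  print(looks_low_quality({"image_to_text_ratio": 0.7, "engagement_rate": 0.20,
                           "serp_click_rate": 0.05, "return_to_serp_rate": 0.10}))
  # True: too many images for too little text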

Complementing the decision trees could be any one of a number of page layout algorithms that determine the number and placement of images on a page relative to the amount of content and to a searcher’s focus of attention.

Following on the heels of Panda are the Penguin algorithms, which specifically target the detection and removal of web spam. They use Google’s vast data resources to evaluate the quality of links pointing to a site: the rate of link acquisition, the link source’s relationship to the page subject, shared ownership of the linking domains, and relationships among the linking sites.

Once a site passes an established threshold, another algorithm likely flags the site for additional review by a human evaluator or automatically re-ranks the page so that it drops in search results.
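
How might that flagging pass look? A hedged Python sketch; the weights, field names, and threshold are all assumptions, since the real ones are exactly what Google will not publish:

  def link_spam_score(profile):
      """Toy Penguin-style score over a site's link profile."""
      score = 0.4 * profile["paid_looking_link_ratio"]       # link quality
      score += 0.3 * profile["offtopic_link_ratio"]          # source vs. page subject
      score += 0.2 * profile["shared_ownership_ratio"]       # same owners linking
      score += 0.1 * min(profile["links_per_day"] / 100, 1)  # acquisition velocity
      return score

  SPAM_THRESHOLD = 0.5  # invented
  site = {"paid_looking_link_ratio": 0.8, "offtopic_link_ratio": 0.6,
          "shared_ownership_ratio": 0.4, "links_per_day": 250}
  if link_spam_score(site) > SPAM_THRESHOLD:
      print("flag for human review or demote in the results")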

Let’s stop, guess, and go with what we know

As with the formula for Coca-Cola or the recipe for Colonel Sanders’ Kentucky Fried Chicken, specifics on what Google uses to decide who gets where in the search results set are a closely guarded secret. Instead of speculating on what we might know, let’s focus on what we do know:

  • In order to rank for a term, that term must be present in the document. Sometimes, a contextual or semantic match for a term will get you into the SERP swimsuit competition for placement. Don’t count on that though.
  • Being picky and realistic about what you want to rank for is the best start.
  • Text on the page drives inclusion in the results for a searcher’s query. Be about one thing instead of many things.
  • Quality content is focused, fresh and engaging.
  • Custom titles that describe the page using contextually relevant keywords are THE low-hanging fruit. Pick it for your most important pages.
  • Compelling description text in search results will draw clicks. Meeting the searcher’s expectations with rich content will keep them on the page.
  • Pictures, images, and ads are most effective when used in moderation.
  • Links are important, but only the right kind. For Google, the “right” kinds are links from pages that are about the same subject and place high in contextually related searches.

Are there any major algorithms we missed?  Let us know in the comments.

The post 3 Google Algorithms We Know About & 200 We Don’t appeared first on Portent.

]]>
http://www.eigene-homepage-erstellen.net/blog/seo/3-google-algorithms-we-know-about-200-we-dont.htm/feed/ 20
Get Found, Make Money: Optimizing Apps for Search http://www.eigene-homepage-erstellen.net/blog/seo/optimizing-apps-for-search.htm http://www.eigene-homepage-erstellen.net/blog/seo/optimizing-apps-for-search.htm#comments Thu, 04 Apr 2013 14:00:20 +0000 http://www.eigene-homepage-erstellen.net/?p=16819   State of App Search: Needle in a Haystack Have you ever tried to meet someone at Grand Central Station or some other incredibly crowded venue? It sounds so doable and yet it is so not. There are too many people, too many distractions and too much stuff in the way. App (tablet and phone… Read More

The post Get Found, Make Money: Optimizing Apps for Search appeared first on Portent.

]]>
 

State of App Search: Needle in a Haystack

Have you ever tried to meet someone at Grand Central Station or some other incredibly crowded venue? It sounds so doable and yet it is so not. There are too many people, too many distractions and too much stuff in the way.

App stores (for tablet and phone applications), both iTunes and the Google Play Store, have the same problem. Each has in excess of 750,000 apps to paw through, download, try, remove and lather, rinse and repeat. It is likely true that for any problem “there’s an app for that,” but there is no app to find the app.

Now I know what you’re thinking: app stores are online, so I can use the handy search feature to find the app of my dreams or needs. Well… maybe. I ran a search using the keyword “productivity” on both sites. In the search department, Google gets extra credit for a more pleasing search results display with icons, star ratings and a nifty description. However, points off for egregious self-promotion: Google soaks up 3 of the top 10 slots with Google Keep, Google Calendar (a productivity app?) and Google Drive.

Apple dispenses with search entirely and favors a top-down directory approach similar to that phone book you’re using to prop open the garage door (and equally useful). An iTunes app category landing page is as easy to navigate as an airport flight information board. I am uncertain what the organizational scheme might be, although it bears a striking resemblance to the one I use for my home office. I guess that Steve Jobs never had to use iTunes search or he certainly would have done something about that display.

[Chart comparing the Google Play and Apple iTunes app stores]

So, if you want customers to find your app for either location, you will have to “make it so” yourself and here’s how:

On the page app optimization

As with traditional SEO, both app stores start with traditional information retrieval systems that emphasize the presence of the query terms in the body content and the placement on the page to put together a search results page. Position on this results page is based on magical thinking (for Apple) and a variety of algorithms (for Google).

Keywords

Apple encourages the assignment of keywords to your application and makes it as difficult as possible. They must be using some sort of Stone Age search in Cupertino, or they all know where everything is and do not need any sort of search functionality at all.

There is a 100-character limit on the keyword field, and that limit includes the required comma separators. There is no phrase matching. If you want your app to appear for a phrase like “business news,” the keywords must be entered separately, e.g. business, news. Google stopped paying attention to keyword metadata a long time ago, and the Google Play Store is no exception.
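
Those constraints are easy to check before you paste anything into iTunes Connect. A minimal Python sketch: the 100-character cap (commas included) comes straight from the paragraph above, while the helper function itself is mine:

  def build_keyword_field(keywords, limit=100):
      """Join keywords with commas and enforce the length cap."""
      field = ",".join(k.strip() for k in keywords)
      if len(field) > limit:
          raise ValueError(f"{len(field)} chars is over the {limit}-char limit")
      return field

  # No phrase matching: "business news" ships as two separate keywords.
  print(build_keyword_field(["business", "news", "productivity"]))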

Name

For both the Google Play Store and iTunes, the presence of keywords in the app name is rewarded. The keyword-rich name should also include terms that reference the app’s functionality, e.g. Weather+ International Travel Weather Calculator.

Apple limits the app name to 255 characters with full display on that page-o-links that serves as search results. For its Staff Picks on the Google Play homepage, app name display is limited to 17-21 (I have seen more at 17 than 21) characters (including spaces) with anything that follows represented by an ellipsis.
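
A quick way to preview what survives that truncation; the 17-character cutoff is the low end of the display range above, and the helper is hypothetical:

  def staff_picks_preview(name, visible=17):
      """Show how an app name reads once the display cuts it off."""
      return name if len(name) <= visible else name[:visible] + "…"

  print(staff_picks_preview("Weather+ International Travel Weather Calculator"))
  # 'Weather+ Internat…': put the keywords that matter first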

App icon

Icons are a very good idea; eye candy is the point. The icon appears on the download page as well as in the Google Play search results.

Details page

In iTunes, the details page is limited to 4,000 characters with 700 cited as a best practice. One screen shot is required with the ability to include four additional screen captures. The description of the application and functionality should be keyword-rich with a compelling call to action as customer interaction is a significant indicator of relevance. Additional components that can be included: an instructional video, customer ratings and reviews.

[Chart comparing the Evernote app listings in Google Play and iTunes]

Categories

In iTunes, the app is listed under a primary category, with the opportunity to select a second category that gives customers an additional query option. The best practice recommendation is to let the customer pain point the app resolves guide the choice of second category.

Off the page factors that influence ranking

There are factors outside the download page text that can influence ranking in iTunes application search results. The primary off-the-page ranking factors are:

Downloads: the number and rate of downloads are key drivers of results placement.

Installation base: how many customers actually install the app.

Removals: whether customers dislike the app enough to shake off their lethargy to remove it entirely. Both Google Play and iTunes take note of uninstalls as an indicator of relevance.

Customer reviews and ratings: whether customers give you stars or actually write something down on the page. It is very important that you pay attention to the feedback from your users and respond in some way.
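
No one outside Cupertino or Mountain View knows the weights, but the factors above suggest a shape something like this Python sketch; every weight is a guess:

  def app_rank_score(app):
      """Toy blend of the off-the-page factors above; weights invented."""
      installs_kept = app["installs"] - app["removals"]  # removals count against you
      return (0.5 * app["downloads_per_day"]
              + 0.3 * installs_kept
              + 0.2 * app["avg_rating"] * app["review_count"])

  print(app_rank_score({"downloads_per_day": 120, "installs": 900,
                        "removals": 150, "avg_rating": 4.2, "review_count": 80}))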

Recommendations

Optimize the application itself.

  • Make sure that the performance speed is smokin’ fast
  • Reduce file sizes and resource usage
  • Keep the application fresh with updates and enhancements

Build and sustain support from external assets.

  • Website support: placement of a permanent download icon or graphic on your website
  • Social media support: schedule tweets with a tiny URL, post to Facebook, pin on Pinterest and don’t forget regular updates on Google+ to sustain momentum

Reach out to key influencers.

  • Contact app blogs and promotion networks to get them excited and talking about your swell new app!
  • Encourage customer reviews and ratings

Gotchas for both

You will want to submit the app to both iTunes and the Google Play Store at least two weeks before release, as it takes time for the files to make their way through the Apple and Google processes. To do this, Google requires that you set up yet another account with the Google Play Developer Console to upload apps. The fee for doing so is $25. At least Apple lets you play without having to pay.

Is it just me or does this sound like the early days of SEO, the way it used to be 10 years ago with keyword sort-of-stuffing, key influencer outreach, and review “acquisition”? We might as well enjoy the waning days of this Luddite approach to SEO. No doubt Google is working on Panda-app as I write this post. Look for the icon below at the Google Play store. Or, more likely, it will come looking for you.

[Image: Google Play icon]

The post Get Found, Make Money: Optimizing Apps for Search appeared first on Portent.

]]>
http://www.eigene-homepage-erstellen.net/blog/seo/optimizing-apps-for-search.htm/feed/ 0
The Evolution of Advanced Keyword Research http://www.eigene-homepage-erstellen.net/blog/seo/advanced-keyword-research.htm http://www.eigene-homepage-erstellen.net/blog/seo/advanced-keyword-research.htm#comments Thu, 21 Mar 2013 14:00:54 +0000 http://www.eigene-homepage-erstellen.net/?p=16597   Keyword research used to be so easy. You picked terms that the client wanted to rank high for, stuffed them onto the page, shake, bake and voila, the site is ranking in the top 3 search results for query. Bad SEOs stuffed keywords in all of the wrong places, all over the page, and… Read More

The post The Evolution of Advanced Keyword Research appeared first on Portent.

]]>
 

Keyword research used to be so easy. You picked terms that the client wanted to rank high for, stuffed them onto the page, shook, baked and voilà, the site ranked in the top 3 search results for the query.

Bad SEOs stuffed keywords in all of the wrong places, all over the page, and in the same font color as the page background so that only the search spiders saw them. Good SEOs stuffed them into anchor text, headings and subheadings, and <title> tags.

And then things began to change: first in a good way, then in a not-so-good-way, and now maybe in a good way again.

A Brief History of Search Engines

Keyword research was easy because search engines were so dumb – I mean “fleece your little brother for his paper route money” dumb.

All they had going for them was the flimsy Term Frequency/Inverse Document Frequency formula that rewarded documents that had the most instances of the query terms in body text, along with some lightweight suppression clause so that long boring documents did not always get the top spots.  Yay early information retrieval.

The good days of gaming search engines came to a screeching halt with Google’s buzzkill PageRank algorithm. PageRank was based on the academic model that stipulated papers cited by other papers had to be better than those not so cited. Applied to the decidedly non-academic public Web, pages that had a lot of links pointing to them had to be more relevant than the others, right?

Peace and relevant search results lasted until the SEO community was able to figure out how to game this system with begging, borrowing and buying links. Google was shocked! Evidently, this did not happen in the hallowed halls of academia, at least not in the Stanford Graduate Computer Science program of the 1990s.

 

Waaaaaahlll Pilgrim, Google was not going to take that sitting down. (I have it on good authority that they take nothing sitting down because those stand-up treadmill desks are standard issue at the Googleplex.) They fired off the Hilltop algorithm, one of the first algorithms to introduce the concept of machine-mediated “authority” to combat the human manipulation of results for commercial gain.

With Hilltop:

  • Pages are ranked according to the number of non-affiliated “experts” that point to it, i.e. not in the same site or directory
  • Authorities have lots of unaffiliated expert documents on the same subject pointing to them
  • Affiliation is transitive [if A=B and B=C then A=C] (see the sketch after this list)
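
That transitivity is exactly what a union-find structure computes. A minimal Python sketch for grouping affiliated hosts; the host names are mine:

  parent = {}

  def find(x):
      """Return the representative of x's affiliation group."""
      parent.setdefault(x, x)
      while parent[x] != x:
          parent[x] = parent[parent[x]]  # path compression
          x = parent[x]
      return x

  def union(a, b):
      """Declare two hosts affiliated."""
      parent[find(a)] = find(b)

  union("a.com", "b.com")
  union("b.com", "c.com")
  print(find("a.com") == find("c.com"))  # True: affiliation propagated transitively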

The beauty of Hilltop is that unlike PageRank, it is query-specific and reinforces the relationship between the page and the user’s query. You don’t have to be big or have a thousand links from auto parts sites to be an “authority” and float to the top. And, to the rejoicing of searchers, it was soon adopted by the other search engines across the land.

Giddy with the success of contextual mapping, the search engines followed up with Topic-Sensitive PageRank.  This involved the geekiest of information retrieval methods, use of predictive analytics, and vector space modeling on a subset of the Web to analyze the context of query phrase term use in a document, in the history of all queries, and in the history of the user who submits the query.

As if Christmas in July was not enough for searchers, the search engines also laid down an ontology (an organized schema of subject categories) supposedly derived from the Open Directory. I don’t know about you, but that looks a lot like the Semantic Web to me.

Ironically, as search engines got smarter, searchers got dumber. Most of them started to construct poor queries (56%), select irrelevant results (55%), and become disoriented and overwhelmed by the amount of information in search results (38%). Hmmm…maybe it is time to start taking a user-centered approach to optimizing websites for users?

User-Centered Keyword Research

User-centered keyword research lives up to its name by starting with what prospective customers would likely use to find the site. And the best place to find that information is your client.  Familiarize yourself with the client’s product space and vocabulary, ask questions, and look at their competitors.

Then, turn to Google Analytics to find out what is sending traffic and how it is performing. I look at which page searchers land on, whether they engage or bounce, and whether they convert. If there is one tail, long or short, in SEO that is supported by data, “longer query = more likely to convert” is it.

Finally, swing by Google Webmaster Tools and see how the search engine currently views site relevance by studying queries, impressions, AVERAGE (important distinction there) position, and click-through rate.
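
If you want to test the long-tail claim against your own data, a short Python pass over an analytics export will do it. The file name and column names below are assumptions about your CSV export, not any Google Analytics API:

  import csv
  from collections import defaultdict

  stats = defaultdict(lambda: [0, 0])  # query word count -> [visits, conversions]

  with open("organic_queries.csv") as f:   # hypothetical export
      for row in csv.DictReader(f):        # assumed columns: query, converted
          words = len(row["query"].split())
          stats[words][0] += 1
          stats[words][1] += int(row["converted"])

  for words in sorted(stats):
      visits, conversions = stats[words]
      print(f"{words}-word queries: {conversions / visits:.1%} conversion rate")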

Next, compare actual site behavior with general search behavior using any one or all of my favorite tools:

Google Trends is not nearly a big enough return for the egregious and persistent invasion of our privacy, but it comes pretty close. Google Trends is a view into the long, dark, deep Google data mine of search behavior with the capacity to filter by geography or time.

The true delight lies in seeing the Top and Rising search queries for the term phrases in comparison; this is actual user search behavior. In the comparison below, the phrase “user experience” is more popular than “information architecture.” Note that a significant portion of the search volume around the general phrase is job-related, based on the Top Searches information.

 

[Chart: Google Trends comparison for “user experience” vs. “information architecture”]

Yahoo! Clues offers many of the same data points as Google Trends, with demographic information (age, gender) thrown in for good measure. The data is extracted from Yahoo! Search and is aggregated and anonymized. A Yahoo! Clues-specific feature is the Search Flow data, which reveals what the user searched for before the term phrase comparison and what they searched for after.

 

[Chart: Yahoo! Clues comparison for the same term phrases]

We’ve all experienced the mostly annoying yet occasionally helpful search suggest: the list of query suggestions that appears as you start typing and changes as your query changes.

Ubersuggest provides an easy-to-navigate, much less annoying, more useful aggregation of search suggestions from Google and other “suggest services.” Looking at the Ubersuggest results for the query “keyword research,” I’d say that most folks want someone or something else to do the work for them.

[Chart: Ubersuggest results for the query “keyword research”]

As you can see from Ubersuggest, if you are looking for a keyword research tool, you are not alone in your quest. Which you choose, however, will be up to you. Some perennial sites in the top search results for “keyword tool” are:

Magical Thinking with Psychographics

At a search conference in July 2012, Marty Weintraub from aimClear delivered a groundbreaking presentation on using Facebook psychographics to develop a new type of user persona that can assist with remarketing.

On the aimClear blog, psychographics are defined as “…a means of identifying users by interests, occupations, roles in life, predilections, and other personal characteristics.” This involves mining social outlets for personal preference data, e.g. a political reporting website that targets individuals who listen to Rachel Maddow, Stephen Colbert and Al Jazeera, like the Muppets, and work for a middle-of-the-road or left-leaning online or print publication.

These preferences are often articulated in term phrases developed by users themselves, and they potentially reveal what those users would type when looking for the client’s product while facing a search box.

In his book “How Buildings Learn,” author Stewart Brand recommends waiting a few weeks after a building is finished before putting in the walkways. His reasoning? The footprints in the grass will tell you where people are walking to get in and out. So it is with smarter keyword research. Before stuffing a bunch of term phrases on a page, start with what searchers are using to find your client’s product or service. Then keyword research will be as smart as the search engines, or even smarter.

What keyword research tool do you find most helpful?  Let us know in the comments below.

The post The Evolution of Advanced Keyword Research appeared first on Portent.

]]>
http://www.eigene-homepage-erstellen.net/blog/seo/advanced-keyword-research.htm/feed/ 3
What the Duck? DuckDuckGo Takes on the Big G http://www.eigene-homepage-erstellen.net/blog/seo/what-the-duckduckgo.htm http://www.eigene-homepage-erstellen.net/blog/seo/what-the-duckduckgo.htm#comments Wed, 19 Dec 2012 14:00:23 +0000 http://www.eigene-homepage-erstellen.net/?p=14464 It’s hard to pinpoint when some of us fell out of love with Google. Maybe it was when they went from geeky Stanford students out to “not be evil” to the kinda creepy boyfriend who orders for us at restaurants because “that’s what we always have.” Like it or not, we were trapped. Over the… Read More

The post What the Duck? DuckDuckGo Takes on the Big G appeared first on Portent.

]]>
[Image: DuckDuckGo, how to goose Google]

It’s hard to pinpoint when some of us fell out of love with Google. Maybe it was when they went from geeky Stanford students out to “not be evil” to the kinda creepy boyfriend who orders for us at restaurants because “that’s what we always have.” Like it or not, we were trapped. Over the years, many other search engines have tried to capture our hearts only to fail, victims of the collective addiction to the presumed-to-be-infallible Google.

Search engine as stalker

Like many, I greeted the arrival of the search engine DuckDuckGo with pleasure and skepticism. Right on the home page, they tell me that they believe in better search and no tracking on any level. DuckDuckGo does not capture your IP, archive your Web history, save your searches or store personal information of any kind. This means no weird “search aftertaste” in the form of remarketing banner ads that follow you around the Web and are based on searches you’ve done in the past. No worries about the search engine caving in to subpoenas requesting access to search data.

Better search it is with a lean and pleasing design that comes with DuckDuck “goodies” on demand. These are pre-populated queries and functions that do calculations, generate strong passwords, provide bartending recipes and produce the Green Day tour schedule. What’s not to like?

Search engine novelty acts


I know that we’ve been down this path before. There were Cuil and Viewzi, passing flirtations with novelty-act search engines where we learned that search glamor is pixel deep. And that trash-talking wench posing as a cyber-librarian, Mrs. Dewey, was just wrong on so many levels.

 

DuckDuckGo has street cred from its partnerships with Zanran, a search engine that pulls structured data from the Deep Web, and Wolfram Alpha, the search engine most like Sheldon from “Big Bang Theory.” A recent venture capital infusion will finance the expansion necessary to take on Google, the Apollo Creed of search these days.

Search engines “learn” from use by analyzing searcher behavior toward results to confirm or tune their ranking algorithms. In order to compete effectively, DuckDuckGo needs to get more people using the search engine. Currently, it processes nearly 2 million searches a day. Google processes somewhere around 3 billion searches a day.

Level playing fields are a good thing (or so I’m told). They foster a competitive landscape that produces innovation. I am doing my part to make the search engine landscape more competitive by starting my searching at Duckduckgo.com. Most of the time I find what I need, and when I do not, I go to Bing or Google. Break the cycle of search addiction. Join me and the many other professionals in the search world who use an ABG (anybody but Google) search engine as their default starting point for search.

Google will never know…

The post What the Duck? DuckDuckGo Takes on the Big G appeared first on Portent.

]]>
http://www.eigene-homepage-erstellen.net/blog/seo/what-the-duckduckgo.htm/feed/ 1
SEO: Trading Tactics for Craft http://www.eigene-homepage-erstellen.net/blog/seo/craftsmanship-of-seo.htm http://www.eigene-homepage-erstellen.net/blog/seo/craftsmanship-of-seo.htm#comments Mon, 06 Aug 2012 14:00:05 +0000 http://www.eigene-homepage-erstellen.net/?p=10862 I just finished 3 intensive days at MozCon 2012. Not for intellectual sissies or social introverts, the days were spent listening to presentations, demonstrations and exhortations from a list of luminaries. There was something for everyone: link building is the road to perdition, how-tos on link building, agile all over the place, data modeling, crunching,… Read More

The post SEO: Trading Tactics for Craft appeared first on Portent.

]]>
I just finished 3 intensive days at MozCon 2012. Not for intellectual sissies or social introverts, the days were spent listening to presentations, demonstrations and exhortations from a list of luminaries.

There was something for everyone: link building is the road to perdition, how-tos on link building, agile all over the place, data modeling, crunching, predicting, and wrangling. As many kittens were killed as rescued, bullet point by bullet point. (You remember that bullet points kill kittens, yes?)

SEO was much discussed, duh! It’s about marketing, it’s about engagement, it’s about content, it’s dead, it’s alive and not-so-well, it’s in the target sights of Google, it’s about marketing (reiterated a number of times over the course of the conference).

A phrase from Hollywood describes the conference well: “I laughed, I swore, I wept” about what SEO is and where it is going. Strangely (and sadly), in these conversations no one mentioned the searcher (our customer).

Up until Google’s Penguin and Panda shots across the SEO bow, we did not have to think much about the person behind the keyword phrase query. SEO focused on the search engines in a Wile E. Coyote/Road Runner way, tactic meeting tactic. The wailing and teeth grinding from the SEO community over the last year should be an indication that effective optimization for search engines will require a deeper knowledge of our customers along with a more nuanced, strategic approach.

Searcher as Sub Process

Jaron Lanier takes on our diminishing influence on technology in his magnificent manifesto You Are Not a Gadget. Lanier tells us that the machines we created now treat us as sub-processes.

Search engines are a prime example. We treat them as if they are omniscient. They are not. Search engines are a collection of software components that run a set of instructions in a specific sequence. They cannot make judgments that involve thinking outside of their “orders.” They cannot create. Search engines take input, follow orders and assemble results.

There was a lot of Google smack talking. We loved Google until we made them very powerful. Then Sergey, Larry and Matt started treating us like sub-processes. Google proclaims: “You WILL have a good user experience (Panda), and we won’t give you details on what our algorithm thinks that is.” Or: “You WILL earn links the right way (Penguin), and we won’t tell you what that is until it is too late; we will hold you accountable for legacy work and make it possible for your competitors to negative-SEO your site, or for sites to extort money from you to remove links.” In other words, Google says “jump” and the SEO community asks “how high.”

At the conference, Rand Fishkin said Bing is now his primary search engine, and smartly so. If you don’t like Google, then stop helping them. Discontinue using Google as your default choice. Search engines learn from searcher behavior and mine the hell out of usage data. The slight variance between the Google and Bing SERPs is the result of Google having 4x the amount of data to mine, and that is a lot of data. So, join the subversive fun with Rand and me and give Bing a go first before going over to the dark side.

SEO as Craft

The night before the conference, I saw the Pixar movie “Brave.” As with all Pixar films, the storytelling is first-rate and the animation complex, multi-layered and nuanced. This is craftsmanship at the highest level, certainly on par with the grand masters of Looney Tunes and Disney if not surpassing them, thanks to technological advantages.

Flatland animation doesn’t cut it anymore.

I got to thinking that maybe this is what we should focus on as we approach SEO vNext. Let us take a lesson from our animated brethren and return to the days of our own craftsmanship, when those publishing to the online space had a clear idea of whom they were messaging, cared about what they were saying, and described it in a way that made sense to the intended audience, whom they knew because they were one of them.

We understand user experience in a real and direct way that search engines never will, no matter how many algorithms their developers apply. It is human experience, not logic, that now drives Google’s algorithms. Selection of a search result is an emotional or intellectual choice, not a logical one. This we inherently understand.

SEO isn’t dead or dying, according to the Moz masses. However, it is a bit punch-drunk from being slapped around by Google for the last two years. When SEO is a strategy that includes content, information architecture and treating the searcher as a customer instead of a sub-process, the search engines become our tool instead of the reverse. When we apply our ability to make decisions based on critical and forward thinking, we trump machine learning with that which it does not have and never will. It will not be that hard:

  • <title> tag optimization after content strategy,
  • sitemap.xml generation after information architecture review,
  • link earning/acquisition/building after thoughtful examination of what the site is about, how customers look for that information and what they find valuable.

When we treat the profession of optimizing for search engines as a craft instead of a trade, then our depth, emotion and intellect become passports to visible rankings and the search engine is the sub-process. And sooner rather than later is key because we’ve already discovered that Google’s understanding of “to serve man” is very much different than ours.

Let’s be brave and give it a shot.

The post SEO: Trading Tactics for Craft appeared first on Portent.

]]>
http://www.eigene-homepage-erstellen.net/blog/seo/craftsmanship-of-seo.htm/feed/ 5
Predatory Online Advertising…and not from Google for a Change http://www.eigene-homepage-erstellen.net/blog/ppc/angies-list-predatory-social-media-advertising.htm http://www.eigene-homepage-erstellen.net/blog/ppc/angies-list-predatory-social-media-advertising.htm#comments Tue, 03 Jul 2012 14:00:54 +0000 http://www.eigene-homepage-erstellen.net/?p=10574 I learned something the hard way awhile ago. There is a big difference between search engine advertising and social site advertising and just because someone looks up front and helpful does not mean that they are. Search engine advertising is affordable, configurable, reports a return on investment, can be turned on and off quickly, is… Read More

The post Predatory Online Advertising…and not from Google for a Change appeared first on Portent.

]]>
[Image: Angie’s List]
I learned something the hard way awhile ago. There is a big difference between search engine advertising and social site advertising and just because someone looks up front and helpful does not mean that they are.

Search engine advertising is affordable, configurable, reports a return on investment, can be turned on and off quickly, is readily available and makes sense for businesses of all sizes.

Social site advertising… not so much. My experience with advertising on Angie’s List is a good example. Just because Angie looks like a swell person who claims to want the best for us in her TV ads doesn’t mean that goodwill trickles over to a small business trying to grow with ads on her very lucrative website. I’m not sure whether it should be called advertising or preferred placement. The first clue should have been the use of ad salespeople who are not the account managers once the deal is done; they can say anything to close the deal.

Either way, when compared to the transparency and flexibility of search engine PPC, Angie’s List falls short on so many levels, starting with:

  1. Poor user experience: At Angie’s List, the small business owner selects the categories for their ads. The ads are presented at the top of the search results (based on keyword search that is limited to preset categories). There is no obvious distinction between paid placement ads and organic results. I am guessing those at the top of the category list paid for that position. It is hard to tell how or why Angie’s List puts a service provider in a certain position, and so hard to tell whether they are there because they’re good or because they have money to burn.
  2. Year-long commitment at a fixed cost not tied to performance: The advertiser buys placement at a set amount per category. Once the number of categories is decided upon, the amount is added up for a monthly fee that becomes a year-long contract. The advertiser pays the same amount every month regardless of whether the ad (or placement) generates customer activity.
  3. Tacit rather than explicit agreement acceptance: I guess that I expected a lot here. When asking someone to commit to a one-year contract with what some might think of as an onerous cancellation fee (a significant percentage of the balance due on the contract), it would also be “cricket” to have the specific terms and conditions of the agreement spelled out on a page for final review. Instead, I got the contract for review as a PDF download, separated from the specific requirements folded into a long, scrolling Terms & Conditions document that had a check box to indicate acceptance. How about putting that check box at the end of the actual Terms & Conditions? Sheesh… even Apple provides “Are you sure?” moments. Both Google and Bing, with lots to hide most of the time, are up front and simple with their online advertising sign-up processes.
  4. Lackluster reporting: There’s not a lot to go on here. The very nice account representative (and I am not being facetious here) did indicate that the category in question was one of their most popular. My client received 3 clicks in 30 days against a total ad spend of $202 for all categories that month, with zero indication of whether the folks who clicked on the ad (placement) actually did anything.

If you want to use the Web to drive traffic to your small business, use a reputable online resource and….

  1. Call in a professional to help you set up the right campaign in the right channels for your business and your customers. The search engines have directories of certified professionals, and there are LinkedIn groups where you can ask for recommendations. You should not need to spend a lot, and the money that you do spend will come back as strong ROI on a campaign developed by a professional who really knows what they are doing and takes the time to learn what you want to do. There are scads of reasonable programs out there. Portent has one that starts at $250 a month without the hard-sell, Don Corleone tactics described above.
  2. Start with search engine advertising, then social media advertising: Bill Murray was asked in an interview how it was to be rich and famous. His response was something along the lines of “try rich first and see if that does not solve your problems.” So it is with online advertising. Try running ads on the search engines first where you can pick the keywords you want to buy, the amount that you want to spend, stop and start when you want to stop and start and get actionable performance data. People go to Google to look for information and they click on the ads to help them make a decision. People go to social sites like Angie’s List to supplement a decision that they’ve likely already made.
  3. Read social site advertising contracts and agreements very carefully, every. single. word. There is no reason to enter into an advertising agreement that does not justify its existence with a pay for action model and the ability to start/stop and adjust at no cost or penalty.
  4. Demand metrics, metrics, metrics. You will never know the return on your investment without rich reporting on who did what. If the channel does not offer that, what are you paying for?

There, you’ve been advised and I feel that I got my money’s worth from the experience just for that.

Now, if only I could buy my way out of Angie’s darn newsletter…That is something I would willingly pay for.

The post Predatory Online Advertising…and not from Google for a Change appeared first on Portent.

]]>
http://www.eigene-homepage-erstellen.net/blog/ppc/angies-list-predatory-social-media-advertising.htm/feed/ 4