This week, we’re staying with the idea of career choice but are going about as far away as you can get from Holland’s career congruence and person-environment fit, so hold on.
In the 1988 film “Bull Durham,” aging minor league baseball catcher and slugger Crash Davis (Kevin Costner) complains to Annie Savoy (Susan Sarandon) about the inherent unfairness that she, rather than he or Ebby Calvin LaLoosh (Tim Robbins), gets to decide which of the two will receive her personal favors and coaching mentorship for the season. He asks her, “Why do you get to choose?… Why don’t I get to choose? Why doesn’t he get to choose?”
She replies, “Well, actually, nobody on this planet ever really chooses… I mean, it’s all a question of quantum physics, molecular attraction, and timing. Why, there are laws we don’t understand that bring us together and tear us apart.”
Organizational writer Gareth Morgan, in his Images of Organization (Sage, 1997), explores nine metaphors as ways of considering organizations. One of those metaphors, “flux and transformation” (see chapter nine), presents us with four “logics of change,” embracing all of the ideas to which Annie alluded, and much more.
Morgan’s second logic of change, “shifting attractors” (the logic of chaos and complexity), is particularly interesting. Though the book was written with regard to the relationship between organizations and their environments, it’s fun to layer some of these ideas onto individuals and their careers. As we discussed last week, the applicability of choice when considering careers is open to question. A great career fit based on congruence may or may not exist. If it does exist, it may be difficult to discover, or its competitive nature may exclude all but the most skilled and talented. It may be a career that’s gone in 20 or even 10 years, or it may require the careerist to play a role that doesn’t seem quite as attractive a few years down the road.
So, then, where else might we look in making career choices?
Drawing from the theories that inform Morgan’s second logic of change, here are some ideas for you to ponder.
Chaos theory posits competing attractors, i.e., circumstances or “contexts” that pull a non-linear system toward one situation or another, for example, away from an existing context and into a new one. In order for the pull to resolve in favor of a new context, a system gets pushed far from its equilibrium into an “edge of chaos” situation, where “bifurcation points” (forks in the road) emerge. These bifurcation points represent different potentials. Inevitably, some sort of new order will emerge, though it cannot be predicted or imposed. Morgan advises that the implication for managers is to “shape and create ‘contexts’ in which appropriate forms of self-organization can occur.” New contexts, he continues, can be created by generating “new understandings of a situation or by engaging in new actions.” Further, in non-linear systems, it takes only very, very small changes at critical times to trigger “major transforming effects.” Anyone, he continues, who wishes to change the context in which he operates should search for “doable, high-leverage initiatives that can trigger a transition from one attractor to another.”
This is all very esoteric, but what it might really come down to for the individual is being on alert to recognize situations in one’s employment context where competing attractors have the potential to create “edge of chaos” situations. If there is a practical lesson here – other than continually scanning the horizon of one’s employment context – it might just be to think small instead of thinking big.
Here’s a personal example, which only in retrospect makes sense – as I certainly had no idea what I was doing at the time… When I was downsized (made redundant) in 1993, the company I worked for worked very hard to provide helpful support to those of us who had been displaced. It staffed and opened a full-time outplacement center, provided a generous severance package and gave us two weeks to vacate. I had planned to use the career center – but first, went around the building leaving handwritten notes on the doors and desks of people I knew, advising that I would be available to help with projects, if needed, until I figured out what I was going to do. (Broad-based work solicitation wasn’t permitted within the old context). Well, I only made it to the career center once — because that one small series of note-leaving acts resulted in a deluge of consulting work that launched a new career. The downsizing had created an “edge of chaos” situation that led to a new context – one in which my skills could now be used for the benefit of the organization. Through naïvete and uncertainty, I had somehow navigated a bifurcation point in a way that has worked out pretty well – at least so far. I’m a little embarrassed to be using this personal example because there was such an element of luck involved — and this good fortune is not something I take for granted.
Just please take the following away: If you and your career are verging on an edge of chaos situation, are there small actions that you can leverage into major transformations?
If anyone has thoughts or examples, please share.
Till next week. All my best, Jan
Morgan, G. (1997). Images of Organization. Thousand Oaks, CA: Sage.
On March 24, a FINDALL search in Google for the keywords density optimization returned 240,000 documents. Many of these documents belong to search engine marketing and optimization (SEM, SEO) specialists. Some of them promote keyword density (KD) analysis tools, while others talk about things like “right density weighting”, “excellent keyword density”, KD as a “concentration” or “strength” ratio and the like. Others take KD for the weight of term i in document j, while still others propose localized KD ranges for titles, descriptions, paragraphs, tables, links, urls, etc. One can even find some specialists going after the latest KD “trick” and claiming that optimizing KD values up to a certain range for a given search engine affects the way the engine scores relevancy and ranks documents.
Given the fact that there are so many KD theories flying around, my good friend Mike Grehan approached me after the Jupitermedia’s 2005 Search Engine Strategies Conference held in New York and invited me to do something about it. I felt the “something” should be a balanced article mixed with a bit of IR, semantics and math elements but with no conclusion so readers could draw their own. So, here we go.
In the search engine marketing literature, keyword density is defined as

KD_{i,j} = tf_{i,j} / l    (Equation 1)

where tf_{i,j} is the number of times term i appears in document j and l is the total number of terms in the document. Equation 1 is a legacy idea found intermingled in the old literature on readability theory, where word frequency ratios are calculated for passages and text windows (phrases, sentences, paragraphs or entire documents) and combined with other readability tests.
The notion of keyword density predates all commercial search engines and the Internet, and can hardly be considered an IR concept. Worse, KD plays no role in how commercial search engines process text, index documents or assign weights to terms. Why, then, do so many optimizers still believe in KD values? The answer is simple: misinformation.
If two documents, D1 and D2, consist of 1000 terms (l = 1000) and repeat a term 20 times (tf = 20), then for both documents KD = 20/1000 = 0.020 (or 2%) for that term. Identical values are obtained if tf = 10 and l = 500.
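The arithmetic of Equation 1 is easy to reproduce in a few lines; the sketch below (assuming a naive whitespace tokenizer, which no real engine uses) shows exactly the point made above: documents of very different lengths yield identical KD values.

```python
def keyword_density(term, document):
    """Keyword density per Equation 1: KD = tf / l."""
    tokens = document.lower().split()   # naive whitespace tokenizer
    tf = tokens.count(term.lower())     # times the term appears in the document
    l = len(tokens)                     # total number of terms in the document
    return tf / l

# Two documents of different lengths produce the same KD value:
d1 = "spam " * 20 + "ham " * 980    # tf = 20, l = 1000
d2 = "spam " * 10 + "ham " * 490    # tf = 10, l = 500
print(keyword_density("spam", d1))  # 0.02
print(keyword_density("spam", d2))  # 0.02
```

Note that the ratio says nothing about where in each document the term occurs, which is precisely the objection raised below.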
Evidently, this overall ratio tells us nothing about:
1. the relative distance between keywords in documents (proximity)
2. where in a document the terms occur (distribution)
3. the co-citation frequency between terms (co-occurrence)
4. the main theme, topic, and sub-topics (on-topic issues) of the documents
Thus, KD is divorced from content quality, semantics and relevancy. Under these circumstances one can hardly talk about optimizing term weights for ranking purposes. Add to this copy style issues and you get a good idea of why this article’s title is The Keyword Density of Non-Sense.
The following five aspects of search engine implementation illustrate the point:
Linearization is the process of ignoring markup tags in a web document so its content is reinterpreted as a string of characters to be scored. This process is carried out tag by tag, as tags are declared and found in the source code. As illustrated in Figure 1, linearization affects the way search engines “see”, “read” and “judge” Web content, so to speak. Here the content of a website is rendered using two nested HTML tables, each consisting of one large cell at the top and the common 3-column cell format. We assume that no other text and HTML tags are present in the source code. The numbers at the top-right corner of the cells indicate the order in which a search engine finds and interprets the content of the cells.
The box at the bottom of Figure 1 illustrates how a search engine probably “sees”, “reads” and “interprets” the content of this document after linearization. Note the lack of coherence and theming. Two term sequences illustrate the point: “Find Information About Food on sale!” and “Clients Visit our Partners”. This state of the content is probably hidden from the untrained eyes of average users. Clearly, linearization has a detrimental effect on keyword positioning, proximity, distribution and on the effective content to be “judged” and scored. The effect worsens as more nested tables and html tags are used, to the point that after linearization content perceived as meritorious by a human can be interpreted as plain garbage by a search engine. Thus, computing localized KD values is a futile exercise.
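A toy model of linearization reproduces the kind of scrambled sequences described above. This is only a sketch: it collects text nodes in source order with Python’s html.parser, and the nested-table markup is invented for illustration in the spirit of Figure 1.

```python
from html.parser import HTMLParser

class Linearizer(HTMLParser):
    """Collects text in source order, ignoring all markup."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

# Nested tables in the style of Figure 1 (hypothetical markup):
html = (
    "<table><tr><td>Find Information About</td></tr>"
    "<tr><td><table><tr><td>Food</td></tr>"
    "<tr><td>on sale!</td></tr></table></td></tr>"
    "<tr><td>Clients</td><td>Visit our Partners</td></tr></table>"
)
parser = Linearizer()
parser.feed(html)
print(" ".join(parser.chunks))
# -> Find Information About Food on sale! Clients Visit our Partners
```

Cell contents a human reads as separate columns collapse into one string, destroying keyword proximity and distribution.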
Burning the Trees and Keyword Weight Fights.
In the best-case scenario, linearization shows whether words, phrases and passages end up competing for relevancy in a distorted lexicographical tree. I call this phenomenon “burning the trees”. It is one of the most overlooked web design and optimization problems.
Constructing a lexicographical tree out of linearized content reveals the actual state and relationship between nouns, adjectives, verbs, and phrases as they are actually embedded in documents. It shows the effective data structure that is being used. In many cases, linearization identifies local document concepts (noun groups) and hidden grammatical patterns. Mandelbrot has used the patterned nature of languages observed in lexicographical trees to propose a measure he calls the “temperature of discourse”. He writes: “The ‘hotter’ the discourse, the higher the probability of use of rare words.” (1). However, from the semantics standpoint, word rarity is a context-dependent state. Thus, in my view, “burning the trees” is a natural consequence of misplacing terms.
In Fractals and Sentence Production, Chapter 9 of From Complexity to Creativity (2, 3), Ben Goertzel uses an L-System model to explain that the beginning of early childhood grammar is the two-word sentence, in which the iterative pattern involving nouns (N) and verbs (V) is driven by a rule in which V is replaced by V N (V >> V N). This can be illustrated with the following two iteration stages:
0 N V (as in Stevie byebye)
1 N V N (as in Stevie byebye car)
Goertzel explains, “The reason N V is a more natural combination is because it occurs at an earlier step in the derivation process.” (3). It is now comprehensible why many Web documents do not deliver any appealing message to search engines. After linearization, one may realize that these documents are “speaking” like babies. [By the way, L-System algorithms, named after A. Lindenmayer, have been used for many years in the study of tree-like patterns (4).]
“Burning the trees” explains why repeating terms in a document, moving around on-page factors or invoking link strategies does not necessarily improve relevancy. In many instances one can get the opposite result. I recommend that SEOs start incorporating lexicographical/word-pattern techniques, linearization strategies and local context analysis (LCA) into their optimization mix. (5)
In Figure 1, “burning the trees” was the result of improper positioning of text. However, in many cases the effect is a byproduct of sloppy Web design, poor usability or improper use of the HTML DOM structure (another kind of tree). This underscores an important W3C recommendation: that HTML tables should be used for presenting tabular data, not for designing Web documents. In most cases, professional web designers can do better by replacing tables with cascading style sheets (CSS).
“Burning the trees” often leads to another phenomenon I call “keyword weight fights”. It is a recurrent problem encountered during topic identification (topic spotting), text segmentation (based on topic changes) and on-topic analysis (6). Considering that co-occurrence patterns of words and word classes provide important information about how a language is used, misplaced keywords and text without clear topic transitions complicate the work of text summarization editors (human or machine-based) that need to generate representative headings and outlines from documents.
Thus, the “fight” unnecessarily complicates topic disambiguation and the work of human abstractors who, during document classification, need to answer questions like “What is this document or passage about?”, “What is the theme or category of this document, section or paragraph?”, “How does this block of links relate to the content?”, etc.
While linearization renders localized KD values useless, document indexing makes a myth out of this metric. Let’s see why.
Tokenization, Filtration and Stemming
Document indexing is the process of transforming document text into a representation of text and consists of three steps: tokenization, filtration and stemming.
During tokenization, terms are lowercased and punctuation is removed. Rules must be in place so digits, hyphens and other symbols can be parsed properly. Tokenization is followed by filtration, during which commonly used terms and terms that do not add semantic meaning (stopwords) are removed. In most IR systems the surviving terms are further reduced to common stems or roots. This is known as stemming. Thus, the initial content of length l is reduced to a list of terms (stems and words) of length l’ (i.e., l’ < l). These processes are described in Figure 2. Evidently, if linearization shows that you have already “burned the trees”, a search engine will be indexing just that.
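The three steps can be sketched as follows. This is a toy pipeline: the stopword list and the crude suffix-chopping “stemmer” are stand-ins for real components such as the Porter stemmer.

```python
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "is", "are", "in"}  # toy list

def tokenize(text):
    """Tokenization: lowercase and strip punctuation."""
    return re.findall(r"[a-z0-9]+", text.lower())

def filtrate(tokens):
    """Filtration: drop stopwords that add no semantic meaning."""
    return [t for t in tokens if t not in STOPWORDS]

def stem(token):
    """Stemming: crude suffix stripping (a stand-in for a real stemmer)."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

text = "The indexing of documents reduces texts to stems."
tokens = tokenize(text)                             # length l
index_terms = [stem(t) for t in filtrate(tokens)]   # length l', with l' < l
print(index_terms)  # ['index', 'document', 'reduc', 'text', 'stem']
```

The list of index terms is shorter than the token list, illustrating the reduction from l to l’.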
Similar lists can be extracted from individual documents and merged to form an index of terms. This index can be used for different purposes; for instance, to compute term weights and to represent documents and queries as term vectors in a term space.
The weight of a term in a document combines three different types of term weighting: local, global, and normalization. The term weight is given by

w_{i,j} = L_{i,j} * G_i * N_j    (Equation 2)

where L_{i,j} is the local weight for term i in document j, G_i is the global weight for term i and N_j is the normalization factor for document j. Local weights are functions of how many times each term occurs in a document, global weights are functions of how many documents in the collection contain each term, and the normalization factor corrects for discrepancies in the lengths of the documents.
In the classic Term Vector Space model

L_{i,j} = tf_{i,j}    (Equation 3)

G_i = log(D/d_i)    (Equation 4)

N_j = 1    (Equation 5)

which reduces to the well-known tf*IDF weighting scheme

w_{i,j} = tf_{i,j} * log(D/d_i)    (Equation 6)

where log(D/d_i) is the Inverse Document Frequency (IDF), D is the number of documents in the collection (the database size) and d_i is the number of documents containing term i.
Equation 6 is just one of many term weighting schemes found in the term vector literature. Depending on how L, G and N are defined, different weighting schemes can be proposed for documents and queries.
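A numeric sketch of Equation 6 (base-10 logarithms, consistent with the D = 10*d argument below; the collection numbers are invented for illustration):

```python
import math

def tfidf(tf, D, d):
    """Equation 6: w = tf * log10(D / d)."""
    return tf * math.log10(D / d)

# A term occurring 20 times, found in 100 of 10,000 documents:
print(tfidf(20, 10_000, 100))     # 20 * log10(100) = 40.0
# The same tf is worth nothing if every document contains the term:
print(tfidf(20, 10_000, 10_000))  # 20 * log10(1) = 0.0
```

Unlike KD, the weight depends on the whole collection (D and d_i), not just on the document at hand.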
KD values as estimators of term weights?
The only way that KD values could be taken for term weights

w_{i,j} = KD_{i,j} = tf_{i,j} / l_j

is if global weights are ignored (G_i = 1) and the normalization factor N_j is redefined in terms of document length (N_j = 1/l_j).

However, G_i = IDF = 1 constrains the collection size D to be equal to ten times the number of documents containing the term (D = 10*d), and N_j = 1/l_j implies no stopword filtration. These conditions are not observed in commercial search systems.
Using a probabilistic term vector scheme in which IDF is defined as

IDF = log((D - d_i)/d_i)

does not help either, since the condition G_i = IDF = 1 implies that D = 11*d. Additional unrealistic constraints can be derived for other weighting schemes when G_i = 1.
To sum up, the assumption that KD values could be taken for estimates of term weights or that these values could be used for optimization purposes amounts to the Keyword Density of Non-Sense.
Mandelbrot, B. B. (1983). The Fractal Geometry of Nature. Chapter 38. New York: W. H. Freeman.
A well-known conference for an academic getaway is the Euroma conference, an annual event where academics from around the world, including the UK, gather to discuss important matters in operations management. Included in this brain-expanding event is a brain-cell-destroying gala dinner where some 500 or so profs and lecturers drink copious quantities of alcohol out of sight of the prying eyes of the university 🙂
Here is a short clip of the fantastic entertainment put on for the Gala dinner by our hosts. More on the actual content of the conference later.
Here is a nice picture of Stephanie, one of our regular contributors, in an earlier incarnation as a computer word processing sales manager. The football manager in the picture is Graham Taylor, England’s manager from 1990 to 1993, when he had a bit of a raw deal with a Dutch player scoring a killer goal after a dreadful foul (see the story here: http://bbc.in/1QzjJNs). Anyhoo, here is Graham in happier times at Aston Villa.
Editing and hacking your website in the live environment can be risky. A useful approach is to install WAMP server locally on your laptop, load up your website software, then carry out your hacks in a safe environment. Now that we have set up correctly, as shown in the last tutorial, creating separate logins for each application is very straightforward. Let’s assume I am going to create a local WordPress instance on my laptop.
The next steps create the logins:
Log in to phpMyAdmin using the root user and password
From the home administration screen go to the tab ‘Privileges’:
Select ‘Add a new User’ about a quarter of the way down on the left.
The rest of the Add User dialogue opens up as shown.
Enter wordpress for the user name
Select local from the host drop down and localhost will appear in the field to the right.
Enter wordpress for the password (repeat wordpress in the retype box)
Select the checkbox ‘Create database with same name and grant all privileges’
Check All for global privileges if you wish
Scroll down to the bottom of the screen and hit GO as shown on the screen shot.
The direct execution panel for SQL will show the query being run and the creation of the new database along with the credentials we entered above.
Log out of root and you will go back to the initial login screen. To check all is OK, just log in again with the username ‘wordpress’ and password ‘wordpress’.
Download the latest distribution of WordPress from www.wordpress.org (currently 3.1)
Unzip the package and copy the wordpress folder to the httpdocs folder usually c:\wamp\www\httpdocs\
When you have loaded wordpress into the httpdocs directory on the laptop, open up localhost from the wampserver services tab and navigate to the wordpress directory, and the famous wordpress install routine will start up. Just fill in the db user and password as ‘wordpress’, enter the default user details, and the install completes in about two seconds.
When that is complete just create an alias to wordpress or bookmark the local site to complete the process.
Repeat the above for each local application you wish to install.
When installing local applications on your laptop to run on WAMP server it is often useful to create separate databases and logins for the applications so you can keep things in order. There are several ways to do this but I thought I would set down a step by step process that dummies such as me could follow.
So if you wish to install WordPress, Mantis or Limesurvey as local applications this short tutorial shows you how to do it.
When WAMP server is first installed, the root user is created with no password and, by default, no intermediate login screen is available.
The first task is to edit the phpMyAdmin config file to correct this:
Navigate to C:\wamp\apps\phpmyadmin3.3.9
And open config.inc.php in WordPad
Define two passwords, yourpasswordA and yourpasswordB
Add the changes to your config.inc.php file as shown in red below
Next go to the WAMP server services panel and select phpMyAdmin
A login screen will appear as shown below – enter your login details as:
You will then be taken to the phpMyAdmin administration screen, where you will see that you have logged in as root@localhost if all has gone well. This is the screen you normally go to when no login routine is in place.
I’m on a roll today so I thought I’d share another of my irrational moans
The question of the first class ticket and the freeloaders. Now, when I feel like being a big shot I buy a first class season ticket from Lingfield to London so as to enjoy the privilege and peace and quiet of the first class ‘cabin’ on your Southern Services Rail. Now, leaving aside that the ‘first class cabin’ is in every respect exactly the same as the one the paupers ‘enjoy’, the only thing you get is a half decent shot at a seat for the whole journey. So you can imagine, dear reader, that I get rather wound up when one of the lower orders piles into the first class area and plonks himself down in a seat (cap wrong way around, reading the Sun and backside hanging out of his jeans) with clearly no first class ticket. Now, rather than not worrying about this, as I should be relaxing and putting myself into a tolerant mood ready for the day, I find myself getting rather p****d off that the guard, who is hiding in his slot, does not come and check the tickets. Even when they do (on ascension day every two years) it’s “Oh, sorry sir/madam, this is a standard ticket and you have to move.” “Oh, is it? I did not notice (arghh!!!!). Sorry, I’ll move.” So it goes on, day after day. So I now think: what am I paying for if there is no sanction for those not playing the game? Now I may be the only person that thinks this. (You are, and that’s enough of this rant. Ed.) Royston
The best way to find out whether your project is feasible is to complete a Feasibility Study. This process helps you gain confidence that the solution you need to build can be implemented on time and under budget. So here’s how to do it in 5 simple steps…

Completing a Feasibility Study

A Feasibility Study needs to be completed as early in the Project Life Cycle as possible. The best time to complete it is when you have identified a range of different alternative solutions and you need to know which solution is the most feasible to implement. Here’s how to do it…

Step 1: Research the Business Drivers

In most cases, your project is being driven by a problem in the business. These problems are called “business drivers” and you need to have a clear understanding of what they are, as part of your Feasibility Study. For instance, the business driver might be that an IT system is outdated and is causing customer complaints, or that two businesses need to merge because of an acquisition. Regardless of the business driver, you need to get to the bottom of it so you fully understand the reasons why the project has been kicked off. Find out why the business driver is important to the business, and why it’s critical that the project delivers a solution to it within a specified timeframe. Then find out what the impact will be to the business if the project slips.

Step 2: Confirm the Alternative Solutions

Now that you have a clear understanding of the business problem that the project addresses, you need to understand the alternative solutions available. If it’s an IT system that is outdated, then your alternative solutions might include redeveloping the existing system, replacing it or merging it with another system. Only with a clear understanding of the alternative solutions to the business problem can you progress with the Feasibility Study.

Step 3: Determine the Feasibility

You now need to identify the feasibility of each solution.
The question to ask of each alternative solution is “can we deliver it on time and under budget?” To answer this question, you need to use a variety of methods to assess the feasibility of each solution. Here are some examples of ways you can assess feasibility:
Research: Perform online research to see if other companies have implemented the same solutions and how they got on.
Prototyping: Identify the part of the solution that has the highest risk, and then build a sample of it to see if it’s possible to create.
Time-boxing: Complete some of the tasks in your project plan and measure how long it took vs. planned. If you delivered it on time, then you know that your planning is quite accurate.
Step 4: Choose a Preferred Solution

With the feasibility of each alternative solution known, the next step is to select a preferred solution to be delivered by your project. Choose the solution that is most feasible to implement, has the lowest risk, and that you have the highest confidence of delivering. You’ve now chosen a solution to a known business problem, and you have a high degree of confidence that you can deliver that solution on time and under budget, as part of the project.

Step 5: Reassess the Feasibility at a Lower Level

It’s now time to take your chosen solution and reassess its feasibility at a lower level. List all of the tasks that are needed to complete the solution. Then run those tasks by your team to see how long they think it will take to complete them. Add all of the tasks and timeframes to a project plan to see if you can do it all within the project deadline. Then ask your team to identify the highest risk tasks and get them to investigate them further to check that they are achievable. Use the techniques in Step 3 to give you a very high degree of confidence that it’s practically achievable. Then document all of the results in a Feasibility Study.
After completing these 5 steps, get your Feasibility Study approved by your manager so that everyone in the project team has a high degree of confidence that the project can deliver successfully.
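One way to make the Step 4 selection less subjective is a simple weighted scoring sheet. The sketch below uses invented criteria, weights and scores purely for illustration (higher scores are better; for risk, a higher score means lower risk):

```python
# Assumed criterion weights (must sum to 1.0):
weights = {"feasibility": 0.5, "risk": 0.3, "confidence": 0.2}

# Hypothetical scores (1 = poor, 5 = excellent) for the alternatives from Step 2:
solutions = {
    "Redevelop existing system": {"feasibility": 4, "risk": 3, "confidence": 4},
    "Replace with new system":   {"feasibility": 3, "risk": 2, "confidence": 3},
    "Merge with other system":   {"feasibility": 2, "risk": 2, "confidence": 2},
}

def weighted_score(scores):
    """Weighted sum of criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

preferred = max(solutions, key=lambda name: weighted_score(solutions[name]))
for name, scores in sorted(solutions.items()):
    print(f"{name}: {weighted_score(scores):.2f}")
print("Preferred solution:", preferred)
```

Documenting the scores and weights in the Feasibility Study also makes it easier for your manager to see why the preferred solution was chosen.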
Search Engine Optimization is the process of choosing the most appropriate targeted keyword phrases related to your site and ensuring that your site ranks highly in search engines, so that when someone searches for those phrases your site is returned at or near the top of the results. It basically involves fine-tuning the content of your site, along with the HTML and meta tags, and also an appropriate link building process. The most popular search engines are Google, Yahoo, MSN Search, AOL and Ask Jeeves. Search engines keep their methods and ranking algorithms secret, both to get credit for finding the most valuable search results and to deter spam pages from clogging those results. A search engine may use hundreds of factors while ranking the listings, and the factors themselves and the weight each carries may change continually. Algorithms can differ so widely that a webpage ranking #1 in one search engine could rank #200 in another. New sites need not be “submitted” to search engines to be listed. A simple link from a well-established site will get the search engines to visit the new site and begin to spider its contents. It can take from a few days to a few weeks after such a link appears for all the main search engine spiders to commence visiting and indexing the new site.
If you are unable to research and choose keywords and work on your own search engine ranking, you may want to hire someone to work with you on these issues.
Search engine marketing and promotion companies will look at the plan for your site and make recommendations to increase your search engine ranking and website traffic. If you wish, they will also provide ongoing consultation and reporting to monitor your website and make recommendations for editing and improvements to keep your site traffic flow and your search engine ranking high. Normally, search engine optimization experts work with your web designer to build an integrated plan right away so that all aspects of design are considered at the same time.
What if you discovered how to get started making massive money from your coaching program easily? Here are 5 simple steps to get you started.
Step 1 – Help your clients to solve their problems.
Step 2 – Be absolutely honest in what you are providing them.
Step 3 – Win their trust and establish yourself as an expert in your niche.
Step 4 – Make your coaching interesting and interactive.
Step 5 – Solve their most pressing questions to get results.
Here are step-by-step details that you can apply quickly and easily…
Step 1 – Help your clients to solve their problems.
To make massive income online from your coaching, your main goal should be to help your clients solve their problems. All you need to do is help them out and be reachable when they are in need. Give them one-on-one support, and this will make sure that they are always motivated to stay subscribed to your coaching program. Honesty is the key to massive coaching success…
Step 2 – Be absolutely honest in what you are providing them.
You have to be absolutely honest with your clients and tell them exactly what you can provide through your coaching and what you cannot. The reason is that they are paying you big money and they will surely expect something more from you in return. Therefore you have to make sure to specify exactly what they will be getting in terms of products and your personal time. Trust and relationships are the key to massive coaching success…
Step 3 – Win their trust and establish yourself as an expert in your niche.
If you are planning to start your coaching program, you have to be sure that you establish a trust factor with your visitors before you go about promoting your coaching to them. This is because no one online will shell out thousands of dollars for your coaching without knowing and trusting you as an expert in your niche. Therefore, make sure that you set up a system wherein your clients naturally come to trust you; then you can softly promote your coaching at the backend. The more interesting your coaching program is, the more money you will make…
Step 4 – Make your coaching interesting and interactive.
It is absolutely important that your coaching program is interesting. You can do this easily by making your coaching sessions interactive and by allowing your clients to participate in your coaching calls. The easiest way to do this is to tell your clients to ask you questions as soon as you are done with a particular coaching topic; then you can discuss the solution with your clients. A question and answer session will make you big money out of your coaching…
Step 5 – Solve their most pressing questions to get results.
Make sure that you set up a teleseminar in your coaching program where you provide a question and answer session for your clients. All you have to do is conduct a weekly teleseminar specifically for your coaching clients and allow them to throw questions at you. Their questions will provide you with a bunch of ideas for your next group coaching call.
Do you want to learn more about how I do it? I have just completed my brand new free guide, “How to Generate a Full Time Income Online Selling Your Coaching Services”.
Download it free here: How to Sell Coaching
Do you want to learn how to use articles like this to drive targeted traffic to your site? Click here: Article Writing Guide