- Member Type(s): Expert
- Title: SVP of Enterprise Solutions
- Area of Expertise: Customer Service and Support
Tuesday, September 13, 2011, 1:47 PM
I have been in the customer service business for more than two decades, and the same discussions about customer service continue to rage: “Look how great Southwest Airlines is,” “Look how poor Comcast is,” and so on, with new examples of outstanding and poor customer service emerging every few years.
So why is Southwest Airlines so well known for great customer service? It’s because they are transparent – what you see is what you get. Because of this transparency, the people who work for Southwest are empowered to make decisions: they have insight into their customers and into what the company can and cannot provide. It is quite simple; when you have information at your fingertips, you can make decisions that shape the outcome of your customer’s experience.
So why can’t everyone deliver a customer experience like Southwest? They could, but most companies have put themselves in a position that doesn’t allow them to execute with the needed level of internal and external transparency to bring this additional value to their customers.
So who’s to blame? The real blame lies with executive leadership, but most often it is pinned on IT. IT has a very difficult job to do. On one side, they need strong governance in place to protect their company’s environment and assets, which is a big responsibility. However, they often go too far, mistakenly locking down accessibility to their systems of record as part of a strict governance process. Guess what this does to transparency? It makes it inaccessible. A direct result of this lockdown of access to information has been the proliferation of departmental point solutions that clog the efficiency and effectiveness of most organizations.
The World Wide Web is a prime example of transparency. Take Facebook, for example. If somebody posts something about an article they read online, anyone else interested in the same topic can see that new information within seconds. However, if we add new information about a customer to our CRM system, it might take three to 12 months before that information is replicated and integrated into all the systems that would need it to make the customer transparent within our organization – if it ever happens at all.
Let’s look at it from the customer’s viewpoint. At any time, they can go online and see everything about your company – what you are marketing, what kind of training you have, who does your services work, where to call for support, who your CEO is, how your stock price is doing, what your latest product version entails, what other customers are saying about your products, and much, much more. The customer has a very transparent view of your company.
To continue reading, please visit: blog.coveo.com/information-access/transp...
Wednesday, June 22, 2011, 4:40 PM
As I mentioned in my previous blog post (part 1 of BI vs. Analytics), the amount of information impacting business operations continues to grow, as markets change and the rate of adoption of new technologies increases. So what’s the next step in making sense of all this data, quickly and efficiently? The answer is combining business intelligence and analytics, driven by Enterprise Search 2.0 platforms, to get the results you need.
Is measuring the variance in predictability really analytics?
Business intelligence as a platform has significantly improved the ability of businesses to gain insight on answering some of their most important performance questions. At a very basic level, here’s how it works:
- The designer of the data warehouse painstakingly sifts through a myriad of information that the business leaders say is important to run their business, looking for the appropriate data that will provide the answers.
- Once found, models are created so that the information is now being captured and monitored.
Now the question is, since this is a planned metric, at what point did analysis take place? If we assume that it occurred at design time, then this metric has become predictable because the only thing it is capable of reporting is what the model was originally designed to tell us. For example, the model may be designed to monitor the relationship between parts and suppliers. If inventory falls below 20%, an alert will appear for someone to come and order new products. Good designers will look for all the possible combinations they can think of to understand why parts would drop below 20% and put in metrics, scorecards, dashboards etc, to show what is happening.
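The pre-planned metric described above can be sketched as a simple rule that can only ever report the one condition it was designed to watch. Everything here, including the part names and the 20% threshold, is illustrative rather than taken from any real BI product:

```python
# A static, design-time rule: it can only answer the question it was built for.
REORDER_THRESHOLD = 0.20  # alert when stock falls below 20% of capacity

def check_inventory(parts):
    """Return (name, stock ratio) alerts for parts below the threshold.

    `parts` maps part name -> (units_on_hand, capacity).
    """
    alerts = []
    for name, (on_hand, capacity) in parts.items():
        ratio = on_hand / capacity
        if ratio < REORDER_THRESHOLD:
            alerts.append((name, round(ratio, 2)))
    return alerts

inventory = {
    "gasket":  (15, 100),   # 15% of capacity -> should trigger an alert
    "bearing": (80, 100),   # 80% of capacity -> fine
}
print(check_inventory(inventory))  # [('gasket', 0.15)]
```

Any question the designer did not anticipate, say, *why* gaskets keep dropping below 20%, falls outside what this model can report.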
There is a slight problem, however.
The models generated to create the business intelligence warehouse are static in nature. What this means is that if additional information is required in the future, then so is the entire process of rebuilding the model, extracting the data, reloading the data, and republishing the warehouse before the new data is available to analyze the new question that needs to be asked. Often, little sub-warehouses are created to speed up this process by not moving as much data and publishing information faster. Although ideal in theory, these sub-warehouses contribute to the issue of the proliferation of data – duplicating data that then needs to be updated in more than one location.
Our conclusion is that business intelligence is great at static analysis or measuring predictable results of pre-planned conditions. But what do we do when something unexpected happens?
When static analytics are not enough, what’s next?
What’s next is “dynamic analytics.” Let’s take an internet search as an example. The first thing I would do is go to a search box and type in “species of frogs.” I could then count the total number of species, but what if I just want to count bright green frogs? I can type “bright green frogs” – because this data exists on the internet, in no particular structure, it can further refine my search. This is fun: “bright green frogs found in South America,” “bright green frogs in South America that live in trees.” Each of these queries is possible, and each one provides me with more information.
So what is the difference between internet searches and the business intelligence environment? Every day I could type in to the search box “bright green frogs in South America that live in trees,” and every day I could potentially get a different answer – maybe some new data was added due to the fact that destruction of the rainforest caused a species of green frogs to become extinct or scientists discovered a new species of green frogs in another area of South America, etc.
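A toy version of this progressive narrowing, over an invented document set, might look like the following. The point is that each added term shrinks the result set, and re-running the same query after the underlying data changes returns a different answer, with no model rebuild:

```python
# Hypothetical mini-corpus; each string stands in for an indexed document.
docs = [
    "bright green frogs found in South America",
    "bright green frogs of Southeast Asia",
    "brown frogs in South America that live in trees",
]

def search(query, corpus):
    """Return documents containing every word in the query (AND semantics)."""
    terms = query.lower().split()
    return [d for d in corpus if all(t in d.lower() for t in terms)]

print(len(search("frogs", docs)))                             # 3
print(len(search("bright green frogs", docs)))                # 2
print(len(search("bright green frogs South America", docs)))  # 1

# Tomorrow the corpus may change -- no rebuild required before re-querying:
docs.append("new species of bright green frogs in South America")
print(len(search("bright green frogs South America", docs)))  # 2
```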
With Enterprise Search 2.0 platforms, this dynamic concept of searching and obtaining relevant information is now possible.
Shifting to Enterprise Search 2.0-powered dynamic analytics for business
Innovative and advanced organizations see the value and power of a unified search platform for their business. Using a series of state-of-the-art data connectors to connect disparate data systems in your information ecosystem allows information to be pulled into a common unified index that can consolidate, correlate and normalize the data in near real time and provide ubiquitous access to it.
Isn’t that what the internet is – a common index of information that is accessible by everyone? Like the internet, Enterprise Search 2.0 platforms can enrich their business environments, providing dynamic mash-ups of key relationships between non-integrated data systems through a search query as opposed to through a warehouse that takes days or weeks to rebuild and recreate by moving all the data. Instead of moving the data, the unified index approach only references it, so when new applications or new entities are added to existing applications they become part of the index and are fully accessible.
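A minimal sketch of the unified-index idea follows, under the assumption that the index stores only references (source system, record ID) rather than copies of the data; the system names and records are hypothetical:

```python
from collections import defaultdict

class UnifiedIndex:
    """Toy unified index: terms map to references, not to copied data."""

    def __init__(self):
        # term -> set of (source_system, record_id) references
        self._postings = defaultdict(set)

    def index_record(self, source, record_id, text):
        for term in text.lower().split():
            self._postings[term].add((source, record_id))

    def lookup(self, term):
        """Return references to where the term lives; the data never moves."""
        return sorted(self._postings[term.lower()])

idx = UnifiedIndex()
idx.index_record("CRM",   "c-17", "Acme Corp support contract renewal")
idx.index_record("Sales", "s-42", "Acme Corp Q3 order")

print(idx.lookup("acme"))  # [('CRM', 'c-17'), ('Sales', 's-42')]
```

When a new application is indexed, its records simply join the postings; nothing is extracted, reloaded, or republished.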
Republished with permission of author from original post.
Thursday, June 16, 2011, 3:28 PM
I am sure you have heard it all before: Customer 360° views, Account 360° views and Project 360° views. But has anyone actually delivered on this promise? I am not going to try and sell you that there’s a product out there that can do all this, but what I do want to bring to your attention is a new paradigm shift in the way information is accessed that can make this possible.
Just about every functional or infrastructure application likely has some kind of dashboard that is marketed or sold with the claim that it can show all of your data in a 360 degree view. To some extent, these claims are true. But all too often, the fact is, they can only show you the piece of the data captured within their own application, displayed in their own dashboard. That doesn’t sound like a true 360 degree view to me. And if you want to see more, you need to go down the expensive and time-consuming integration path. Yuck!
The challenge for many organizations is that they are already saturated with applications that claim this capability. I often get asked, “So how does a business manager or IT manager justify buying another solution when they have already invested in all these disparate, but important systems?”
There are a few questions that I ask a client to answer before I give them an answer to this. Try answering these questions yourself:
- Does all the data you want to see in a 360 degree view come from the same application?
- Would a major physical integration project bring together the data that is missing and would it be effective?
- Do you want to modify your best of breed applications, each with a specific function to perform, and try to get them to do something else, at the risk of disrupting or diluting the purpose for which they were purchased?
If you answered NO to any of these questions, then it is time to let your data do the talking!
The Enterprise Search 2.0 Platform securely indexes data from each of your disparate systems into a single, unified index. Once all the systems are indexed, a particular piece of data, like account number “200234594”, can be searched for and found across multiple sources – a CRM system, sales system, case management system, defect tracking system, or perhaps a professional services engagement Statement of Work. The result set returned from the query tells you which sources the data was retrieved from. Armed with this knowledge, logical relationships can be made at the index level, giving you guided access to the full subset of data from any query looking for account information. There is more to it than this simple example, but not much more – and it is certainly a lot simpler than a physical integration that does the same thing.
So to answer the original question above: first, you want to keep your best of breed solutions as pure as possible and optimized for their principal functionality. Secondly, by leveraging the notion of “letting the data talk,” there are fewer of the complex, disruptive, pre-planned integration-style projects that are costly and time consuming. The final point comes down to the fact that in most IT and business infrastructures, the data is never stored in one location, so you really want to access it where it is stored. Note that the unified index I mentioned earlier does not move the data; the data stays where it belongs, within the native application, and the unified index simply references it when it is needed.
Leveraging the unified index through an Enterprise Search 2.0 Platform allows you to create those complete 360 dashboards filled with all the information you need without the major overhead of physical integration. In fact, I have seen many of our customer 360 projects take less than a month to implement from start to finish. Can you say that for other attempts at a 360 degree view of information? If not, I think it is worth a closer look at Enterprise Search 2.0.
Republished with permission of author from original post.
Friday, June 3, 2011, 3:29 PM
If you needed further evidence that customer support operations are overwhelmed by data, look no further than the joint research paper released today by the Technology Services Industry Association (TSIA) and Coveo entitled, “Enterprise Search 2.0-Powered Analytics: Transforming Data into Actionable Knowledge.” New data revealed within the report includes this eye-opening statistic: TSIA members receive, on average, 51,000 support incidents per month. These include phone, email, Web chat, and online incidents, each filled with critical information about products and services that could be mined for trends.
I was pleased to have contributed to this report. As the title suggests, the report focuses on Enterprise Search 2.0-powered customer service analytics, a topic relevant to today’s customer service organizations who are awash in oceans of data, and one where we have much expertise to offer. The aim of the report is to help readers understand how support teams are leveraging analytics to deliver real business value in the areas of operational impact, knowledge management, multi-channel management and voice of the customer.
The report outlines how the amount of data flowing through support organizations is increasing every year due to rising interaction volumes and social media activity. The report also reveals interesting figures regarding customer satisfaction scores by channel. In the graph below, you can see the averages follow the same curve as cost: the more human interaction, the higher the satisfaction. The low ratings for self-service are particularly troubling, showing that first-generation knowledgebase and full-text search tools are not keeping pace with customer demand.
The report provides readers with a plan of attack for migrating traffic to the most effective channel for their customers, which can mean serious financial savings, as well as real-world case studies of organizations that have measured significant business benefits from a unified, 360-degree view of customer information across multiple channels.
One solution to this data overload is the adoption of analytics in the form of 360-degree views of data centered on what matters most: the customer and the customer base, product and sales information, and customer support performance metrics. The ability to consolidate and correlate data from multiple sources enables the detection of customer trends and the identification of new operational and financial insights.
The full TSIA/Coveo report –“Enterprise Search 2.0-Powered Analytics: Transforming Data into Actionable Knowledge” – can be accessed here: www.coveo.com/TSIAreport.
Republished with permission of author from original post.
Friday, May 27, 2011, 4:51 PM
Information impacting business operations is diverse, complex and growing at staggering rates. Due to unrelenting competition, changing markets, and accelerating rates of adoption for new technology, there is a tremendous strain on IT and business infrastructures. Accessibility to actionable knowledge continually sparks the debate between business intelligence and analytics, questioning the roles each of them play in making informed decisions.
In the past, organizations have struggled to find people willing to sift through mountains of data in order to properly analyze the information needed to make smart decisions. BI made this process easier by introducing analytics as part of the company’s strategic decision making process. Unfortunately, many companies striving to run their entire organization based on BI alone have fallen short for a number of reasons:
- The same people who were sifting through all of the data are now trying to manage the surplus of data required to create an all-encompassing warehouse;
- BI infrastructure and design are faced with a dilemma: as soon as they are completed, they are out of date due to the massive proliferation of data in the business ecosystem. It is almost impossible for organizations to keep up with the veritable explosion of data from new sources;
- The needs of an organization are constantly shifting. In order to respond to these changes, it is necessary (but virtually impossible) to anticipate today what will happen tomorrow.
My guess is that this debate between BI and analytics has been in progress since the inception and branding of BI as a standalone discipline. BI, as I see it, is a complete end-to-end platform consisting of tools, processes and business models that allow for the retrieval of relevant information in the best format for your business. At this level, analytics is a key part of the BI process. It’s about the predictability of the business – to the extent that you can predict it – based on potential variances from business norms. The question of what data is being retrieved becomes static in the bigger picture.
One of the biggest questions I hear raised in the debate of BI vs. analytics is: “How dynamic must the access/navigation of information be to really make analytics representative of true business intelligence?” I believe the answer lies in leveraging Enterprise Search 2.0 platforms as a driving source for business intelligence and analytics, and I will explore this idea further in my next blog post.
Republished with permission of author from original post.
Friday, May 20, 2011, 1:18 PM
Where does knowledge really reside in your organization?
If managing knowledge was an easy problem to solve, it would have been solved years ago. Today, however, in this age of information, organizations continue to struggle with the notion of knowledge.
Here’s the knowledge challenge: it is not really about the creation of knowledge, but about where to put it and how to access it once you have housed it somewhere.
To date, the theory of knowledge management has been around identifying the knowledge, categorizing it so we know what it is, storing it somewhere, and having a process to retrieve it again. It all sounds simple, so why has it not been done?
Here are five key reasons why this challenge exists:
Reason #1: Not all information is used or understood by everyone – therefore how can it be accurately categorized?
Reason #2: Only certain people can create the knowledge; therefore, only those people can maintain it.
Reason #3: If the physical format of the information varies within a single organization, then it definitely does when collecting information externally.
Reason #4: Not all creators of knowledge are English or Journalism majors. This means content may vary significantly.
Reason #5: The effort to manage knowledge often outweighs the value, due primarily to the four reasons above.
Knowledge is very important to an organization, and so is finding a way to leverage this knowledge across your ecosystem. Here then, I would like to present five key mechanisms for leveraging this knowledge while returning significant ROI:
Revelation #1: Keep the knowledge close to its owners so they know where it is when they have to create or modify it. If additional processes or procedures are put on top of the creation and modification loop, it will significantly reduce the production and value of that information.
Revelation #2: Having multiple repositories that support and leverage the format of the information is vital for easy storage and retrieval.
Revelation #3: Let the knowledge itself determine for whom it is valuable. Don’t preset categories that will surely miss candidates who could have used this knowledge but did not realize the information existed.
Revelation #4: Educate your staff on the value of creating knowledge and sharing it, especially when it is easy to do without overhead.
Revelation #5: Rank the information you use on a daily basis so that your organization can see what information is being used and what information is valuable.
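Revelation #5 could be approached, in miniature, by counting retrievals per document, so the organization can see which knowledge is actually used. The document names and access log below are invented for the example:

```python
from collections import Counter

# Each entry is one retrieval of a knowledge document (e.g. from search logs).
access_log = [
    "vpn-setup-guide", "vpn-setup-guide", "printer-faq",
    "vpn-setup-guide", "holiday-policy",
]

usage = Counter(access_log)

# Most-used documents first; rarely used ones may point to gaps or dead weight.
for doc, views in usage.most_common():
    print(f"{doc}: {views}")
# vpn-setup-guide: 3
# printer-faq: 1
# holiday-policy: 1
```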
So how can this all come together easily? There are significant benefits to having a unified view of all your disparate data from one focal point without having to move it. It allows you to create methodologies that apply to all your knowledge without having to modify or change the existing systems, and it becomes a behavioral change versus a physical change, which can be very expensive.
Another benefit is that because the information and data stay close to their owners and within the originating systems, there tends to be more ownership of and pride in the content, versus shipping it off to another system with all kinds of rules and regulations about who can access the information and how. The third main benefit of a unified 360-degree view of your information is that you can really see how your information is used, what is really important, and where you might have gaps in your knowledge or in accessibility to knowledge.
Republished with permission of author from original post.
Wednesday, April 6, 2011, 11:35 AM
Customer Service continues to be a regular topic of conversation within the leadership ranks of many companies. As companies look for ways to keep customer satisfaction and loyalty at the highest levels possible, the service organization continues to take on a lot of pressure to accomplish these tough goals.
At the core of customer service is the whole topic of self-service. In a recent article, John Ragsdale, VP of Research at the Technology Services Industry Association (TSIA), examines plummeting self-service satisfaction rates. The effectiveness of customer self-service has come under heavy scrutiny lately. Is it no longer doing what it was intended to do: help the customer whenever, wherever they might be?
Not to date myself, but I was around when customer self-service was the hot topic and a new innovation for companies; the concept of letting your customers service themselves was a significant opportunity to improve customer satisfaction while significantly reducing your costs through case deflection. Over the past fifteen years, self-service has continued to advance, but it now seems that it has hit a plateau, at least in some industries.
Retail and consumer organizations, in my opinion, are still having the best success with their self-service; question-and-answer profiles come from a finite set of issues and usually are not highly complex. In the technology industry, however, complexity and diversity have skyrocketed over the past 10 years, and the demands on the amount of knowledge required to solve problems are intense. Technology-based problems now usually require some form of multi-vendor knowledge or access to information, e.g., “I am having trouble printing on my Brother printer, using Windows 7, on an HP laptop over a Linksys wireless network – can you help me?” Where do you start?
Looking at the industry of customer service, and self-service in particular, I have to believe that there are other factors at play – and there are. Customers and customer expectations have changed significantly over the past few years with the advent of open access to the internet. Consumers are now finding more and more information for themselves from a wider variety of sources. Gas stations have not had to change their self-service model for years because the expectation of the consumer is set: we go when the service station is open, we abide by the rules, and we get our gas. Today, with the explosion of information and the immediate accessibility to it, our expectations are in constant flux.
Many companies have not taken the time to adjust to these new customer expectations in self-service environments. They continue to put single-dimensional information into single-dimensional storage mechanisms we call knowledge bases, and expect them to deliver complex solutions. Maybe that works for high-volume, low-complexity consumer environments, but it won’t work for highly diverse, complex ones.
So what are customers looking for?
1) Mashups of relevant information from a variety of trusted sources.
2) Easy and quick navigation through the multiple channels of complexity with a single vendor, but also across multiple vendors.
3) Real time access to the most relevant information.
4) Recognition. Customers want the vendor to know who they are, what their past experience was, and what their current situation is today.
5) Advanced notification of pending issues they might have, based on their buying history.
Companies need to understand that today, their customers want unified access to the whole knowledge ecosystem, both internal and external to the company. Customers expect the vendor to do the hard work and bring together, in an easy-to-use interface, all the information about issues that could potentially affect the products they have purchased. Customers want to have a relationship with your organization, even if that relationship is just the few seconds they spend using a powerful search solution on your self-service site to find their answers.
Read more on the Coveo blog.
Friday, April 1, 2011, 11:04 PM
This week we announced new research that reveals some harsh realities for today’s contact center. The survey results indicate that the biggest problems are caused by inefficient access to the information needed to solve customer issues, as data continues to proliferate beyond the traditional knowledgebase. Our survey was conducted in partnership with Omega Management Group – home to the Center for Loyalty Research and a leader in customer experience management (CEM) strategy.
Perhaps the harshest reality contact centers are facing is that the knowledgebase in which they have invested countless dollars and other resources, and which has been the center of their knowledge management strategy, is no longer enough.
While nearly 70% of customer service organizations report they’ve invested in a knowledgebase, that same percentage report that the knowledgebase does not contain the information necessary for agents to solve customer issues. For companies with more than 10,000 employees, 43% report that information that contact center agents need to access to resolve customer issues resides in more than 20 systems.
Other survey findings include the following:
- 70% of survey respondents indicated that they are facing significant challenges as a result of agents not being able to find necessary customer information.
- Respondents listed case handling time (50%), customer satisfaction (49%), and first contact resolution (FCR) (49%) as the top three challenges.
- 30% of participants estimated the impact of knowledgebase operational challenges at between $100,000 and $1 million per year, including six percent who put the cost at $1 million to $5 million.
We also created an infographic to depict some of the key survey findings.
Additional survey findings can be found in the official press release.
We’ve seen how the explosion of data is overwhelming practically every company, and customer service organizations are not exempt from the pressure. A negative customer experience directly impacts customer satisfaction, renewal rates, and other important metrics.
Are these challenges that your organization is facing, or that you have overcome?
Tuesday, March 8, 2011, 10:39 AM
Have you ever sat down at your computer in the evening to do a little research on the internet on some topic of interest, and then the next thing you know it is 1 a.m.? You’re astonished that you spent that much time searching, and you may or may not have found the information you were looking for. Searching for information, especially on the internet, puts many of us into a trance of hitting the next link, thinking maybe the answer is there. One topic leads to another, and so on. I know you have all been there.
Every day our workforce goes to the office to perform their jobs as effectively and efficiently as possible. However, for many, especially those in customer service oriented roles, the knowledge they need is not readily available, so they spend time “ALT-TABBING” between applications or searching the internet to find the answer. Every time a person has to context-switch or move between systems to find information, it is an opportunity for the distraction demon to pull them in an alternate direction. Each time this occurs, a little piece of time gets chewed up, never to return. In time-sensitive organizations like customer service, these little chunks of time can add up very quickly.
In my last blog post, I talked about the Currency of Customer Service being “Time.” Let’s walk through a basic example that supports this claim.
Suppose you have 50 people in your support organization, and in an 8-hour shift each loses 10 minutes a day searching or flipping between applications for information. Simple math: 50 × 10 = 500 minutes in one day, or roughly 8.3 hours across your entire support team. Now let’s say the fully loaded cost (including salary, benefits, overhead, etc.) of a good technical analyst is $100K; this equates to about $48/hour in costs to your organization. Since 500 minutes equates to 8.3 hours, we just lost approximately $400 for that day (an entire shift for one support agent!) and roughly $96K for a full year. That is almost the annual cost of one full headcount!
Did you realize that about 2% of your agents’ daily time could cost you that much? Do you know how much time your agents spend...
To continue reading, please visit: blog.coveo.com/?p=173
Tuesday, February 22, 2011, 2:45 PM
Anyone who has worked in or managed a customer service organization will quickly realize that almost all operational metrics collected include some form of “time” component.
First call resolution, call hold time, and time to first contact are just a few examples of the many time-based metrics. When you think about it, the whole premise of customer service is based on time: we have SLAs that force us to respond within a certain amount of time regardless of the customer’s wishes, and we measure how quickly we write a knowledge document following the identification of a solution to a customer issue.
Combine this with the self-induced time pressures of today’s fast-paced society, and we get expectations for immediate responses to requests. This puts a tremendous amount of strain on customer support organizations. Add the fact that information is doubling every year and the number of repositories trying to manage this data continues to proliferate, and the world of customer support becomes a chaotic place. IT is not impressed either.
So if everything related to the delivery of good customer support is related to time, then it is fair to suggest that time is the currency of customer support.
A common fallacy is that great customer support is...
To continue reading, please visit: blog.coveo.com/?p=147