Catering to Your Niche Crowd ~ Boutique Publishing

           
This is in response to an interesting article on Forbes:
https://www.facebook.com/forbes



"..Some of the most interesting experiments in publishing are the ones that use an entirely different math. NSFW Corp.(http://www.nsfwcorp.com/) is one of them. Launched by former TechCrunch writer Paul Carr over the summer, it’s a general-interest news and humor site that doesn’t need, doesn’t expect and doesn’t particularly want a ton of readers.

Since its launch, it has amassed — or a-niched — more than 3,000 subscribers who pay $3 a month for access. If it can attract 30,000 subscribers, it will be at or near the breakeven point. “If we can get 50,000, none of us will ever have to do anything else again,” says Carr.

NSFW’s mix of gonzo politics, guerilla journalism and open-mic comedy isn’t for everyone. But that’s the point. It’s because it doesn’t try to be for everyone that it can be what it is. After all, Carr notes, if there’s one thing the Web has proven, it’s that there’s money to be made in catering to niche tastes. “It’s nearer to paying for porn than it is to paying for news,” he says of his site’s appeal.


Smallness also makes it easier to innovate, which NSFW does on a variety of levels."


My response,


            
There are quite a few select groups, underground networks and circles that offer membership by invite only. Of course, once you identify and demarcate your niche crowd/select audience, things such as scalability and monetization by numbers go out the window, because the primary emphasis is often purely on delivering quality and value that is relevant, unique and hard to find. P.S. ~ Not an easy model to follow, but worthy of consideration once you have matured in the business and tested the waters over the years.

~ Jai Krishna Ponnappan

Ref. https://www.facebook.com/photo.php?fbid=10151170951552509&set=a.10151047526737509.456571.30911162508&type=1&ref=nf

Content is King and So is Leveraging Big Data




" We Offer Content Management "




         

















               
       This is a statement being offered by many but is it really content management or merely storing documents that are linked to a database record?

Questions that immediately arise in my mind when I hear this statement are:

Do you provide audit logs reflecting what happens to the content?
What levels of security do you support? (Repository, folder, document, element)
How do you capture information? (Imaging, Office applications, Outlook)
What capture devices are supported? (Scanners, multi-function peripherals, cameras)
Is there an ability to use workflow to automate some processes?
What is the approach to search? (Keyword, Boolean, full-text, parametric)
Does it support versioning?
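
To make the audit-log question concrete, here is a minimal sketch, in Python, of the kind of record an audit trail should capture for every action on a document. The field names and actions are illustrative assumptions, not any particular product's schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """One entry in a content audit trail (illustrative fields only)."""
    document_id: str   # which piece of content was touched
    user_id: str       # who touched it
    action: str        # e.g. "view", "edit", "delete", "export"
    timestamp: str     # when it happened (UTC, ISO 8601)
    detail: str = ""   # optional context, e.g. a version number

def log_event(log, document_id, user_id, action, detail=""):
    """Append an audit record; a real system would persist this immutably."""
    log.append(asdict(AuditEvent(
        document_id=document_id,
        user_id=user_id,
        action=action,
        timestamp=datetime.now(timezone.utc).isoformat(),
        detail=detail,
    )))

audit_log = []
log_event(audit_log, "DOC-1042", "jsmith", "edit", "saved version 3")
log_event(audit_log, "DOC-1042", "aburns", "view")
print(audit_log)
```

A system that keeps records like these can answer the "who accessed it, when, and what did they do" questions raised later in this post.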

The term content management is being used more and more by suppliers of all types and sizes, and while there is some truth in their claims, the question is: what does content management mean to you in relation to addressing your business requirements? Peel this back again and the real question becomes: what are your business requirements in relation to content and records management, and does the application you are considering deliver what you need now and provide a way to grow in the future?

In my view, there are many good applications on the market today. Some are focused on content management, some on a business application with bolt-on content management capabilities, and some combine a good degree of both. This does not mean they are bad or will not work for you; it means that you have to become more knowledgeable in order to make the right decisions and choices. The last thing you want to do is make an investment only to find out it does not provide what you thought it would.
Imagine that you make an investment to manage risk and when an audit comes, you cannot defend your information due to a lack of audit logs that would have presented a historical view of the information in question. When you are asked to show who accessed it, when they accessed it and what they did with it, could you answer these questions confidently? If you are brought to court and asked to provide all of the information related to the case which includes electronic files, emails and even metadata associated with the case, could you do so with a level of confidence that you have found everything or would you have a high level of doubt and uncertainty? In order to make the right choices, you need to identify and define your requirements, understand the technologies and assess what is available to you. It could be that the functionality being offered is suitable to meet your needs but wouldn’t you feel better knowing it for sure?
If you are ready to move forward and are finding yourself stuck or unfocused and are not sure where to begin or what to do next, seek professional assistance and/or training to get you started. Be sure to investigate AIIM's Enterprise Content Management training program.


And be sure to read the AIIM Training Briefing on ECM (authored by yours truly).
What say you? Do you have a story to tell? What are your thoughts on this topic? Do you have a topic of interest you would like discussed in this forum? Let me know.

Bob Larrivee, Director and Industry Advisor – AIIM
Ref. Originally posted on AIIM at http://www.aiim.org/community/blogs/expert/We-Offer-Content-Management

An Afterthought and Response,

"Content is King 
and So is Leveraging Big Data"
       
Going a step further from this: "The question is what does content management mean to you in relation to addressing your business requirements? Peel this back again and the real question becomes, what are your business requirements in relation to content and records management and does the application you are considering deliver what you need now and provide a way to grow in the future?"

...I believe it is important to ask:

"Does your organization recognize and value the power of data/content and the systems/solutions that can leverage it to your advantage?"

Many organizations across several industries know that "Content can be King" within their own organizational setting, as well as potentially serving as a key area facilitating advantage, efficiency and growth in their wider industry and marketplace. As a result, the understandable trend is a shift towards making this a vital, high-level strategic decision that can clearly separate them from organizations with less adaptable, less compatible, and poorly tailored and managed data/content management systems. Many of the latter are experiencing measurable and significant setbacks as a result.

Thank you for posting on this topic. I am quite sure many of us have come across critical real-life business scenarios and systems that serve as great case studies and examples that teach us valuable lessons worth sharing. Have a great day. Best Regards, Jai Krishna Ponnappan :)



Disaster Recovery Virtualization Using the Cloud


Disaster recovery is a necessary component in any organization's plans. Business data must be backed up, and key processes like billing, payroll and procurement need to continue even if an organization's data center is disabled by a disaster. Over time, two distinct approaches to disaster recovery have emerged: dedicated and shared models. While effective, these approaches often force organizations to choose between cost and speed.

We live in a global economy that is balanced around and driven by a 24x7 culture. Nobody likes to think about it, but in order to survive and thrive, disaster recovery must be part of any organization's plans. Even with a flat IT budget, you need seamless failover and failback of critical business applications. The flow of information never stops, and commerce in our global business environment never sleeps. With the demands of an around-the-clock world, organizations need to start thinking in terms of application continuity rather than infrequent disasters, and disaster recovery service providers need to enable more seamless, nearly instantaneous failover and failback of critical business applications. Yet given the reality that most IT budgets are flat or even reduced, these services must be provided without incurring significant upfront or ongoing expenditures.

Cloud-based business resilience can provide an attractive alternative to traditional disaster recovery, offering both the more rapid recovery time associated with a dedicated infrastructure and the reduced costs that are consistent with a shared recovery model. With pay-as-you-go pricing and the ability to scale up as conditions change, cloud computing can help organizations meet the expectations of today's frenetic, fast-paced environment, where IT demands continue to increase but budgets do not. This white paper discusses traditional approaches to disaster recovery and describes how organizations can use cloud computing to help plan for both the mundane interruptions to service—cut power lines, server hardware failures and security breaches—as well as more infrequent disasters. The paper provides key considerations when planning for the transition to cloud-based business resilience and in selecting your cloud partner.



A Qualitative Trade-Off Between Cost & Speed


            When choosing a disaster recovery approach, organizations have traditionally relied on the level of service required, as measured by two recovery objectives:

• Recovery time objective (RTO)—the amount of time between an outage and the restoration of operations
• Recovery point objective (RPO)—the point in time to which data is restored, reflecting the amount of data that will ultimately be lost during the recovery process
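
As a hedged, worked example of how the two objectives interact, using hypothetical numbers rather than figures from the text: if data is backed up once a day and a full restore takes two days, the worst-case RPO is 24 hours and the RTO is 48 hours.

```python
# Illustrative RTO/RPO arithmetic; the numbers are assumptions, not from the text.
backup_interval_hours = 24   # how often data is copied off-site
restore_time_hours = 48      # time to rebuild systems and reload data

# Worst-case RPO: an outage just before the next backup loses a full interval.
worst_case_rpo_hours = backup_interval_hours

# RTO: elapsed time from the outage to restored operations.
rto_hours = restore_time_hours

print(f"Worst-case data loss (RPO): {worst_case_rpo_hours} hours")
print(f"Time to restore service (RTO): {rto_hours} hours")
```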

In both traditional disaster recovery models—dedicated and shared—organizations are forced to trade off cost against speed of recovery.


In a dedicated model, the infrastructure is dedicated to a single organization. This type of disaster recovery can offer a faster time to recovery than other traditional models, because the IT infrastructure is mirrored at the disaster recovery site and is ready to be called upon in the event of a disaster. While this model can reduce RTO because the hardware and software are pre-configured, it does not eliminate all delays: the process is still dependent on receiving a current data image, which involves transporting physical tapes and restoring data. This approach is also costly, because the hardware sits idle when not being used for disaster recovery. Some organizations use the backup infrastructure for development and test to mitigate the cost, but that introduces additional risk. Finally, the data restoration step adds variability to recovery times.

In a shared disaster recovery model, the infrastructure is shared among multiple organizations. Shared disaster recovery is designed to be more cost effective, since the off-site backup infrastructure is shared between multiple organizations. After a disaster is declared, the hardware, operating system and application software at the disaster site must be configured from the ground up to match the IT site that has declared a disaster, and this process can take hours or even days.


[Figure: Measuring the level of service required by RPO and RTO]




[Figure: Traditional disaster recovery approaches include shared and dedicated models]


The pressure for continuous availability 


According to a CIO study, organizations are being challenged to keep up with the growing demands on their IT departments while keeping their operations up and running and making them as efficient as possible. Their users and customers are becoming more sophisticated users of technology. Research shows that usage of Internet-connected devices is growing about 42 percent annually, giving clients and employees the ability to quickly access huge amounts of storage. In spite of the pressure to do more, IT departments are spending a large percentage of their funds just to maintain the infrastructure they have today, and they are not getting significant budget increases; budgets are essentially flat.

With dedicated and shared disaster recovery models, organizations have traditionally been forced to make tradeoffs between cost and speed. As the pressure to achieve continuous availability and reduce costs continues to increase, organizations can no longer accept these tradeoffs. While disaster recovery was originally intended for critical batch "back-office" processes, many organizations are now dependent on real-time applications and their online presence as the primary interface to their customers. Any downtime reflects directly on their brand image, and interruption of key applications such as e-commerce, online banking and customer self-service is viewed as unacceptable by customers. The cost of a minute of downtime may be thousands of dollars.


Thinking in terms of interruptions and not disasters 


Traditional disaster recovery methods also rely on “declaring a disaster” in order to leverage the backup infrastructure during events such as hurricanes, tsunamis, floods or fires. However, most application availability interruptions are due to more mundane everyday occurrences. While organizations need to plan for the worst, they also must plan for the more likely—cut power lines, server hardware failures and security breaches. While weather is the root cause of just over half of the disasters declared, note that almost 50 percent of the declarations are due to other causes. These statistics are from clients who actually declared a disaster. Think about all of the interruptions where a disaster was not declared. In an around-the-clock world, organizations must move beyond disaster recovery and think in terms of application continuity. You must plan for the recovery of critical business applications rather than infrequent, momentous disasters, and build resiliency plans accordingly.



[Figure: Time to recovery using a dedicated infrastructure]



[Figure: Time to recovery using a shared infrastructure. The data restoration process must be completed, resulting in an average of 48 to 72 hours to recovery.]




[Figure: Types of potential business interruptions]



Cloud-based Business Resilience is a Welcome New Approach 


Cloud computing offers an attractive alternative to traditional disaster recovery. "The cloud" is inherently a shared infrastructure: a pooled set of resources, with the infrastructure cost distributed across everyone who contracts for the cloud service. This shared nature makes the cloud an ideal model for disaster recovery. Even when we broaden the definition of disaster recovery to include more mundane service interruptions, the need for disaster recovery resources is sporadic. Since all of the organizations relying on the cloud for backup and recovery are very unlikely to need the infrastructure at the same time, costs can be reduced and recovery times shortened.


Cloud-based business resilience managed services are designed to balance the economy of shared physical recovery with the speed of a dedicated infrastructure. Because server images and data are continuously replicated, recovery time can be reduced dramatically to less than an hour and, in many cases, to minutes or even seconds. However, the costs remain consistent with shared recovery.
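
To illustrate why continuous replication shrinks these numbers, here is a toy Python model; the class and the 30-second lag are assumptions for illustration, not any vendor's API:

```python
class ReplicatedServerImage:
    """Toy model: with continuous replication, worst-case data loss (RPO)
    is bounded by the replication lag, not by a backup schedule."""

    def __init__(self, replication_lag_seconds):
        self.replication_lag_seconds = replication_lag_seconds
        self.primary_up = True

    def fail_over(self):
        # Promote the standing replica; only data newer than the lag window is at risk.
        self.primary_up = False
        return (f"Replica promoted in seconds; at most "
                f"{self.replication_lag_seconds:.0f}s of data lost (RPO)")

site = ReplicatedServerImage(replication_lag_seconds=30)
print(site.fail_over())
```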




Cloud-based business resilience offers several other benefits over traditional disaster recovery models:

• More predictable monthly operating expenses can help you avoid the unexpected and hidden costs of do-it-yourself approaches.
• Up-front capital expenditure requirements are reduced, because the disaster recovery infrastructure exists in the cloud.
• Cloud-based business resilience managed services can more easily scale up based on changing conditions.
• Portal access reduces the need to travel to the recovery site, which can help save time and money.

[Figure: Speed to recovery using cloud computing]

[Figure: A cloud-based approach to business resilience]

Virtualizing disaster recovery using cloud computing

While the cloud offers multiple benefits as a disaster recovery platform, there are several key considerations when planning for the transition to cloud-based business resilience and in selecting your cloud partner. These include:

• Portal access with failover and failback capability
• Support for disaster recovery testing
• Tiered service levels
• Support for mixed and virtualized server environments
• Global reach and local presence
• Migration from and coexistence with traditional disaster recovery

The next few sections describe these considerations in greater detail.

Facilitating improved control with portal access

Disaster recovery has traditionally been an insurance policy that organizations hope never to use. In contrast, cloud-based business resilience can actually increase IT's ability to provide service continuity for key business applications. Since the cloud-based business resilience service can be accessed through a web portal, IT management and administrators gain a dashboard view of their organization's infrastructure.

Without the need for a formal declaration, and with the ability to fail over from the portal, IT can be much more responsive to mundane outages and interruptions.

Building confidence and refining disaster recovery plans with more frequent testing

One traditional challenge of disaster recovery is the lack of certainty that the planned solution will work when the time comes. Typically, organizations test their failover and recovery only once or twice per year on average, which is hardly sufficient given the pace of change experienced by most IT departments. This lost sense of control has caused some organizations to bring disaster recovery "in house," diverting critical IT focus from mainline application development. Cloud-based business resilience provides the opportunity for more control and for more frequent and granular testing of disaster recovery plans, even at the server or application level.

Supporting optimized application recovery times with tiered service levels

Cloud-based business resilience offers the opportunity for tiered service levels that let you differentiate applications based on their importance to the organization and the associated tolerance for downtime. The notion of a "server image" is an important part of traditional disaster recovery. As the complexity of IT departments has increased, including multiple server farms with possibly different operating systems and operating system (OS) levels, the ability to respond to a disaster or outage becomes more complex. Organizations are often forced to recover on different hardware, which can take longer and increase the possibility of errors and data loss. To remove some of the underlying complexity and optimize infrastructure utilization, organizations are implementing virtualization technologies in their data centers. The number of virtual machines installed has grown exponentially over the past several years.
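
As a sketch of what tiered service levels might look like in practice, the configuration below maps applications to tiers; the tier names, RTO/RPO numbers, and applications are all illustrative assumptions:

```python
# Hypothetical tier definitions; real offerings and numbers vary by provider.
SERVICE_TIERS = {
    "platinum": {"rto_minutes": 15,   "rpo_minutes": 1,    "method": "continuous replication"},
    "gold":     {"rto_minutes": 60,   "rpo_minutes": 15,   "method": "frequent snapshots"},
    "silver":   {"rto_minutes": 1440, "rpo_minutes": 1440, "method": "daily backup"},
}

APPLICATIONS = {
    "e-commerce storefront": "platinum",  # customer-facing, minimal downtime tolerance
    "online banking portal": "platinum",
    "internal reporting":    "silver",    # can wait a day without business impact
}

for app, tier in APPLICATIONS.items():
    sla = SERVICE_TIERS[tier]
    print(f"{app}: tier={tier}, RTO<={sla['rto_minutes']}min, RPO<={sla['rpo_minutes']}min")
```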

According to a recent survey of Chief Information Officers, 98 percent of respondents either had already implemented virtualization or planned to implement it within the next 12 months. Cloud-based business resilience solutions must offer both physical-to-virtual (P2V) and virtual-to-virtual (V2V) recovery in order to support these types of environments. Cloud-based business resilience also requires ongoing server replication, making network bandwidth an important consideration when adopting this approach. A global provider should offer the opportunity for a local presence, thereby reducing the distance that data must travel across the network.
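
A back-of-the-envelope way to size that bandwidth requirement, with hypothetical numbers:

```python
# Rough replication bandwidth estimate; both inputs are assumptions.
daily_changed_data_gb = 50     # data that changes per day on protected servers
replication_window_hours = 24  # replication spread continuously across the day

bits_to_send = daily_changed_data_gb * 8 * 1e9  # gigabytes -> bits
seconds = replication_window_hours * 3600
required_mbps = bits_to_send / seconds / 1e6

print(f"Sustained bandwidth needed: ~{required_mbps:.1f} Mbps")
# ~4.6 Mbps for 50 GB/day; a provider with a nearby site also cuts latency.
```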

While cloud-based business resilience offers many advantages for mission-critical and customer-facing applications, an efficient enterprise-wide disaster recovery plan will likely include a blend of traditional and cloud-based approaches. In a recent study, respondents indicated that minimizing data loss was the most important objective of a successful disaster recovery solution. With coordinated disaster recovery and data backup, data loss can be reduced and the reliability of data integrity improved.


Cloud computing offers a compelling opportunity to achieve the recovery time of dedicated disaster recovery with the cost structure of shared disaster recovery. However, disaster recovery planning is not something to be taken lightly; the security and resiliency of the cloud are critical considerations.



Posted by Jai Krishna Ponnappan

Keys to Business Intelligence by Jai Krishna Ponnappan


                 

Historically, business intelligence has promised a lot of "Yes," but the reality has been filled with "Nos." The promises are enormously compelling. Companies collect vast amounts of information about markets, customers, operations, and financial performance. Harnessing this information to drive better business results can have tremendous impact. Some corporations have achieved impressive gains after investing millions of dollars and multiple years of effort into building traditional analytical systems.



However, these success stories are frustratingly few and far between. Traditional BI, long the only option, can be prohibitively costly and complex. For companies without millions of dollars to invest, the options have been few and unattractive. Further, even when these investments of time and resources can be made, they don't guarantee success. For too many companies, BI doesn't deliver on its promises: it is too costly, too complicated, too difficult to scale and extend. The end result is a reality in which only a small minority of employees have access to BI. According to Gartner, only 20% of employees use BI today. This falls far short of BI's potential to transform a company.

It's time for BI that says Yes. Yes to the requirements of your budget, business, and business users. Yes to fewer compromises. This whitepaper first looks at the fundamental requirements that a BI solution should deliver to your company. It then covers the eleven key questions that you should be asking of a future BI technology partner. When the BI provider can answer Yes to all of these questions, you have BI that is capable of fulfilling your analytical and reporting needs both today and over time – it is flexible, powerful, and efficient. It is BI that says Yes.


Business Insight—Four Foundational Requirements

Any organization investing in business intelligence needs to define the capabilities that will help it win against the strongest competitors in its market. Here are the four bedrock requirements that should define the core capabilities of your solution:



Historical analysis and reporting.

Fundamentally, BI should give you insight into both business performance and the drivers of that performance. An understanding of business influencers and results is the foundation for successful, proactive decision making. Technically, this capability requires the mapping and analysis of data over multiple years. This can also often mean the modeling and manipulation of hundreds of millions of database rows.


Forecasting and future projection.

While understanding historical data is a first step, it is also vital to project those findings into the future. For example, once you know how different types of sales deals have progressed in the past, you can examine current opportunities from that perspective and make future forecasts. The ability to forecast, and to align your business resources accordingly, is key to success.

Ability to integrate information from multiple business functions.

Strategic insight often requires data from multiple systems. For example, operational results require a financial perspective to show the full picture. Sales management benefits from a comprehensive view of the demand funnel. Targeted, customized marketing efforts require analysis compiled from customer, marketing, and purchasing data. Your solution needs to be able to easily integrate information from multiple sources in order to answer broad business questions.

Easily explored reporting and analysis.

Decision makers need to understand overarching business views and trends. 

They also need to examine increasing levels of detail to understand what actions can be taken to achieve further success. It's not enough to simply have a report; if that report is not explorable, it might raise critical issues but not satisfy the need to know more detail in order to make a decision. A full range of drill-down and drill-across capabilities makes it possible for decision makers to fully understand the issue at hand and make critical decisions.

These four capabilities form the foundation of a powerful business intelligence solution that can answer the critical questions facing your business. If a solution cannot meet one of these requirements, it will not have the full range of analytical capability that you need to be competitive.



If the solution that you are considering meets the Four Foundational Requirements, it is time to delve more deeply. The following eleven questions will help you to assess your options and ensure that you are getting a robust, powerful solution that meets your business requirements.

Can I get a comprehensive view of my business?


Even seemingly basic questions can require data from a variety of operational systems, 3rd-party data sources, and spreadsheets.



Even basic business questions such as "Which marketing campaigns generated the most revenue this year?" or "Did the product redesign have the desired effect on part inventory levels?" could require data from different operational systems, 3rd-party or partner sources, databases, and individual spreadsheets. As a result, a core BI requirement is the ability to access, acquire, and integrate data from multiple sources.

Traditional BI solutions provide this capability, but it can be arduous to implement and maintain. Traditional BI accesses multiple data sources with complex and expensive ETL systems that bring data together into one physical database. Unfortunately, this database is totally disconnected from the world of the business user, so another round of programming is required to connect the physical data with the business user's model.

A more modern solution enables you to:

• Experience a powerful, usable solution. Traditional solutions build from the bottom up. A more modern approach starts instead from the top: the logical business model. It then works downwards to manage the physical data that is required to deliver these business views. This "top down" approach manages the complexity that results from integrating multiple data sources, so that the solution is both powerful and easy to use.
• Analyze information from all types of data assets. Data is provided to the business in a variety of ways. Your BI solution needs to extract information from corporate systems, stand-alone databases, flat files, XML files, and even spreadsheets.
• Access remote or secured databases. Traditional BI uses an ETL process to extract data out of a source database and place it into a data warehouse, while some SaaS providers can only access data that is uploaded to their servers. The more sophisticated SaaS BI providers can both upload data and access local databases; that is, they allow you to access and analyze data without actually uploading it. This is accomplished via real-time queries against the database (see the sketch below).
• Manage the required metadata. In addition to the management of data sources, multi-source BI requires the management of all accompanying metadata, the information about the data itself.
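
As a minimal sketch of the "query in place" idea from the third bullet, the example below uses Python's built-in sqlite3 module as a stand-in for a local operational database; the table and data are illustrative assumptions, not any vendor's connector:

```python
import sqlite3

# Stand-in for a local operational database that a BI tool queries in place.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, campaign TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("EMEA", "spring-promo", 1200.0),
     ("AMER", "spring-promo", 3400.0),
     ("AMER", "webinar",       800.0)],
)

# Real-time query: aggregate where the data lives instead of extracting it first.
for campaign, total in conn.execute(
        "SELECT campaign, SUM(revenue) FROM orders GROUP BY campaign"):
    print(campaign, total)
```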



Does it provide full features at an affordable price?


Modern BI solutions make it possible even for smaller organizations to afford a comprehensive BI solution.



Traditional BI solutions were often affordable only to the largest companies, which had the large budget, IT staff, and resources required for initial deployment and ongoing maintenance. Departments of enterprises and SMBs were effectively priced out of the market.

Recently, the attractiveness of the midmarket has resulted in new "midsize" solutions from traditional players and from new vendors. The catch, however, is that the lower price often only purchases a "crippled" or partial solution.

So how can a smaller organization get a true BI solution? A fully deployed BI solution must include the following: ETL and scheduling, database, metadata management, banded/pixel-perfect reporting, dashboards, e-mail alerts, and OLAP slice-and-dice functionality.
Look for a solution that:

• Delivers a full BI solution, not parts of one. The license should include everything that you need for a true solution: ETL, database management, metadata management, OLAP slice-and-dice query generation, banded reporting, ad hoc reporting, and visual dashboards. A solution that has all of the necessary components, already integrated for you, will deliver the fastest, greatest value to your organization.
• Has easy-to-understand, affordable pricing. Traditional solutions have many cost components: hardware, software, consultants, in-house IT, support, and ongoing maintenance. As a result, the pricing is both high and difficult to track fully. Modern solutions, such as those delivered as software-as-a-service (SaaS), have more transparent and affordable pricing. SaaS pricing is more comprehensive: the cost of hardware, software, and support is in one monthly number. SaaS pricing is also more affordable, since it leverages a shared cost structure, and these lower costs are spread over time. This makes it easier to deploy and maintain a BI solution.


Can I start seeing value within 90 days?


In order for BI to be effective, it has to be deployed quickly enough to address the critical issues that you are currently facing.


Time to value is a prime determinant of the ROI of a business intelligence deployment. Traditional BI solutions have struggled to deliver value to stakeholders within a desirable timeframe. Due to challenges such as high upfront capital expenditures, extensive IT resource requirements, and lengthy development schedules, many traditional BI projects have taken 12 to 18 months or more to complete.

Modern BI solutions can dramatically reduce the time to value by making use of the following:

• Fully integrated solutions, from ETL to analytical engine to reporting engine
• Automation of standard processes
• Use of templates for typical reporting requirements, such as sales reporting, financial reporting, etc.
• Software-as-a-service (SaaS) or on-demand delivery models
• Leveraging of existing data warehousing investments

Modern solutions can also enable processes and approaches for BI deployment that increase the likelihood of success. These include:

• Proving success incrementally and iteratively – avoiding the "Big Bang." In the earlier days of BI, customers were tempted to create a "big bang" solution, since the cost and effort of creating the initial solution and updating it over time were so high. Today, a BI solution offering a fully integrated architecture – one with all of the components already provided, working together – allows companies to focus on initial high-need projects, prove success, and expand or adapt over time. This ability to iterate provides value more quickly, lowers ongoing cost, and increases the likelihood of success.
• Deploying to the existing infrastructure; avoiding major infrastructure upgrades. The second major reason that traditional solutions are slow to deploy is that they often require an additional investment in new hardware or software. This lengthens timeframes, since a major capital purchase requires a financial approval process that can take up to a full year of review and approval. If a solution can leverage the existing infrastructure, this process step is avoided. Also, if the solution itself is more affordable or, like SaaS solutions, offered as a subscription (which can be charged to operating expenses, not capital budgets), this budgeting step can be bypassed or shortened.
• Deploying with the IT team you have. The construction of a traditional BI solution requires many specialized resources, like data modelers and ETL specialists. Any plan that requires these professionals will confront resource bottlenecks.


Can I be assured that my data is secure and available?


Data security and availability are key requirements of a BI project.


Data security and availability are key requirements for any IT system. You need a BI solution that matches the same high levels of performance, reliability, and security that you expect of the other systems in your portfolio. Security is fundamental, since the data your business uses is critical to competitive advantage, effective operations, and consumer or patient privacy. Availability is also critical, since you need to be able to make decisions in a timely manner, addressing issues as they emerge. Your system needs to be ready to respond when you need it.
Your BI system should:

• Provide high availability. If the solution that you are considering is a traditional, on-premise one, how often is it down for maintenance or updates? How reliably is it available, given your configuration? If the solution that you are considering is a SaaS solution, what is the uptime guaranteed in subscription contracts? Whether the solution is on-premise or SaaS, you will want to be sure that it will be available at least 99% of the time.
• Be built on high-performance hardware. If you are selecting a SaaS vendor, make sure that their solution is operating on high-performance hardware that will provide the necessary reliability and availability that you seek. If you are selecting an on-premise vendor, make sure that you are making the appropriate investments in the type and quantity of hardware that will provide high reliability and will also scale over time.
• Provide flexible security models. Most deployments have varying levels of feature access, depending on the user's role. Some users may only be able to view a subset of reports, such as sales reports, while others will have full access to all data, reports, and administration features. The solution needs to ensure that users have access appropriate to their role. This requires features such as row and column filters that limit data to those individuals and groups who require it (see the sketch below).
• Have SAS 70 data center certification (SaaS providers only). If you are reviewing SaaS vendors, be sure that the data center where the information will be stored has SAS 70 certification. This certifies that the service organization has been through an in-depth audit of its control objectives and control activities, including controls over information technology and related processes.
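
To make the "flexible security models" bullet concrete, here is a minimal Python sketch of role-based row and column filtering; the roles, columns, and data are hypothetical:

```python
# Hypothetical role-based row/column filtering for report data.
SALES_ROWS = [
    {"region": "EMEA", "rep": "aburns", "revenue": 1200.0},
    {"region": "AMER", "rep": "jsmith", "revenue": 3400.0},
]

ROLE_FILTERS = {
    # role name -> predicate deciding which rows a user may see
    "emea_sales_viewer": lambda row: row["region"] == "EMEA",
    "admin":             lambda row: True,
}

ROLE_COLUMNS = {
    "emea_sales_viewer": ["region", "revenue"],  # rep names hidden from this role
    "admin":             ["region", "rep", "revenue"],
}

def visible_rows(role):
    keep = ROLE_FILTERS[role]
    cols = ROLE_COLUMNS[role]
    return [{c: row[c] for c in cols} for row in SALES_ROWS if keep(row)]

print(visible_rows("emea_sales_viewer"))  # only EMEA rows, without the rep column
```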


Can I proceed with limited IT resources?


Traditional BI solutions can monopolize IT resources. Modern solutions have a lighter IT footprint, so that IT can focus on higher priorities.



Traditional BI solutions require significant IT resources up front for deployment, as well as a high level of ongoing resources for maintenance, support, and report creation and updating. These intense IT requirements often limited the use of BI by smaller and midsize organizations, which didn't have a deep IT bench, and by departments of enterprises, which didn't get enough allocation of IT resources.

Worse, IT resources were often required for report creation or updating. This led to long lines outside the IT department of business managers who wanted new or better reporting. IT was swamped, and unable to focus on other priorities.

Modern solutions have a lighter IT footprint, which allows IT to focus on high-priority projects, and also ensures that business users get their questions answered quickly and independently of IT. Look for a solution that:

• Minimizes IT resource requirements. Reducing the upfront and ongoing IT resource requirements both saves money and increases the speed of deployment. SaaS-based solutions, for example, require fewer IT resources since the solution is provided as a service: there is no hardware to buy and no software components to cobble together. Updates happen automatically, so IT maintenance burdens are dramatically reduced.
• Respects IT standards and expertise. An organization's IT team is fundamental to the company's ongoing operational success. The solution should meet IT requirements for security, availability, and compatibility with other systems.
• Empowers the end users. When end users are more self-sufficient, the demands on IT are lighter, and IT can better prioritize its activities. Ideally, trained users can define reports, dashboards, and alerts on their own, without any Java programming or scripting. IT can oversee critical data management functions without getting bogged down in time-consuming user-facing report definitions.




Does it avoid risky integrations?



A solution that is already integrated has far lower deployment and ongoing risk than a solution that starts as standalone components.




Another major contributor to the high risk of traditional BI solution development is the large number of products and technologies that must be bolted together to get a full solution. To start with, an ETL product is used to manage the task of extracting data, transforming it for analysis, and inserting it into the warehouse. These tools are very technical and require expensive programmers with specialized training.

But vendors have menacing gaps within their own product suites. Most BI suites have been created from acquired technologies with only loose integration between the capabilities. Most enterprise BI vendors require you to use separate technologies for OLAP, reporting, dashboards, and even online access to data. These separate products each require configuration and support.

Modern vendors take an entirely different approach to solving the technology problem. They:

• Deliver all key functionality in one solution. A fully integrated BI platform means you have one solution to master, and all of your metadata is encapsulated in one place.
• Require your staff to learn one technology and toolset. A single solution has one set of commands, syntax, and data structures throughout. Once your users have been quickly trained to develop applications on the underlying platform, they will be fully equipped to create all types of customer-facing functionality.
• Avoid custom coding. Because Birst connects everything within one application, you eliminate all of the situations where you would be required to use custom Java code to script data or custom reports.
• Ease vendor management. Birst reduces the number of responsible parties to the magic number of one. You won't have to live with finger pointing and cross-vendor diagnostics when you have a problem. Birst provides a unique answer to the ultimate need that you have for accountability.




Can business users easily create and explore their own dashboards and reports?


Ideally, the solution can meet your current needs and easily scale to meet future needs – even when that's expanding from ten to ten thousand users.



Knowledge and speed are critical to solving business challenges. While BI provides the information, it is the business manager who provides the timely response to the new information. When insight is in the hands of business professionals who can make a difference, organizations can achieve great success. For this reason, it's vital for a BI solution to make it easy for business users, not just IT users, to analyze and explore information. The more "pervasive" BI becomes in an organization, the more agile and proactive the business can become.

Achieving a solution that is easy for business users to "self-serve" is challenging, however. A solution has to be powerful enough to manage complexity yet make it simple for the end user.

To ensure that you have a solution from which your business users can "self-serve," look for one that:

• Is easy to learn and use. Users should be able to come up to speed on the system within days, not months. The solution itself should take advantage of user interface standards – dragging and dropping, drop-down boxes, highlighting – that are already familiar to a web-savvy audience. The vendor should also provide adequate online, webinar, or in-person training to ensure that your user base can take best advantage of the solution.
• Makes it easy to explore data and new information. A report is of limited use if you can't easily dig for more details or find the drivers of why a result happened as it did. Dashboards and reports that allow you to "drill" into deeper details, filter information to the exact data set that you need, or reset information to desired parameters make it possible for you to truly explore your data.
• Delivers quick responses; allows users to home in on interesting data. Even if the solution is analyzing gigabytes of information from across multiple tables and data sources, answers need to be delivered quickly to the user. Responsiveness, when combined with easy data exploration, allows users to continue asking questions, refining them with each answer, to analyze the exact issue of interest.
• Makes the complex easy. In order to make BI approachable for business users, the solution needs to manage complexity to make analysis easier to conduct. For example, one of the most complicated aspects of BI is dealing with time variables. Every company has its own approach, and many business questions include complex time nuances. Modern solutions can simplify this complexity, allowing users to simply select options from a menu. Rather than users figuring out how to create formulas on their own, time-based reports can be created with ease (see the sketch below).
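
As a sketch of the menu-driven time periods described in the last bullet, the helper below resolves a named period to a date range; the period names and the simplified month-end handling are illustrative assumptions:

```python
from datetime import date, timedelta

# Hypothetical named periods a user could pick from a menu,
# instead of hand-writing date formulas in each report.
def period_range(name, today):
    if name == "last_7_days":
        return today - timedelta(days=7), today
    if name == "month_to_date":
        return today.replace(day=1), today
    if name == "same_month_last_year":
        start = today.replace(year=today.year - 1, day=1)
        return start, start.replace(day=28)  # simplified month end for illustration
    raise ValueError(f"unknown period: {name}")

print(period_range("month_to_date", date(2012, 11, 20)))
# -> (datetime.date(2012, 11, 1), datetime.date(2012, 11, 20))
```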


Can the solution scale to a large, diverse user base?


The demands of your business can change rapidly and dramatically. Your BI solution needs to keep pace.



Even BI projects with modest initial goals can eventually become huge deployments, and you want to make sure that your solution can handle whatever the future holds. If you are a midsize business with ambitions to grow significantly larger, or a department of a large organization that realizes your solution may become a standard for the entire company, you want to make sure that your solution can handle large, diverse groups of users, even if that's not where you're starting.

A modern SaaS architecture is highly flexible and scalable. It allows organizations to start small, but add users quickly and at large scale. To be future-proof, you want your solution to:

• Quickly and easily scale to thousands of users. If your user base grows from ten people to thousands in a short period of time, you want to be sure that you can handle that growth in stride, without a major re-architecting of the solution or the full efforts of your IT team. This also has a cost benefit: you only pay for what you need today, and later for what you need tomorrow, so you don't have to pay upfront for "shelfware" that may or may not get used. The solution should be able to add users quickly, without serious degradation in performance, and without major resource and time requirements.
• Support multiple roles. As deployments get larger, users tend to fall into different categories: super users, average users, occasional users. They may have different demands on data, or have different security levels. Your solution has to be able to easily accommodate these different types of users, their access patterns, and their feature needs, and to easily administer them all.
• Grow without resetting. Scale should be organic and evolutionary, not disruptive. You should be able to expand easily, without having to make significant new investments in infrastructure or supporting headcount. It should be a natural expansion, not a complete reconstruction of the existing implementation.



Can the solution keep up with my business as its needs change?


When information can be easily and securely shared with partners, vendors, or customers, the entire business ecosystem is more efficient and effective.


A changing business landscape can challenge every company's key systems, but BI solutions confront even bigger obstacles than most. First, because of their historical perspective, they must rationalize data across every version of the business over a period of several years. BI cannot just move on to the next release—it must accommodate the next release as well as every prior iteration. Second, much of the value of BI is in making sense of changing measures of business effectiveness. Changes in customers, competitors, product offerings, suppliers, and business units are all targets of your BI effort. A successful solution must easily accommodate a dynamic business environment, rather than requiring major reconstruction of data and functionality with each major product update.

A successful solution must:

• Add new data sources without requiring a major reset of the solution. As your BI solution demonstrates its value with initial projects, demand will increase to analyze more data sources. Your system should be architected in such a way that it can accommodate this data easily and seamlessly, without significant IT intervention or recoding of the solution.
• Be able to evaluate changes over time. To be effective, a BI solution must model the many changes that happen over time. Looking at data from an historical perspective requires a technology that can provide meaningful views across data that is constantly changing (see the sketch below).
• Offer business users self-service, so that they can answer their own questions quickly and easily. Successful BI solutions become popular solutions. If IT intervention is required for every new report request or report update request, organizations end up with angry business users and choked-up IT request queues. When business users are empowered to build and update their own reports and dashboards, the business is agile and the IT agenda is focused on priorities. SaaS BI solutions, which place the lightest requirements on IT teams, are particularly helpful on this point. Heavy-IT-footprint solutions, such as open source software, can create substantial IT backlogs over time.
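
One common technique for evaluating changes over time (not necessarily the approach any specific vendor uses) is to keep validity ranges on dimension rows, as in this minimal sketch:

```python
from datetime import date

# Each row records when a fact about a customer was true; None = still current.
customer_history = [
    # customer_id, segment,      valid_from,        valid_to
    ("C1",         "SMB",        date(2010, 1, 1),  date(2011, 6, 30)),
    ("C1",         "Enterprise", date(2011, 7, 1),  None),
]

def segment_as_of(customer_id, as_of):
    """Return the segment the customer belonged to on a given date."""
    for cid, segment, start, end in customer_history:
        if cid == customer_id and start <= as_of and (end is None or as_of <= end):
            return segment
    return None

print(segment_as_of("C1", date(2010, 12, 31)))  # -> SMB
print(segment_as_of("C1", date(2012, 1, 15)))   # -> Enterprise
```

This is what lets a report answer "what did revenue by segment look like as the business stood in 2010?" even after the segments have changed.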



Can the solution easily serve my entire ecosystem?

Increasingly, organizations function by working with a network of suppliers, retailers, partners, and channel resellers. Empowering these participants in your ecosystem with timely information and analysis is key to making this network function smoothly.

Achieving this extended view of information brings additional challenges to your BI system, however. It requires a solution that can be easily and securely accessed anywhere in the world. It also requires that information be tailored to the level of access required: suppliers may have different views from logistics partners. It may also require the effective delivery of information to a broad array of devices: not just desktops and laptops, but mobile phones and tablet computers as well.

A solution that serves your entire ecosystem should:

• Deliver a solution globally. While your direct employees may be concentrated in one locale, your extended network is probably national or global. Because of this, your solution must be accessible from any point in the world where it is needed. While delivering a system like this in the traditional manner is complicated and prohibitively expensive, it can be achieved fairly easily with SaaS solutions, which are available anywhere there is an internet connection.
• Provide for multiple levels of access, with high security. The solution should be able to control which type and amount of data gets seen, as well as which partners have the ability to add data or create their own reports. Users could range from people who only get alerts, to people who can see reports, to people who have full access to the solution. All should be protected with the highest level of information security.
• Integrate partner data. Your strategic partners demand higher levels of data integration. In the same way that your sales and marketing teams want a unified view of the demand generation funnel, your partners will want to see how, for example, your finished-goods inventory level expectations match their production capacity or parts inventories.
• Deliver to all types of devices. Supply chain users may be on the factory floor, sales users may be in transit, and executive users could be anywhere. Keeping your ecosystem in sync requires that information be consumable on the most convenient device, whether this is a desktop, laptop, mobile phone, or tablet computer. SaaS solutions have another advantage here: since they are accessed through a modern browser, they can be easily adapted for consumption on small-format devices.


Is the solution provider dedicated to my ongoing success in BI?


A solution provider that is dedicated to providing best-in-class BI for its customers, both now and in the future, is a better bet.



Are the BI provider's technology, incentives and motivations aligned with your ongoing needs as a BI customer? Many traditional BI solutions have core technology developed over two decades ago. These products were architected in the age of thick clients, mainframe applications, and Unix database servers. While these products have been updated with veneers of modern technology, they still retain their older technology foundations.

Also, many traditional BI solutions have a business model that focuses on the initial sale, not ongoing success. In the traditional software model, the initial implementation is the largest payment to the software vendor, so completing the initial sale is paramount, rather than ensuring satisfaction over the full customer lifetime.

Companies deserve better than this. They deserve a vendor that is dedicated to long-term customer success. A modern vendor:

• Starts with a modern, standards-based architecture. Unlike traditional vendors that continue to market what are essentially legacy products, modern vendors have technology that is fully aligned with the cloud-based realities of today.
• Supports seamless, regular upgrades. Once a traditional BI solution is deployed, it can be complicated, time-consuming, and disruptive to upgrade the solution, even when the new features are very desirable. With a SaaS solution, new features and functionality are added regularly and seamlessly, so that you can quickly experience the benefits of new development while avoiding downtime and disruptions.
• Lives and dies by BI. The BI product category has come to be dominated by technology giants that generate the majority of their revenues by doing other things besides BI. As a result, the focus on business intelligence innovation and customer satisfaction has declined. A vendor solely focused on business intelligence is more dedicated to innovation and customer success in BI.
• Is successful when the customer is successful – now and in the future. SaaS vendors have a subscription model. They make their money over the lifetime of a customer relationship, so their incentive is to ensure that companies are up and running quickly and satisfied with the solution today, tomorrow, and five years from now. This is a significant departure from the traditional model, where customers paid a significant amount up front but were left to manage deployment and maintenance themselves; customer satisfaction was left to the customers themselves, and satisfaction was often low.

~Jai Krishna Ponnappan

Sustainable Success Starts with Agile: Best Practices to Create Agility in Your Organization.

While it might frustrate us when we can't control it, change really should be seen as opportunity in disguise. Enterprises everywhere recognize that to turn opportunity into advantage, business and IT agility is more important than ever. Fifty-six percent of IT executives in a recent survey put agility as the top factor in creating sustainable competitive advantage—ahead of innovation.

But creating true agility is hard. In the same survey, just one in four respondents said their enterprise’s IT was successful at improving and sustaining high levels of agility. Of the more crucial IT goals—optimization, risk management, agility and innovation—agility had the lowest rate of realization among enterprises.

True agility in today’s Instant-On Enterprise involves reducing complexity on an application and infrastructure level, aggressively pursuing standardization and automation, and getting control of IT data to create a performance-driven culture. While the challenges are great, getting it right enables enterprises to capitalize on change by reducing time to market and lowering costs.

Four drivers for agility

At its simplest, agility refers to an organization’s ability to manage change. But there isn’t always agreement on what agility means to the enterprise. Keith Macbeath, senior principal consultant in HP Software Professional Services, meets frequently with customers seeking to increase IT performance. Typically, he says, drivers for agility include the following:

IT financial management:
Understanding the levers that affect cost allows your enterprise to be more nimble. For instance, if your business is cyclical, moving to a variable-cost model in IT can help sustain profitability even through a down cycle.

Improved time to market:
This gets to the heart of agility: delivering products and services faster.

Ensuring availability:
"Availability isn't a problem ... until it is. Then it's at the top of the CIO's agenda," says Macbeath. Being able to rapidly triage, troubleshoot and restore service is as important to agility as the ability to get the service deployed in the first place.

Responding to a significant business event:
In the merger of HP customer United Airlines with Continental Airlines, the faster the IT integration, the more significant the savings.


[Figure: Important factors for creating sustainable competitive advantage]

The importance of measurement and benchmarking

Business agility starts with a holistic view of IT data, says Myles Suer, senior manager in HP Software’s Planning and Governance product team. "Getting timely access to data that allows you to drive to performance goals and then make adjustments when you don't meet those goals represents the biggest transformation possibility for IT," he says.

For CIOs focused on transforming their business, the key is gaining access to accurate, up-to-date metrics that are based on best-practice KPIs. Trustworthy metrics let CIOs control IT through exception-based management and understand what it will take to improve performance.
  
What to monitor and measure

To achieve greater agility, look to three broad areas of improvement and put in place monitoring programs to track against performance goals.

Standardization:

"That's the first step," Suer says. Ask yourself: What percentage of your applications run on standard infrastructure? How many sets of standard infrastructure do you have? "You want a standard technology foundation to meet 80 percent of your business requirements," says Macbeath. It may seem counterintuitive, but standardization makes a company more efficient and more agile than competitors. Standardization is also a prerequisite for automation. Public cloud providers aggressively pursue both. Taking the same approach for enterprise services means you can begin to realize cloud efficiency gains.

Simplicity:

"Agility is the inverse of complexity," says Macbeath. Your goal is to measure and reduce the number of integration points and interfaces in your architecture. The key is decreasing the number of platforms managed. Application rationalization and infrastructure standardization programs reach for low-hanging fruit, such as retiring little-used and duplicative applications and unusual, hard-to-support infrastructure. Longer term, your enterprise needs to tackle the more difficult task of reducing the number of interfaces between applications. Macbeath recommends driving to a common data model between applications. This reduces support costs and makes it faster and cheaper to integrate new functionality into an existing environment.

Service responsiveness:
This can be as simple as tracking help-desk response time, mean time to repair, escalations and so on. "Once you're systematically tracking responsiveness, you can move to a more sophisticated level of cost-to-quality tradeoffs," Macbeath says.
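
As a starting point, tracking one of these numbers can be as lightweight as the following minimal sketch, which computes mean time to repair from incident open and close timestamps; the record format and sample data are assumptions for illustration only.

    # Minimal sketch: mean time to repair (MTTR) from incident records.
    # The record format and sample data are illustrative assumptions.
    from datetime import datetime

    incidents = [
        ("2012-01-03 09:15", "2012-01-03 11:40"),  # (opened, closed)
        ("2012-01-07 14:02", "2012-01-07 14:50"),
        ("2012-01-11 08:30", "2012-01-11 17:05"),
    ]

    fmt = "%Y-%m-%d %H:%M"
    hours = [
        (datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)).total_seconds() / 3600
        for opened, closed in incidents
    ]

    print(f"MTTR: {sum(hours) / len(hours):.1f} hours over {len(incidents)} incidents")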

Translating agility into business success

In working with HP customers, Macbeath sees numerous examples of organizations that are successfully increasing their agility.

For example, a European bank working with HP has established an online system to show internal customers exactly how much it costs for one "unit" of Linux computing to run an application at multiple service levels, such as silver or gold. "Retail banking is a very cost-competitive business," Macbeath says. "Allowing internal customers to see the cost-performance tradeoffs for themselves and have the numbers right there makes it possible for business and IT to work together to make decisions to benefit the business."

Pursuing agility by way of automation let another HP customer, a global telecom company, reach new sources of revenue. Instantly deploying wireless hotspots provides a key capability for the company; by automating this process, IT was able to push out more than 40,000 new hotspots. The company turned on a new source of revenue almost immediately.

Other organizations are finding the same path to greater agility: standardization and automation, combined with measuring results to drive performance. Through this process, their IT departments are demonstrating greater value to the business while delivering greater speed and transparency.

Extend the Principles of Agile to Deliver Maximum Business Value

            The Agile movement has already helped thousands of IT organizations develop applications at light speed. But to maximize Agile’s impact on the business, you have to push its principles further throughout the enterprise until Agile delivery becomes continuous delivery, and continuous delivery leads to true business agility. The four stages that lead from Agile development to the agile enterprise can revolutionize the way business value reaches customers’ hands.

Stage 1: Agile development

Agile initially emerged as a collection of software development practices in which solutions evolve incrementally through the collaboration of self-organizing teams. Because early adoption grew out of grassroots, developer-led initiatives, Agile matured first within the developer space. It manifested in practices designed to shorten the feedback loop and uncover problems earlier in the cycle so they could be addressed sooner. It made coders fast and flexible.

Stage 2: Agile delivery

In many IT shops that had been organized in traditional functional silos, testers initially stood on the sidelines and wondered what Agile meant for them. The effect was a pseudo-Agile process in which code was developed in iterations but then tossed over the wall for QA teams to test the same way they always had.

The IT organizations that have had the best results with Agile have torn down the silos between development and QA to create truly cross-functional teams. To allow testers to keep up with the accelerated development pace and not sacrifice quality for velocity, they have revamped their approach to incorporate more robust test automation that can be run on a near-continuous basis. They have also adopted advanced techniques, such as service level (API) testing and exploratory testing to mitigate risk as early as possible.

Many organizations adopting Agile today are at this level. But as Agile adoption continues to expand and leaders see the value it can bring to delivery, momentum is gathering to break down the next level of organizational silos.

Four stages to an agile enterprise

Stage 3: Continuous delivery

Agile delivery teams have made great strides, but what is the point of rapidly building new features if those features just sit and wait to be released? To capitalize on the advances in delivery and better translate them into business value, it’s time to extend Agile development principles to deployment.

Continuous delivery, which is enabled by the DevOps movement, extends the Agile mind-set to incorporate IT operations. It focuses on what is ultimately important—shorter cycles for actually putting functionality in the hands of users. It relies not only on better collaboration between the two disciplines, but on comprehensive automation of the build, test and deployment process, so that—at the extreme level—every code change that passes automated functional, performance and security testing could be immediately deployed into production. This level of integration and collaboration between delivery and operations allows releases to be driven by business need rather than operational constraints.
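
To make that idea concrete, the sketch below shows one minimal way such a gated pipeline could be driven; the stage scripts are hypothetical placeholders rather than any particular vendor's tooling, and a production pipeline would be far more elaborate.

    # Minimal sketch of a continuous-delivery gate: a change is deployed
    # only after automated functional, performance and security checks all
    # pass. The stage scripts are hypothetical placeholders.
    import subprocess
    import sys

    STAGES = [
        ("functional tests",  ["./run_functional_tests.sh"]),
        ("performance tests", ["./run_perf_tests.sh"]),
        ("security scan",     ["./run_security_scan.sh"]),
    ]

    for name, cmd in STAGES:
        print(f"Running {name} ...")
        if subprocess.call(cmd) != 0:
            sys.exit(f"Stopped: {name} failed, so the change is not deployed.")

    # Reached only when every gate has passed.
    subprocess.check_call(["./deploy_to_production.sh"])
    print("Change deployed to production.")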

Blurring the lines between development and operations isn’t easy, but it is critical. According to Forrester Research, some traditional process models can “encourage handoffs and demarcation of responsibility and ownership rather than collaboration, teaming, and shared goals. This may lead each party to think that the other is trying to pull a fast one, leading to confrontational situations and heavy reliance on paperwork and formal documentation. By sharing process and building a cross-functional team inclusive of operations, IT organizations can remove the disconnects.”1

The best way to start moving toward continuous delivery is to begin embracing DevOps principles. Include operations staff in demo meetings and planning sessions, share assets (test, build and deployment) and automate as much as possible. Examine KPIs and incentives to ensure alignment across the two functions. You’ll know you’re making progress when you see overall cycle times, from idea inception to released functionality, decrease dramatically.
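
One lightweight way to watch that cycle-time number is sketched below; the work items and dates are invented purely for illustration.

    # Minimal sketch: cycle time from idea inception to released
    # functionality. The items and dates are invented for illustration.
    from datetime import date

    items = [
        {"idea": "self-service portal", "inception": date(2012, 1, 9), "released": date(2012, 3, 2)},
        {"idea": "mobile statements",   "inception": date(2012, 2, 1), "released": date(2012, 3, 20)},
    ]

    for item in items:
        days = (item["released"] - item["inception"]).days
        print(f"{item['idea']}: {days} days from inception to release")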

Stage 4: Business agility

Although few organizations have achieved this stage, it’s what all are aiming for. True business agility happens when the entire organization can continuously analyze results and adapt accordingly.

Here’s how it happens: Deploying smaller increments more frequently not only puts features into the hands of the users at a much faster rate, but it also facilitates a steady stream of user feedback. User feedback lets you apply the Agile concept of “inspect and adapt” at the business level. It provides timely insight into shifting user preferences and helps you make precise course corrections and more informed decisions.

Business agility requires more than just IT. 

Achieving true agility will require a cultural shift within your organization before you can fully take advantage of the opportunity it presents. Business stakeholders must be empowered to make decisions quickly without legacy processes—such as annual budgeting—standing in their way. Other functions, such as sales, marketing and customer support, also play vital roles and need to be tied in effectively.

If you’re already operating at Stage 3, it’s time to look ahead. Start by asking yourself these questions:

Which of your core business applications could benefit or gain differentiation from constant user feedback?
What non-IT areas of your organization would need to be better integrated to get full, end-to-end agility?
What legacy processes might need to be revisited and transformed?

When you start to achieve more of the business objectives that drove your projects in the first place—increased revenue, faster ROI, improved customer satisfaction—you’ll know you’re approaching true business agility.

Why You Should Take this Social Enterprise Seriously

Salesforce Presents New Social Enterprise with Chatter, Mobility and Data

Organizations should take the social enterprise seriously, especially those that see social media, and Salesforce, as essential to their future

At the Dreamforce conference, Salesforce.com (NYSE:CRM) CEO Marc Benioff unveiled the latest evolution of the company’s strategy and supporting technology for cloud computing and mobile technologies.

Its aim is to enable businesses to engage with customers and prospects via social media channels – what Salesforce calls the “social enterprise” – and to empower employee and customer social networks to operate individually and together. Note that I did not mention CRM, which does not have a role in this platform for basic interactions with prospects and customers; instead, the platform is complemented by a large ecosystem of partners providing dedicated marketing and contact center applications. As summarized in its announcement, Salesforce's strategy is clearly different from that of others in the applications market, including Oracle and SAP, which have products for the cloud computing environment and have made strides in integrating collaboration and social media capabilities into their applications.

Salesforce.com’s social enterprise is a big step forward from the strategy it talked about at last year’s Dreamforce, and is now focused on helping companies build social profiles of employees and customers that can be managed and augmented with information about the individuals from other social networks. The company’s partners are also working on such capabilities. For example, software from Reachable can present the relationships among individuals in a social graph. This week Roambi introduced analytic and mobile integration with Salesforce Chatter, which is another advance in what my colleague David Menninger calls the consumerization of collaborative BI.

Salesforce says significant technological improvements in the coming Winter 2012 release and beyond will make Chatter a true social business tool, with many methods to chat, share, approve and otherwise enable the collaborative process. Another addition slated for Chatter, a feature the company calls Chatter Now, is the ability to include presence information in chat, the way most instant messaging networks do today.

Users will be able to embed and share video, graphics and other kinds of files. Users who have too much traffic in their feeds will be able to filter content based on keywords. Chatter Approvals will be able to handle prompted interactions. With Chatter Customer Groups, users will be able to invite and interact with customers or external people in private discussion groups. Those looking to build custom enhancements can employ the Chatter API, which uses REST and streaming and can be embedded so that Chatter interacts with other applications. Last year iWay Software demonstrated third-party integration to take any systems event and publish it into a Chatter feed.
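
As a rough illustration of what such an integration could look like, the sketch below posts a feed item over HTTP. The endpoint path, API version and payload shape reflect the early Chatter REST API documentation as best I recall it and should be verified against Salesforce's docs; the instance URL and token are placeholders.

    # Rough sketch: posting a Chatter feed item over REST.
    # Endpoint path, API version and payload shape are assumptions to be
    # verified against Salesforce documentation; INSTANCE and TOKEN are
    # placeholders for a real instance URL and OAuth access token.
    import requests

    INSTANCE = "https://na1.salesforce.com"  # placeholder
    TOKEN = "<oauth-access-token>"           # placeholder

    url = f"{INSTANCE}/services/data/v23.0/chatter/feeds/news/me/feed-items"
    payload = {
        "body": {
            "messageSegments": [
                {"type": "Text", "text": "Build 412 passed all tests."}
            ]
        }
    }

    resp = requests.post(url, json=payload,
                         headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    print("Posted feed item:", resp.json().get("id"))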

Surprisingly, iWay, which is an Information Builders company, was not at Dreamforce this year, but it has paved the way for enterprise notifications in Salesforce’s social enterprise efforts. Salesforce also is targeting organizations using Microsoft SharePoint, which will be able to use Chatter instead of Microsoft’s messenger technology. With Chatter Connect, Salesforce will be able to integrate its feeds with other environments to create a more seamless social and collaborative experience.

The expanded capabilities for Chatter, and Salesforce’s enhanced profiles of customers and prospects, will be integrated with the Service Cloud in an application called Chatter Service to help improve customer interactions for contact centers. Chatter Service will also be able to integrate into Facebook. This could create a new class of customer self-service for organizations that want to move their initial interactions into social media, and move questions and comments into more formalized customer service channels for resolution.

Addressing the full needs of a contact center beyond social media, though, will require other applications, and Salesforce has plenty of partners exhibiting here, including Contactual, inContact, Interactive Intelligence, Five9 and LiveOps. Salesforce showed how it can add value to the Sales Cloud with its advances in Chatter, but we do not expect to see a more integrated set of methods until 2012. Until then you can use Chatter by itself to interact with your sales team, and the new versions will be a good reason to evaluate it for what Salesforce calls social sales. If you are looking to address the broader set of sales activities and processes beyond SFA, Salesforce has plenty of partners that should be considered if you care about efficiency and achieving sales quotas and targets.

Salesforce.com’s social enterprise direction will require simpler access to applications from smartphones and tablets. The company has created an engine to transform its applications to operate in an HTML5 environment so they can be used on smartphones and tablets from Apple, as well as on Android, RIM and Microsoft devices. Salesforce calls this its Touch approach and will release it sometime in the future; you can sign up to be notified when that happens. This will be a significant new option for anyone operating in the Salesforce.com environment.

One of the key pushes by Salesforce is database.com, which is designed to securely store organizations’ data in the cloud; it can be used by applications running on the company’s force.com platform as well as by the new social enterprise offerings that will come out in 2012. Our upcoming research on business data in the cloud will unveil more challenges and opportunities for improvement in supporting technology like database.com. This offering makes it easy to provision a database and get started. Its pricing and capabilities suggest that database.com is a transactional, centralized data service. It is not clear whether it will be useful for business analytics, which our research finds to be a major need in organizations today.
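
Since database.com is positioned as accessible through Force.com-style APIs, a transactional lookup might look roughly like the sketch below; the host, API version, object and query are illustrative assumptions, and OAuth authentication is assumed to have already happened.

    # Rough sketch: a transactional lookup against database.com through a
    # Force.com-style REST query endpoint. Host, API version, object and
    # fields are illustrative assumptions; OAuth is assumed to be done.
    import requests

    INSTANCE = "https://na1.database.com"  # placeholder host
    TOKEN = "<oauth-access-token>"         # placeholder token

    resp = requests.get(
        f"{INSTANCE}/services/data/v23.0/query",
        params={"q": "SELECT Id, Name FROM Account LIMIT 5"},
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    for record in resp.json()["records"]:
        print(record["Id"], record["Name"])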

Business analytics has not been one of Salesforce’s strengths, as its customers can attest, which is why the portfolio of partners providing these capabilities is quite significant. Many of them also depend on their own database technology for analytics that operates in the cloud. If database.com cannot support analytics needs within the database, its potential and its impact on customers could be hindered.

The challenge with database.com is that if you are trying to do automated data integration efficiently (including migration, synchronization or replication across clouds of data, under applications or to the enterprise), you will need a separate product; while Salesforce has many partners, none are part of the announcement or listed on the database.com website. As David Menninger has pointed out, integrating information from diverse clouds of applications requires work. Our newly completed research in Business Data in the Cloud will enumerate those challenges. If you are looking for help with integration across cloud and enterprise, consider Dell Boomi, Informatica, Pervasive and SnapLogic, with their dedicated data integration technologies.

Salesforce.com makes the social enterprise interesting, and it is taking the lead in advancing these kinds of interactions, especially with business-to-consumer companies, which need the most help in dealing with social media. Its ecosystem of partners and the ability to integrate with consumer social media give Salesforce an early advantage in the market.

If you are looking for new core applications in marketing, sales and customer service, you will need to invest in the partners to make this happen. It is not easy to determine how to get the full value from your Salesforce.com investments, but it is worth the effort. If you work in a sales or customer service department, or are trying to build a great mobile strategy, let us know; we can definitely help you get what you need today. Organizations should take the social enterprise seriously, especially those that see social media, and Salesforce, as essential to their future.


Kinect Turns Any Surface Into a Touch Screen

              Researchers combine a Kinect sensor with a pico projector to expand the possibilities for interactive screens.

A new prototype can transform a notebook into a notebook computer, a wall into an interactive display, and the palm of your hand into a smart phone display. In fact, researchers at Microsoft and Carnegie Mellon University say their new shoulder-mounted device, called OmniTouch, can turn any nearby surface into an ad hoc interactive touch screen.

Hands-on "screen": A proof-of-concept system allows smart phones to use virtually any surface as a touch-based interactive display.


OmniTouch works by bringing together a miniature projector and an infrared depth camera, similar to the one in the Kinect sensor for Microsoft's Xbox game console, in a shoulder-worn system designed to interface with mobile devices such as smart phones, says co-inventor Chris Harrison, a postgraduate researcher at Carnegie Mellon's Human-Computer Interaction Institute in Pittsburgh and a former intern at Microsoft Research. Instead of relying on screens, buttons, or keys, the system monitors the user's environment for any available surfaces and projects an interactive display onto one or more of them.

OmniTouch does this automatically, using the depth information provided by the camera to build a 3-D model of the environment, says Harrison. The camera acquires depth information about the scene by emitting a patterned beam of infrared light and using the reflections to calculate where surfaces are in the room. This eliminates the need for external calibration markers. The system rebuilds the model dynamically as the user or the surface moves—for example, the position of a hand or the angle or orientation of a book—so the size, shape, and position of these projections match those of the improvised display surfaces, he says. OmniTouch "figures out what's in front of you and fits everything on to it."

The system also monitors the environment for anything cylindrical and roughly finger-sized to work out when the user is interacting with it, again using depth information to determine when a finger or fingers make contact with a surface. This lets users interact with arbitrary surfaces just as they would a touch screen, says Harrison. Similarly, objects and icons on the ad hoc "screens" can be swiped and pinched to scroll and zoom, much like on a traditional touch screen. In one demonstration art application, for example, OmniTouch used a nearby wall or table as a canvas and the palm of the user's hand as the color palette.
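
Conceptually, the contact test is simple, as the simplified sketch below suggests; the depth map, the fingertip location and the contact threshold are stand-in assumptions, and the real OmniTouch pipeline, which must also segment finger-shaped cylinders and model arbitrary surfaces, is considerably more involved.

    # Simplified sketch of depth-based touch detection in the spirit of
    # OmniTouch: a touch is registered when a detected fingertip's depth
    # is within a small threshold of the surface directly behind it.
    # The depth map and fingertip location are stand-ins for the real
    # system's finger segmentation and surface modeling.
    import numpy as np

    CONTACT_THRESHOLD_MM = 10  # assumed finger-to-surface tolerance

    def is_touching(depth_mm: np.ndarray, tip_row: int, tip_col: int) -> bool:
        """Compare the fingertip depth to the surface depth around it."""
        finger_depth = depth_mm[tip_row, tip_col]
        # Estimate the surface depth from a neighborhood around the
        # fingertip; the median is dominated by surface pixels, which is
        # a deliberate simplification of the real segmentation step.
        window = depth_mm[max(0, tip_row - 8):tip_row + 9,
                          max(0, tip_col - 8):tip_col + 9]
        surface_depth = np.median(window)
        return abs(finger_depth - surface_depth) < CONTACT_THRESHOLD_MM

    # Toy example: a flat surface 800 mm away with a fingertip at 795 mm,
    # i.e., 5 mm above it, which counts as contact under the threshold.
    depth = np.full((240, 320), 800, dtype=float)
    depth[120, 160] = 795
    print(is_touching(depth, 120, 160))  # True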


The shoulder-mounted setup is completely impractical, admits Hrvoje Benko, a researcher in the Natural Interaction Research group at Microsoft Research in Redmond, Washington, who also worked on the project along with colleague Andrew Wilson. "But it's not where you mount it that counts," he says. "The core motivation was to push this idea of turning any available surface into an interactive surface." All the components used in OmniTouch are off the shelf and shrinking all the time. "So I don't think we're so far from it being made into a pendant or attached to glasses," says Benko.

Duncan Brumby, a researcher at the University College London Interaction Center, in England, calls OmniTouch a fun and novel form of interaction. The screen sizes of mobile devices can be limiting, he says. "There's a growing interest in this area of having ubiquitous, intangible displays embedded in the environment," he says. And although new generations of smart phones tend to have increasingly higher-quality displays, Brumby reckons users would be willing to put up with lower-quality projected images, given the right applications.

Precisely which applications those will be is hard to predict, says Harrison. "It's an enabling technology, just like touch screens. Touch screens themselves aren't that exciting," he says—it's what you do with them. But the team has built several sample applications; one allows users to virtually annotate a physical document, and another incorporates hand gestures to allow OmniTouch to infer whether the information being displayed should be made public or kept private.

"Using surfaces like this is not novel," says Pranav Mistry a researcher at MIT's Media Labs. Indeed, two years ago, Mistry demonstrated a system called SixthSense, which projected displays from a pendant onto nearby surfaces. In the original version, Mistry used markers to detect the user's fingers, but he says that since then, he has also been using a depth camera. "The novelty here [with OmniTouch] is the technology," he says. "The new thing is the accuracy and making it more robust."

Indeed, the OmniTouch team tested the system on 12 subjects to see how it compared with traditional touch screens. Presenting their findings this week at the ACM Symposium on User Interface Software and Technology in Santa Barbara, the team showed it was possible to display buttons as small as 16.2 millimeters before users had trouble clicking on them. With a traditional touch screen, the lower limit is typically around 15 millimeters, says Harrison.