
Mar 05

Does your role or business require gathering data from multiple websites and web portals? Or are you already doing it, but finding that your current approach is holding you back from scaling up your data extraction operations? It's likely you're searching Google using terms like "web scraping," "web crawling," "web harvesting," or just "web data collection" seeking advice. Here are five risks to avoid when harvesting web data.

Before selecting a web scraping software solution to support your data collection, let's consider the potential risks of solutions that seem like the right fit but might fail to meet the demands of your business.

1. The risk of dependency on one person to develop and maintain scrapers.

If you rely on a single person to develop your web scrapers, you are at risk when that person goes on vacation, gets ill, or leaves the company. You might have hundreds of homegrown scrapers that need to be maintained or enhanced, or new ones that need to be developed. Do you have a backup person? Be sure to use a product that is not only easy to learn, but also easy to maintain over time. Train a backup person who can step in on short notice.

2. The risk of not getting data because a web source changes format and breaks your crawler.

When you scrape data from external websites, those websites can change suddenly and break your web scrapers. When this happens, you need to redevelop the scraper. If you cannot get this done in less than two hours, your technology is inefficient. A better approach is to consider a scalable, easy-to-use product that is resilient to website changes, ensuring you get accurate and reliable data.
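As an illustration of what such resilience can look like in practice, here is a minimal Python sketch (not Kapow's actual mechanism) that layers fallback extraction strategies, so a cosmetic markup change doesn't immediately stop the data flow. The patterns and page structure are hypothetical:

```python
import re

# Ordered extraction strategies: if the site's markup changes and the
# primary pattern stops matching, later, looser patterns can still
# recover the value. All patterns here are hypothetical examples.
PRICE_PATTERNS = [
    r'<span class="price">\$([\d.]+)</span>',   # primary: exact markup
    r'itemprop="price"\s+content="([\d.]+)"',   # fallback: schema.org metadata
    r'\$(\d+\.\d{2})',                          # last resort: any dollar amount
]

def extract_price(html):
    """Try each strategy in order; return the first match or None."""
    for pattern in PRICE_PATTERNS:
        match = re.search(pattern, html)
        if match:
            return match.group(1)
    return None  # signal "broken scraper" so monitoring can alert
```

If the site redesigns its price markup, the fallback strategies keep data flowing while the primary pattern is repaired, rather than silently delivering nothing.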

3. The risk of not getting data because your server dies.

The server on which you run your web scrapers can lose power, lose internet connectivity, or suffer a hardware failure. Unless you are using a web data extraction platform with automated fail-over, your web data stream ceases. Almost all homegrown or open-source web scraping solutions have no fail-over built in and are not recommended for any serious web harvesting scenario. Commercial enterprise-grade solutions have built-in fail-over and allow you to install the product at no extra charge on multiple servers in a hybrid cloud (on-premises and cloud) environment. These solutions automatically shift the load to other servers when a server dies, and even perform automated load balancing so the compute capacity of the individual servers is used in the most optimal way.
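A simplified sketch of the fail-over idea, with hypothetical server names and a stand-in `dispatch` function in place of real remote execution:

```python
# Hypothetical pool of extraction servers; in a real deployment each
# entry would be a host capable of running the scraping job.
SERVERS = ["scrape-node-1", "scrape-node-2", "scrape-node-3"]

def run_with_failover(job, servers, dispatch):
    """Try the job on each server in turn; fail over when one dies.

    `dispatch(server, job)` stands in for remote execution and is
    expected to raise ConnectionError when a server is unreachable.
    """
    last_error = None
    for server in servers:
        try:
            return dispatch(server, job)
        except ConnectionError as err:
            last_error = err  # record the failure and try the next node
    raise RuntimeError(f"all servers failed for job {job!r}") from last_error
```

A real platform layers health checks and load balancing on top of this, but the core contract is the same: the job succeeds as long as any node in the pool is alive.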

4. The risk of not knowing what went wrong, or why, when you don't get your data.

Do you have logging to keep track of what your scrapers are doing? Which web scraper collected what data, and when? What went wrong? If you can't answer these questions, you will likely not know where to start troubleshooting when your web scraping solution fails to deliver the data you expect. I have seen too many homegrown web scrapers that can't provide good answers, which leads to costly problems later on. An enterprise-class web data extraction solution has advanced logging and monitoring capabilities that track everything going on in your system. It will have a comprehensive front end where you can search for errors and monitor the timing of data collection. You'll know if you collected more or less data than before. You can also specify which events to escalate, for example by sending an email or SMS to a system administrator or business analyst so they can take immediate action.
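The kind of audit trail described above can be sketched as follows. The record fields and the escalation threshold are illustrative assumptions, and `notify` stands in for sending an email or SMS alert:

```python
import datetime

def record_run(log, scraper, rows_collected, expected_rows, notify):
    """Append a structured audit record and escalate on anomalies.

    Answers "which scraper collected what data, and when" with a
    timestamped entry, and alerts when the volume looks wrong.
    """
    entry = {
        "scraper": scraper,
        "rows": rows_collected,
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    log.append(entry)
    # Escalate if far less data was collected than before
    # (threshold of one half is an arbitrary example).
    if rows_collected < expected_rows * 0.5:
        notify(f"{scraper} returned {rows_collected} rows, expected ~{expected_rows}")
    return entry
```

With every run recorded this way, "where do I start troubleshooting?" becomes a query over the log instead of guesswork.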

5. The risk of not having evidence that the data you collected is accurate.

Let's say you collect product pricing from an affiliate partner's web portal or website, and your business model makes incremental profit every time you refer a customer who makes a purchase from the affiliate partner. Can you prove that a specific product had a specific price at a specific time? Do you collect evidence? If you don't, you could be leaving money on the table. An enterprise-class web data extraction solution not only collects the data but also enables you to automatically store screenshots of each web page at the exact moment you extracted the information. No more arguing about who is right or wrong; you now have evidence that the data collected was accurate.
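A rough sketch of what an evidence record might contain. The field names are made up for illustration, and the screenshot bytes captured by the extraction platform would take the place of the placeholder here:

```python
import datetime
import hashlib
import json

def evidence_record(url, product, price, screenshot_bytes):
    """Bundle extracted data with a tamper-evident fingerprint.

    `screenshot_bytes` would be the PNG captured at the moment of
    scraping; hashing it (and the record itself) makes later
    tampering detectable.
    """
    record = {
        "url": url,
        "product": product,
        "price": price,
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "screenshot_sha256": hashlib.sha256(screenshot_bytes).hexdigest(),
    }
    # Hash the record itself so any later edit changes the fingerprint.
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_sha256"] = hashlib.sha256(payload).hexdigest()
    return record
```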

Are you reconsidering your web scraping approach? If so, click here to download "A Guide to Rethinking Web Scraping."

Remember, risk-free web data extraction will save you time and money in the long run, and make your business users happy.

Stefan Andreasen, Corporate Evangelist, Kapow & Information Integration


Feb 04

When it comes to migrating mountains of information from one system to another, project managers and SharePoint administrators typically have two options:

  1. Hire and train temporary employees to manually migrate business-critical data.
  2. Use automated software.

It's worthwhile to consider migration software for a number of reasons. Content migration is a substantial undertaking, and manual migration brings additional concerns:

  • data may be lost or corrupted
  • necessary work may have to cease during data migration
  • manual copying and pasting takes longer and introduces more opportunities for errors to enter company databases
  • many data migrations require coding knowledge to ensure that information is correctly matched from one source to the next
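The field-matching concern in the last bullet is essentially a schema mapping problem: each source field must be paired with a destination field, and anything unmapped must be surfaced rather than silently dropped. A minimal Python sketch, with a hypothetical legacy-CMS-to-SharePoint field map:

```python
# Hypothetical mapping from a legacy CMS schema to SharePoint-style columns.
FIELD_MAP = {
    "doc_title":   "Title",
    "author_name": "Author",
    "body_html":   "PageContent",
}

def migrate_record(source_record, field_map=FIELD_MAP):
    """Rename fields per the map; flag anything the map doesn't cover."""
    migrated, unmapped = {}, []
    for field, value in source_record.items():
        if field in field_map:
            migrated[field_map[field]] = value
        else:
            unmapped.append(field)  # surfaced for review, not silently lost
    return migrated, unmapped
```

Automated tools encapsulate exactly this kind of mapping so that non-developers can review and adjust it, instead of burying it in one-off scripts.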

Using automated solutions reduces the time and risk associated with manual migration. For example, if a company wanted to migrate from one project management system to another, automation helps keep the project flowing. There's no downtime for businesses as they migrate their data, as opposed to some manual migrations that require work to cease, and no need for a content freeze. Businesses can keep doing business as usual while the migration takes place.

Before tackling a large-scale data migration project, it's important for project managers to know whether it will be a one-to-one migration or a more complex redesign involving completely new formats and complex mapping. Of course, the one-to-one migration is easier, but an automation solution can also help with more complex needs.

Kapow Software offers migration solutions for Web Content Management Systems like SharePoint and Adobe CQ, as well as Enterprise Content Management Systems like EMC Documentum.

To learn more, visit Kapow Software at this year’s SPTechCon, connect with Stephen Moore at smoore@kapowsoftware.com, or request a demo at KapowSoftware.com.


This interview was provided by TechnologyAdvice, an Inc. 5000 company that connects buyers and sellers of business technology through meaningful relationships. Interview conducted by Clark Buckner.

Jan 06

How many employees do you have who spend every day copying and pasting data between systems and applications such as Microsoft Excel spreadsheets, IBM 3270 green-screen terminal applications, email, external web portals, and other applications? I ask this question when I meet business and IT leaders in large companies around the world.

Surprisingly often, I hear responses like “hundreds,” or “20% of our staff.” I even had a business executive tell me “100 of my 300 employees spend their day doing repetitive manual tasks like copying and pasting information.” That is 33% of their workforce.

I ask myself, “Why is this still happening?” The reality is, we have found no easy way to integrate all the different systems and tools that businesses use today.

Not only is manual processing very expensive, but it is also slow and error-prone.

Are there any solutions available to help with this? There are solutions available, but they can be difficult to implement and in some situations may not fully solve the problem.

For example, a large software vendor offers a product designed to solve integration with IBM 3270 and 5250 terminal-based applications, but it requires highly skilled consultants to set it up. One financial institution recently shared how they hired outside consultants to integrate their legacy 3270-based system. After three months, they were still unsuccessful.

In another example, I recently visited a large international investment management firm. They create thousands of Excel reports for their thousands of clients worldwide every day. At the visit I asked, "So how do you automate the creation of all these reports?" The person from the company pointed out the window at a separate building and said, "Over there we have a huge IT department who custom writes and maintains Microsoft Visual Basic scripts that can create the Excel reports automatically." Not only are these Visual Basic programmers expensive, but the scripts themselves take time to create and maintain.

The automated no coding solution

There's a better approach than these costly development projects: a no-coding solution that automates the acquisition, transformation, and delivery of data from many sources to Excel, giving business users instant access to information and the ability to focus on more productive activities.

I call this rethinking legacy integration and automation. Or should I say, revolutionizing legacy integration?

Here’s how to integrate legacy systems with automation

  • Create the solution the same way you perform the task manually, using a WYSIWYG paradigm directly against the user interface of the applications involved
  • Base the solution on a standard workflow editing paradigm well known to business users
  • Require no coding at any point in the process

The benefits are many, including:

  • No need to translate business problems to IT language (which often fails)
  • Develop it very close to (or within) a line of business for greater agility
  • Deliver the solution in as little as 1-2 hours
  • Solve those aggravating integration problems with fewer resources
  • Eliminate costly manual transcription errors and delays in the process

Ready to rethink data integration? See how Spar Nord integrated an external third-party portal with their lending application process, reducing the loan offer process from 14 days to 14 minutes.

Dec 08

Survey Uncovers Financial Industry Challenges

Financial services organizations rely heavily on information found on public websites, social networks, and web portals to monitor markets, track the competition, identify suspicious fraud activity, maintain sanction lists, automate processes with B2B partners, and listen to what customers are saying. Access to these external sources of both structured and unstructured information typically requires manual integration, which leads to tedious searching, copying, and pasting of data into spreadsheets, databases, or applications. This information is often time sensitive, so relying on manual processes defeats the notion that the gathered information is timely. Time is money.

These organizations also depend on an IT infrastructure to meet these needs, and their data integration requirements must address the growing need to access external data sources. Integrating internal systems with external data sources is challenging to say the least, especially when organizations are constantly adding new external sources of information to their operations, and these external websites and web portals either don't provide APIs or the development effort is too time consuming and costly. Keep in mind: if IT is already struggling to keep up with the business's demands to add new sources of information and eliminate repetitive manual handling of information, keeping pace later becomes unattainable for most financial services organizations.

A recent Computerworld.com survey of 100 financial services professionals highlights the challenges of acquiring and integrating data from multiple data sources, including external websites and web portals. This survey revealed troubling challenges facing the financial industry. Here are a few highlights:

  • Struggles of integrating external sources: 43% of the participating financial institutions are struggling with a lack of integration between external data sources and internal systems. Of the external sources that financial services organizations need to integrate with, 84% are web portals where business information needs to be extracted and integrated with internal systems and processes.
  • Manual or hard-coded integration: 55% of respondents reported that the integration of data between external sources and internal systems either involves users manually transferring data or is done through custom development with a hard-coded approach that does not scale out to support many external sources of data.
  • Manual data handling: These financial organizations identified the time required to manually import data and to perform validations as the two most costly challenges of being unable to integrate external data sources.
  • Deployment delays: Overall, respondents want a solution that quickly adapts to varying data sources; unfortunately, integration projects often take months to complete. Only 8% of the financial services respondents indicated that an external information integration project is completed in less than a month, and 31% reported it takes more than three months, illustrating the need for a faster and more efficient way to perform external data integration.

The bottom line is that manual processes no longer fit into any financial organization's business processes. It's clear that the time-consuming development projects used to integrate external data sources into an enterprise infrastructure are not a viable long-term strategy.

Financial organizations depend on data, whether it's being used to transform industries, grow market share, defend brands, or protect customers. It takes an alternative approach to integrating data, one that does not simply rely on traditional development tools and custom one-off projects. Data integration platforms that are easy to deploy and customize are the next step for external data integration.

Download the complete IDG survey here.

Nov 26

Recently, I participated in the yearly CIO Logistics Forum with Henrik Olsen, Head of Business Architecture & Development at DSV, a global supplier of transportation and logistics services. The topic: Streamlining Logistics Operations and Automating B2B Processes.

During the presentation, Olsen observed an increasing demand from customers (e.g., manufacturers of goods) for lower prices, improved service, and real-time data integration, while freight haulers ask for higher rates to compensate for increased costs.

With price pressures coming from both customers and freight haulers, one of the few ways to improve profit is to increase operational efficiency through automation of B2B interactions and internal processes.

Typically, automating these processes becomes challenging as more and more customers move away from supporting Electronic Data Interchange (EDI). This is especially true for mid to small-size customers who cannot afford to keep up with the demand for EDI integration.

These smaller customers "dictate" their preferred way of integrating and exchanging information. This integration is typically driven through an email-based solution and/or through a web portal, often using Excel as the interchange format.

A typical scenario might go like this:

  • An email with an order is generated directly from the customer’s Enterprise Resource Planning (ERP) or Transportation Management System (TMS).
  • The transportation and logistics supplier receives the email with the shipping order request and processes the information.
  • The customer requires near real-time update of tracking information posted to their logistics web portal.

Of course, this process is simple for customers since they don’t have to support EDI, and they will choose to only engage transportation and logistics suppliers who allow them to deliver the data in this flexible manner.

For the suppliers, this arrangement becomes more difficult and expensive, as the entire customer integration process becomes manual.

The good news is all these manual processes can be automated, and this is exactly what Henrik explained in his well-received presentation.

DSV plans to automate a considerable amount of these non-EDI B2B interactions and take advantage of the higher freight prices and better margins they can obtain from smaller customers. This is what I like to call the long-tail effect (see diagram), where technology like EDI is too expensive and complex to implement for low-volume customers, but alternative solutions are available to facilitate the automation and integration between business partners.

Many transportation and logistics companies around the world are finding alternatives to EDI, where integration costs with smaller customers can be reduced by as much as 100 times through complete automation of previously manual B2B processes. The result is a substantial increase in profits and improvement of the bottom line.

Many thanks to Henrik Olsen for presenting on this important topic. If you are curious to see how this works, I recommend you watch this short video.

Stefan Andreasen, Corporate Evangelist Kapow at Kofax.




Nov 05

To say that pulling data from various internal and external sources is time-consuming is a masterpiece of understatement. Cutting and pasting, homegrown scripts, and applications that record a user's actions can't compete with the pace of business. And over time, there will be increased demand for not only the quantity but also the quality of information. Lots of information is accessible via public websites, with more data often hidden behind firewalls and web portals that require login credentials and the ability to navigate the site in order to extract the data. Valuable information is also embedded in PDFs, images, and graphics.


From start-ups to enterprise organizations, and across industries from financial services and transportation to retail and healthcare, acquiring external data is critical. Whether you want to stay in compliance, move ahead of the competition, or reach new markets, it all requires constant monitoring of web data. Data is extracted, transformed, and migrated into various reports, and becomes the foundation business decisions are based upon.

So a web-scraping tool or homegrown web scraping approach seems like a good option, since it looks like a quick and inexpensive way to harvest the data you require. Or is it?

Now comes the uneasy feeling in the back of your mind. Can my homegrown web scraping approach or a web-scraping tool acquire the correct information I need? How do I know the data I received is accurate and formatted correctly? And if management wants different reporting data, how is that handled?

The short answer: You don’t know.

The right answer begins with an evaluation of your specific data requirements and business needs.

  1. How does web scraping acquire the data?

While product demonstrations can present an initial set of data with colorful dashboards full of charts and reports, you are better off asking for a technology demonstration that relates to your specific data collection needs. Write up a list of the actual websites you gather data from. Your list should include various types of sites built with HTML5, Flash, JavaScript, and AJAX. Be sure to include websites behind firewalls and sites with PDFs. The more scalable, reliable, and fast the web data extraction process is across various external websites, the better.

  2. What does the data look like?

You have received some data using a web scraper tool, but now you spend all your time trying to transform it, noticing formatting and quality issues along the way. If the extracted data is not accurately transformed into a usable format, such as Microsoft Excel, .csv files, or XML, it becomes unusable by applications that have specific integration requirements. Now you have lost half the value of your investment. Extracting and auto-correcting specialized data, such as dates, currencies, calculations, and conditional expressions, plus removing duplicate data, are all important considerations.
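To make these transformation concerns concrete, here is a small Python sketch that normalizes dates and currency values and removes duplicates. The input format (day/month/year dates, dollar prices) is an assumption for illustration:

```python
import datetime
import re

def normalize_row(row):
    """Normalize a scraped row: ISO date, numeric price, trimmed text."""
    day, month, year = re.split(r"[/.-]", row["date"])  # e.g. "31/12/2014"
    return {
        "date": datetime.date(int(year), int(month), int(day)).isoformat(),
        "price": float(row["price"].replace("$", "").replace(",", "")),
        "product": row["product"].strip(),
    }

def deduplicate(rows):
    """Drop duplicate rows, keeping the first occurrence (order preserved)."""
    seen, unique = set(), []
    for row in rows:
        key = (row["product"], row["date"], row["price"])
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique
```

Even this toy version shows why transformation is not an afterthought: every source format variation needs an explicit rule, and a good tool lets you express those rules without writing one-off scripts.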

  3. How difficult is it to make changes?

What happens if a website changes, or if you need to monitor and extract data from new websites? Many web-scraping tools have a high propensity to fail when websites change, which then requires resources, and in some cases a developer, to fix the problem. Unless you have a developer in house to make these fixes, this adds time and expense, and the problem only grows as you monitor and extract data from hundreds or even hundreds of thousands of websites. If scalability is important to you, be sure to ask how the technology monitors and handles changes to a website, especially if you want to expand beyond your immediate data collection needs.
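One simple way a tool can detect that a site's layout (rather than its content) has changed is to fingerprint the page's tag structure. A hedged Python sketch of the idea, not any particular vendor's mechanism:

```python
import hashlib
import re

def page_fingerprint(html):
    """Hash only the structural skeleton (tag names), not the content.

    Prices and text change constantly; a change in the tag structure
    is what usually breaks a scraper, so that is what we fingerprint.
    """
    tags = re.findall(r"<\s*([a-zA-Z][a-zA-Z0-9]*)", html)
    return hashlib.sha256(" ".join(tags).encode()).hexdigest()

def layout_changed(old_html, new_html):
    """True when the tag structure differs between two snapshots."""
    return page_fingerprint(old_html) != page_fingerprint(new_html)
```

Run against a stored snapshot on each crawl, this flags a redesign before it silently corrupts your data feed.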

Extracting and transforming web data is about more than just purchasing a web-scraping tool. Think about the data you are collecting and how it's tied to your business. In all likelihood, there's a strong set of business drivers for collecting the data, and taking shortcuts will only compromise your business goals. It should never make you feel uneasy about the information you are collecting.

Look beyond the data that's being extracted, and think about what you are doing with it in the context of your customers, creating a competitive advantage, or streamlining processes that rely on data from websites, portals, and online verification services.


Sep 17

Customer insights into best practices

Last week I spoke with John, who leads a web automation team at a Fortune 500 professional staffing company. The company has been a Kapow customer for more than five years, primarily using Kapow for Customer Relationship Management (CRM) and Human Resources (HR) activities that involve transforming, synchronizing, and delivering information between their Vignette, SharePoint® and Salesforce® applications.

Like almost every enterprise organization, their business teams use Microsoft Excel extensively for data sharing and reporting, and collaborate via Microsoft SharePoint.

As John explains, “Microsoft Excel® is used throughout the organization to capture data within business teams, reporting, or simply for exchanging data, all within SharePoint.”

John elaborates, “Microsoft has really enhanced Excel, which is seen with the improvements in data visualization between Excel 2010 and 2013 versions. Microsoft is also integrating Excel with SharePoint 2013 so you can surface live Excel data directly in web parts in SharePoint. This is the path we are taking and with the new Excel edit feature in Kapow 9.4 we expect to quadruple the use of Kapow over the next year to support it.”


I must admit this is very exciting, and it's great to see the same enthusiasm from customers who see the value in automating activities that involve large amounts of data and heavy use of Excel.

When you use Kofax Kapow to dynamically update live internal and external data in an Excel spreadsheet, this information can in turn be surfaced in SharePoint, making your entire SharePoint platform a collaborative real-time decision-making platform. Data is unlocked from any data source you can think of, including cloud apps, enterprise apps, web portals, emails, active directory and of course SharePoint itself.

Today the company updates all their Excel data repositories and Excel reports manually, which is not only tedious and unexciting work but also unavoidably introduces human errors, which could become critical to their business.

Some of the data they capture is coming from other departments. Just managing who has done what is a nightmare. That’s why capabilities like Kapow’s advanced logging are so important when it comes to having a full audit trail.

In one example, John explains how they currently receive a separate email when a person joins a training course. The email is sent through a SharePoint workflow, but a business user then has to manually key this information into Excel. All of this is automated with Kapow 9.4.
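A sketch of what such an automation does under the hood: parse the notification email and produce a spreadsheet-ready row. The "Name:"/"Course:" body format is invented for illustration and is not Kapow's actual mechanism:

```python
import email

def registration_row(raw_message):
    """Turn a course-registration notification into a spreadsheet row.

    Assumes the workflow email carries 'Name: ...' and 'Course: ...'
    lines in its body (a made-up format for this example).
    """
    msg = email.message_from_string(raw_message)
    fields = {}
    for line in msg.get_payload().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().lower()] = value.strip()
    # Row destined for the tracking spreadsheet: name, course, timestamp.
    return [fields.get("name", ""), fields.get("course", ""), msg["Date"] or ""]
```

The point is not the parsing itself but that a machine does the keying: the same record lands in the spreadsheet every time, with no transcription errors.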

John expects that automating manual Excel-driven work will expand their need for Kapow into more departments, including HR, payroll, and employee development. Finally, combining Excel and Kapow with SharePoint will drive adoption in data delivery, data visualization, and data collaboration.

Do you have any insights regarding Excel? We would like to hear from you.

Stay tuned for my next customer interview about using Kapow for Excel automation.

Stefan Andreasen

Corporate Evangelist, Kapow & Information Integration

Aug 26

Millions of organizations put up with the inefficiencies and risks of running critical parts of their business on spreadsheets, with the vast majority using Microsoft Excel® as their preferred tool. Spreadsheet software isn't designed for the way most companies use it today. Spreadsheets are handy for ad-hoc analysis, reporting, data exchange, prototyping, and other common tasks. In a corporate setting, however, the repetitive manual tasks needed to acquire and integrate information from internal and external data sources into spreadsheets can lead to costly errors. In addition, spreadsheets are difficult to audit and clumsy to work with in collaborative, repetitive business processes such as budgeting, sales and operational planning, partner data exchange, and cash management.

Ventana Research's comprehensive report "Spreadsheets in Today's Enterprise – Making Intelligent Use of a Core Technology" provides detailed insight into the use of Excel in the typical corporation. Excel is the de facto format for reports, data exchange, and financial models. According to a study performed by Ventana Research in 2012, 72% of the participants said that their most important spreadsheets are ones that are shared with others.1

A typical Excel-based process involves opening a pre-formatted Excel template, complete with multiple worksheets, pre-built macros, tables, and graphics, then editing and assembling data from a multitude of sources into this template to create the delivery document. Input data can come from systems such as email servers; business applications such as CRM, HR, or ERP; bank portals; business partner portals such as financial, supply-chain, and logistics partners; government public websites; and internal monitoring applications from departments such as IT, Marketing, and Procurement.

These Excel reports are then delivered to stakeholders in departments such as Finance, Sales, and IT, or externally to business partners through email, FTP upload, or portal upload.

Rather than getting rid of spreadsheets, which for most companies would be nearly impossible, there is a modern way to cost-effectively automate the acquisition of the data entered, while preserving the familiarity and ease of use of Excel and gaining greater accuracy, easier collaboration, and the elimination of tedious manual processes.

FIGURE 1. Manual Excel-based process flow.
Innovative products such as Kofax Kapow allow the business user to define the flow of their complete Excel process, with integration directly to all the information sources and destinations. Creating the solution does not take much longer than performing the work once manually, and it can then be repeated over and over again without human errors. Kofax Kapow also delivers a full audit log of everything that happened and alerts selected persons if anything goes wrong.

The value is not only in the automation of the repetitive manual process, but also in increased business revenue from:

  1. Elimination of human errors.
  2. Near real-time results and delivery for quicker decisions or improved service levels.
  3. Running the process at speeds that would be impossible for a human.

FIGURE 2: Efficient workflow of automated Excel process with Kofax Kapow.


Next steps

When I discuss this topic with industry leaders, I typically recommend a number of steps to discover the use of Excel within an enterprise and understand the potential for Excel automation. These steps include:

  1. Interview business managers in departments who use Excel.
  2. Estimate the amount of human time spent on manual repetitive work.
  3. Estimate the business value from elimination of human errors.
  4. Estimate the business value of freeing employees to make better business decisions.
  5. Think about improving your business by including more data sources or increasing the frequency you acquire data.

From these simple steps you can determine the ROI.
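As a sketch, the ROI arithmetic from those estimates might look like this; all figures are hypothetical:

```python
def excel_automation_roi(employees, hours_per_week, hourly_cost,
                         error_cost_per_year, automation_cost_per_year):
    """Annual ROI of automating repetitive Excel work (illustrative only).

    Benefit = labor no longer spent on manual work + cost of errors avoided;
    ROI is expressed as a multiple of the automation spend.
    """
    labor_saved = employees * hours_per_week * 52 * hourly_cost
    total_benefit = labor_saved + error_cost_per_year
    return (total_benefit - automation_cost_per_year) / automation_cost_per_year

# e.g. 5 analysts spending 10 h/week each at $40/h, $20k/yr in error
# rework, weighed against a $50k/yr automation platform:
roi = excel_automation_roi(5, 10, 40, 20_000, 50_000)
```

With these made-up numbers the annual benefit is $124,000 against $50,000 of cost, a 148% return; plug in your own estimates from the steps above.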

For most companies Excel Automation is a no-brainer.

A future blog post will go through real-life customer examples, so stay tuned.

Comments are welcome at stefan.andreasen@kofax.com

  1. Ventana Research, Spreadsheets in Today’s Enterprise, January 2013.


Apr 21

Many factors have contributed to SharePoint's longevity and success, and David Roe pointed out a few of them in his recent CMS Wire article entitled "SharePoint: A Formidable Enterprise Collaboration Platform." The article, which summarizes a Radicati Group report on SharePoint, mentions that SharePoint's ecosystem has been a key contributor to its continued success, and I agree completely. SharePoint functionality is also important, of course, and Microsoft has invested heavily to add social and mobile capabilities throughout SharePoint 2013. But business value doesn't come from a box: it comes from applying technology like SharePoint—and Kapow Enterprise—to the pressing needs that challenge your business. As part of the SharePoint ecosystem, Kapow improves many of our SharePoint customers' content processes, from capture to creation to enterprise search—just as we do for all the CMS products we support.

If you have any questions about content migration, give us a call and we can help decide whether we're right for you. You can get the full Kapow content migration story from our white paper on the topic. Attending SPTechCon April 22-25 in San Francisco? Bring your requirements by booth 220 in the exhibit hall.

Authored by: Carol Kimura, Director, Field Marketing at Kapow Software – a Kofax Company

Apr 08

On March 26 I presented at bpmNEXT 2014, an annual event for leaders in the business process management (BPM) industry, analysts, industry influencers and various vendors. There were nearly one hundred attendees from more than 10 countries. This was one of those events where you come back with great new contacts and a ton of inspiration.

Following welcoming remarks by Nathaniel Palmer and Bruce Silver, who are some of the biggest thought leaders in the industry and the team behind the creation and expansion of the BPM.com community, we jumped right into the 25 presentations, all of which delivered cutting-edge, innovative BPM demos.

The event was very well orchestrated and organized by Nathaniel and Bruce. At the end of the three-day conference, I can say it was definitely one of the best events I've ever attended. I was both honored and proud when my presentation, "Automation of Manual Processes with Synthetic APIs," was voted Best in Show by the attendees. Later this month, you'll be able to watch all of the bpmNEXT presentations at www.bpmnext.com.

So how do Synthetic APIs help most business processes?

BPM is all about using a workflow engine from one of many vendors to describe, manage, monitor, and improve the efficiency of business processes. This can be any process, but most companies invest in BPM around the critical, fundamental processes that drive major parts of their business.

Unfortunately, BPM does not help much in automating the individual sub-tasks of the processes it manages. This is especially true for the ever-increasing number of web-centric processes and processes involving web portals, because those portals more likely than not do not provide a full set of APIs reflecting the functionality of the portal itself. This is where Kapow Software's Synthetic API technology comes in.

Synthetic APIs, which include business rules, data transformations, and interactions with multiple applications and data sources, can be deployed as REST, SOAP, or mini-apps (Kapow Kapplets™) at the click of a button, and are easily built with the intuitive, live-data-driven workflow design environment of Kapow Enterprise. This makes it a breeze to automate all those tedious, repeatable sub-processes involving web portals, documents (like Excel), business applications (like ERP), and file systems (local or FTP). In fact, it's so easy with Kapow Enterprise that Kapow customers implement hundreds of automations per year, freeing important knowledge workers from repeatable manual data-driven work to focus on more relevant and gratifying work that substantially adds to top-line results.
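Stripped to its essence, deploying a workflow as a Synthetic API amounts to routing a REST call to an automated workflow and returning its result as JSON. A toy Python sketch with an invented route and workflow, not Kapow's actual implementation:

```python
import json

# The workflow here is just a function standing in for a recorded
# sequence of portal interactions (log in, navigate, extract).
def lookup_shipment(params):
    """Hypothetical workflow: fetch shipment status from a partner portal."""
    return {"shipment": params.get("id"), "status": "in transit"}

# Hypothetical registry mapping REST paths to workflows.
ROUTES = {"/api/shipments": lookup_shipment}

def handle_request(path, params):
    """Dispatch a REST-style call to the matching synthetic API workflow."""
    workflow = ROUTES.get(path)
    if workflow is None:
        return json.dumps({"error": "no such synthetic API"}), 404
    return json.dumps(workflow(params)), 200
```

The value of the real product is that the workflow behind the route is built visually against live portal pages; the calling application only ever sees a clean REST endpoint.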

Many of the more than 250 Kapow customers experience such a huge business benefit and competitive advantage with the Kapow Enterprise platform that they ask us not to mention their name under any circumstances. For more details on Synthetic APIs, check out the Synthetic API on-demand webinar. Comments are also welcome at sandreasen@kapowsoftware.com.

Authored by: Stefan Andreasen, Corporate Evangelist, Data Integration, Kapow Software – a Kofax Company

The Kapow Katalyst Blog is…

... a collection of insights, perspectives, and thought leadership around Application Integration.

Comments, Feedback, Contact Us:

blog at kapowsoftware.com
