We all know the slogan: measure twice, cut once. But what if you measure carefully without knowing the context of your data? The result can be epic failure.
King Gustav Adolf of Sweden couldn’t believe what was unfolding before the eyes of thousands of spectators on August 10, 1628. His pride, the Vasa, the world’s most ambitious naval ship of her time, capsized after being hit by a second gust of wind, less than a mile into her maiden voyage. What a disaster.
More than 350 years later, experts cite asymmetry as one possible cause of the ship heeling so strongly: the port side was built thicker than the starboard side. This theory is supported by the discovery of four rulers used by the workmen who built the ship. Two were calibrated in Swedish feet, which had 12 inches, while the other two measured Amsterdam feet, which had 11 inches.
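The cost of that mismatch is easy to quantify. The sketch below is purely illustrative (the plank thickness is made up), but it shows how the same nominal measurement diverges depending on which ruler was used:

```python
# Illustrative only: a plank specified in "feet" comes out thicker or
# thinner depending on which ruler the workman happened to pick up.
SWEDISH_FOOT_INCHES = 12    # Swedish foot: 12 inches
AMSTERDAM_FOOT_INCHES = 11  # Amsterdam foot: 11 inches

spec_feet = 2  # hypothetical thickness on the plan

port = spec_feet * SWEDISH_FOOT_INCHES        # 24 "inches"
starboard = spec_feet * AMSTERDAM_FOOT_INCHES  # 22 "inches"

discrepancy = (port - starboard) / starboard
print(f"Port side is {discrepancy:.1%} thicker than starboard")  # 9.1%
```

A roughly 9 percent asymmetry, invisible on paper because both crews were faithfully following "the" specification.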
History has shown that using out-of-context, incomplete or inaccurate data has caused problems ever since mankind started to develop different units of measurement.
Now the question is: how can you avoid costly incidents such as this and successfully conquer your data problems, and how can IBM Information Server help you on that journey?
Whether you want to build a bridge, explore the sea, or simply identify new markets, you will only be as good as the data you use. This means it must be complete, in context, trusted and easily accessible to drive insights. As if this weren’t challenging enough, your competitiveness also depends on your organization’s ability to quickly adapt to changing conditions.
For more than a decade, IBM InfoSphere Information Server has been one of the market-leading platforms for data integration and governance. Users have relied on its powerful and scalable integration, quality and governance capabilities to deliver trusted information to their mission-critical business initiatives.
John Muir once wrote, “The power of imagination makes us infinite.” We have applied our power of imagination to once again reinvent the Information Server platform.
Your business agility depends on the flexibility, autonomy and productiveness of the tools that power your business. That is why we have infused Information Server’s newest release with a number of game-changing inventions: deeper insights into the context and relationships among your data, increased automation so your users can complete their work faster and more reliably, and more flexibility in workload execution for better resource optimization. All of these are aimed at making your business more successful when tackling your most challenging data problems.
Here are five of those game-changing inventions and how they will help your business:
- Contextual search. Asymmetrical construction due to lack of governance and out-of-context use of tools was the likely cause of the Vasa sinking. The new contextual search feature called Enterprise Search provides your users with the context to avoid such costly mistakes. It greatly simplifies and accelerates the understanding, integration and governance of enterprise data. Users can visually search, explore and easily gain insights through an enriched search experience powered by a knowledge graph. The graph provides context, insight and visibility across enterprise information, giving you a much better understanding and awareness of how data is related, linked, and used.
- Cognitive design. Getting trusted data to your users quickly is an imperative, and this process starts with your integration design environment. To help address your data integration, transformation and curation needs quickly, Information Server V11.7 now includes a brand-new, versatile designer called DataStage™ Flow Designer. It features an intuitive, modern and secure interface accessible to all users through a no-install, browser-based experience. It accelerates your users’ productivity through automatic schema propagation, highlighted design errors and powerful type-ahead search, with full backward compatibility with the desktop version of the DataStage™ Designer.
- Hybrid execution. Data warehouse optimization is one of the leading use cases for addressing growing data volumes while simplifying and accelerating data analytics. Once again, Information Server V11.7 has strengthened its ability to run on Hadoop with a set of novel features to more efficiently operationalize your data lake environment. Among them is an industry-unique hybrid execution feature that lets you balance integration workloads across Hadoop and non-Hadoop environments, minimizing data movement and optimizing your integration resources.
- Accelerated deployment. Applying agile DevOps methodologies has become a necessity for increasing efficiency and time to value, even in more traditional data integration design. One important cornerstone of a modern DevOps process is flexible and fast deployment options, such as container technology. To give you more agility and velocity in your IT operations, starting with V11.7, IBM is including options for Docker-based container deployment of Information Server platform components. Additionally, we have deepened the integration with Apache Ambari when deploying Information Server on Hadoop, resulting in a deployment that is 10 times faster than the previous process.
- Automation powered by machine learning. Manual work is one of the biggest inhibitors to maintaining good data quality. To counter this, Information Server V11.7 further automates the data quality process by underpinning data discovery and classification with machine learning, so that you can spend your time focusing on your business goals. The two innovative aspects are:
- Automation rules, which let business users graphically define rules that automatically apply data rule definitions and quality dimensions to data sets based on business term assignments
- One-click automated discovery, which enables discovery and analysis of all data from a connection in a single click, providing easy and fast analysis of hundreds or thousands of data sets
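The knowledge graph behind the contextual search described above can be pictured with a small sketch. This is purely conceptual and not Information Server's actual data model: assets become nodes, relationships become labeled edges, and "context" becomes a graph traversal. All asset names here are made up for illustration.

```python
# Conceptual sketch of a knowledge graph over data assets.
# Edges are keyed by (source asset, relationship label).
from collections import deque

edges = {
    ("CUSTOMER_CSV", "feeds"): ["CUSTOMER_TABLE"],
    ("CUSTOMER_TABLE", "feeds"): ["CHURN_REPORT", "REVENUE_REPORT"],
    ("CUSTOMER_TABLE", "governed_by"): ["PII Policy"],
}

def downstream(asset):
    """Everything reachable via 'feeds' edges -- i.e., who uses this data."""
    seen, queue = set(), deque([asset])
    while queue:
        node = queue.popleft()
        for nxt in edges.get((node, "feeds"), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(downstream("CUSTOMER_CSV"))
# ['CHURN_REPORT', 'CUSTOMER_TABLE', 'REVENUE_REPORT']
```

Answering "which reports break if this file changes?" is one traversal over such a graph, which is the kind of context a workman with the wrong ruler never had.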
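The automation rules described above can also be sketched in miniature. The following is a conceptual illustration only, not the Information Server API: once a column is tagged with a business term, the matching data-rule definition is applied automatically and a quality score is reported.

```python
# Conceptual sketch: business-term-driven quality rules.
# The terms and rule expressions below are hypothetical examples.
import re

RULES = {
    "Email Address": lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "US ZIP Code": lambda v: re.fullmatch(r"\d{5}(-\d{4})?", v) is not None,
}

def quality_score(values, term):
    """Apply the rule bound to `term`; nulls count as failures."""
    rule = RULES[term]
    passed = sum(1 for v in values if v is not None and rule(v))
    return passed / len(values)

emails = ["ann@example.com", "bad-address", None, "bo@example.org"]
score = quality_score(emails, "Email Address")
print(f"Email Address quality: {score:.0%}")  # 50%
```

The point of the automation is that the binding from column to term to rule happens without a human wiring up each data set by hand.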
For smooth sailing, choose IBM Information Server V11.7 for your next data project.