
    Crucial data analytics

    Areas your company must master to compete

     

    We stand at the cusp of a flood of data.

    According to a Cisco report, by 2020 the Internet of Things (IoT) will generate about 600 zettabytes of data annually, of which 6 zettabytes will enter storage. (For perspective, a zettabyte is to a gigabyte as the Great Wall of China is to the cup of coffee on your desk.)
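The scale behind that comparison is easy to verify. A minimal back-of-the-envelope check, using the standard decimal definitions of the units:

```python
# Decimal (SI) definitions of the storage units.
ZETTABYTE = 10**21  # bytes
GIGABYTE = 10**9    # bytes

# How many gigabytes fit in one zettabyte?
ratio = ZETTABYTE // GIGABYTE
print(f"A zettabyte is {ratio:,} gigabytes")  # one trillion
```

In other words, a zettabyte is a trillion gigabytes, which is roughly the ratio between the Great Wall's length and a coffee cup's width.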

    That information can be of incalculable value to companies, opening new frontiers in analysis — if, that is, companies know how to derive that value in the first place. What follows is an overview of fields your company must master if it's to transform that flood from data to information to knowledge to wisdom.


    Data management architecture

     

    Do you have the right data management architecture?

    The right one will ensure that your company's data is easily discoverable, easy to use, and shared to the maximum extent possible. It will give your organization a coherent real-time view of all its data assets, including both traditional enterprise data and IoT data; manage the data lifecycle; provide for data processing, transformation, and enrichment; and generally ensure data quality and value.

    Value emerges, and key insights are generated, when silos break down and data sources combine. To make those things happen, you'll need to create a central cross-functional, cross-business-group data governance entity to ensure clear data management roles, responsibilities, and policies.

    Among that entity's tasks will be to use APIs to manage data flows with third-party platforms, develop methodologies and metrics for valuing and tracking data assets, and establish KPIs. It will also have to ensure that each of your business groups maps out its data systems and assets and has an integral data roadmap and strategy – all within the framework of the organizational data management architecture as a whole.

    Finally, it will have to stay current on developments in artificial intelligence, not to mention in data management and analytics tools. Your data competencies will have to keep progressing if you want to stay competitive in an IoT world.


    Data structures

     

    Data comes in different flavors, so many that there can be no one-size-fits-all approach to deriving value from it. Your data analytics capabilities have to be able to handle data arriving in different forms, at different speeds, and in different sizes.

     

    Traditional enterprise information management systems deal with structured (or at most semi-structured) data, small and slow data, data at rest, and internal data. This sort of material, residing in well-organized databases, is easy to process. The IoT, on the other hand, generates unstructured big data: all of the information indiscriminately churned out, in various formats, by sensors, satellites, cameras, and other connected devices. Such data, in its chaos and its abundance, represents a challenge.

     

    A first step in confronting that challenge is to impose some discipline on your data flows. If a sensor buried deep inside a ship's engine is generating multiple temperature readings every second, you might do well to configure your analytics pipeline to ignore 95 percent of them: that much information might just slow you down, with no perceptible upside. Recognizing patterns in IoT data will also be important – so that you can subsequently identify deviations from those patterns, deviations that indicate that some component in a supply chain or an electricity grid, to give two examples, is on the blink.
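Both ideas can be sketched in a few lines. The snippet below is an illustrative toy, not a production pipeline: the simulated engine-temperature feed, the 95-percent downsampling rate, and the rolling-mean deviation threshold are all assumptions chosen for the example.

```python
import random

def downsample(readings, keep_every=20):
    """Keep 1 reading in `keep_every` -- i.e. ignore 95% of the stream."""
    return readings[::keep_every]

def deviations(readings, window=10, threshold=10.0):
    """Flag readings that stray from the rolling mean of the previous `window` values."""
    flagged = []
    for i in range(window, len(readings)):
        mean = sum(readings[i - window:i]) / window
        if abs(readings[i] - mean) > threshold:
            flagged.append((i, readings[i]))
    return flagged

# Hypothetical engine-temperature feed: steady around 90 degrees,
# with one overheating spike injected at position 1500.
random.seed(0)
feed = [90 + random.uniform(-1, 1) for _ in range(2000)]
feed[1500] = 120.0

sampled = downsample(feed)      # 100 readings instead of 2000
anomalies = deviations(feed)    # the spike stands out against the pattern
```

The pattern-then-deviation logic is the essence of condition monitoring: the rolling mean captures "normal," and anything far outside it is worth a closer look.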

     

    Other challenges exist as well. To obtain actionable intelligence, you must integrate all of those different types of sensor data with each other. Receiving a data point that tells you that an engine part is overheating will be useful. But it will be more useful if it's accompanied, in real time, by a visual image that depicts what's happening to that part to make it so hot.
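Integrating streams like that usually comes down to aligning them in time. Below is a minimal sketch of that idea, assuming each stream carries its own timestamps; the readings, frame identifiers, and the one-second skew tolerance are all made up for illustration.

```python
def align_streams(temps, images, max_skew=1.0):
    """Pair each temperature reading with the nearest-in-time camera frame.

    temps:  list of (timestamp, celsius) tuples
    images: list of (timestamp, frame_id) tuples
    Returns (timestamp, celsius, frame_id) triples within `max_skew` seconds.
    """
    pairs = []
    for t_ts, celsius in temps:
        nearest = min(images, key=lambda img: abs(img[0] - t_ts))
        if abs(nearest[0] - t_ts) <= max_skew:
            pairs.append((t_ts, celsius, nearest[1]))
    return pairs

# Hypothetical sample data: three readings, two camera frames.
temps = [(0.0, 88.5), (1.0, 91.2), (2.0, 140.7)]
images = [(0.1, "frame-001"), (1.9, "frame-002")]
print(align_streams(temps, images))
```

Here the overheating reading at t=2.0 gets paired with the frame captured at t=1.9 – the data point and the image arrive together, as described above.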

     

    Meanwhile, you may well want to integrate all this unstructured data with your organization's structured internal data, a process that could yield great insights but will require you to define your data-related goals and choose the right tools and providers to achieve them.


    Machine learning

     

    Recent years have seen significant advances in machine learning – essentially, algorithms that can learn from and make predictions about data. As computational resources grow, we'll see exponential improvement in so-called “deep learning” algorithms – algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures.

    The result will be cognitive computing, combining artificial intelligence and machine learning algorithms in an attempt to reproduce the human brain's behavior. This will dramatically boost our ability to crunch data in a world where countless devices are endlessly streaming it out, and in which batch processing will by necessity yield to stream processing. Sub-disciplines such as data mining, natural language processing (NLP), and text analytics will find patterns in, and generally interpret, even unstructured data.

    Supervised learning – the use of labelled data to train a model to make predictions about non-labelled items – accounts for the majority of machine learning activity at the moment. (A "label" in this context means a tag that meaningfully identifies a data point.) But decision-makers should be aware of burgeoning machine learning areas, too.
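The labelled-data idea can be shown with one of the simplest supervised methods there is: a nearest-neighbour classifier. This is a deliberately tiny sketch, and the training examples (temperature/vibration readings tagged "normal" or "faulty") are hypothetical.

```python
def predict_1nn(labelled, point):
    """Classify `point` with the label of its nearest labelled neighbour
    (squared Euclidean distance over the feature vectors)."""
    nearest = min(
        labelled,
        key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], point)),
    )
    return nearest[1]

# Hypothetical labelled data: (feature vector, label) pairs,
# where the label tags a sensor reading as normal or faulty.
training = [
    ((70.0, 0.2), "normal"),
    ((72.0, 0.3), "normal"),
    ((95.0, 0.9), "faulty"),
    ((98.0, 1.1), "faulty"),
]

print(predict_1nn(training, (96.0, 1.0)))  # -> faulty
```

The labels do all the work: the model never decides what "faulty" means, it only learns which region of the feature space humans have tagged that way.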

    Unsupervised learning, for example, employs algorithmic power to make inferences about non-labelled data. And reinforcement learning involves using rewards to train an agent to perform complicated tasks in real time. Think Alphabet's AlphaGo, but with real-world applications, such as industrial robotics or financial trading.
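The contrast with supervised learning is easiest to see in code. A classic unsupervised method is k-means clustering: given unlabelled readings, the algorithm finds the groups itself. The sketch below is a minimal one-dimensional version with naive initialisation; the readings are invented for the example.

```python
def kmeans(points, k=2, iters=10):
    """Minimal 1-D k-means: group unlabelled readings into k clusters."""
    centroids = points[:k]  # naive initialisation: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Unlabelled readings with two obvious groups -- which the
# algorithm discovers without ever being told they exist.
readings = [1.0, 1.2, 0.9, 8.8, 9.1, 9.3]
centroids, clusters = kmeans(readings)
```

No label ever enters the picture: the structure is inferred purely from distances between the data points.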


    Knowledge-sharing networks

     

    Knowledge-sharing networks – in one definition, “collections of individuals and teams who come together across organizational, spatial, and disciplinary boundaries to invent and share a body of knowledge” – have always been key to progress.

    They'll be just as crucial, or even more so, in the big data age. As organizations discover the power of data in gaining competitive advantage, data usage will broaden from a core of power users to almost all employees. No one will be left out. The result, of course, will be broader sharing of data between different functions of the organization – an end to siloing, and the dawning of a new age of mutually beneficial cross-disciplinary information usage.

    Your organization will need to be sure that it's capable of fostering this. To a great extent, this will be a question of running the right platforms. But it will also have to do with instilling the right collaborative mindset among team members and a new sense of how an organization must work in the big data era.

    Broadening internal data usage is one issue. But you'll likely find yourself relying on external data more and more: data from third-party analytics providers, social media, and so on. Are the APIs in place to make this information interchange seamless? Will you have the right mechanisms in place to ensure security?

    These and other questions relating to the big data era that's just dawning can seem daunting. One fact that should reassure organizational decision-makers, though, is that we're still at the beginning. As wonderful as machine learning can appear right now, it's still taking its clumsy baby steps. The first-mover advantage still exists, and it's by no means too late for organizations to take advantage of the new data economy, with all the challenges and promise it will bring.
