Maggiore  
Infrastructure
Our best-of-breed tool suite brings together tools considered top of the range in their respective fields, e.g. data integration, reporting and analytics – they are scalable and suit both the occasional user and the seasoned data scientist.

Our data lake transformation program coordinates and leads your move from a traditional system landscape to a scalable, flexible and top-notch solution and supports you in making effective use of it – you can keep existing core systems while selectively complementing them and replacing expensive or unreliable tools.

Our data lake configuration & technology selector provides you with a setup that ensures the tools interact without interruption.

Our data model blueprint provides a working scheme for modelling the more complex data entities typical of the energy industry, e.g. irregular time series.
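
As an illustration of what such a blueprint can cover, here is a minimal Python sketch of an irregular time series; the entity and attribute names (meter readings, quality flags) are illustrative assumptions, not the blueprint itself.

from dataclasses import dataclass
from datetime import datetime

import pandas as pd


@dataclass
class MeterReading:
    """One observation of an irregular time series (illustrative entity)."""
    meter_id: str
    observed_at: datetime   # timestamps are not equally spaced
    value_kwh: float
    quality_flag: str       # e.g. "measured" or "estimated"


def to_frame(readings: list) -> pd.DataFrame:
    """Index readings by their actual timestamps instead of a fixed grid."""
    frame = pd.DataFrame([vars(r) for r in readings])
    return frame.set_index("observed_at").sort_index()


readings = [
    MeterReading("M-001", datetime(2018, 1, 1, 0, 7), 1.20, "measured"),
    MeterReading("M-001", datetime(2018, 1, 1, 0, 58), 1.45, "measured"),
    MeterReading("M-001", datetime(2018, 1, 1, 3, 12), 1.10, "estimated"),
]

frame = to_frame(readings)
# Resample onto a regular hourly grid only when a report needs one.
print(frame["value_kwh"].resample("1H").mean())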

Explore our Maggiore Tool suite

How we use these tools to improve your business:

Azure
  • Microsoft Azure is a cloud solution service that provides infrastructure and software as a service (IaaS and SaaS)
  • The services include computational, mobile, storage, data management, messaging and many other services
  • We use Microsoft Azure as the platform on which we operate the other tools
Hadoop
  • Framework for distributed processing of large data sets across clusters of computers, scaling from single servers to many machines
  • Designed to detect and handle failures at the application layer, thereby delivering a highly available service
Snowflake
  • SQL data warehouse for the cloud, providing a complete relational database for both structured and semi-structured data (see the query sketch after this list)
  • Requires no administration, provides broad support for ETL and BI tools, and enables developers to build modern data applications
Talend
  • Data integration and management tool for cloud solutions, allowing speedy and cost-effective data integration and maintenance
  • Gartner study: Talend is a leader in the areas of data integration and data quality
Tableau
  • Business intelligence tool which allows easy data preparation and visualization
  • Allows accessing several data sources, easy definition and configuration of reports, and automated processing and reporting
RapidMiner
  • Fast data science platform uniting data preparation, machine learning and predictive model development
  • Gartner study: RapidMiner is a leader in the area of data analytics
  • We particularly like the user-friendly handling of data preparation, which typically takes a lot of time, and the way it makes machine learning and model training look easy
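
To make the structured/semi-structured point concrete, the following is a minimal sketch of storing a JSON payload next to relational columns in Snowflake and querying it with plain SQL via the standard snowflake-connector-python package; the connection parameters, the table meter_events and its columns are illustrative assumptions, not our actual setup.

import snowflake.connector

# Credentials, the table meter_events and its columns are placeholders.
conn = snowflake.connector.connect(
    user="<user>", password="<password>", account="<account>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()

# Structured columns and a semi-structured VARIANT column side by side.
cur.execute("""
    CREATE TABLE IF NOT EXISTS meter_events (
        meter_id    STRING,
        observed_at TIMESTAMP_NTZ,
        payload     VARIANT  -- raw JSON as delivered by the source system
    )
""")

# Query the JSON payload with ordinary SQL path expressions.
cur.execute("""
    SELECT meter_id,
           payload:reading.value::FLOAT    AS value_kwh,
           payload:reading.quality::STRING AS quality_flag
    FROM meter_events
    WHERE observed_at >= DATEADD(day, -1, CURRENT_TIMESTAMP())
""")
for meter_id, value_kwh, quality_flag in cur.fetchall():
    print(meter_id, value_kwh, quality_flag)

cur.close()
conn.close()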
        
People
Our digital role mask provides a generic description of information management and data science related roles and adjusts and maps them onto your organization, complementing existing role concepts and integrating into the existing organization.

Our transformation training schedule provides a methodical tool set and a schedule of modules per topic and role for enhancing existing analytics and other information management skills, getting to know the Maggiore tool suite, and creating awareness of the value of information and of data security.

Meet the roles within our concept

Chief information manager (1/7)
Central coordinative role
Coordination of analytics activities, linking data exploration with productive operation, provision of infrastructure for data analytics, responsibility for the information governance concept and for compliance with it
Chief data steward (2/7)
Central coordinative role
Supports the information owner: coordinates with decentral data stewards and data scientists, ensures the link between prototyping and operational readiness, and interfaces with Business-As-Usual and processing teams
Analytics use case owner (3/7)
Decentral lead role
Responsibility for explorative activities and actual analytics use cases: initiatives for improving methods, creating new information and increasing the value of information
Data scientist (4/7)
Central or decentral expert role
Business and data expert with focus on data exploration: executes analytics studies, verifies hypotheses, proves or disproves the feasibility of models and the value of information
Information owner (5/7)
Decentral lead role
Responsibility for information entities in operations: definition of entities, gate criteria in information life-cycle, data quality standards, access & usage of the information
Data steward (6/7)
Decentral coordinative role
Leads and coordinates operative data management activities in the data life-cycle, such as securing & improving data quality, and interfaces with Business-As-Usual and operational teams
Data operator (7/7)
Central support role
Daily processing and Point-of-Contact (PoC) in case of interruptions: data ingestion and provision, running of productive models
  
Information
Our information life-cycle framework covers the governance and processing of information with a focus on increasing the value of information, and provides a criteria-based, gated process for assessing business cases and business models.

Our value of information concept uses a set of criteria and business case information to provide a semi-quantitative measure that creates awareness of how essential each piece of information is to your business.
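
As a purely illustrative example of such a semi-quantitative measure, the sketch below combines a few assumed criteria into a single score; the actual criteria and weights are part of the concept itself and are not shown here.

# Illustrative only: the criteria names and weights below are assumptions,
# not the concept's actual scoring scheme.
CRITERIA_WEIGHTS = {
    "revenue_impact": 0.4,        # contribution to the business case
    "decision_criticality": 0.4,  # how strongly decisions depend on the information
    "substitutability": 0.2,      # how easily the information could be replaced
}

def value_of_information(scores: dict) -> float:
    """Combine per-criterion scores (1 = low ... 5 = high) into one measure."""
    return sum(CRITERIA_WEIGHTS[name] * score for name, score in scores.items())

print(value_of_information({
    "revenue_impact": 4,
    "decision_criticality": 5,
    "substitutability": 2,
}))  # -> 4.0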

Our extensive data catalogue lists typical information entities used in the energy industry and covers value-of-information related attributes – it can be flexibly adjusted to cover your own data entities as well as additional attributes you find important to govern.
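
To make the idea concrete, below is a minimal Python sketch of what one catalogue entry could look like as a data structure; the attributes shown (owner, value of information, quality dimensions covered) are illustrative assumptions about a typical entry, not the catalogue's actual schema.

from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    """Illustrative shape of one data catalogue entry (assumed attributes)."""
    entity_name: str                 # e.g. "Metered consumption time series"
    description: str
    information_owner: str           # role accountable for the entity
    value_of_information: str        # e.g. "high" / "medium" / "low"
    quality_dimensions: list = field(default_factory=list)
    custom_attributes: dict = field(default_factory=dict)

entry = CatalogueEntry(
    entity_name="Metered consumption time series",
    description="Irregularly sampled consumption readings per metering point",
    information_owner="Information owner, grid operations",
    value_of_information="high",
    quality_dimensions=["completeness", "timeliness"],
    custom_attributes={"retention": "10 years"},
)
print(entry.entity_name, "->", entry.value_of_information)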

Our data quality check-up provides criteria along six quality dimensions to make sure you can rely on your information.
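
The six dimensions are defined in the check-up itself; purely as an illustration, the sketch below scores a table along two commonly used dimensions, completeness and uniqueness, with the remaining dimensions following the same pattern.

import pandas as pd

def quality_report(frame: pd.DataFrame, key_column: str) -> dict:
    """Score two quality dimensions; the remaining dimensions and all
    thresholds would come from the actual check-up criteria."""
    completeness = 1.0 - frame.isna().to_numpy().mean()            # share of non-missing cells
    uniqueness = frame[key_column].nunique() / max(len(frame), 1)  # share of distinct keys
    return {"completeness": round(completeness, 3), "uniqueness": round(uniqueness, 3)}

readings = pd.DataFrame({
    "meter_id": ["M-001", "M-001", "M-002"],
    "value_kwh": [1.2, None, 0.9],
})
print(quality_report(readings, key_column="meter_id"))
# -> {'completeness': 0.833, 'uniqueness': 0.667}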

Our analytics use case catalogue provides use case owners with an overview of the potential of some standard use cases and offers data scientists a training field.

Travel along our information life-cycle

  • 1. Data integration
  • 2. Data exploration
  • 3. Individual prototype
  • 4. Industrialized prototype
  • 5. Industrialized production
Data integration
  • Quality gate defined, e.g. required data completeness, value of information and/or exploration target
  • Data quality assessment along six quality dimensions
  • Responsibility with use case owner and data scientist
  • Value of information unconfirmed but initial hypotheses provided
  • Cost of information kept low - generate quick wins or fail fast and cheap
  • Information entity for integration described
  • Data catalogue draft produced
  • Information entity - one-off data integration, e.g. time series of information
  • Daily operations not involved
Data exploration, Individual prototype, Industrialized prototype, Industrialized production
Get in touch with us for more information on these steps!
  
© 2018 Copyright: The Advisory House AG Impressum Privacy Policy
