Copyright © International Conference on Knowledge, Innovation and Enterprise 2015

knowledge-informed technology and business innovations & creativity™

Berlin 2016
21-24 June, Germany
2nd E. Paul Torrance International
Roundtable on
Creative Thinking  
21 June
How To Leverage New Insights
4th European Symposium on
Big Data, Deep Learning & Advanced Predictive Analytics
23-24 June

Big Data (Hadoop) Data & Systems Modeling

by Dominique Heger, PhD, Chief Executive, DHTechnologies, Texas, USA


The term Big Data refers to high volumes of data; what counts as a high volume for an organization depends largely on the organization and its (data) history. In a nutshell, data management in an organization is focused on delivering data to the appropriate data consumers (people and/or applications) in the most effective and efficient manner possible. The goal of data quality and data governance is trusted data. The objective of data integration is available data, in other words, delivering the data to the consumers in the proper format. Big Data data & systems modeling can aid in all these aspects. Modeling can be described as the process of creating a simpler, flexible, mathematical representation of a system that may or may not yet exist. Modeling serves as a powerful communication tool among the technical and business stakeholders and consumers of any (data) project. One of the major strengths of modeling is that the technique is not limited to describing a system as it currently is; modeling-based cases (sensitivity studies) can be conducted to communicate different aspects of possible workload and configuration scenarios.

In any design that involves the movement of data among systems, it is paramount to specify the lineage of the data flow among the physical data structures, including the mapping and transformation rules necessary to accomplish the project's goals. This level of design requires an understanding of both the physical implementation and the business implications of the data. Data and systems modeling is further used to design data structures at various levels of abstraction, from the conceptual to the physical stage. When differentiating between modeling and design, the focus is normally on distinguishing between the logical design and the design closer to the physical implementation. Hence, data and systems modeling is a necessity for virtually any design. During the Big Data Symposium, actual Hadoop (MapReduce) and Data Analytics models will be presented to further highlight the importance of modeling in any Big Data project; a simplified illustration of the idea follows.
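As a hedged illustration of what such a model can look like (a minimal Python sketch with invented throughput parameters, not one of the models to be presented at the symposium), the snippet below approximates MapReduce job runtime as a function of data volume, cluster size, and per-node rates, then sweeps the cluster size to conduct a small sensitivity study of the kind described above:

    # Minimal analytic model of a MapReduce job: runtime approximated as
    # map time + shuffle time + reduce time. All rates are invented
    # placeholder values, purely for illustration.
    def job_runtime_s(data_gb: float, nodes: int,
                      map_mb_per_s: float = 50.0,
                      shuffle_fraction: float = 0.3,
                      net_mb_per_s: float = 100.0,
                      reduce_mb_per_s: float = 80.0) -> float:
        data_mb = data_gb * 1024
        map_s = data_mb / (nodes * map_mb_per_s)
        shuffle_s = data_mb * shuffle_fraction / (nodes * net_mb_per_s)
        reduce_s = data_mb * shuffle_fraction / (nodes * reduce_mb_per_s)
        return map_s + shuffle_s + reduce_s

    # Sensitivity study: how does runtime respond to cluster size for a
    # fixed 500 GB workload?
    for nodes in (4, 8, 16, 32):
        print(f"{nodes:>2} nodes -> {job_runtime_s(500, nodes):8.1f} s")

Production-grade models add contention, data skew, and failure effects, but even a toy model like this supports the what-if questions that make modeling such an effective communication tool.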





DRAFT PROGRAM - Thursday 23 - Friday 24 June 2016

9.00-9.30
Day 1 (Thursday 23 June): Registration
Day 2 (Friday 24 June): Registration

9.30-11.00
Day 1: FREE Data Nubes Big Data Classes/Pre-Symposium Workshop: Topic TBC - Alain Biem, Vice President (Analytics), Opera Solutions, USA & Dom Heger, CEO, DHT/Data Nubes
Day 2: Keynote Presentation: Alain Biem, Vice President (Analytics), Opera Solutions, USA

11.00-11.30
Day 1: Coffee/Tea Break
Day 2: Coffee/Tea Break

11.30-13.00
Day 1: Keynote Presentation: Ling Shao, Head of the Computer Vision and Artificial Intelligence Group, Department of Computer Science and Digital Technologies, Northumbria University, UK
Day 2: Keynote Presentation: Dominique Heger, CEO, DHTechnologies/Data Nubes, Texas, USA; followed by Break-Out Session

13.00-14.00
Day 1: Networking/Lunch
Day 2: Networking/Lunch

14.00-15.00
Day 1: Breakout session/vendor talk/presentations: TBC
Day 2: Roundtable - Issues Arising

15.00-16.30
Day 1: FREE Data Nubes Big Data Classes/Pre-Symposium Workshop: Topic TBC - Alain Biem, Vice President (Analytics), Opera Solutions, USA & Dom Heger, CEO, DHT/Data Nubes
Day 2: Working with Data in Education & Application of Predictive Analytics in Higher Education Roundtable - David Turner & James Ogunleye

16.30-18.30
Day 1: Keynote Presentation: Nabil El Kadhi, Deputy Vice-Chancellor for Academic Affairs, UoB, Oman
Day 2: Keynote Presentation: James Ogunleye, KIE Conference Chairman & Middlesex University, UK

18.30-20.30
Day 1: Wine Reception/Networking: no-limit wine & beer
Day 2: Roundtable - Issues Arising & Concluding Remarks by Chair

SPEAKERS


Dominique Heger, PhD, is the Founder & CEO of DHTechnologies/Data Nubes, Texas, USA. He has successfully conducted large-scale projects for organizations such as Boeing, AT&T, LLNL, NERSC, Wachovia, Wells Fargo, and CERN. Prior to DHT, Dominique worked for IBM, Hewlett-Packard (at CERN, Geneva), and Unisys. Over the years, he has published over 30 papers and books on performance-related topics with IEEE, CMG, ACM, and the IBM Press.

Presentation Plus LIVE DEMO: The Impact of Deep Learning & Quantum Computing on Big Data


One of the goals of machine learning is that the networks should be teachable. At a small scale, it is rather trivial to demonstrate how to feed a series of input examples and expected outputs into a model and execute a training process that produces more accurate predictions over time. The main problem is how to do the same thing at large scale while operating on complex problems such as speech or image recognition. In 2012, Hinton et al. published a paper outlining different ways of accelerating that learning process. With most machine learning projects, the main challenge lies in identifying the features in the raw input data set. Deep learning aims at removing that manual step by relying on the training process to discover the most useful patterns across the input examples. While the current Big Data ecosystem is considered very powerful by today's standards, actually increasing the computing power of these systems to address ever-growing Big Data project requirements means adding more transistors to each cluster node, a task that is becoming more and more difficult with currently available technology. Ergo, a paradigm shift is necessary to meet future computing demands. Quantum computing refers to the fusion of quantum physics and computer science and represents such a paradigm shift, one where data is represented by qubits. Unlike conventional systems that contain many small transistors that can either be turned on or off to represent 0 or 1, a quantum bit represents any possible superposition of 0 and 1, with complex numbers serving as the coefficients of the superposition. In this presentation, deep learning and quantum computing are introduced and the areas where the two technologies will profoundly impact Big Data are elaborated on.
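As a purely illustrative aside (a minimal Python sketch, not part of the presentation's live demo), the superposition described above can be written as a two-component complex vector, with measurement statistics recovered from the squared magnitudes of its amplitudes:

    import numpy as np

    # A qubit state |psi> = alpha|0> + beta|1>, with complex amplitudes.
    # Example: an equal superposition with a complex phase on |1>.
    alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
    state = np.array([alpha, beta])

    # Amplitudes must be normalised: |alpha|^2 + |beta|^2 = 1.
    assert np.isclose(np.sum(np.abs(state) ** 2), 1.0)

    # Measuring collapses the qubit to 0 or 1 with probabilities given
    # by the squared magnitudes of the amplitudes.
    probabilities = np.abs(state) ** 2
    samples = np.random.choice([0, 1], size=10_000, p=probabilities)
    print("P(0) ~", (samples == 0).mean(), " P(1) ~", (samples == 1).mean())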

Presentation: Challenges in Operationalising Predictive Analytics

The phenomenon of big data has brought home the importance of predictive analytics as a technology and statistical technique critical to taking the sting out of the big data mayhem. Although predictive analytics has been around for some time, its benefits have only recently been appreciated, due largely to the phenomenon of big data. This new-found appreciation is coupled with a desire by many corporate organisations not only to inform strategic business decisions with evidence, but also to predict future trends with a high level of confidence. Yet while many organisations use predictive analytics technology with great success, the outcome for others has been less than successful. Predictive analytics can only achieve so much unless organisations and their analytics teams address the main limiting factors in predictive analytics projects. This presentation suggests a number of measures that analytics teams can take to minimise those limitations. It concludes that although some machine learning algorithms based on artificial intelligence are increasingly being used to minimise aspects of these limitations, the ever-present danger of lurking variables or unknown factors means there is no sustainable alternative to good data quality assurance.
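To make the lurking-variable danger concrete, here is a hedged toy example in Python (the numbers follow the classic kidney-stone pattern and are invented for demonstration, not drawn from the presentation): Simpson's paradox, where a hidden grouping variable reverses the apparent trend in the aggregated data:

    import pandas as pd

    # Hypothetical outcomes for two treatments; stone size is the
    # lurking variable.
    rows = [
        # treatment, size, successes, trials
        ("A", "small",  81,  87),
        ("A", "large", 192, 263),
        ("B", "small", 234, 270),
        ("B", "large",  55,  80),
    ]
    df = pd.DataFrame(rows, columns=["treatment", "size", "success", "n"])

    # Within each stone size, treatment A has the higher success rate...
    print(df.assign(rate=df.success / df.n))

    # ...yet aggregated over the lurking variable, B looks better.
    overall = df.groupby("treatment")[["success", "n"]].sum()
    print(overall.success / overall.n)

No amount of modelling sophistication rescues an analysis from a variable that was never measured, which is precisely why data quality assurance remains indispensable.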


James Ogunleye, PhD, is the chairman of the 2016 KIE Conference and a professor at Middlesex University, United Kingdom. He is also the editor of the International Journal of Developments in Big Data and Analytics, the International Journal of Knowledge, Innovation and Entrepreneurship, Research Papers on Knowledge, Innovation and Enterprise, and Studies in Comparative Education, Science and Technology.

Presentation Plus LIVE DEMO: Management of The Analytic Lifecycle for Big Data


The Analytic Lifecycle involves building, deploying, and maintaining a variety of analytic models, on a variety of computing platforms, for a variety of tasks. Managing the Analytic Lifecycle for Big Data, at rest or in motion, is a challenging endeavor that requires carefully utilizing and leveraging various Big Data platforms and software assets as the data evolve. In this presentation, we describe the management of the Big Data Analytics lifecycle as an essential part of the data lifecycle and as a prerequisite in all viable Big Data solutions. We will use the IBM Big Data Platform, which is a stack of software assets, to illustrate specific solutions to issues related to analytic lifecycle management.
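The IBM Big Data Platform specifics are left to the presentation itself; as a generic, hedged sketch of the lifecycle stages (build, deploy, maintain), the Python snippet below uses scikit-learn and an invented "churn" model name to show versioned persistence, the hinge on which auditability and rollback depend:

    import json
    import time
    from pathlib import Path

    import joblib
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Build: train a model on a snapshot of the data.
    X, y = make_classification(n_samples=1_000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

    # Deploy: persist the model with versioning metadata so it can be
    # audited and rolled back as the data evolve.
    version = time.strftime("%Y%m%d-%H%M%S")
    Path("models").mkdir(exist_ok=True)
    joblib.dump(model, f"models/churn-{version}.joblib")
    meta = {"version": version, "holdout_accuracy": model.score(X_test, y_test)}
    with open(f"models/churn-{version}.json", "w") as f:
        json.dump(meta, f)

    # Maintain: reload and re-score later; retrain when accuracy drifts.
    reloaded = joblib.load(f"models/churn-{version}.joblib")
    print("holdout accuracy:", reloaded.score(X_test, y_test))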


Alain Biem, PhD, is vice president of Analytics and chief scientist of advanced solutions delivery at Opera Solutions. A holder of a number of IBM patents and the author of many publications in machine learning, he was until recently a Senior Research Scientist and Project Lead at IBM Research, New York, USA.


Take-aways


(1) Linux Performance Optimizations for Big Data Environments.
(2) Workload-Dependent Hadoop MapReduce Application Performance Modeling.
(3) Hadoop Ecosystem, MapReduce Framework and the IT Challenges.
(4) Business Analytics and Big Data.
(5) Business Analytics: taking the sting out of the big data mayhem?
(6) Understand the difference between business intelligence and business analytics.
(7) Before everyone goes predictive analytics ballistic.
(8) Small and medium-sized enterprises, big data analytics and the next big thing.


SECURE YOUR PLACE - REGISTER TODAY


Limited opportunities for vendors to promote themselves. Get details

David Turner, PhD, is an emeritus professor at the Faculty of Business and Society, University of South Wales, UK, and immediate past Treasurer of the World Council of Comparative Education Societies.

Presentations: Working with Data in Education & Application of Predictive Analytics in Higher Education



Summary: To follow

BDS Sponsorship Form_August 2015.pdf

4th Symposium on Big Data, Deep Learning & Advanced Predictive Analytics, June 23-24


Presentation: Discriminative Feature Learning and Image/Video Categorisation for Visual Big Data

In this big data era, visual data such as images and videos exist at massive scale. How to efficiently search or classify such visual big data is a challenging and important research area. Previous methods based on handcrafted features such as SIFT and HOG have been successful for many small-scale applications, but are not adaptive and scalable enough for big data analytics. Recently, feature learning, which automatically learns discriminative and adaptive features from data, has attracted a lot of attention. In this talk, I'll summarise our work on feature learning for visual categorisation targeting big data. Both deep learning and evolutionary computation will be introduced and applied to various applications, including image classification, image search and retrieval, and human action recognition and description.
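As a hedged illustration of the feature-learning idea (a minimal PyTorch sketch chosen here for brevity, not necessarily the speaker's toolchain), the network below learns its features directly from raw pixels during training, in contrast to fixed handcrafted descriptors such as SIFT or HOG:

    import torch
    import torch.nn as nn

    # A tiny convolutional network: the conv layers learn discriminative
    # features from raw pixels instead of using handcrafted descriptors.
    class FeatureLearner(nn.Module):
        def __init__(self, n_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),     # global pooling -> 32-d feature
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            f = self.features(x).flatten(1)  # learned feature vector
            return self.classifier(f)

    # One training step on a dummy batch of 3x32x32 images: the gradient
    # of the classification loss is what shapes the learned features.
    model = FeatureLearner()
    images = torch.randn(8, 3, 32, 32)
    labels = torch.randint(0, 10, (8,))
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    print("loss:", loss.item())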

Ling Shao, PhD, is a professor and Chair in Computer Vision and Head of the Computer Vision and Artificial Intelligence Group in the Department of Computer Science and Digital Technologies at Northumbria University, UK, and an Advanced Visiting Fellow with the Department of Electronic and Electrical Engineering at the University of Sheffield, UK. Ling is an Associate Editor of IEEE Transactions on Image Processing, IEEE Transactions on Cybernetics, and Information Sciences.


Nabil El Kadhi, PhD, is a professor and Deputy Vice-Chancellor for Academic Affairs at the UoB, Oman. He has conducted research on, and contributed to, several industrial projects on Artificial Intelligence, Automatic Translation, and Secure Payment, among others.

Presentation: Information Systems: A Shift from Structured Data to Smart Cities, with Increasing Artificial Intelligence Capabilities


With the computerization, automation, and 'cloudization' of processes, data collection, and data management, Information Systems are playing a bigger role in today's corporate life and success. This presentation illustrates the various shifts in that regard: from simple data, to structured data, to 3D data, and finally to big data and smart processing. Decision-making is nowadays shifting towards reactive, dynamic approaches, leading to more efficient and competitive situations. Managing big data, with vast numbers of sensors sending information and data across a road, a city, or a country, creates new challenges and new ambiguities. The presentation will tackle such challenges (privacy, secrecy, coherence, legal aspects) and will focus on the role of artificial intelligence techniques in data mining, information processing, and decision-making support. Smart cities and the non-structured data magma raise a question: new challenges, or a move backward? Information Systems focused for years on moving toward structured data, and now, with smart cities, cloud content, and natural language-based information extraction, we are back to non-structured data.