
4 Hours

Brief Description

Currently, Personalized Precision Medicine and Personalized Healthcare require the simultaneous fulfilment of four conditions: being predictive, preventive, personalized and participatory. Personalized Precision Medicine (PPM) is an emerging approach to disease treatment and prevention that takes into account individual variability in genes, environment and lifestyle for each person. PPM gives clinicians tools to better understand the complex mechanisms underlying a patient's health, disease or condition, and to better predict which treatment will be most effective. PPM offers several benefits: a) it helps ensure that people get the correct treatment every time; b) it enables biomarker-guided therapy; c) it emphasizes prevention rather than reaction; d) it improves patient outcomes; e) it may or may not result in drug discovery; and f) it offers more cost-effective and more efficient healthcare. At the same time, there are ongoing challenges ranging from regulation, reimbursement and clinical adoption to the economic value of the data. PPM makes it possible to link large-scale genomics, other omics and biomedical imaging data with large-scale electronic patient health records (e-records) and big data. This integrative approach is being applied in targeted therapies for respiratory, cardiovascular/metabolic and neurological diseases. Personalized Healthcare (PH) has historically been used in the pharmaceutical industry, with a focus on treatment and on the association between molecular markers, companion or complementary diagnostics, and targeted therapies. Big data analytics is often presented as a cure-all for the healthcare industry, closing the gap between what is expected of clinicians and how well they can actually perform.
Despite the proliferation of digital documentation, predictive analytics and risk-scoring technologies, healthcare organizations are still struggling with the basics of health information exchange and EHR interoperability, where new protocols such as FHIR and day-to-day information sharing are still in their infancy. Healthcare is the largest sector in the largest economy in the world, and we are only beginning to integrate the digital records of what has happened to our patients into an end-to-end view across the service line. The demand for Data Science (DS) in industry, academia and government is growing rapidly, and, as noted above, it is transforming PPM and PH. DS is a multidisciplinary blend of data inference, algorithm development and technology aimed at solving analytically complex problems. It is also a set of fundamental principles that guide the extraction of knowledge from data: DS involves principles, processes and techniques for understanding phenomena via the (automated) analysis of data, with the goal of improving decision making. Data-driven decision making (DDD) refers to the practice of basing decisions on the analysis of data and is at present the focal point for PPM and PH. The adoption of big data analytics, PPM and digital health innovations will change healthcare delivery and increase the quality of care.
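To make the interoperability point concrete, the sketch below shows what a minimal FHIR Patient resource looks like; HL7 FHIR exchanges such resources as JSON over a REST API. The field names follow the FHIR specification, but the identifier and values here are invented for illustration.

```python
import json

# A minimal FHIR Patient resource, as exchanged between EHR systems.
# Field names follow the HL7 FHIR specification; values are invented.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example-001",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "gender": "female",
  "birthDate": "1980-05-17"
}
"""

patient = json.loads(patient_json)

# Any FHIR-aware consumer can rely on these standard fields,
# regardless of which vendor's EHR produced the record.
assert patient["resourceType"] == "Patient"
full_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
print(full_name)  # Jane Doe
```

Because every conforming system emits the same structure, records from different vendors can be combined without per-vendor parsing logic, which is precisely the interoperability gap described above.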


In this tutorial we will: A) analyze the development of PPM and PH around the world; B) analyze the benefits of PPM, with specific examples for each of them; C) analyze the challenges in PPM, from regulation to the economic value of the data; D) analyze PPM strategies for products and services across diagnostics, life science companies and medical institutions; E) present an example of an integrative PPM approach linking omics, imaging and other data with e-records and big data in targeted therapies; F) analyze data-driven treatment and predictive medicine incorporating DDD; G) analyze the current situation of health information exchange and EHR interoperability standards such as FHIR; H) analyze clinical guidelines and standardized processes considering patient engagement, big data and remote care, including telehealth and virtual care; I) analyze the evolution of DS and its influence on PPM and PH; J) define data mining together with DS to improve business decisions, specifically in PPM and PH; K) analyze the roles of machine learning and data munging in PPM; and L) analyze how the adoption of big data analytics, PPM and digital health innovations will change healthcare delivery and increase the quality of care.

This tutorial is directed to: Advisors, Directors, Managers, Researchers and Scientists from Academia, Pharma, Biotech, Government and Healthcare Organizations working in AI, Data Science, Bioinformatics, Personalized Medicine, Precision Medicine Technology, Diagnostics Research, Computational & Systems Biology, R&D and Business Development.


Authors' Short Vitae

Dr. María Flavia Guiñazu is a primary and clinical researcher, clinician, senior project manager and medical advisor in experimental and translational medicine in neurology. She has worked in hospitals, universities and companies through positions and projects funded by the European Community, the Wellcome Trust, the Medical Research Council, King's College and University College London (NIMR), all in the UK, as well as INSERM in Paris and FONDECYT in Chile, among others. Her work aims to develop new experimental platforms, web platforms, diagnostic tools, protocols and treatments using a Personalized Precision Medicine approach to disease treatment and prevention. She is highly experienced in using large data sets, programs and data mining for scientific, health technology and medical purposes. Since 2011 she has also focused on integral health management in the public and private sectors. She is a member of the King's Fund and Chatham House. She is currently a visiting researcher at the Web Intelligence Centre in Chile (www.wic.cl).

Dr. Xiaohui Tao is a Senior Lecturer in the Faculty of Health, Engineering and Sciences at the University of Southern Queensland (USQ), Australia (website: http://tao-xiaohui.info; email: xiaohui.tao@usq.edu.au). Before joining USQ, he was a Research Associate with the e-Discovery Lab, Faculty of Science and Technology, at Queensland University of Technology (QUT), Australia, where he also completed his PhD. His research interests include ontology learning and mining, web intelligence, data mining, machine learning, information retrieval and IT in healthcare. His research results have been published in high-ranking venues such as IEEE TKDE, WWWJ, ICDE, PAKDD, CIKM and WI-IAT. Xiaohui has served as an organizer, PC co-chair or PC member in many events for the WWW, Web Intelligence, data mining and healthcare communities.

Dr. Juan D. Velásquez is a Full Professor in the Industrial Engineering Department at the University of Chile. During his academic life, he has advised more than 140 reports and theses (master's and doctoral), has written more than 120 scientific publications and book chapters, and is the author and main editor of four books. He has been a visiting professor at the Center for Collaborative Research, University of Tokyo, Japan, and at the VSB Technical University of Ostrava, Czech Republic, besides being a guest lecturer in more than 10 countries. In 2009 he was General Chair of the International Knowledge Engineering System (KES) Conference, and in 2018 General Chair of the International Conference on Web Intelligence held in Santiago de Chile. His research lines are health engineering, data science, web mining and knowledge management.

Half day tutorial – 4 hours

Brief Description

Several scholarly studies have analyzed how the textual content of news, analyst reports, consumer-generated content on social media, etc. impacts financial data such as stock indices or revenue. With enormous amounts of such content being generated continuously on the web, developing predictive models that can effectively analyze this data to predict the behavior of financial data in real time is a challenging task. Joint modeling of textual and financial data for predicting financial outcomes is an interesting research area that is growing rapidly due to its wide applicability and increasing user interest.



In this tutorial we shall present a comprehensive overview of the area, covering the following:

  1. Problem formulation – This section will cover the different ways in which the problem has been formulated, along with the different types of unstructured data sources considered and their significance. The sources are diverse in nature, consisting of news, reports, consumer content generated on social media, surveys, feedback, etc.

  2. Information extraction – This section will cover the wide array of methods and techniques, based on traditional machine learning and recent deep learning (LSTM, Bi-LSTM, Tree-LSTM, etc.), that are applied to analyze unstructured text content and extract information components such as entities, relations, topics, intents and events, as well as derived concepts such as sentiments and behavioral information, from the source data.

  3. Joint modeling – This section will cover the predictive modeling techniques applied to predict financial outcomes by taking the information derived from the unstructured content and fusing it with past financial data for better predictions.

  4. An Architectural Framework – In this section we will discuss the platform components needed to build a real-time, event-based predictive system. It will cover the entire information acquisition, processing, storage and retrieval pipeline, using open-source Natural Language Processing (NLP) libraries (spaCy, NLTK) and machine-learning-based predictive models implemented over a real-time distributed architecture such as Spark.
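The extraction-plus-fusion idea in items 2–4 can be sketched end to end in a few lines. The sketch below is a deliberately simplified stand-in: a hand-made sentiment lexicon replaces the ML/DL extraction models, a moving average replaces the financial model, and all names, weights and data are invented for illustration.

```python
# Toy sketch of the text-plus-financials pipeline:
# (1) extract a sentiment signal from news headlines with a tiny
#     hand-made lexicon (standing in for the extraction step), and
# (2) fuse it with recent prices in a naive next-price predictor.

POSITIVE = {"beats", "surges", "record", "growth"}
NEGATIVE = {"misses", "falls", "lawsuit", "recall"}

def headline_sentiment(headline: str) -> int:
    """Count positive minus negative lexicon hits in a headline."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def predict_next_price(prices: list, headlines: list, impact: float = 0.5) -> float:
    """Fuse a 3-day moving average of prices with an aggregate news signal."""
    moving_avg = sum(prices[-3:]) / len(prices[-3:])
    news_signal = sum(headline_sentiment(h) for h in headlines)
    return moving_avg + impact * news_signal

prices = [101.0, 102.0, 103.0]
headlines = [
    "ACME beats estimates, revenue surges",   # sentiment +2
    "Rival files lawsuit against ACME",       # sentiment -1
]
print(predict_next_price(prices, headlines))  # 102.0 + 0.5 * 1 = 102.5
```

In the real pipeline of item 4, the lexicon step would be replaced by spaCy/NLTK-based extraction of entities, events and sentiments, and the predictor would run over streaming data on a distributed engine such as Spark; the fusion pattern, however, is the same.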


Lecturers

Dr. Lipika Dey, Principal Scientist
Ishan Verma, Researcher

TCS Innovation Labs,
Tata Consultancy Services, Delhi, India
lipika.dey@tcs.com and lipikadey@gmail.com



Short biography

Dr. Lipika Dey is a Senior Consultant and Principal Scientist at Tata Consultancy Services, India. She heads the Research and Innovation program for the TCS Analytics and Insights business. Her focus is on the seamless integration of social intelligence and business intelligence using multi-structured data analytics. Her research interests are in the areas of Natural Language Processing, text and data mining, machine learning and semantic search. She serves on the program committees of several data mining and NLP conferences, such as KDD and WWW. Together with her colleague Hardik Meisheri, she delivered a tutorial at WI 2017 that was very well received; the tutorial material was later published as an invited paper in the IEEE Intelligent Informatics Bulletin.



Lipika did her Integrated M.Sc. in Mathematics, her M.Tech in Computer Science and Data Processing, and her Ph.D. in Computer Science and Engineering, all at IIT Kharagpur, India. Prior to joining TCS in 2007, she was a faculty member in the Department of Mathematics at IIT Delhi, India, from 1995 to 2007.



Lipika has published consistently in Tier 1 conferences and has been invited to speak at several business conferences, such as the Sentiment Analysis Symposium (San Francisco and New York), the Text Analytics Summit (Boston) and Language Technology Accelerate (Brussels).

Lipika and Ishan jointly hold several patents on enterprise content analysis.



Short biography

Ishan Verma


Ishan Verma is a Researcher at Tata Consultancy Services, India, with over seven years of experience in industrial R&D. He is part of the Research and Innovation program for the TCS Analytics and Insights business unit at TCS Innovation Labs, Delhi. His research focus is on enterprise-context-driven multi-structured data analytics. His research interests are in the areas of text analytics, information extraction and retrieval, web intelligence, semantic search, machine learning and the integrated analytics of structured and unstructured data. Ishan is currently pursuing an M.Tech in Software Systems with a specialization in Data Analytics at BITS Pilani. He did his B.Tech in Computer Science and Engineering at IET Lucknow.