Search results for: logical schema
24 China Pakistan Economic Corridor: An Unfolding Fiasco in World Economy
Authors: Debarpita Pande
Abstract:
On 22nd May 2013, Chinese Premier Li Keqiang, on his visit to Pakistan, tabled a proposal for connecting Kashgar in China’s Xinjiang Uygur Autonomous Region with the south-western Pakistani seaport of Gwadar via the China Pakistan Economic Corridor (hereinafter referred to as CPEC). The project, popularly associated with the 'One Belt One Road' initiative, will encompass a connectivity component including a 3000-kilometre road, railways and an oil pipeline from Kashgar to Gwadar port, along with an international airport and a deep sea port. Superficially, this may look like a 'game changer' for Pakistan and other countries of South Asia, but this article, using the doctrinal method of research, unearths some serious flaws in it, which may change the entire economic system of the region, heavily affecting the socio-economic conditions of South Asia, further complicating the geopolitical situation of the region and disturbing world economic stability. The paper begins with a logical analysis of the socio-economic issues arising out of this project, with an emphasis on its impact on the Pakistani and Indian economies due to Chinese dominance, serious tension in international relations, security issues, an arms race, and political and provincial concerns. The paper further studies the impact of the huge burden of loans extended by China towards this project, given that Pakistan already suffers from persistent debts in the face of declining foreign currency reserves; the sovereignty of Pakistan will also be at stake, as the entire economy of the country will be held hostage by China. The author compares this situation with the fallout from projects in Sri Lanka, Tajikistan, and several countries of Africa, all of which are now facing huge debt risks brought by Chinese investments. The entire economic balance will be muddled by the increase in Pakistan’s demand for raw materials, resulting in the import of the same from China, which will lead to exorbitant price hikes and limited availability. CPEC will also create Chinese dominance over the international movement of goods between the Atlantic and the Pacific oceans, jeopardising the economic balance of South Asia along with Middle Eastern trade hubs such as Dubai. Moreover, the paper analyses the impact of CPEC in the context of international unrest and the arms race between Pakistan and India, as well as India and China, due to border disputes and Chinese surveillance. The paper also examines the global change in economic dynamics in international trade that CPEC will create in the light of the U.S.-China relationship. The article thus reflects the grave consequences of CPEC for the international economy, security and bilateral relations, which outweigh its positive impacts. The author lastly suggests more transparency and proper diplomatic planning in the execution of this mega project, which could otherwise become a cause of economic complexity in international trade in the near future.
Keywords: China, CPEC, international trade, Pakistan
Procedia PDF Downloads 174
23 Reliability and Availability Analysis of Satellite Data Reception System Using Reliability Modeling
Authors: Ch. Sridevi, S. P. Shailender Kumar, B. Gurudayal, A. Chalapathi Rao, K. Koteswara Rao, P. Srinivasulu
Abstract:
Evaluation of system reliability and availability plays a crucial role in ensuring the seamless operation of a complex satellite data reception system with consistent performance over long periods. This paper presents a novel approach to this, using a case study on one of the antenna systems at a satellite data reception ground station in India. The methodology involves analyzing the system's components, their failure rates and the system's architecture, generating a logical reliability block diagram model, and estimating the reliability of the system from component-level mean time between failures under an exponential distribution to derive a baseline estimate of the system's reliability. The model is then validated against system-level field failure data collected from the operational satellite data reception systems, which includes the failures that occurred, failure times, failure criticality and repair times, using statistical techniques such as median rank, regression and Weibull analysis to extract meaningful insights into failure patterns and the practical reliability of the system and to assess the accuracy of the developed reliability model. The study focused mainly on identifying critical units within the system, which are prone to failures and have a significant impact on overall performance, and produced a reliability model of the identified critical unit. This model takes into account the interdependencies among system components and their impact on overall system reliability, provides valuable insights into the performance of the system and its improvement or degradation over time, and will be a vital input for arriving at optimized designs in future development. It also provides a plug-and-play framework for understanding the effect on system performance of any upgrades or new unit designs. It helps in effective planning and in formulating contingency plans to address potential system failures, ensuring continuity of operations. Furthermore, to instill confidence in system users, the duration for which the system can operate continuously with the desired level of 3-sigma reliability was estimated, which turned out to be a vital input to the maintenance plan. System availability and station availability were also assessed by considering clash and non-clash scenarios to determine the overall system performance and potential bottlenecks. Overall, this paper establishes a comprehensive methodology for reliability and availability analysis of complex satellite data reception systems. The results derived from this approach facilitate effective planning of contingency measures, provide users with confidence in system performance, and enable decision-makers to make informed choices about system maintenance, upgrades and replacements. The approach also aids in identifying critical units and assessing system availability in various scenarios, helping to minimize downtime and optimize resource allocation.
Keywords: exponential distribution, reliability modeling, reliability block diagram, satellite data reception system, system availability, Weibull analysis
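As an illustration of the baseline estimate described in this abstract, the sketch below computes series-system reliability from component-level MTBFs under the exponential assumption, and solves for the longest continuous run that keeps reliability above a 3-sigma target. This is a minimal sketch, not the paper's implementation: the component names and MTBF values are invented, the series reliability block diagram is an assumption, and the 3-sigma level is taken as ≈0.9973.

```python
import numpy as np

# Hypothetical component MTBFs in hours (illustrative values, not from the paper).
mtbf_hours = {"antenna_drive": 50_000, "lna": 80_000,
              "receiver": 60_000, "servo_controller": 40_000}

def component_reliability(mtbf, t):
    """Reliability of a component at time t under the exponential model: R(t) = exp(-t / MTBF)."""
    return np.exp(-t / mtbf)

def series_system_reliability(mtbfs, t):
    """For a series RBD (all components required), system reliability is the product of component reliabilities."""
    return np.prod([component_reliability(m, t) for m in mtbfs])

t = 8760  # one year of continuous operation
r_sys = series_system_reliability(mtbf_hours.values(), t)
print(f"Series-system reliability over {t} h: {r_sys:.3f}")

# Longest continuous run keeping reliability above a target:
# solve exp(-t * sum(1/MTBF_i)) >= R_target for t.
r_target = 0.9973  # "3-sigma" level, as assumed for this sketch
lambda_sys = sum(1 / m for m in mtbf_hours.values())
t_max = -np.log(r_target) / lambda_sys
print(f"Max continuous operation at R >= {r_target}: {t_max:.0f} h")
```

Under the exponential model, the series-system failure rate is simply the sum of the component failure rates, which is what makes the closed-form solution for the operating duration possible.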
Procedia PDF Downloads 84
22 Malaysian ESL Writing Process: A Comparison with England’s
Authors: Henry Nicholas Lee, George Thomas, Juliana Johari, Carmilla Freddie, Caroline Val Madin
Abstract:
Research in comparative and international education often provides value-laden views of an education system within and between countries. These views are frequently used by policy makers or educators to explore similarities and differences for, among other things, benchmarking purposes. In this study, a comparison is made between Malaysia and England, focusing on the process of writing that children go through to create a text, using a multimodal theoretical framework to analyse the comparison. The main purpose is political in nature, as it serves as an answer to Malaysia's call for benchmarking of best practices for language learning. Furthermore, the focus on writing in this study adds to the empirical findings on early writers' writing development and improvement, especially for children aged 5-9. In research, comparative studies of English as a Second Language (ESL) writing pedagogy, particularly in Malaysia since the introduction of the Standard-Based English Language Curriculum (KSSR) as a draft in 2011, its full implementation in 2017 and the 2018 review aligning KSSR with the CEFR, have not been carried out. In theory, a multimodal theoretical framework allows a logical comparison between first-language and ESL contexts, which provides useful insights to illuminate the writing process in Malaysia and England. The comparisons are not representative, because of the different school systems in the two countries. So far, the literature informs us that curricula for language learning place great emphasis on children's linguistic abilities, which include their proficiency and mastery of the language, its conventions, and technicalities. However, recent empirical findings suggest that literacy, in its concepts and character, needs to change. In view of this suggestion, the comparison looks at how the process of writing is implemented through the five modes of communication: linguistic, visual, aural, spatial, and gestural. This project draws on data from Malaysia and England, involving 10 teachers, 26 classroom observations, 20 lesson plans, 20 interviews, and 20 brief conversations with teachers. The research focused upon 20 primary children of different genders aged 5-9; in addition to primary data descriptions, 40 pieces of children's work, 40 brief classroom conversations, 30 classroom photographs, and 30 school compound photographs were collected to investigate teachers' and children's use of modes and semiotic resources to design a text. The data were analysed by means of within-case analysis, cross-case analysis, and constant comparative analysis, with an initial stage of data categorisation followed by general and specific coding, which clustered the data into thematic groups. The study highlights the importance of teachers' and children's engagement and interaction with various modes of communication, an adaptation of the English approaches to teaching writing within the KSSR framework, and the provision of a 'voice' to ESL writers, ensuring that both teachers and children have access to the knowledge and skills required to make decisions in developing multimodal texts and artefacts.
Keywords: comparative education, early writers, KSSR, multimodal theoretical framework, writing development
Procedia PDF Downloads 68
21 Determination of Energy and Nutrients Composition of Potential Ready-to-Use Therapeutic Food Formulated from Locally Available Resources
Authors: Amina Sa'id Muhammad, Asmau Ishaq Alhassan, Beba Raymond, Fatima Bello
Abstract:
Severe acute malnutrition (SAM) remains a major killer of children under five years of age. Nigeria has the second highest burden of stunted children in the world, with a national prevalence rate of 32 percent among children under five. An estimated 2 million children in Nigeria suffer from SAM, and 3.9% of children in northwest Nigeria suffer from SAM, significantly higher than the national average of 2.1%. Community-Based Management of Acute Malnutrition (CMAM) has proven to be an effective intervention in the treatment of SAM in children using Ready-to-Use Therapeutic Food (RUTF). RUTF is a key component of the treatment of SAM: it contains all the energy and nutrients required for rapid catch-up growth and is used particularly in the treatment of children over 6 months of age with SAM without medical complications. However, almost all RUTF is currently imported to Nigeria from other countries. Shortages of RUTF due to logistics (shipping costs, delays, donor fatigue, etc.) and funding issues present a threat to the achievement of the 2030 World Health Assembly (WHA) targets for reducing malnutrition, in addition to the 2030 SDGs 2 (Zero Hunger), 3 (Good Health and Wellbeing), 12 (Responsible Consumption and Production), and 17 (Partnerships for the Goals), thus undermining its effectiveness in combating malnutrition. On the other hand, the availability of human and material resources that would aid local production of RUTF presents an opportunity to fill the gap in the regular RUTF supply. About one thousand Nigerian children die of malnutrition-related causes every day, a total of 361,000 each year. Owing to the high burden of malnutrition in Nigeria, the local production of RUTF is a logical step that would ensure increased availability, acceptability, access, and efficiency of supply, at lower costs. Objectives: The objectives of this study were, therefore, to formulate RUTF from locally available resources and to determine its energy and nutrient composition in comparison with standard/commercial RUTF. Methods: Three samples of RUTF were formulated using locally available resources (soya beans, wheat, rice, baobab, brown sugar, date palm and soya oil), which were subjected to various analyses to determine their energy/proximate composition and vitamin and mineral contents; organoleptic properties were determined using sensory evaluation. Results: The energy values of the three locally produced RUTF samples were found to conform with the WHO recommendation of ≥ 500 kcal per 100 g, at 563.08, 503.67 and 528.98 kcal, respectively. Samples A, B and C had protein contents of 13.56%, 16.71% and 14.62%, respectively, which were higher than that of commercial RUTF (10.9%). Conclusions/recommendations: The locally formulated RUTF samples had energy values of more than 500 kcal per 100 g, with appreciable amounts of macro- and micronutrients. The appearance, taste, flavor and general acceptability of the formulated RUTF samples were also commendable.
Keywords: energy, malnutrition, nutrients, RUTF
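The reported figures can be checked directly against the WHO benchmark cited in the abstract. The snippet below is a minimal sketch; the only assumption beyond the abstract is that the three energy values map to samples A, B and C in the order given.

```python
# Check the abstract's reported values against the WHO benchmark (>= 500 kcal/100 g)
# and the commercial-RUTF protein reference (10.9%). Sample-to-value mapping is assumed.
WHO_MIN_KCAL_PER_100G = 500.0
COMMERCIAL_RUTF_PROTEIN_PCT = 10.9

samples = {
    "A": {"energy_kcal_per_100g": 563.08, "protein_pct": 13.56},
    "B": {"energy_kcal_per_100g": 503.67, "protein_pct": 16.71},
    "C": {"energy_kcal_per_100g": 528.98, "protein_pct": 14.62},
}

for name, s in samples.items():
    meets_who = s["energy_kcal_per_100g"] >= WHO_MIN_KCAL_PER_100G
    beats_commercial = s["protein_pct"] > COMMERCIAL_RUTF_PROTEIN_PCT
    print(f"Sample {name}: {s['energy_kcal_per_100g']:.2f} kcal/100 g "
          f"(WHO benchmark met: {meets_who}), protein {s['protein_pct']:.2f}% "
          f"(above commercial RUTF: {beats_commercial})")
```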
Procedia PDF Downloads 41
20 Nurturing Scientific Minds: Enhancing Scientific Thinking in Children (Ages 5-9) through Experiential Learning in Kids Science Labs (STEM)
Authors: Aliya K. Salahova
Abstract:
Scientific thinking, characterized by purposeful knowledge-seeking and the harmonization of theory and facts, holds a crucial role in preparing young minds for an increasingly complex and technologically advanced world. This abstract presents a research study aimed at fostering scientific thinking in early childhood, focusing on children aged 5 to 9 years, through experiential learning in Kids Science Labs (STEM). The study utilized a longitudinal design, spanning 240 weeks from September 2018 to April 2023, to evaluate the effectiveness of the Kids Science Labs program in developing scientific thinking skills. Participants comprised 72 children drawn from local schools and community organizations. Through a formative psychological-pedagogical experiment, the experimental group engaged in weekly STEM activities carefully designed to stimulate scientific thinking, while the control group participated in daily art classes for comparison. To assess the scientific thinking abilities of the participants, a registration table with evaluation criteria was developed. This table included indicators such as depth of questioning, resource utilization in research, logical reasoning in hypotheses, procedural accuracy in experiments, and reflection on research processes. The data analysis revealed dynamic fluctuations in the number of children at different levels of scientific thinking proficiency. While development was not uniform across all participants, a leading factor emerged: the Kids Science Labs program and the formative experiment exerted a positive impact on scientific thinking skills in children of this age range. The study's findings support the hypothesis that systematic implementation of STEM activities effectively promotes and nurtures scientific thinking in children aged 5-9 years. Enriching education with a specially planned STEM program, tailoring scientific activities to children's psychological development, and implementing well-planned diagnostic and corrective measures emerged as essential pedagogical conditions for enhancing scientific thinking abilities in this age group. The results highlight the significant and positive impact of the systematic-activity approach in developing scientific thinking, leading to notable progress and growth in children's scientific thinking abilities over time. These findings have promising implications for educators and researchers, emphasizing the importance of incorporating STEM activities into educational curricula to foster scientific thinking from an early age. This study contributes valuable insights to the field of science education and underscores the potential of STEM-based interventions in shaping the future scientific minds of young children.
Keywords: scientific thinking, education, STEM, intervention, psychology, pedagogy, collaborative learning, longitudinal study
Procedia PDF Downloads 61
19 Developing Writing Skills of Learners with Persistent Literacy Difficulties through the Explicit Teaching of Grammar in Context: Action Research in a Welsh Secondary School
Authors: Jean Ware, Susan W. Jones
Abstract:
Background: The benefits of grammar instruction in the teaching of writing are contested in most English-speaking countries. A majority of Anglophone countries abandoned the teaching of grammar in the 1950s, based on conclusions that it had no positive impact on learners' development of reading, writing, and language. Although the decontextualised teaching of grammar is not helpful in improving writing, a curriculum that embeds grammar in a meaningful way can help learners develop their understanding of the mechanisms of language. Although British learners are generally not taught grammar rules explicitly, learners in schools in France, the Netherlands, and Germany are taught explicitly about the structure of their own language. Exposing learners to grammatical analysis can help them develop their understanding of language. Indeed, if learners are taught that each part of speech has an identified role in the sentence, then rather than having to memorise lists of words or spelling patterns, they can focus on determining each word or phrase's task in the sentence. These processes of categorisation and deduction are higher-order thinking skills. Considering the definitions of dyslexia available in Great Britain, the explicit teaching of grammar in context could help learners with persistent literacy difficulties. Indeed, learners with dyslexia often develop strengths in problem solving; the teaching of grammar could, therefore, help them develop their understanding of language by using analytical and logical thinking. Aims: This study aims at gaining a further understanding of how the explicit teaching of grammar in context can benefit learners with persistent literacy difficulties. The project is designed to identify ways of adapting existing grammar-focused teaching materials so that learners with specific learning difficulties such as dyslexia can use them to further develop their writing skills. It intends to improve educational practice through action, analysis and reflection. Research Design/Methods: The project, therefore, uses an action research design and multiple sources of evidence. The data collection tools used were standardised test data, teacher assessment data, semi-structured interviews, learners' before and after attempts at a writing task at the beginning and end of the cycle, documentary data and lesson observation carried out by a specialist teacher. Existing teaching materials were adapted for use with five Year 9 learners who had experienced persistent literacy difficulties from primary school onwards. The initial adaptations included reducing the amount of content taught in each lesson and pre-teaching some of the metalanguage needed. Findings: Learners' before and after attempts at the writing task were scored by a colleague who did not know the order of the attempts. All five learners' scores were higher on the second writing task. Learners reported that they had enjoyed the teaching approach. They also made suggestions to be included in the second cycle, as did the colleague who carried out the observations. Conclusions: Although this is a very small exploratory study, these results suggest that adapting grammar-focused teaching materials shows promise for helping learners with persistent literacy difficulties develop their writing skills.
Keywords: explicit teaching of grammar in context, literacy acquisition, persistent literacy difficulties, writing skills
Procedia PDF Downloads 156
18 Reducing Stunting, Low Birth Weight and Underweight in Anuradhapura District in Sri Lanka, by Identifying and Addressing the Underlying Determinants of Under-Nutrition and Strengthening Families and Communities to Address Them
Authors: Saman Kumara, Duminda Guruge, Krishani Jayasinghe
Abstract:
Introduction: Nutrition strongly influences good health and development in early life. This study, based on a health promotion approach, used a community-based intervention to improve child nutrition. The approach gives the community control of the interventions, thereby building its capacity and empowering individuals and communities. The aim of this research was to reduce stunting, low birth weight and underweight in communities in Anuradhapura District, Sri Lanka, by identifying and addressing the underlying determinants of under-nutrition and strengthening families and communities to address them. Methods: A health promotion intervention was designed and implemented based on a logical framework developed in collaboration with members of the targeted community. Community members implement the actions themselves, so they fully own the process. They identify and address the most crucial determinants of health, including child health and development, monitor the initial results of their actions, and modify the actions to optimize outcomes as well as future goals. Group discussions, group activities, awareness programs, cluster meetings, community tools and the sharing of success stories were the major activities used to address determinants. Continuous data collection was planned at different levels. Priority was given to strengthening the ability of families, groups and communities to collect meaningful data and analyze it themselves. Results: Enthusiasm and interest of the mother, happiness of the child/family, dietary habits, money management, tobacco and alcohol use by fathers, media influences, illnesses in the child or others, hygiene and sanitary practices, community sensitivity and domestic violence were the major perceived determinants elicited by the study. There were around 1000 well-functioning mothers' groups in the district. A 'happiness calendar', 'brain calendar', 'money tool' and 'stimulation books' were created by community members to address determinants and measure the process. Evaluation of the process has shown positive early results, such as improvement of feeding habits among mothers, innovative ways of providing early stimulation and responsive care, and greater involvement of fathers in childcare and responsive feeding. There is a positive movement of communities around child well-being through interactive play areas. Family functioning and community functioning improved. Use of alcohol and tobacco declined. Community money management improved. Underweight was reduced by 40%. Stunting and low birth weight among under-fives also declined within one year. Conclusion: The health promotion intervention was effective in changing the determinants of under-nutrition in early childhood. Addressing the underlying determinants of under-nutrition in early childhood can be recommended for similar contexts.
Keywords: birth-weight, community, determinants, stunting, underweight
Procedia PDF Downloads 146
17 Formation of Science Literations Based on Indigenous Science Mbaru Niang Manggarai
Authors: Yuliana Wahyu, Ambros Leonangung Edu
Abstract:
The learning praxis proposed by the 2013 Curriculum (K-13) is no longer school-oriented and supply-driven, but demand-driven. This vision is connected with the Jokowi-Kalla Nawacita program to create a competitive nation in the global era. Competition is a social fact that must be faced; the curriculum therefore designs a process for producing innovators and entrepreneurs. To achieve this goal, K-13 implements character education, which aims at creating innovators and entrepreneurs from an early age (primary school). One part of strengthening it is literacy formation (reading, numeracy, science, ICT, finance, and culture); science literacy is thus an integral part of character education. These outputs are formed only through innovative processes in intra-curricular (blended learning), co-curricular (hands-on learning) and extra-curricular (personalized learning) activities. Unlike earlier curricula, in which children crammed theories and the intellectual process was dominated by them, the new approach makes natural, social, and cultural phenomena the sources of learning. For example, science in primary schools takes biology as its platform and treats natural, social, and cultural phenomena as a learning field, so that students can learn, discover and solve concrete problems, and see the prospects for development and application in their everyday lives. Science education involves not only collections of facts about natural phenomena but also methods and scientific attitudes. In turn, science will form science literacy. Science literacy comprises critical, creative, logical, and initiative-taking competences in responding to issues of culture, science and technology. This is linked with the nature of science, which includes both hands-on and minds-on aspects. To sustain the effectiveness of science learning, K-13 opens a new way of viewing contextual learning, a model in which facts or natural phenomena are drawn closer to the child's learning environment to be studied and analyzed scientifically. Thus, the topics of primary science are the practical, contextual things that students encounter. This research contextualizes science in primary schools in Manggarai, NTT, by placing local wisdom as a learning source and medium for forming science literacy. Explicitly, this study uncovers the concepts of science and mathematics embodied in the Mbaru Niang. The Mbaru Niang is a potential resource so far forgotten by the centralistic, theory-oriented mainstream curriculum. In fact, the traditional Manggarai community stores and passes on a great deal of indigenous scientific and mathematical knowledge. The traditional house structures are full of scientific and mathematical knowledge; every detail carries style, sound and mathematical symbolism. Learning from this, students are able to collaborate and synergize content and learning resources in their learning activities. This is constructivist contextual learning applied as meaningful learning, which allows students to learn by doing. Students then connect topics to their context, and science literacy is constructed from their factual experiences. The research will be conducted in Manggarai through observation, interviews, and literature study.
Keywords: indigenous science, Mbaru Niang, science literacy, science
Procedia PDF Downloads 209
16 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes
Authors: Igor A. Krichtafovitch
Abstract:
Evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory solution to these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but the accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as natural life origin and development, is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and the biosphere's intelligence. The intelligent matter that conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. Information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolutionary theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or global memory storage volume reaches its limit, and b) biosphere computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems of the current state of evolutionary theory: speciation, as a result of purposeful GM design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, which happens when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, which happen when more intelligent species must replace outdated creatures.
Keywords: supercomputer, biological evolution, Darwinism, speciation
Procedia PDF Downloads 164
15 MANIFEST-2, a Global, Phase 3, Randomized, Double-Blind, Active-Control Study of Pelabresib (CPI-0610) and Ruxolitinib vs. Placebo and Ruxolitinib in JAK Inhibitor-Naïve Myelofibrosis Patients
Authors: Claire Harrison, Raajit K. Rampal, Vikas Gupta, Srdan Verstovsek, Moshe Talpaz, Jean-Jacques Kiladjian, Ruben Mesa, Andrew Kuykendall, Alessandro Vannucchi, Francesca Palandri, Sebastian Grosicki, Timothy Devos, Eric Jourdan, Marielle J. Wondergem, Haifa Kathrin Al-Ali, Veronika Buxhofer-Ausch, Alberto Alvarez-Larrán, Sanjay Akhani, Rafael Muñoz-Carerras, Yury Sheykin, Gozde Colak, Morgan Harris, John Mascarenhas
Abstract:
Myelofibrosis (MF) is characterized by bone marrow fibrosis, anemia, splenomegaly and constitutional symptoms. Progressive bone marrow fibrosis results from aberrant megakaryopoiesis and expression of proinflammatory cytokines, both of which are heavily influenced by bromodomain and extraterminal domain (BET)-mediated gene regulation and lead to myeloproliferation and cytopenias. Pelabresib (CPI-0610) is an oral small-molecule investigational inhibitor of BET protein bromodomains currently being developed for the treatment of patients with MF. It is designed to downregulate BET target genes and modify nuclear factor kappa B (NF-κB) signaling. MANIFEST-2 was initiated based on data from Arm 3 of the ongoing Phase 2 MANIFEST study (NCT02158858), which is evaluating the combination of pelabresib and ruxolitinib in Janus kinase inhibitor (JAKi) treatment-naïve patients with MF. Primary endpoint analyses showed splenic and symptom responses in 68% and 56%, respectively, of 84 enrolled patients. MANIFEST-2 (NCT04603495) is a global, Phase 3, randomized, double-blind, active-control study of pelabresib and ruxolitinib versus placebo and ruxolitinib in JAKi treatment-naïve patients with primary MF, post-polycythemia vera MF or post-essential thrombocythemia MF. The aim of this study is to evaluate the efficacy and safety of pelabresib in combination with ruxolitinib. Here we report updates from a recent protocol amendment. The MANIFEST-2 study schema is shown in Figure 1. Key eligibility criteria include a Dynamic International Prognostic Scoring System (DIPSS) score of Intermediate-1 or higher, platelet count ≥100 × 10^9/L, spleen volume ≥450 cc by computerized tomography or magnetic resonance imaging, ≥2 symptoms with an average score ≥3 or a Total Symptom Score (TSS) of ≥10 on the Myelofibrosis Symptom Assessment Form v4.0, peripheral blast count <5% and Eastern Cooperative Oncology Group performance status ≤2. Patient randomization will be stratified by DIPSS risk category (Intermediate-1 vs Intermediate-2 vs High), platelet count (>200 × 10^9/L vs 100–200 × 10^9/L) and spleen volume (≥1800 cm^3 vs <1800 cm^3). Double-blind treatment (pelabresib or matching placebo) will be administered once daily for 14 consecutive days, followed by a 7-day break, which is considered one cycle of treatment. Ruxolitinib will be administered twice daily for all 21 days of the cycle. The primary endpoint is SVR35 response (≥35% reduction in spleen volume from baseline) at Week 24, and the key secondary endpoint is TSS50 response (≥50% reduction in TSS from baseline) at Week 24. Other secondary endpoints include safety, pharmacokinetics, changes in bone marrow fibrosis, duration of SVR35 response, duration of TSS50 response, progression-free survival, overall survival, conversion from transfusion dependence to independence and the rate of red blood cell transfusion over the first 24 weeks. Study recruitment is ongoing; 400 patients (200 per arm) from North America, Europe, Asia and Australia will be enrolled. The study opened for enrollment in November 2020. MANIFEST-2 was initiated based on data from the ongoing Phase 2 MANIFEST study with the aim of assessing the efficacy and safety of pelabresib and ruxolitinib in JAKi treatment-naïve patients with MF. MANIFEST-2 is currently open for enrollment.
Keywords: CPI-0610, JAKi treatment-naïve, MANIFEST-2, myelofibrosis, pelabresib
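For readers unfamiliar with the endpoint shorthand, the sketch below shows how the SVR35 and TSS50 definitions quoted in the abstract translate into per-patient response flags. The two patients and all of their values are invented for illustration; this is not trial data or trial software.

```python
# Endpoint definitions from the abstract, expressed as response flags.
def svr35(baseline_cc: float, week24_cc: float) -> bool:
    """SVR35: >= 35% reduction in spleen volume from baseline at Week 24."""
    return (baseline_cc - week24_cc) / baseline_cc >= 0.35

def tss50(baseline_tss: float, week24_tss: float) -> bool:
    """TSS50: >= 50% reduction in Total Symptom Score from baseline at Week 24."""
    return (baseline_tss - week24_tss) / baseline_tss >= 0.50

# Hypothetical patients: (baseline, Week 24) spleen volume in cc and TSS.
patients = [
    {"id": "pt-001", "spleen": (1900.0, 1150.0), "tss": (24.0, 10.0)},  # responder on both
    {"id": "pt-002", "spleen": (1600.0, 1200.0), "tss": (18.0, 11.0)},  # responder on neither
]
for p in patients:
    print(p["id"], "SVR35:", svr35(*p["spleen"]), "TSS50:", tss50(*p["tss"]))
```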
Procedia PDF Downloads 201
14 Extremism among College and High School Students in Moscow: Diagnostics Features
Authors: Puzanova Zhanna Vasilyevna, Larina Tatiana Igorevna, Tertyshnikova Anastasia Gennadyevna
Abstract:
In this day and age, extremism, in its various forms of manifestation, is a real threat to the world community, the national security of a state and its territorial integrity, as well as to the constitutional rights and freedoms of citizens. Extremism is, in general terms, described as a commitment to extreme views and actions, radically denying existing social norms and rules. Supporters of extremism in ideological and political struggles often adopt the methods and means of psychological warfare, appealing not to reason and logical arguments but to people's emotions and instincts, to prejudices, biases, and a variety of mythological constructions. They are dissatisfied with the established order and aim at increasing this dissatisfaction among the masses. Youth extremism holds a specific place among the existing forms and types of extremism. In this context, in 2015, we conducted a survey among Moscow college and high school students. The aim of this study was to determine how great or small the difference is between the two groups in their understanding of and attitudes towards manifestations of extremism, their inclination and readiness to take part in extremist activities, and what causes this predisposition, where it exists. We performed multivariate analysis to establish Russian college and high school students' opinions about the extremism and terrorism situation in our country, as well as their knowledge of these topics. Among other things, we showed that the level of aggressiveness of young people was not above the average for the whole population. The survey was conducted using the questionnaire method. The sample included college and high school students in Moscow (642 and 382, respectively), selected at random. The questionnaire was developed by specialists of the RUDN University Sociological Laboratory and included both original questions (projective questions, the incomplete-sentences technique) and the standard S. Dayhoff test for determining the level of internal aggressiveness. As an experiment, observation techniques using FACS and SPAFF were also employed to determine psychotypes and non-verbal manifestations of emotions. The study confirmed the hypothesis that, in the respondents' opinion, the level of aggression is higher today than a few years ago. Differences were found between the two age groups of young people in their understanding of, and attitudes towards, such social phenomena as extremism and terrorism, their danger and their appeal. The theory of psychotypes, SPAFF (specific affect coding system) and FACS (facial action coding system) are considered as additional techniques for diagnosing a tendency towards extreme views. Thus, it is established that diagnostics of the acceptance of extreme views among young people is possible thanks to the simultaneous use of knowledge from different fields of the socio-humanistic sciences. The results of the research can be used in a comparative context with other countries and as a starting point for further research in the field, taking into account its extreme relevance.
Keywords: extremism, youth extremism, diagnostics of extremist manifestations, forecast of behavior, sociological polls, theory of psychotypes, FACS, SPAFF
Procedia PDF Downloads 337
13 Maternal and Newborn Health Care Program Implementation and Integration by Maternal Community Health Workers, Africa: An Integrative Review
Authors: Nishimwe Clemence, Mchunu Gugu, Mukamusoni Dariya
Abstract:
Background: Community health workers and extension workers can play an important role in supporting families to adopt healthy practices, encouraging delivery in a health care facility, and ensuring timely referral of mothers and newborns if needed. Saving the lives of neonates should, therefore, be a significant health outcome in any maternal and newborn health program being implemented. Furthermore, about half a million mothers die from pregnancy-related causes. Maternal and newborn deaths related to the postnatal care period are neglected. Some authors have emphasized that in developing countries, newborn mortality rates have been reduced much more slowly because of the lack of many necessary facility-based and outreach services. The aim of this review was to critically analyze the process of implementation and integration of maternal and newborn health care programs by maternal community health workers into the health care system in Africa, with the further aim of reducing maternal and newborn mortality. We addressed the following review question: (1) what process is involved in the implementation and integration of maternal and newborn health care programs by maternal community health workers during antenatal, delivery and postnatal care into the health system in Africa? Methods: The database searched was Health Source: Nursing/Academic Edition, through Academic Search Complete via EBSCOhost. An iterative approach was used to go through Google scholarly papers. The reviewers followed adapted Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidance, and the Mixed Methods Appraisal Tool (MMAT) was used. The synthesis method for the integrative review followed the elements of noting patterns and themes, seeing plausibility, clustering, counting, making contrasts and comparisons, discerning common and unusual patterns, subsuming particulars into the general, noting relations between variables, finding intervening factors and building a logical chain of evidence, using a data-based convergent synthesis design. Results: Across the seventeen studies included, results focused on three dimensions inspired by the literature: antenatal, delivery, and postnatal interventions. From this, a conceptual framework was elaborated. The conceptual framework for the process of implementation and integration of maternal and newborn health care programs by maternal community health workers was elaborated in order to ensure the sustainability of community-based interventions. Conclusions: The review revealed that the implementation and integration of maternal and newborn health care programs require planning. We call upon governments, non-government organizations, the global health community and all stakeholders, including policy makers, program managers, evaluators, educators, and providers, to be involved in the implementation and integration of maternal and newborn health programs in updated policy and community-based interventions. Furthermore, emphasis should be placed on the competence, responsibility, and accountability of maternal community health workers, their training and payment, collaboration with health professionals in health facilities, and reinforcement of outreach services. However, the review was limited in focus to the African context, where the process of implementing maternal and newborn health care programs has been poor.
Keywords: Africa, implementation of integration, maternal, newborn
Procedia PDF Downloads 162
12 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications
Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino
Abstract:
The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability with each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impulse to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given the wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires logical combination rules that define the building's damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure's behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II. ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on built-in or user-defined wind hazard data. The software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions about their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, roof structure, envelope wall and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach lies in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models, covering building typologies not yet adequately addressed by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses
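The sketch below illustrates the kind of simplified component-fragility combination the abstract describes, using lognormal fragility curves for the four named components and a weighted-sum consequence model. All parameter values, the lognormal form and the independence assumption are this sketch's own simplifications, not ERMESS's actual rules or data.

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical lognormal fragility parameters (median gust speed in m/s, log-std)
# for the four components named in the abstract; values are illustrative only.
fragilities = {
    "roof_covering":     (35.0, 0.30),
    "roof_structure":    (55.0, 0.25),
    "envelope_wall":     (60.0, 0.25),
    "envelope_openings": (45.0, 0.35),
}
# Illustrative loss ratios (fraction of building value) if a component is damaged.
loss_ratio = {"roof_covering": 0.15, "roof_structure": 0.50,
              "envelope_wall": 0.40, "envelope_openings": 0.10}

def p_fail(median, beta, v):
    """P(component damage | gust speed v) for a lognormal fragility curve."""
    return lognorm.cdf(v, s=beta, scale=median)

def expected_loss_ratio(v):
    """Simplified vulnerability: component loss ratios weighted by their failure
    probabilities, assuming independent component responses (an assumption of
    this sketch, not of ERMESS), capped at total loss."""
    el = sum(loss_ratio[c] * p_fail(m, b, v) for c, (m, b) in fragilities.items())
    return min(el, 1.0)

for v in (25, 40, 55, 70):
    print(f"gust {v} m/s -> expected loss ratio {expected_loss_ratio(v):.2f}")
```

A building-level vulnerability model of the kind discussed above is then simply the curve traced by expected_loss_ratio as the intensity measure varies.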
Procedia PDF Downloads 181
11 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by Machine Learning, precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefit for cohorts of patients. As the majority of Machine Learning algorithms derive from heuristics, their outputs have only contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. More accurate diagnoses are needed, along with real-time treatments, by processing as much of the available information as possible. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. Current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from these classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept for information processing in Precise medicine that delivers diagnoses and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new and tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "needle in a haystack" approach usually taken when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even though the input is drawn from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. The approach deciphers the biological meaning of input data down to the metabolic and physiological mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the "common denominator" rule. The rigor of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is expressed in the high accuracy of the diagnosis. The diagnosis is delivered as a multiple-condition diagnostic, constituted by some main diseases along with unhealthy biological states, a format highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to the better design of clinical trials and speed them up.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
Procedia PDF Downloads 70
10 Assessment and Forecasting of the Impact of Negative Environmental Factors on Public Health
Authors: Nurlan Smagulov, Aiman Konkabayeva, Akerke Sadykova, Arailym Serik
Abstract:
Introduction. Adverse environmental factors do not immediately lead to pathological changes in the body. They can promote the growth of pre-pathology, characterized by shifts in physiological, biochemical, immunological and other indicators of the body's state. These disorders are unstable, reversible and indicative of body reactions, offering an opportunity to judge objectively the internal structure of adaptive body reactions at the level of individual organs and systems. For the body to show a stable response to the chronic effects of unfavorable environmental factors of low intensity (compared to factors of the production environment), a period called the «lag time» is needed. Results obtained without considering this factor distort reality and, for the most part, cannot reliably support the main conclusions of any work. A technique is needed that reduces methodological errors and combines mathematical logic, statistical methods and the medical point of view; this ultimately affects the results obtained and avoids false correlations. Objective. Development of a methodology for assessing and predicting the impact of environmental factors on population health, considering the «lag time». Methods. The research objects were environmental indicators and population morbidity indicators. The database on the environmental state was compiled from the monthly newsletters of Kazhydromet. Data on population morbidity were obtained from regional statistical yearbooks. When processing the statistical data, a time interval (lag) was determined for each «argument-function» pair, that is, the interval after which the effect of the harmful factor (argument) fully manifests itself in the indicators of the organism's state (function). The lag value was determined from the cross-correlation functions of the arguments (environmental indicators) with the functions (morbidity). Correlation coefficients (r) and their reliability (t), Fisher's criterion (F) and the influence share (R²) of the main factor (argument) on each indicator (function), as a percentage, were calculated. Results. The ecological situation of an industrially developed region has an impact on health indicators, but with some nuances. Fundamentally opposite results were obtained when the mathematical data processing considered the «lag time»: namely, a pronounced correlation was revealed after the two databases (ecology-morbidity) were shifted. For example, the lag period was 4 years for dust concentration and general morbidity, and 3 years for childhood morbidity. These periods accounted for the maximum values of the correlation coefficients and the largest percentage contribution of the influencing factor. Similar results were observed for the concentrations of soot, dioxide, etc. Comprehensive statistical processing using multiple correlation-regression and variance analysis confirms the correctness of this statement. The method provided an integrated approach to predicting the degree of pollution of the main environmental components and to identifying the most dangerous combinations of concentrations of the leading negative environmental factors. Conclusion. The method of assessing the «environment-public health» system considering the «lag time» is qualitatively different from the traditional method (without considering the «lag time»). The results differ significantly and are more amenable to a logical explanation of the obtained dependencies. The method allows the quantitative and qualitative dependences within the «environment-public health» system to be presented in a different way.
Keywords: ecology, morbidity, population, lag time
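A minimal sketch of the lag-scanning step described in the methods, assuming invented annual series for dust concentration and general morbidity: the «lag time» is estimated as the shift that maximizes the cross-correlation between the argument and the function. The values below are made up for illustration, not the study's data.

```python
import numpy as np

# Illustrative annual series: mean dust concentration and general morbidity
# per 1,000 population (invented values standing in for the study's data).
dust = np.array([0.31, 0.35, 0.40, 0.38, 0.44, 0.47, 0.45, 0.50, 0.53, 0.51])
morbidity = np.array([612, 598, 604, 621, 640, 655, 649, 668, 690, 701])

def lagged_r(x, y, lag):
    """Pearson correlation between x and y shifted by `lag` years (y lags x)."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# Scan candidate lags and pick the one maximizing the correlation,
# mirroring the cross-correlation-function approach of the abstract.
rs = {lag: lagged_r(dust, morbidity, lag) for lag in range(6)}
best = max(rs, key=lambda k: abs(rs[k]))
for lag, r in rs.items():
    print(f"lag {lag} yr: r = {r:+.3f}")
print(f"Estimated «lag time»: {best} years (r = {rs[best]:+.3f}, R² = {rs[best]**2:.2f})")
```

In the abstract's terms, the argument series leads and the function series lags; the study additionally checks the reliability of r and the influence share R² before accepting a lag.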
Procedia PDF Downloads 81
9 Multi-Criteria Assessment of Biogas Feedstock
Authors: Rawan Hakawati, Beatrice Smyth, David Rooney, Geoffrey McCullough
Abstract:
Targets have been set in the EU to increase the share of renewable energy in consumption to 20% by 2020, but developments have not occurred evenly across the member states. Northern Ireland is almost 90% dependent on imported fossil fuels. With such high energy dependency, Northern Ireland is particularly susceptible to security of supply issues. Linked to fossil fuels are greenhouse gas emissions, and the EU plans to reduce emissions by 20% by 2020. The use of indigenously produced biomass could reduce both greenhouse gas emissions and external energy dependence. With a wide range of both crop and waste feedstock potentially available in Northern Ireland, anaerobic digestion has been put forward as a possible solution for renewable energy production, waste management, and greenhouse gas reduction. Not all feedstock, however, is the same, and an understanding of feedstock suitability is important for both plant operators and policy makers. The aim of this paper is to investigate biomass suitability for anaerobic digestion in Northern Ireland. It is also important that decisions are based on solid scientific evidence. For this reason, the methodology used is multi-criteria decision matrix analysis, which takes multiple criteria into account simultaneously and ranks alternatives accordingly. The model uses the weighted sum method (which follows the entropy method to measure uncertainty using probability theory) to decide on weights. The TOPSIS method is utilized to carry out the mathematical analysis and provide the final scores. Feedstock currently available in Northern Ireland was classified into two categories: wastes (manure, sewage sludge and food waste) and energy crops, specifically grass silage. To select the most suitable feedstock, methane yield, feedstock availability, feedstock production cost, biogas production, calorific value, produced kilowatt-hours, dry matter content, and carbon-to-nitrogen ratio were assessed. The highest weight (0.249) corresponded to production cost, reflecting a variation from a £41 gate fee to a £22/tonne cost. With these weights, grass silage was found to be the most suitable feedstock. A sensitivity analysis was then conducted to investigate the impact of the weights. This analysis used the Pugh matrix method, which relies upon the Analytical Hierarchy Process and pairwise comparisons to determine a weighting for each criterion. The results showed that the highest weight (0.193) then corresponded to biogas production, indicating that grass silage and manure are the most suitable feedstocks. Introducing co-digestion of two or more substrates can boost the biogas yield, due to a synergistic effect induced by the feedstocks that favors positive biological interactions. A further benefit of co-digesting manure is that the anaerobic digestion process also acts as a waste management strategy. From the research, it was concluded that energy from agricultural biomass is highly advantageous in Northern Ireland, because it would increase the country's production of renewable energy, manage waste production, and limit the production of greenhouse gases (the agriculture sector currently contributes 26%). Decision-making methods based on scientific evidence aid policy makers in weighing multiple criteria in a logical, mathematical manner in order to reach a resolution.
Keywords: anaerobic digestion, biomass as feedstock, decision matrix, renewable energy
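A compact sketch of the two methods named in the abstract, entropy weighting followed by TOPSIS ranking. The decision matrix below is illustrative (three stand-in criteria and invented values), not the study's eight-criterion data.

```python
import numpy as np

# Illustrative decision matrix (rows: feedstocks, columns: criteria).
alternatives = ["grass silage", "manure", "sewage sludge", "food waste"]
criteria_benefit = [True, True, False]  # methane yield (+), availability (+), cost (-)
X = np.array([
    [340.0, 0.8, 25.0],   # grass silage
    [210.0, 0.9, 10.0],   # manure
    [260.0, 0.5, 18.0],   # sewage sludge
    [400.0, 0.4, 30.0],   # food waste
])

# 1) Entropy weights: criteria whose values are more dispersed carry more information.
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(len(alternatives))
w = (1 - E) / (1 - E).sum()

# 2) TOPSIS: rank by relative closeness to the ideal solution.
V = w * X / np.linalg.norm(X, axis=0)               # weighted, vector-normalized matrix
ideal = np.where(criteria_benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(criteria_benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
score = d_neg / (d_pos + d_neg)

for a, s in sorted(zip(alternatives, score), key=lambda t: -t[1]):
    print(f"{a}: closeness = {s:.3f}")
```

The entropy step rewards criteria that discriminate strongly between alternatives; TOPSIS then scores each alternative by how close it sits to the ideal and how far from the anti-ideal.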
Procedia PDF Downloads 462
8 Construction of an Assessment Tool for Early Childhood Development in the World of Discovery™ Curriculum
Authors: Divya Palaniappan
Abstract:
Early Childhood assessment tools must measure the quality and the appropriateness of a curriculum with respect to culture and age of the children. Preschool assessment tools lack psychometric properties and were developed to measure only few areas of development such as specific skills in music, art and adaptive behavior. Existing preschool assessment tools in India are predominantly informal and are fraught with judgmental bias of observers. The World of Discovery TM curriculum focuses on accelerating the physical, cognitive, language, social and emotional development of pre-schoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence as per Gardner’s Theory of Multiple Intelligence which concluded "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week where, concepts are explained through various activities so that children with different dominant intelligences could understand it. For example: The ‘Insects’ theme is explained through rhymes, craft and counting corner, and hence children with one of these dominant intelligences: Musical, bodily-kinesthetic and logical-mathematical could grasp the concept. The child’s progress is evaluated using an assessment tool that measures a cluster of inter-dependent developmental areas: physical, cognitive, language, social and emotional development, which for the first time renders a multi-domain approach. The assessment tool is a 5-point rating scale that measures these Developmental aspects: Cognitive, Language, Physical, Social and Emotional. Each activity strengthens one or more of the developmental aspects. During cognitive corner, the child’s perceptual reasoning, pre-math abilities, hand-eye co-ordination and fine motor skills could be observed and evaluated. The tool differs from traditional assessment methodologies by providing a framework that allows teachers to assess a child’s continuous development with respect to specific activities in real time objectively. A pilot study of the tool was done with a sample data of 100 children in the age group 2.5 to 3.5 years. The data was collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer’s bias. The norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among parameters and that cognitive development improved with physical development. A significant positive relationship between physical and cognitive development has been observed among children in a study conducted by Sibley and Etnier. In Children, the ‘Comprehension’ ability was found to be greater than ‘Reasoning’ and pre-math abilities as indicated by the preoperational stage of Piaget’s theory of cognitive development. The average scores of various parameters obtained through the tool corroborates the psychological theories on child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child’s development and differentiate high performers from the rest. 
Based on the average scores, the difficulty level of activities could be increased or decreased to nurture the development of pre-schoolers, and appropriate teaching methodologies could be devised.
Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum
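As a rough illustration of the quantitative side of such a tool, the sketch below computes per-domain norms (mean and standard deviation) and a Cronbach's-alpha-style internal-consistency estimate from weekly 5-point ratings. The data, domain layout, and scoring format are assumptions for illustration, not the study's dataset.

```python
import numpy as np

# Hypothetical weekly ratings: 100 children x 5 developmental domains
# (cognitive, language, physical, social, emotional), each on a 1-5 scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(100, 5)).astype(float)

# Norms per domain: mean and standard deviation of observed scores.
means = ratings.mean(axis=0)
sds = ratings.std(axis=0, ddof=1)

# Z-score one child's profile against the pilot-sample norms.
child = ratings[0]
z_scores = (child - means) / sds

# Cronbach's alpha as a simple internal-consistency estimate across
# the five inter-dependent developmental domains.
k = ratings.shape[1]
item_vars = ratings.var(axis=0, ddof=1)
total_var = ratings.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print("norms (mean, sd):", list(zip(means.round(2), sds.round(2))))
print(f"alpha = {alpha:.2f}")
```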
Procedia PDF Downloads 362
7 Extension of Moral Agency to Artificial Agents
Authors: Sofia Quaglia, Carmine Di Martino, Brendan Tierney
Abstract:
Artificial Intelligence (A.I.) permeates many aspects of modern life, from the machine learning algorithms predicting stocks on Wall Street to the killing of belligerents and innocents alike on the battlefield. Moreover, the end goal is to create autonomous A.I.; this means that humans will be absent from the decision-making process. The question comes naturally: when an A.I. does something wrong, when its behavior is harmful to the community and its actions go against the law, who is to be held responsible? This research's subject matter, within A.I. and robot ethics, focuses mainly on robot rights, and its ultimate objective is to answer the following questions: (i) What is the function of rights? (ii) Who is a right holder, what is personhood, and what are the requirements needed to be a moral agent (and therefore accountable for responsibility)? (iii) Can an A.I. be a moral agent (the ontological requirements)? And finally, (iv) ought it to be one (the ethical implications)? To answer these questions, this research project was carried out as a collaboration between the School of Computer Science at the Technical University of Dublin, which oversaw the technical aspects of the work, and the Department of Philosophy at the University of Milan, which supervised the philosophical framework and argumentation of the project. Firstly, it was found that all rights are positive and based on consensus; they change with time based on circumstances. Their function is to protect the social fabric and avoid dangerous situations. The same goes for the requirements considered necessary to be a moral agent: those are not absolute; in fact, they are constantly redesigned. Hence, the next logical step was to identify which requirements are regarded as fundamental in real-world judicial systems, comparing them to those used in philosophy. Autonomy, free will, intentionality, consciousness, and responsibility were identified as the requirements for being considered a moral agent. The work went on to build a symmetrical system between personhood and A.I. to enable the ontological differences between the two to emerge. Each requirement is introduced, explained through the most relevant theories of contemporary philosophy, and observed in its manifestation in A.I. Finally, after completing the philosophical and technical analysis, conclusions were drawn. As underlined in the research questions, there are two issues regarding the assignment of moral agency to an artificial agent: the first is that all the ontological requirements must be present, and the second is whether, present or not, an A.I. ought to be considered an artificial moral agent. From an ontological point of view, it is very hard to prove that an A.I. could be autonomous, free, intentional, conscious, and responsible. The philosophical accounts are often very theoretical and inconclusive, making it difficult to fully detect these requirements at an experimental level of demonstration. However, from an ethical point of view, it makes sense to consider some A.I. systems artificial moral agents, hence responsible for their own actions. When considering artificial agents as responsible, already existing norms in our judicial system can be applied, such as removing them from society and re-educating them, in order to re-introduce them to society. This is in line with how the highest-profile correctional facilities ought to work. Noticeably, this is a provisional conclusion, and research must continue further.
Nevertheless, the strength of the presented argument lies in its immediate applicability to real-world scenarios. To refer to the aforementioned incidents involving the murder of innocents: when this thesis is applied, it is possible to hold an A.I. accountable and responsible for its actions. This implies removing it from society by virtue of its un-usability, re-programming it and, only when it is properly functioning, re-introducing it successfully.
Keywords: artificial agency, correctional system, ethics, natural agency, responsibility
Procedia PDF Downloads 188
6 Fuzzy Data, Random Drift, and a Theoretical Model for the Sequential Emergence of Religious Capacity in Genus Homo
Authors: Margaret Boone Rappaport, Christopher J. Corbally
Abstract:
The ancient ape ancestral population from which living great ape and human species evolved had demographic features that affected their evolution. The population was large, had great genetic variability, and natural selection was effective at honing adaptations. The emerging populations of chimpanzees and humans were affected more by founder effects and genetic drift because they were smaller. Natural selection did not disappear, but it was not as strong. Consequences of the 'population crash' and of the human effective population size are introduced briefly. The history of the ancient apes is written in the genomes of living humans and great apes. The expansion of the brain began before the human line emerged. Coalescence times for some genes are very old – up to several million years, long before Homo sapiens. The mismatch between gene trees and species trees highlights the anthropoid speciation processes and gives the human genome history a fuzzy, probabilistic quality. However, it suggests traits that might form a foundation for capacities emerging later. A theoretical model is presented in which the genomes of early ape populations provide the substructure for the emergence of religious capacity later, on the human line. The model does not search for religion, but for its foundations. It suggests a course by which an evolutionary line that began with prosimians eventually produced a human species with a biologically based religious capacity. The model of the sequential emergence of religious capacity relies on cognitive science, neuroscience, paleoneurology, primate field studies, cognitive archaeology, genomics, and population genetics. It emphasizes five trait types: (1) Documented, positive selection of sensory capabilities on the human line may have favored survival, but also eventually enriched human religious experience. (2) The bonobo model suggests a possible down-regulation of aggression and an increase in tolerance while feeding, as well as paedomorphism, but in a human species that remains cognitively sharp (unlike the bonobo). The two species emerged from the same ancient ape population, so it is logical to search for shared traits. (3) An up-regulation of emotional sensitivity and compassion seems to have occurred on the human line. This finds support in modern genetic studies. (4) The authors' published model of the emergence of morality in Homo erectus encompasses a cognitively based decision-making capacity that was hypothetically overtaken, in part, by religious capacity. Together, they produced a strong, variable, biocultural capability to support human sociability. (5) The full flowering of human religious capacity came with the parietal expansion and smaller face (klinorhynchy) found only in Homo sapiens. Details from paleoneurology suggest the stage was set for human theologies. Larger parietal lobes allowed humans to imagine inner spaces, processes, and beings, and, with the frontal lobe, led to the first theologies, composed of structured and integrated theories of the relationships between humans and the supernatural. The model leads to the evolution of a small population of African hominins that was ready to emerge with religious capacity when the species Homo sapiens evolved two hundred thousand years ago. By 50,000-60,000 years ago, when human ancestors left Africa, they were fully enabled.
Keywords: genetic drift, genomics, parietal expansion, religious capacity
Procedia PDF Downloads 341
5 Narratives of Self-Renewal: Looking for a Middle Earth In-Between Psychoanalysis and the Search for Consciousness
Authors: Marilena Fatigante
Abstract:
Contemporary psychoanalysis is increasingly acknowledging the existential demands of clients in psychotherapy. A significant aspect of the personal crises that patients face today is often rooted in the difficulty of finding meaning in their own existence, even after working through or resolving traumatic memories and experiences. As far back as the correspondence between Freud and Romain Rolland (1927), psychoanalysis could not ignore that investigation of the psyche also encompasses the encounter with deep, psycho-sensory experiences, which involve a sense of "being one with the external world as a whole", the well-known "oceanic feeling", as Rolland termed it. Despite the recognition of non-ordinary states of consciousness (NSC) as catalysts for transformation in clinical practice, highlighted by neuroscience and by results from psychedelic-assisted therapies, there is little research on how psychoanalytic knowledge can integrate with other treatment traditions. These traditions, commonly rooted in non-Western, unconventional, and non-formal psychological knowledge, emphasize the individual's innate tendency toward existential integrity and the transcendence of self-boundaries. Inspired by an autobiographical account, this paper examines the narratives of 12 individuals who engaged in psychoanalytic therapy and also underwent treatment involving a non-formal helping relationship with an expert guide in consciousness, which included experiences of this nature. The guide draws on 35 years of experience in psychological and multidisciplinary studies in the human sciences and art, and demonstrates knowledge of many wisdom traditions, ranging from Eastern to Western philosophy, including psychoanalysis and its development in cultural perspective (e.g., ethnopsychiatry). Analyses focused primarily on two dimensions that research has identified as central in assessing the degree of treatment "success" in patients' narrative accounts of their therapies: agency and coherence, defined respectively as the increase, expressed in language, of the client's perceived ability to manage his or her own challenges, and the capacity, inherent in narrative itself as a resource for meaning making (Bruner, 1990), to provide the subject with a sense of unity, endowing his or her life experience with temporal and logical sequentiality. The present study reports that, in all the participants' narratives, agency and coherence are described differently than in "common" psychotherapy narratives. Although the participants consistently identified themselves as responsible, agentic subjects, the sense of agency derived from the non-conventional guidance pathway is never reduced to a personal, individual accomplishment. Rather, the more a new, fuller sense of "Life" (more than "Self") develops out of the guidance pathway they engage in with the expert guide, the more they "surrender" their own sense of autonomy and self-containment, something Safran (2016) also identified when discussing the sense of surrender and "grace" in psychoanalytic sessions. Secondly, the narratives of individuals engaging with the expert guide describe coherence not as repairing or enforcing continuity but as enhancing their ability to navigate dramatic discontinuities, falls, abrupt leaps, and passages marked by feelings of loss and bereavement. The paper ultimately explores whether valid criteria can be established to analyze experiences of non-conventional paths of self-evolution.
These paths are not opposed or alternative to conventional ones, and should not be simplistically dismissed as exotic or magical.
Keywords: oceanic feeling, non-conventional guidance, consciousness, narratives, treatment outcomes
Procedia PDF Downloads 38
4 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems
Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana
Abstract:
Large-scale critical industrial scheduling problems are based on the Resource-Constrained Project Scheduling Problem (RCPSP) and necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular and computationally efficient, producing feasible schedules). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that addresses the issues exhibited in the delivery of complex projects. With three interlinked entities (projects, tasks, and resources), each with its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can easily be integrated with other optimization problems, with already existing industrial tools, and with unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and an application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to the resources' availability and the tasks' logical relationships, also integrates several project-specific constraints for outage management, such as handling resource incompatibilities, updating task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effective simulation matches the nature of the problem, which requires several scenarios (30-40 simulations) to be run before finalizing the schedules. The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This generates the challenge of merging job shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while taking expected production rates into account. The simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case plans a future maintenance operation in an NPP. The project contains complex and hard constraints, such as strict Finish-Start precedence relationships (i.e., successor tasks have to start immediately after their predecessors while respecting all constraints), shareable co-activity for managing workspaces, and requirements for a specific state of "cyclic" resources (they can have multiple possible states, with only one active at a time) to perform tasks (which can require unique combinations of several cyclic resources). Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization. It solves an instance with 80 cyclic resources and 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools.
This approach can be further enhanced by the use of machine learning techniques on historically repeated tasks to gain further insights for delay-risk mitigation measures.
Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP
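For illustration, the sketch below implements a greedy, time-stepped heuristic of the kind described above, with a dynamic cost function evaluated at each time step. The task data, the particular cost function, and the tie-breaking rules are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    duration: int
    demand: int                          # units of one renewable resource
    predecessors: list = field(default_factory=list)

def dynamic_cost(task, time, remaining):
    # Situational assessment at each time step: prefer long tasks that
    # unblock many remaining tasks (a stand-in for a richer cost function).
    successors = sum(task.name in t.predecessors for t in remaining)
    return -(task.duration + 2 * successors)

def greedy_schedule(tasks, capacity):
    time, schedule = 0, {}
    done, running, pending = set(), [], list(tasks)
    while pending or running:
        # Retire tasks finishing at the current time step.
        for t, end in list(running):
            if end <= time:
                running.remove((t, end))
                done.add(t.name)
        used = sum(t.demand for t, _ in running)
        # Start eligible tasks greedily, cheapest dynamic cost first.
        eligible = [t for t in pending if set(t.predecessors) <= done]
        for t in sorted(eligible, key=lambda t: dynamic_cost(t, time, pending)):
            if used + t.demand <= capacity:
                schedule[t.name] = time
                running.append((t, time + t.duration))
                pending.remove(t)
                used += t.demand
        time += 1
    return schedule

tasks = [Task("A", 3, 2), Task("B", 2, 2, ["A"]),
         Task("C", 4, 3), Task("D", 1, 1, ["B", "C"])]
print(greedy_schedule(tasks, capacity=4))  # {'C': 0, 'A': 4, 'B': 7, 'D': 9}
```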
Procedia PDF Downloads 198
3 A Risk-Based Comprehensive Framework for the Assessment of the Security of Multi-Modal Transport Systems
Authors: Mireille Elhajj, Washington Ochieng, Deeph Chana
Abstract:
The challenges of the rapid growth in the demand for transport have traditionally been seen within the context of the problems of congestion, air quality, climate change, safety, and affordability. However, there are increasing threats, including crime-related ones such as cyber-attacks, that threaten the security of the transport of people and goods. To the best of the authors' knowledge, this paper presents, for the first time, a comprehensive framework for the assessment of the current and future security issues of multi-modal transport systems. The proposed approach is based on a structured framework, starting with a detailed specification of the transport asset map (the transport system architecture), followed by the identification of vulnerabilities. The asset map and vulnerabilities are used to identify the various ways the vulnerabilities could be exploited, leading to the creation of a set of threat scenarios. The threat scenarios are then transformed into risks and risk categories, together with insights for their mitigation. The consideration of the mitigation space is holistic and includes the formulation of appropriate policies and tactics and/or technical interventions. The quality of the framework is ensured through a structured and logical process that identifies the stakeholders, reviews the relevant documents (including policies) and identifies gaps, incorporates targeted surveys to augment the reviews, and uses subject matter experts for validation. The approach to categorising security risks is an extension of the methods typically employed today. Specifically, the partitioning of risks into either physical or cyber categories is too limited for developing mitigation policies and tactics/interventions for transport systems, where an interplay between physical and cyber processes is very often the norm. This interplay is rapidly taking on increasing significance for security, as emerging cyber-physical technologies are shaping the future of all transport modes. Examples include Connected Autonomous Vehicles (CAVs) in road transport; the European Rail Traffic Management System (ERTMS) in rail transport; the Automatic Identification System (AIS) in maritime transport; advanced Communications, Navigation and Surveillance (CNS) technologies in air transport; and the Internet of Things (IoT). The framework adopts a risk categorisation scheme that considers risks as falling within the following threat→impact relationships: Physical→Physical, Cyber→Cyber, Cyber→Physical, and Physical→Cyber. Thus the framework enables a more complete risk picture to be developed for today's transport systems and, more importantly, is readily extendable to account for emerging trends in the sector that will define future transport systems. The framework facilitates the audit and retro-fitting of mitigations in current transport operations and the analysis of security management options for the next generation of transport, enabling strategic aspirations such as security-by-design and the co-design of safety and security to be achieved. An initial application of the framework to transport systems has shown that intra-modal consideration of security measures is sub-optimal and that a holistic, multi-modal approach, one that also addresses the intersections/transition points of such networks, is required, as their vulnerability is high. This is in line with traveler-centric transport service provision, widely accepted as the future of mobility services.
In summary, a risk-based framework is proposed for use by stakeholders to comprehensively and holistically assess the security of transport systems. It requires a detailed understanding of the transport architecture so that a thorough vulnerability analysis can be undertaken; it then creates threat scenarios and transforms them into risks, which form the basis for the formulation of interventions.
Keywords: mitigations, risk, transport, security, vulnerabilities
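To make the categorisation scheme concrete, the sketch below models the four threat→impact relationships as a small data structure. The Risk fields and the GNSS-spoofing example are illustrative assumptions, not drawn from the paper.

```python
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    PHYSICAL = "physical"
    CYBER = "cyber"

@dataclass
class Risk:
    asset: str               # element of the transport asset map
    vulnerability: str
    threat: Domain           # domain the attack originates in
    impact: Domain           # domain where the consequence is felt

    @property
    def category(self) -> str:
        # One of the four threat->impact categories in the framework.
        return f"{self.threat.value}->{self.impact.value}"

# A cyber attack with a physical consequence, e.g. spoofing a CAV's
# navigation sensors to cause a collision (illustrative example).
r = Risk("connected autonomous vehicle", "GNSS spoofing",
         Domain.CYBER, Domain.PHYSICAL)
print(r.category)  # cyber->physical
```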
Procedia PDF Downloads 165
2 Successful Optimization of a Shallow Marginal Offshore Field and Its Applications
Authors: Kumar Satyam Das, Murali Raghunathan
Abstract:
This note discusses the feasibility of developing a challenging shallow offshore field in South East Asia and how its learnings can be applied to marginal field development across the world, especially in today's low-oil-price environment. The field was found to be economically challenging even during periods of high oil prices, and the project was put on hold. Shell started a development study with the aim of significantly reducing cost through competitive scoping and of reviving stranded projects. The proposed strategy involved improving per-platform recovery and reducing CAPEX. Methodology: Based on benchmarking tools such as Woodmac for similar projects in the region, and on economic affordability, a challenging target of a 50% reduction in unit development cost (UDC) was set for the project. The technical scope was defined as the minimum: a wellhead platform with the minimum functionality needed to ensure production. Key project decisions, such as well location and count, well design, artificial lift method, and wellhead platform type, were evaluated under different development concepts through an integrated multi-discipline approach. Key elements influencing per-platform recovery were wellhead platform (WHP) location, well count, well reach, and well productivity. Major findings: The shallow reservoir posed challenges in well design (dog-leg severity, casing size, and the achievable step-out) and in the choice of artificial lift and sand-control methods. An integrated approach among the relevant disciplines, with a challenging mind-set, enabled an optimized set of development decisions, which led to a significant improvement in per-platform recovery. It was concluded that platform recovery largely depended on the reach of the wells. The choice of a slim well design enabled high-inclination, higher-productivity wells. However, there is a trade-off between high-inclination gas lift (GL) wells and low-inclination wells in terms of long-term value, operational complexity, well reach, recovery, and uptime. Well design elements such as casing size, well completion, artificial lift, and sand control were added successively to the minimum technical scope, leading to a value-and-risk staircase. Logical combinations of options (slim well, GL) were competitively screened to achieve a 25% reduction in well cost. Facility cost reduction was achieved by sourcing a standardized low-cost facilities platform, in combination with portfolio execution to maximize execution efficiency; this approach is expected to reduce facilities cost by ~23% with respect to the development costs. Further cost reductions were achieved by maximizing the use of existing facilities nearby: relying on existing water injection wells and utilizing the existing water injector (W.I.) platform for new injectors. Conclusion: The study provides a spectrum of technically feasible options. It also made clear that different drivers lead to different development concepts, and the cost-value trade-off staircase made this very visible. Scoping the project competitively has proven valuable for decision-makers by creating a transparent view of value and the associated risks, uncertainties, and trade-offs for difficult choices: some elements of the project can be competitive, whilst other parts will struggle even though they contribute significant volumes.
Reducing UDC through proper scoping of current projects, and benchmarking it, serves as a lesson for the development of marginal fields across the world, especially in this low-oil-price scenario. This way of developing a field has achieved, on average, a 40% cost reduction for Shell projects.
Keywords: benchmarking, full field development, CAPEX, feasibility
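As a toy illustration of screening logical combinations of options against UDC, in the spirit of the value-and-risk staircase described above, the sketch below enumerates option combinations and ranks them by unit development cost. All option names, costs, and recovery multipliers are invented for illustration and are not field data.

```python
from itertools import product

# Per option level: (name, incremental cost in MM$, recovery multiplier).
options = {
    "well_design": [("conventional", 10.0, 1.0), ("slim", 7.5, 1.1)],
    "artificial_lift": [("none", 0.0, 1.0), ("gas_lift", 2.0, 1.25)],
    "sand_control": [("none", 0.0, 1.0), ("screen", 1.5, 1.05)],
}

def screen(base_recovery=5.0, facilities_cost=30.0):
    """Enumerate all option combinations and rank by UDC ($/boe)."""
    results = []
    for combo in product(*options.values()):
        names = [c[0] for c in combo]
        capex = facilities_cost + sum(c[1] for c in combo)
        recovery = base_recovery
        for c in combo:
            recovery *= c[2]
        results.append((capex / recovery, names))
    return sorted(results)               # cheapest UDC first

for udc, names in screen()[:3]:
    print(f"{udc:5.2f} $/boe  <- {names}")
```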
Procedia PDF Downloads 158
1 Leveraging Digital Transformation Initiatives and Artificial Intelligence to Optimize Readiness and Simulate Mission Performance across the Fleet
Authors: Justin Woulfe
Abstract:
Siloed logistics and supply chain management systems throughout the Department of Defense (DoD) have led to disparate approaches to modeling and simulation (M&S), a lack of understanding of how one system impacts the whole, and issues with "optimal" solutions that are good for one organization but have dramatically negative impacts on another. Many different systems have evolved to try to understand and account for uncertainty and to reduce the consequences of the unknown. As the DoD undertakes expansive digital transformation initiatives, there is an opportunity to fuse and leverage traditionally disparate data into a centrally hosted source of truth. With a streamlined process incorporating machine learning (ML) and artificial intelligence (AI), advanced M&S will enable informed decisions guiding program success via optimized operational readiness and improved mission success. One of the current challenges is to leverage the terabytes of data generated by monitored systems to provide actionable information to all levels of users. The implementation of a cloud-based application that analyzes data transactions, learns and predicts future states from current and past states in real time, and communicates those anticipated states is an appropriate solution for the purposes of reduced latency and improved confidence in decisions. Decisions made with an ML and AI application, combined with advanced optimization algorithms, will improve the mission success and performance of systems, which will in turn improve the overall cost and effectiveness of any program. The Systecon team constructs and employs model-based simulations, cutting across traditional silos of data, aggregating maintenance and supply data, incorporating sensor information, and applying optimization and simulation methods to an as-maintained digital twin with the ability to aggregate results across a system's lifecycle and across logical and operational groupings of systems. This coupling of data throughout the enterprise enables tactical, operational, and strategic decision support; detachable and deployable logistics services; and configuration-based automated distribution of digital technical and product data to enhance supply and logistics operations. As a complete solution, this approach significantly reduces program risk by allowing flexible configuration of data, data relationships, and business process workflows, and by enabling early test and evaluation, especially budget trade-off analyses. A true capability to tie resources (dollars) to weapon system readiness, in alignment with the real-world scenarios a warfighter may experience, has been an objective yet to be realized. By developing and solidifying an organic capability to directly relate dollars to readiness and to inform the digital twin, the decision-maker is now empowered with valuable insight and traceability. This type of educated decision-making provides an advantage over adversaries, who struggle to maintain system readiness at an affordable cost. The M&S capability developed allows program managers to independently evaluate system design and support decisions by quantifying their impact on operational availability and on operations and support cost, resulting in the ability to optimize readiness and cost simultaneously. This will allow stakeholders to make data-driven decisions when trading cost against readiness throughout the life of the program.
Finally, sponsors are able to validate product deliverables efficiently and with much higher accuracy than in previous years.
Keywords: artificial intelligence, digital transformation, machine learning, predictive analytics
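As a rough sketch of the "learning and predicting future states from current and past states" idea described above, the code below fits a first-order Markov transition model to a hypothetical state log and projects the most likely trajectory. The states, the event log, and the model choice are assumptions for illustration, not DoD data or Systecon's implementation.

```python
from collections import Counter, defaultdict

def fit_transitions(history):
    """Estimate P(next | current) from an observed sequence of states."""
    counts = defaultdict(Counter)
    for current, nxt in zip(history, history[1:]):
        counts[current][nxt] += 1
    return {s: {t: c / sum(cnt.values()) for t, c in cnt.items()}
            for s, cnt in counts.items()}

def predict(model, state, horizon=3):
    """Most likely state trajectory over the next `horizon` steps."""
    path = []
    for _ in range(horizon):
        state = max(model[state], key=model[state].get)
        path.append(state)
    return path

# Hypothetical system-state log from monitoring data.
log = ["operational", "operational", "degraded", "in_maintenance",
       "operational", "operational", "degraded", "operational"]
model = fit_transitions(log)
print(predict(model, "operational"))
```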
Procedia PDF Downloads 160