Search results for: post processing kinematics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7867

5857 'Explainable Artificial Intelligence' and Reasons for Judicial Decisions: Why Justifications and Not Just Explanations May Be Required

Authors: Jacquelyn Burkell, Jane Bailey

Abstract:

Artificial intelligence (AI) solutions deployed within the justice system face the critical task of providing acceptable explanations for decisions or actions. These explanations must satisfy the joint criteria of public and professional accountability, taking into account the perspectives and requirements of multiple stakeholders, including judges, lawyers, parties, witnesses, and the general public. This research project analyzes and integrates existing literatures on explanation in order to propose guidelines for explainable AI in the justice system. Specifically, we review three bodies of literature: (i) explanations of the purpose and function of 'explainable AI'; (ii) the relevant case law, judicial commentary and legal literature focused on the form and function of reasons for judicial decisions; and (iii) the literature focused on the psychological and sociological functions of these reasons for judicial decisions from the perspective of the public. Our research suggests that while judicial 'reasons' (arguably accurate descriptions of the decision-making process and factors) do serve explanatory functions similar to those identified in the literature on 'explainable AI', they also serve an important 'justification' function (post hoc constructions that justify the decision that was reached). Further, members of the public look for both justification and explanation in reasons for judicial decisions, and the absence of either feature is likely to contribute to diminished public confidence in the legal system. Therefore, automated judicial decision-making systems that simply attempt to document the process of decision-making are unlikely in many cases to be useful to, or accepted within, the justice system. Instead, these systems should focus on the post hoc articulation of principles and precedents that support the decision or action, especially in cases where legal subjects' fundamental rights and liberties are at stake.

Keywords: explainable AI, judicial reasons, public accountability, explanation, justification

Procedia PDF Downloads 126
5856 Sensitivity to Misusing Verb Inflections in Both Finite and Non-Finite Clauses in Native and Non-Native Russian: A Self-Paced Reading Investigation

Authors: Yang Cao

Abstract:

Analyzing the oral production of Chinese-speaking learners of English as a second language (L2), we find considerable variability in verb inflection: why does it seem so hard for these learners to use correct past morphology consistently in obligatory past contexts? The Failed Functional Features Hypothesis (FFFH) attributes this non-target-like performance to the absence of a [±past] feature in their L1 Chinese, arguing that for post-puberty learners, new features in the L2 are no longer accessible. By contrast, the Missing Surface Inflection Hypothesis (MSIH) holds that all features are in fact acquirable for late L2 learners, but that mapping difficulties from features to forms make it hard for them to realize consistent past morphology on the surface. However, most studies are limited to verb morphology in finite clauses, and few have examined these learners' performance in non-finite clauses. Additionally, it has been argued that Chinese learners may be able to tell the finite/non-finite distinction (i.e., the [±finite] feature might be selected in Chinese, even though the existence of [±past] is denied). Therefore, adopting a self-paced reading (SPR) task, the current study analyzes the processing patterns of Chinese-speaking learners of L2 Russian, in order to find out whether they are sensitive to the misuse of tense morphology in both finite and non-finite clauses, and whether they are sensitive to the finite/non-finite distinction present in Russian. The study targets L2 Russian because of its systematic morphology in both the present and past tenses. A native Russian group, as well as a group of English-speaking learners of Russian, whose L1 has selected both the [±finite] and [±past] features, will also be involved. By comparing and contrasting the performance of the three language groups, the study will further examine and discuss the two theories, FFFH and MSIH.
Preliminary hypotheses are: a) Russian native speakers are expected to spend longer reading verb forms that violate the grammar; b) Chinese participants are expected to be sensitive at least to the misuse of inflected verbs in non-finite clauses, even if no sensitivity to the misuse of infinitives in finite clauses is found; an interaction of finiteness and grammaticality is therefore expected, which would indicate that these learners can tell the finite/non-finite distinction; and c) having selected [±finite] and [±past], English-speaking learners of Russian are expected to behave in a target-like manner, supporting L1 transfer.

Keywords: features, finite clauses, morphosyntax, non-finite clauses, past morphologies, present morphologies, Second Language Acquisition, self-paced reading task, verb inflections

Procedia PDF Downloads 108
5855 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. About one percent of the world's population experiences epileptic seizures. Due to this abrupt flow of charge, the EEG (electroencephalogram) waveform changes, displaying numerous spikes and sharp waves. Detecting epileptic seizures by conventional methods is time-consuming, and many methods have been developed to detect them automatically. The initial part of this paper reviews the techniques used for automatic epileptic seizure detection, which is based on feature extraction and pattern classification; for better accuracy, decomposition of the signal is required before feature extraction. Researchers have calculated a number of parameters using different techniques, e.g. approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, and cross-correlation, to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review is to present the variations in the EEG signal at both stages, (i) interictal (recorded between epileptic seizure attacks) and (ii) ictal (recorded during the epileptic seizure), using the most appropriate methods of analysis to support better healthcare diagnosis. The paper then investigates the effects of a noninvasive healing therapy on subjects by studying their EEG signals using recent signal processing techniques. The study was conducted with Reiki as the healing technique, considered beneficial for restoring balance in cases of body-mind alterations associated with epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century.
It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist, and EEG signals are measured at baseline, during the session, and post-intervention to assess whether effective epileptic seizure control, or its elimination altogether, is brought about.
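The entropy features mentioned above can be computed directly from the raw EEG signal. As a minimal illustration (not the authors' implementation; the parameter choices m=2 and r=0.2·SD are common defaults, assumed here), sample entropy can be sketched as:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D signal: -ln(A/B), where B counts
    pairs of length-m templates and A pairs of length-(m+1) templates whose
    Chebyshev distance is below r, excluding self-matches."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()  # common heuristic tolerance
    n = len(x)

    def count_matches(length):
        # All overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(n - length + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to every later template (no self-matches).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist < r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b)
```

A regular (e.g. sinusoidal) trace yields a lower value than random noise, which is what makes the measure useful for separating normal from seizure-like EEG segments.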

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 406
5854 Land Use Change Detection Using Satellite Images for Najran City, Kingdom of Saudi Arabia (KSA)

Authors: Ismail Elkhrachy

Abstract:

Detection of land use change is an important component of regional planning, for applications ranging from urban-fringe change detection to monitoring of land use change; these data are very useful for natural resources management. The technologies and methods of change detection have also evolved dramatically over the past 20 years, and change detection based on multi-temporal remotely sensed data is now well recognized as a key method for studying dynamic land use change. The objective of this paper is to assess, evaluate and monitor land use change around the city of Najran, Kingdom of Saudi Arabia (KSA), using a Landsat image (June 23, 2009) and an ETM+ image (June 21, 2014). The post-classification change detection technique was applied: the two subset images of Najran city were compared on a pixel-by-pixel basis using the post-classification comparison method, the from-to change matrix was produced, and the land use change information was obtained. Three classes (urban, bare land and agricultural land) were obtained by unsupervised classification using Erdas Imagine and ArcGIS software. Accuracy assessment of the classification was performed before calculating change detection for the study area; the obtained accuracy is between 61% and 87% across the classes. Change detection analysis shows that between 2009 and 2014 the urban area grew rapidly, increasing by 73.2%, while the agricultural area decreased by 10.5% and the barren area was reduced by 7%. The quantitative study indicated that for the urban class, 58.2 km² remained unchanged, 70.3 km² was gained and 16 km² was lost; for the bare land class, 586.4 km² remained unchanged, 53.2 km² was gained and 101.5 km² was lost; and for the agricultural class, 20.2 km² remained unchanged, 31.2 km² was gained and 37.2 km² was lost.
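The from-to change matrix described above is a pixel-by-pixel cross-tabulation of the two classified maps. A short sketch of how it can be computed (the class codes and array sizes are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical class codes: 0 = urban, 1 = bare land, 2 = agriculture.
def change_matrix(map_t1, map_t2, n_classes):
    """Cross-tabulate two co-registered classified rasters into a
    from-to change matrix: rows = class at t1, cols = class at t2."""
    pairs = n_classes * map_t1.ravel() + map_t2.ravel()
    counts = np.bincount(pairs, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

rng = np.random.default_rng(1)
t1 = rng.integers(0, 3, size=(100, 100))  # classified map, date 1
t2 = rng.integers(0, 3, size=(100, 100))  # classified map, date 2
m = change_matrix(t1, t2, 3)

# Diagonal = unchanged pixels per class; off-diagonal row sums = area lost,
# off-diagonal column sums = area gained (multiply by pixel area for km²).
unchanged = np.diag(m)
lost = m.sum(axis=1) - unchanged
gained = m.sum(axis=0) - unchanged
```

Multiplying the pixel counts by the sensor's ground cell area converts them into the km² figures of unchanged, gained and lost area reported per class.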

Keywords: land use, remote sensing, change detection, satellite images, image classification

Procedia PDF Downloads 525
5853 Mapping Cultural Continuity and the Creation of a New Architectural Heritage in the 21st Century: The Case of Ksar Tafilelt, M’Zab Valley

Authors: Hadjer Messabih

Abstract:

The architecture of the M’zab has preserved its identity for centuries, conserving practically the same way of life and the same building techniques since the 11th century. The newly built ksar of Tafilelt is likewise designed to respect local tradition. In 1996, a community-led project was initiated to build a “new ksar” named Tafilelt, based on a traditional form of community-led cooperative housing. It is a unique experience in the field of community housing, reproducing traditional architectural patterns while addressing contemporary ways of life and their expected modern comfort. This research is based on the hypothesis that the process of producing ksar Tafilelt is culturally responsive to a conservative community characterized by values that were transmitted to the new ksar, manifesting as cultural continuity. It investigates what type of cultural continuity manifests itself in the co-production of ksar Tafilelt and in the way the settlement and its houses are produced and inhabited, as well as the newly emerging values and the adaptive transition in social relations. The research methodology combines questionnaires, in-depth interviews, photography, and site visits to record and demonstrate how these buildings respond to people’s needs. Post-occupancy evaluation (POE) is also employed in order to draw out the lessons that can be learned from this project. Finally, the study shows that the cultural continuity transmitted from the Ibadi community is still manifested in ksar Tafilelt, which has provided strong religious bonds and a strong sense of community. The findings yield a number of lessons and principles from the ksar Tafilelt project that can inform future practices of housing provision and design in Algeria and other countries.

Keywords: community-led cooperative housing, conservative community, cultural continuity, post occupancy evaluation

Procedia PDF Downloads 142
5852 A Study on the Magnetic and Submarine Geology Structure of TA22 Seamount in Lau Basin, Tonga

Authors: Soon Young Choi, Chan Hwan Kim, Chan Hong Park, Hyung Rae Kim, Myoung Hoon Lee, Hyeon-Yeong Park

Abstract:

We performed marine magnetic, bathymetry and seismic surveys at the TA22 seamount (Lau Basin, SW Pacific) in October 2009 to search for submarine hydrothermal deposits. Magnetic and bathymetry data sets were acquired using an Overhauser proton magnetometer, SeaSPY (Marine Magnetics Co.), and a multi-beam echo sounder, EM120 (Kongsberg Co.). The data were processed to obtain detailed seabed topography, the magnetic anomaly, the reduction to the pole (RTP), and the magnetization. Based on the magnetic results, we analyzed the submarine geological structure of the TA22 seamount together with a post-processed seismic profile. The detailed bathymetry of the TA22 seamount revealed left and right crests, each with a caldera feature at its center. Regionally, the magnetic anomaly over the TA22 seamount is high in the northern part and low in the southern part around the caldera features. The RTP magnetic anomaly is high in the central part of each caldera and stronger inside the calderas than on their outer flanks. The magnetization shows a low-magnetization zone in the center of each caldera and high-magnetization zones in the southern and northeastern parts. The seismic profile suggests small mounds inside the central part of each caldera and, in the case of the right caldera, the possible formation of sills by magma. Taking all results of this study into account (bathymetry, magnetic anomaly, RTP, magnetization, seismic profile), together with rock samples collected in the left caldera area during the 2009 survey, we infer possible hydrothermal deposits at the mounds in the central part of each caldera and on the outer flanks of the calderas within the low-magnetization zone.
We expect better results from combined modeling of these data with other geological data (e.g. detailed gravity, 3D seismic, and petrologic results).

Keywords: detailed bathymetry, magnetic anomaly, seamounts, seismic profile, SW Pacific

Procedia PDF Downloads 403
5851 Response to Comprehensive Stress of Growing Greylag Geese Offered Alternative Fiber Sources

Authors: He Li Wen, Meng Qing Xiang, Li De Yong, Zhang Ya Wei, Ren Li Ping

Abstract:

Stress always exerts adverse effects, to some extent, on animal production, food safety and product quality. Stress is common in the livestock industry, but literature focusing on the effects of nutritional stress is rare. Moreover, most research concentrates on the effect of a single stressor, and there is scarce information on stress effects in waterfowl such as geese, which are commonly thought to be stress-tolerant; to our knowledge, this is not always true. The objective of this study was to evaluate the response of growing Greylag geese offered different fiber sources to a comprehensive stress challenge, primarily involving fasting, transport and capture. The birds were randomly allocated to diets differing in fiber source: corn straw silage (CSS), steam-exploded corn straw (SECS), steam-exploded wheat straw (SEWS), and steam-exploded rice straw (SERS). Blood samples for determining stress status were collected before (pre-stress) and after (post-stress) the stressors were applied. No difference (P>0.05) was found in the pre-stress blood parameters of growing Greylags fed the alternative fiber sources. Irrespective of diet, the comprehensive stress decreased (P<0.01) the concentration of SOD and increased (P<0.01) that of CK. Between dietary treatments, the birds fed CSS had a higher (P<0.05) post-stress concentration of MDA than those offered SECS, and were similar to those fed the other two fiber sources. No difference (P>0.05) was found in the stress response of birds fed different fiber sources. In conclusion, blood SOD and CK concentrations may be more sensitive indicators of stress status, dietary fiber source exerted no effect on the stress response of growing Greylags, and there is little prospect of improving stress status through these different fiber sources.

Keywords: blood parameter, fiber source, Greylag goose, stress

Procedia PDF Downloads 518
5850 The Role of University in High-Level Human Capital Cultivation in China’s West Greater Bay Area

Authors: Rochelle Yun Ge

Abstract:

Universities have played an active role in China's national development, and there has been increasing research interest in higher education cooperation, talent cultivation and attraction, and innovation in regional development. The triple helix model, which holds that regional innovation and development can be engendered by collaboration among university, industry and government, is often adopted as the research framework. Research using the triple helix model emphasizes the active, and often leading, role of the university in the knowledge-based economy. Within this framework, universities are conceptualized as key institutions of knowledge production, transmission and transfer, potentially making critical contributions to regional development. Recent research is almost uniformly consistent in identifying high-level research labor (i.e., doctoral and post-doctoral researchers and academics) as important actors in the innovation ecosystem, given the cross-geographical human capital and resources they bring. In 2019, the development of the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) was officially launched as an important strategy by the Chinese government to boost the regional development of the Pearl River Delta and to support the realization of the “One Belt One Road” strategy. Human capital formation is at the center of this plan. One of the strategic goals of the GBA development is to evolve into an international educational hub and innovation center with high-level talent. A number of policies have been issued to attract and cultivate human resources in different GBA cities, in particular high-level R&D (research and development) talent such as doctoral and post-doctoral researchers. To better understand the development of a high-level talent hub in the GBA, more empirical consideration should be given to the approaches to talent cultivation and attraction in the GBA.
What remains to be explored are the ways to better attract, train, support and retain these talents in a cross-systems context. This paper investigates the role of the university in human capital development under China's national agenda of GBA integration through the lens of universities and their actors. Two flagship comprehensive universities are selected as cases, and 30 interviews with university officials, research leaders, post-doctoral researchers and doctoral candidates are used for analysis. In particular, we ask: in what ways have universities aligned their strategies and practices with the Chinese government's GBA development strategy? What strategies and practices have universities developed for the cultivation and attraction of high-level research labor? And what impact have the universities made on regional development? The main arguments of this research highlight the specific ways in which universities in smaller sub-regions can collaborate in high-level human capital formation and the role policy can play in facilitating such collaborations.

Keywords: university, human capital, regional development, triple-helix model

Procedia PDF Downloads 113
5849 Clinical Validation of an Automated Natural Language Processing Algorithm for Finding COVID-19 Symptoms and Complications in Patient Notes

Authors: Karolina Wieczorek, Sophie Wiliams

Abstract:

Introduction: Patient data is often collected in electronic health record (EHR) systems for purposes such as providing care as well as reporting, and this information can be re-used to validate data models in clinical trials or in epidemiological studies. Manual validation of automated tools is vital to pick up errors in processing and to provide confidence in the output. Mentioning a disease in a discharge letter does not necessarily mean that a patient suffers from that disease: many letters discuss a diagnostic process or different tests, or discuss whether a patient has a certain disease. The COVID-19 dataset in this study was produced by natural language processing (NLP), an automated algorithm which extracts information related to COVID-19 symptoms, complications, and medications prescribed within the hospital. Free-text clinical patient notes are rich sources of information containing patient data not captured in a structured form, hence the use of named entity recognition (NER) to capture this additional information. Methods: Patient data (discharge summary letters) were exported and screened by the algorithm to pick up relevant terms related to COVID-19. A list of 124 Systematized Nomenclature of Medicine (SNOMED) clinical terms, with corresponding IDs, was provided in Excel. Two independent medical student researchers were given this dictionary of SNOMED terms to refer to when screening the notes; they worked on two separate datasets, called "A" and "B" respectively. Notes were screened to check that the correct term had been picked up by the algorithm and that negated terms were not. Results: Implementation in the hospital began on March 31, 2020, and the first EHR-derived extract was generated for use in an audit study on June 04, 2020.
The dataset has contributed to large priority clinical trials (including the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC), by bulk upload to REDCap research databases) and to local research and audit studies. Successful sharing of EHR-extracted datasets requires communicating the provenance and quality of the data, including its completeness and accuracy. The validation of the algorithm yielded the following results: precision 0.907, recall 0.416, and F-score 0.570. The percentage enhancement from NLP-extracted terms compared with regular data extraction alone was low (0.3%) for relatively well-documented data such as previous medical history, but higher for complications (16.6%), presenting illness (29.53%), chronic procedures (30.3%) and acute procedures (45.1%). Conclusions: This automated NLP algorithm is shown to be useful in facilitating patient data analysis and has the potential to be used in larger-scale clinical trials, for example to assess potential study exclusion criteria for participants in the development of vaccines.
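The reported F-score is consistent with the stated precision and recall, since the F1 score is their harmonic mean. A quick check:

```python
def f_score(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Using the values reported in the validation: precision 0.907, recall 0.416.
f1 = f_score(0.907, 0.416)  # ≈ 0.570, matching the reported F-score
```

The high precision with low recall indicates the algorithm rarely flags a wrong term but misses many true mentions, which fits the role of manual screening described in the methods.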

Keywords: automated, algorithm, NLP, COVID-19

Procedia PDF Downloads 102
5848 Post-Pandemic Public Space, Case Study of Public Parks in Kerala

Authors: Nirupama Sam

Abstract:

COVID-19, the greatest pandemic since the turn of the century, presents several issues for urban planners, the most significant of which is determining appropriate mitigation techniques for creating pandemic-friendly and resilient public spaces. The study is conducted in four stages. The first stage consisted of literature reviews examining the evolution and transformation of public spaces during pandemics throughout history, and the role of public spaces during pandemic outbreaks. The second stage determines the factors that influence the success of public spaces, accomplished through an analysis of current literature and case studies; the influencing factors are categorized under comfort and image, uses and activities, access and linkages, and sociability. The third stage establishes the priority of the identified factors through a questionnaire survey of stakeholders and the analysis of certain factors with the help of GIS tools. COVID-19 has been present in India for the last two years, and Kerala has had the highest daily COVID-19 prevalence due to its high population density, making it more susceptible to viral outbreaks; despite all preventive measures, Kerala remains the worst-affected state in the country. Two live case studies of the hardest-hit localities, namely Subhash Bose Park and Napier Museum Park in the Ernakulam and Trivandrum districts of Kerala, respectively, were therefore chosen as study areas for the survey. The questionnaire responses were analyzed using SPSS to determine the weights of the influencing factors, and the spatial success of the selected case studies was examined using a GIS interpolation model. Following the overall assessment, the fourth stage develops strategies and guidelines for planning public spaces to make them more efficient and robust, leading to improved quality, safety and resilience to future pandemics.
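The abstract does not name the GIS interpolation model used. As one hedged illustration only, inverse-distance weighting (IDW), a common choice for spreading point-based survey scores across a park, can be sketched as:

```python
import numpy as np

def idw(sample_xy, sample_vals, query_xy, power=2.0):
    """Inverse-distance-weighted estimate at each query point from
    scattered sample points (e.g., per-location survey scores)."""
    sample_xy = np.asarray(sample_xy, dtype=float)
    sample_vals = np.asarray(sample_vals, dtype=float)
    query_xy = np.asarray(query_xy, dtype=float)
    out = np.empty(len(query_xy))
    for i, q in enumerate(query_xy):
        d = np.linalg.norm(sample_xy - q, axis=1)
        if np.any(d == 0):
            # Query coincides with a sample point: return its value exactly.
            out[i] = sample_vals[d == 0][0]
        else:
            w = 1.0 / d ** power  # closer samples get larger weights
            out[i] = np.sum(w * sample_vals) / np.sum(w)
    return out
```

Applied over a grid of park locations, such interpolated scores produce the kind of spatial-success surface the study maps for each case-study park.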

Keywords: urban design, public space, COVID-19, post-pandemic

Procedia PDF Downloads 137
5847 Combined Synchrotron Radiography and Diffraction for in Situ Study of Reactive Infiltration of Aluminum into Iron Porous Preform

Authors: S. Djaziri, F. Sket, A. Hynowska, S. Milenkovic

Abstract:

The use of Fe-Al based intermetallics as an alternative to Cr/Ni based stainless steels is very promising for industrial applications whose parts rely on critical raw materials under extreme conditions. However, the development of advanced Fe-Al based intermetallics with appropriate mechanical properties presents several challenges involving appropriate processing and microstructure control. A processing strategy is being developed which aims at producing a net-shape porous Fe-based preform that is infiltrated with molten Al or an Al alloy. In the present work, porous Fe-based preforms produced by two different methods (selective laser melting (SLM) and the Kochanek process (KE)) are studied during infiltration with molten aluminum. With the objective of elucidating the mechanisms underlying the formation of Fe-Al intermetallic phases during infiltration, an in-house furnace has been designed for in situ observation of infiltration at synchrotron facilities, combining x-ray radiography (XR) and x-ray diffraction (XRD) techniques. The feasibility of this approach has been demonstrated, and information about the propagation of the melt flow front has been obtained. In addition, reactive infiltration has been achieved, in which a bi-phased intermetallic layer was identified forming between the solid Fe and the liquid Al; in particular, a tongue-like Fe₂Al₅ phase adhering to the Fe and a needle-like Fe₄Al₁₃ phase adhering to the Al were observed. The growth of the intermetallic compound was found to depend on the temperature gradient along the preform as well as on the reaction time, which will be discussed in view of the different results obtained.

Keywords: combined synchrotron radiography and diffraction, Fe-Al intermetallic compounds, in-situ molten Al infiltration, porous solid Fe preforms

Procedia PDF Downloads 226
5846 Reverse Logistics Network Optimization for E-Commerce

Authors: Albert W. K. Tan

Abstract:

This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate the key decision factors crucial to reverse logistics network design for e-commerce, and to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research is to develop a comprehensive framework for advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research are: (i) identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains; (ii) formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios; and (iii) proposing a mathematical model to optimize the reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operating modes: (1) collection, (2) sorting and testing, (3) processing, and (4) storage. Key factors to consider in reverse logistics network design are: (I) Centralized vs. decentralized processing. Centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies.
In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items; it also aids in determining the most appropriate reverse channel for handling returns. By contrast, a decentralized system is more suitable when products are returned directly from consumers to retailers, with individual sales outlets serving as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. (II) In-house vs. third-party logistics providers. The decision between insourcing and outsourcing is pivotal in reverse logistics network design. In insourcing, a company handles the entire reverse logistics process, including material reuse; in outsourcing, third-party providers take on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or a lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions. The model considers key factors identified in the framework, such as transportation costs, facility capacities, and lead times, and the authors have employed mixed-integer linear programming to find optimal solutions that minimize costs while meeting organizational objectives.
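The transportation core of such a model can be sketched as a small linear program. This is a simplified illustration only, not the authors' model: the facility-opening binaries of a full mixed-integer formulation are omitted, and the costs, volumes and capacities below are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: 2 collection points, 2 processing facilities.
cost = np.array([[4.0, 6.0],     # cost[i, j]: ship returns from point i to facility j
                 [5.0, 3.0]])
supply = np.array([30.0, 40.0])    # returned volume at each collection point
capacity = np.array([50.0, 45.0])  # processing capacity at each facility

n_i, n_j = cost.shape
c = cost.ravel()                   # decision variables x[i, j], row-major order

# Equality constraints: each collection point ships out all of its returns.
A_eq = np.zeros((n_i, n_i * n_j))
for i in range(n_i):
    A_eq[i, i * n_j:(i + 1) * n_j] = 1.0

# Inequality constraints: flow into each facility cannot exceed its capacity.
A_ub = np.zeros((n_j, n_i * n_j))
for j in range(n_j):
    A_ub[j, j::n_j] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply,
              bounds=[(0, None)] * (n_i * n_j), method="highs")
# Here the optimum routes each point to its cheapest facility: total cost 240.
```

A full network-design model would add 0/1 variables for which facilities to open (with fixed opening costs), turning the LP into the mixed-integer program the abstract refers to.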

Keywords: reverse logistics, supply chain management, optimization, e-commerce

Procedia PDF Downloads 38
5845 Management of Obstructive Hydrocephalus Secondary to a Posterior Fossa Tumor in Children: About 24 Cases Operated at the Central Hospital of Army

Authors: Hakim Derradji, M’Hammedi Yousra, Sabrou Abdelmalek, Tabet Nacer

Abstract:

Introduction: This is a retrospective study carried out at the Central Hospital of Army from 2017 to 2022. Its objective is to identify the best surgical method for the management of obstructive hydrocephalus secondary to a posterior fossa tumor in children, pre-, intra-, and post-operatively. Patients and Methods: During this period, 24 children (over 1 year old) were admitted for treatment of a posterior fossa tumor with secondary obstructive hydrocephalus. The majority benefited from ventriculocisternostomy (VCS) followed by surgery and tumor excision; the rest, received after evacuation from other hospital structures, had already been managed there with ventriculoperitoneal shunting or external drainage. We found that the way hydrocephalus is managed has implications for subsequent management, hence the need for this study to determine the effectiveness of the different surgical procedures used to treat hydrocephalus in these patients. The evaluation is based on revision rate, complications, survival, and radiological assessment. Results: 6 patients (25%) received a ventriculoperitoneal shunt (DVP), 15 patients (62.5%) underwent a ventriculocisternostomy (VCS), and 3 patients (12.5%) received temporary ventricular drainage before or during tumor excision. The post-operative results were broadly similar; nevertheless, a high failure rate (25%) was observed. No deaths were recorded. In total, 75% of the children who had a DVP were reoperated. Revision by VCS was performed in the 4 patients with a failed DVP and in one patient who had received external drainage, and only one revision of a VCS was recorded. In the two remaining patients who received external drainage, restoration of CSF outflow was observed following tumor resection.
Conclusion: VCS is indicated as the first-line treatment of hydrocephalus secondary to a posterior fossa tumor, in view of the satisfactory results obtained and the high failure rate of DVP, especially given the risk of metastatic cells reaching the peritoneum; DVP can nevertheless be considered as a second-line treatment.

Keywords: posterior fossa tumor, obstructive hydrocephalus, DVP, VCS

Procedia PDF Downloads 117
5844 Context Detection in Spreadsheets Based on Automatically Inferred Table Schema

Authors: Alexander Wachtel, Michael T. Franzen, Walter F. Tichy

Abstract:

Programming requires years of training, but with natural language and end-user development methods, programming could become available to everyone: end users could program their own devices and extend the functionality of an existing system without any knowledge of programming languages. In this paper, we describe an Interactive Spreadsheet Processing Module (ISPM), a natural language interface to spreadsheets that allows users to address ranges within the spreadsheet based on an inferred table schema. Using the ISPM, end users are able to search for values in the schema of the table and to address the data in spreadsheets implicitly; furthermore, it enables them to select and sort the spreadsheet data using natural language. ISPM uses a machine learning technique to automatically infer areas within a spreadsheet, including different kinds of headers and data ranges. Since ranges can be identified from natural language queries, end users can query the data using natural language. During the evaluation, 12 undergraduate students were asked to perform operations (sum, sort, group, and select) using the system and also using Excel without the ISPM interface, and task completion times were compared across the two systems. Only for the selection task did users take less time in Excel (since they directly selected the cells with the mouse) than in ISPM. These results suggest that natural language interfaces can support end-user software engineering and help overcome the present bottleneck of reliance on professional developers.
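A minimal illustration of the schema-inference idea (the actual ISPM uses machine learning; this all-text-header heuristic and the column-lookup query below are simplified stand-ins):

```python
# Toy spreadsheet grid: first row is a header, the rest are data rows.
grid = [
    ["Name", "Sales", "Year"],
    ["Alice", 120, 2015],
    ["Bob", 90, 2016],
]

def infer_schema(rows):
    """Treat the first all-text row as the header; everything below is data."""
    header = rows[0] if all(isinstance(c, str) for c in rows[0]) else None
    data = rows[1:] if header else rows
    return header, data

def query_sum(rows, column_name):
    """Resolve a natural-language-style request like 'sum of Sales' by header lookup."""
    header, data = infer_schema(rows)
    idx = header.index(column_name)
    return sum(row[idx] for row in data)

print(query_sum(grid, "Sales"))  # 210
```

Once the schema is inferred, "sum of Sales" no longer needs a cell range: the column name alone addresses the data, which is the implicit-addressing behavior the abstract describes.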

Keywords: natural language processing, natural language interfaces, human computer interaction, end user development, dialog systems, data recognition, spreadsheet

Procedia PDF Downloads 311
5843 Central American Security Issue: Civil War Legacy and Contemporary Challenges

Authors: Olga Andrianova, Lazar Jeifets

Abstract:

The security issue has always been one of the most sensitive and significant in the Latin American context, particularly in the Central American region. Although the civil wars have ended, violence, delinquency, insecurity, and discrimination persist and remain relevant in the 21st century. This article considers these problems, identifies their main causes, and proposes approaches to solving them.

Keywords: Central America, insecurity, instability, post-war countries, violence

Procedia PDF Downloads 473
5842 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling

Authors: Zhenyu Zhang, Hsi-Hsien Wei

Abstract:

Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources; therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is the prioritization of bridge-repair tasks. Resilience is widely used to measure the ability of a network to recover and return to its pre-disaster level of functionality. In practice, highways are temporarily blocked during the downtime of bridge restoration, decreasing highway-network functionality; failing to take downtime effects into account can therefore lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in restoration objectives, duration, budget, etc. Distinguishing these two phases is important for precisely quantifying highway network resilience and generating suitable restoration schedules in the recovery phase. To address these issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS) that takes into account the impact of EBR activities and restoration downtime on a highway network's functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. 
A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help generate a more specific and reasonable LBRS. The theoretical and practical contributions are as follows. First, the proposed network recovery curve enables comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curve. Second, this study can improve highway network resilience along the organizational dimension by providing bridge managers with optimal LBR strategies.
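The role of downtime can be illustrated with a toy recovery curve, taking resilience as the normalized area under a piecewise-constant functionality curve (the step levels and times below are invented, not the Wenchuan data):

```python
def resilience(functionality, horizon):
    """Normalized area under a piecewise-constant functionality curve.

    functionality: list of (start_time, level in [0, 1]) steps, sorted by time.
    """
    area = 0.0
    for (t0, level), (t1, _) in zip(functionality, functionality[1:] + [(horizon, None)]):
        area += level * (t1 - t0)
    return area / horizon

# Without modeling downtime, functionality only rises as bridges reopen.
optimistic = [(0, 0.6), (10, 0.8), (20, 1.0)]
# With downtime, a highway is temporarily blocked during each repair.
with_downtime = [(0, 0.6), (5, 0.5), (10, 0.8), (15, 0.7), (20, 1.0)]

r_opt = resilience(optimistic, 30)
r_real = resilience(with_downtime, 30)
print(r_opt, r_real)  # the downtime-aware curve gives lower resilience
```

In this toy case the optimistic curve overstates resilience by a few percent; the abstract reports roughly 15% for the real network.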

Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime

Procedia PDF Downloads 150
5841 Studying the Effect of Reducing Thermal Processing over the Bioactive Composition of Non-Centrifugal Cane Sugar: Towards Natural Products with High Therapeutic Value

Authors: Laura Rueda-Gensini, Jader Rodríguez, Juan C. Cruz, Carolina Munoz-Camargo

Abstract:

There is an emerging interest in botanicals and plant extracts for medicinal practices due to their widely reported health benefits. A large variety of phytochemicals found in plants have been correlated with antioxidant, immunomodulatory, and analgesic properties, which makes plant-derived products promising candidates for modulating the progression and treatment of numerous diseases. Non-centrifugal cane sugar (NCS), in particular, has been known for its high antioxidant and nutritional value, but compositional variability due to changing environmental and processing conditions has considerably limited its use in the nutraceutical and biomedical fields. This work is therefore aimed at assessing the effect of thermal exposure during NCS production on its bioactive composition and, in turn, its therapeutic value. Accordingly, two modified dehydration methods are proposed that employ: (i) vacuum-aided evaporation, which reduces the temperatures necessary to dehydrate the sample, and (ii) refractance window evaporation, which reduces thermal exposure time. The biochemical composition of NCS produced under these two methods was compared to traditionally produced NCS by estimating the total polyphenolic and protein content with Folin-Ciocalteu and Bradford assays, as well as by identifying the major phenolic compounds in each sample via HPLC-coupled mass spectrometry. Their antioxidant activities were also compared, as measured by their scavenging potential against ABTS and DPPH radicals. Results show that the two modified production methods enhance polyphenolic and protein yield in the resulting NCS samples when compared to traditional production methods. In particular, reducing the employed temperatures with vacuum-aided evaporation proved superior at preserving polyphenolic compounds, as evidenced in both the total and the individual polyphenol concentrations. However, antioxidant activities were not significantly different between the two methods. 
Although additional studies should be performed to determine if the observed compositional differences affect other therapeutic activities (e.g., anti-inflammatory, analgesic, and immunoprotective), these results suggest that reducing thermal exposure holds great promise for the production of natural products with enhanced nutritional value.
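The ABTS/DPPH comparisons rest on the standard radical-scavenging calculation; as a sketch (the absorbance readings below are invented, and the study's exact protocol may differ):

```python
# Textbook radical-scavenging formula used with ABTS/DPPH assays.
def scavenging_percent(a_control, a_sample):
    """Radical scavenging activity (%) from absorbance readings."""
    return (a_control - a_sample) / a_control * 100

a_control = 0.70        # absorbance of the radical solution alone
a_ncs_sample = 0.28     # absorbance after adding a hypothetical NCS extract
pct = scavenging_percent(a_control, a_ncs_sample)
print(pct)  # about 60%
```

Higher scavenging percentages indicate stronger antioxidant activity, which is the quantity being compared across the three production methods.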

Keywords: non-centrifugal cane sugar, polyphenolic compounds, thermal processing, antioxidant activity

Procedia PDF Downloads 91
5840 Sensitivity Analysis of Prestressed Post-Tensioned I-Girder and Deck System

Authors: Tahsin A. H. Nishat, Raquib Ahsan

Abstract:

Sensitivity analysis of design parameters can become a significant factor in any structural optimization procedure. The objectives of this study are to analyze the sensitivity of the deck slab thickness parameter obtained from both the conventional and the optimum design methodology of a pre-stressed post-tensioned I-girder and deck system, and to compare the relative significance of slab thickness. For the conventional method, the values of 14 design parameters obtained by the conventional iterative design of a real-life I-girder bridge project have been considered. For the optimization method, cost optimization of this system has been performed using the global optimization methodology 'Evolutionary Operation (EVOP)'; the problem, from which optimum values of the 14 design parameters have been obtained, contains 14 explicit constraints and 46 implicit constraints. For both sets of design parameters, sensitivity analysis has been conducted on the deck slab thickness, which can be highly sensitive near the obtained optimum solution. Deviations of slab thickness on both the upper and lower side of its optimum value have been considered, reflecting realistic ranges of variation during construction, while the remaining parameters were kept unchanged. For small deviations from the optimum value, compliance with the explicit and implicit constraints has been examined, and variations in cost have been estimated. It is found that, without violating any constraint, the deck slab thickness obtained by the conventional method can be increased by up to 25 mm, whereas the slab thickness obtained by cost optimization can be increased by only 0.3 mm. This result suggests that slab thickness is less sensitive in the conventional design method. Therefore, for realistic design purposes, sensitivity analysis should be conducted for either design procedure of the girder and deck system.
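The perturbation procedure can be sketched generically: vary one design parameter around its chosen value, keep the others fixed, and record constraint compliance and cost change (the cost model and the constraint band below are invented stand-ins for the study's 14 explicit and 46 implicit constraints):

```python
# Hypothetical cost model and feasibility band for slab thickness (mm).
def cost(slab_thickness_mm):
    return 1000 + 2.5 * slab_thickness_mm        # invented linear cost model

def feasible(slab_thickness_mm):
    return 175 <= slab_thickness_mm <= 200       # invented constraint band

def sensitivity(base, deviations):
    """For each deviation, report feasibility and cost change at base + deviation."""
    rows = []
    for d in deviations:
        t = base + d
        rows.append((d, feasible(t), cost(t) - cost(base)))
    return rows

for deviation, ok, delta_cost in sensitivity(base=180, deviations=[-10, -5, 0, 5, 25]):
    print(f"{deviation:+4d} mm  feasible={ok}  delta_cost={delta_cost:+.1f}")
```

The width of the feasible window around the base value is exactly the quantity the abstract compares: 25 mm for the conventional design versus 0.3 mm for the cost-optimized one.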

Keywords: sensitivity analysis, optimum design, evolutionary operations, PC I-girder, deck system

Procedia PDF Downloads 137
5839 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is increasing significantly, and major investments are being made in the wind power generation industry as a leading source of clean energy. The wind energy sector is driven by the prediction of wind speed, which by its nature is highly stochastic. This study employs deep multi-fidelity Gaussian process regression (GPR) to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area for the period between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of including the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity GPR. The outcomes were compared using root mean square error (RMSE), and the results demonstrated a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and of pre-processing the signals in wind speed forecasting models.
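The pre-processing comparison can be sketched with stand-ins: a moving-average filter in place of the EWT, and signal-reconstruction RMSE in place of the full GPR pipeline (the synthetic series below is invented, not the RUNE data):

```python
import numpy as np

# Synthetic "wind speed": a slowly varying component plus measurement noise.
rng = np.random.default_rng(0)
t = np.arange(500)
signal = 8 + 2 * np.sin(2 * np.pi * t / 100)
observed = signal + rng.normal(0, 1.0, t.size)

def moving_average(x, w=9):
    """Simple denoiser standing in for the empirical wavelet transform."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

interior = slice(20, -20)  # avoid filter edge effects in the comparison
raw_rmse = rmse(observed[interior], signal[interior])
denoised_rmse = rmse(moving_average(observed)[interior], signal[interior])
print(raw_rmse, denoised_rmse)  # denoising recovers the signal more accurately
```

The point of the sketch is only the comparison structure: cleaner inputs lower the RMSE of whatever regressor consumes them, which is why the study pairs EWT denoising with the GPR forecaster.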

Keywords: data fusion, Gaussian process regression, signal denoise, temporal extrapolation

Procedia PDF Downloads 136
5838 An Evolutionary Perspective on the Role of Extrinsic Noise in Filtering Transcript Variability in Small RNA Regulation in Bacteria

Authors: Rinat Arbel-Goren, Joel Stavans

Abstract:

Cell-to-cell variations in transcript or protein abundance, called noise, may give rise to phenotypic variability between isogenic cells, enhancing the probability of survival under stress conditions. These variations may be introduced by post-transcriptional regulatory processes, such as the stoichiometric degradation of target transcripts by non-coding small RNAs in bacteria. As a model system, we study the iron homeostasis network in Escherichia coli, in which the RyhB small RNA regulates the expression of various targets. Using fluorescent reporter genes to detect protein levels and single-molecule fluorescence in situ hybridization to monitor transcript levels in individual cells allows us to compare noise at both the transcript and protein levels. The experimental results and computer simulations show that extrinsic noise, acting through a feed-forward loop configuration, buffers the increase in variability introduced at the transcript level by iron deprivation, illuminating the important role that extrinsic noise plays during stress. Surprisingly, extrinsic noise also decouples the fluctuations of two different targets, in spite of RyhB being a common upstream factor degrading both. Thus, under stress conditions, phenotypic variability increases through the decoupling of target fluctuations in the same cell rather than through an increase in the noise of each. We also present preliminary results on the adaptation of cells to prolonged iron deprivation in order to shed light on the evolutionary role of post-transcriptional downregulation by small RNAs.
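Separating intrinsic from extrinsic noise is commonly done with a dual-reporter-style decomposition; as a hedged sketch (simulated counts, not the RyhB measurements, and the study's own analysis may differ):

```python
import numpy as np

# Two targets share an "extrinsic" cell state; each also has its own
# Poisson (intrinsic) fluctuations. Parameters are invented.
rng = np.random.default_rng(1)
n = 5000
extrinsic = rng.gamma(shape=20, scale=1.0, size=n)   # shared cell state
x = rng.poisson(extrinsic * 5)                        # target/reporter 1 counts
y = rng.poisson(extrinsic * 5)                        # target/reporter 2 counts

mx, my = x.mean(), y.mean()
# Dual-reporter decomposition: covariance captures the shared (extrinsic)
# component; the mean squared difference captures the independent (intrinsic) one.
eta_ext2 = float(np.cov(x, y)[0, 1] / (mx * my))
eta_int2 = float(np.mean((x - y) ** 2) / (2 * mx * my))
print(eta_int2, eta_ext2)
```

In this toy setup the shared extrinsic component dominates; decoupling of two targets, as reported above, would show up as a drop in the covariance term even while each target's own noise persists.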

Keywords: cell-to-cell variability, Escherichia coli, noise, single-molecule fluorescence in situ hybridization (smFISH), transcript

Procedia PDF Downloads 164
5837 Constitutional Transition and Criminal Justice: Proposals for Reform of Kenya’s Youth Justice System Based on Restorative Justice Principles

Authors: M. Wangai

Abstract:

Following the promulgation of a new Constitution of Kenya in 2010, wide-ranging proposals for reform of the criminal justice system have been made. Proposed measures include a clear and separate system of dealing with juvenile offenders with a greater focus on rehabilitation and reintegration. As part of a broader constitutional transition, this article considers the contribution of restorative justice to reforming the youth justice system. The paper analyses Kenya’s juvenile justice legal framework measured against current international trends in youth justice. It identifies the first post-independence juvenile justice system as a remnant of the colonial period and notes that the post-2001 system is a marked improvement. More recent legal and institutional efforts to incorporate restorative justice are also examined. The paper advocates further development of the juvenile justice system by mainstreaming of restorative justice principles through national level legislative amendments. International and comparative perspectives are used to inform a diversion centered model of restorative justice. In addition, a case is made for the use of existing forms of alternative dispute resolution. Conscious of a tense political climate, the paper also proposes strategies to address challenges posed by a punitive penal environment, chiefly the linking of restorative justice to wider democratic goals and community spirit. The article concludes that restorative justice led juvenile justice reform will contribute to better treatment of young offenders under the criminal justice system and has the potential to set a new precedent for fair, sustainable and effective justice. Further, as part of far-reaching criminal justice reform, the proposed efforts may strengthen democratic progress in Kenya’s ensuing phase of political transition.

Keywords: constitutional transition, criminal justice, restorative justice, young offenders

Procedia PDF Downloads 148
5836 Analysing the Renewable Energy Integration Paradigm in the Post-COVID-19 Era: An Examination of the Upcoming Energy Law of China

Authors: Lan Wu

Abstract:

The declared transformation towards a 'new electricity system dominated by renewable energy' in China requires a cleaner electricity consumption mix with high shares of renewable energy-sourced electricity (RES-E). Unfortunately, the integration of RES-E into Chinese electricity markets remains a problem pending more robust legal support, as evidenced by the curtailment of wind and solar power as a consequence of integration constraints. The upcoming energy law of the PRC (energy law) is expected to provide such long-awaited support and to coordinate the existing diverse sector-specific laws, whose weak implementation has dampened the delivery of their desired regulatory effects. However, in the shadow of the COVID-19 crisis, it remains uncertain how this new energy law will bring synergies to RES-E integration, mindful of the significant impacts of the pandemic. Through the theoretical lens of the interplay between China's electricity reform and legislative development, the present paper investigates whether there is a paradigm shift in the energy law regarding renewable energy integration compared with the existing sector-specific energy laws. It examines the 2020 draft for comments on the energy law and analyses its relationship with sector-specific energy laws, focusing on RES-E integration. The comparison is drawn upon five key aspects of the RES-E integration issue: the status of renewables, marketisation, incentive schemes, consumption mechanisms, and access to power grids and dispatching. The analysis shows that it is reasonable to expect a more open and well-organized electricity market enabling the absorption of high shares of RES-E. The present paper concludes that a period of prosperous development of RES-E in the post-COVID-19 era can be anticipated with the legal support of the upcoming energy law. It contributes to understanding the signals China is sending regarding the transition towards a cleaner energy future.

Keywords: energy law, energy transition, electricity market reform, renewable energy integration

Procedia PDF Downloads 195
5835 The Relation between Cognitive Fluency and Utterance Fluency in Second Language Spoken Fluency: Studying Fluency through a Psycholinguistic Lens

Authors: Tannistha Dasgupta

Abstract:

This study explores the aspects of second language (L2) spoken fluency that are related to L2 linguistic knowledge and processing skill. It draws on Levelt's 'blueprint' of the L2 speaker, which discusses the cognitive issues underlying the act of speaking. However, L2 speaking assessments have largely neglected the underlying mechanisms involved in language production; emphasis is placed instead on the relationship between subjective ratings of L2 speech samples and objectively measured aspects of fluency. Hence, in this study, the relation between L2 linguistic knowledge and processing skill, i.e., Cognitive Fluency (CF), and objectively measurable aspects of L2 spoken fluency, i.e., Utterance Fluency (UF), is examined. The participants of the study are L2 learners of English studying at the high school level in Hyderabad, India. 50 participants with an intermediate level of proficiency in English performed several lexical retrieval and attention-shifting tasks to measure CF, and 8 oral tasks to measure UF. Each aspect of UF (speed, pause, and repair) was measured against the scores of CF to find out which aspects of UF are reliable indicators of CF. Quantitative analysis of the data shows that, among the three aspects of UF, speed is the best predictor of CF, while pause is only weakly related to CF. The study suggests that including the speed aspect of UF could make L2 fluency assessment more reliable, valid, and objective. Thus, incorporating the assessment of psycholinguistic mechanisms into L2 spoken fluency testing could result in fairer evaluation.
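The speed and pause aspects of UF are typically computed from timing data; as an illustrative sketch using standard measures (the study's exact metrics are not specified here, and the timings below are invented):

```python
# Toy timing data for one speech sample.
syllables = 42                   # syllables produced in the sample
speaking_time = 18.0             # seconds of actual speech, excluding pauses
pauses = [0.6, 1.1, 0.4]         # silent pause durations in seconds
total_time = speaking_time + sum(pauses)

speech_rate = syllables / total_time            # a common "speed" measure
articulation_rate = syllables / speaking_time   # speed excluding pause time
mean_pause = sum(pauses) / len(pauses)          # a common "pause" measure
print(round(speech_rate, 2), round(articulation_rate, 2), round(mean_pause, 2))
```

Measures like these are what get correlated against CF scores; the abstract's finding is that the speed-type measures track CF best.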

Keywords: attention-shifting, cognitive fluency, lexical retrieval, utterance fluency

Procedia PDF Downloads 711
5834 Digitalisation of the Railway Industry: Recent Advances in the Field of Dialogue Systems: Systematic Review

Authors: Andrei Nosov

Abstract:

This paper discusses the development directions of dialogue systems within the digitalisation of the railway industry, where technologies based on conversational AI are already being applied or could potentially be applied. Conversational AI is one of the most popular natural language processing (NLP) tasks, as it has great prospects for real-world applications today. At the same time, it is a challenging task, as it involves many areas of NLP that rest on complex computations and deep insights from linguistics and psychology. In this review, we focus on dialogue systems and their implementation in the railway domain. We comprehensively review state-of-the-art research results on dialogue systems and analyse them from three perspectives: the type of problem to be solved, the type of model, and the type of system. In particular, from the perspective of the type of task to be solved, we discuss characteristics and applications; this helps in understanding how to prioritise tasks. In terms of the type of model, we give an overview that will allow researchers to become familiar with how to apply models in dialogue systems. By analysing the types of dialogue systems, we propose an unconventional approach: in contrast to colleagues who traditionally contrast goal-oriented dialogue systems with open-domain systems, our view focuses on considering retrieval-based and generative approaches. Furthermore, the work comprehensively presents evaluation methods and datasets for dialogue systems in the railway domain to pave the way for future research. Finally, some possible directions for future research are identified based on recent research results.

Keywords: digitalisation, railway, dialogue systems, conversational AI, natural language processing, natural language understanding, natural language generation

Procedia PDF Downloads 63
5833 Effects of Foam Rolling with Different Application Volumes on the Isometric Force of the Calf Muscle with Consideration of Muscle Activity

Authors: T. Poppendieker, H. Maurer, C. Segieth

Abstract:

Over the past ten years, foam rolling has become a new trend in the fitness and health market and is a frequently used technique for self-massage. However, the scope of effects of foam rolling has only recently started to be researched and understood. The focus of this study is to examine the effects of prolonged foam rolling on muscle performance. Isometric muscle force was used as the parameter to determine the effect of the myofascial roller at two different application volumes. Besides maximal muscle force, data were also collected on muscle activation during all tests. Twenty-four (17 female, 7 male) healthy students with an average age of 23.4 ± 2.8 years were recruited. The study followed a cross-over pre-/post design in which the order of conditions was counterbalanced. The subjects performed a one-minute and a three-minute foam rolling application set on two separate days. Maximal isometric muscle force of the dominant calf was tested before and after the self-myofascial release application. The statistical software SPSS 22 was used to analyze the maximal isometric force of the calf muscle with a 2 x 2 (time of measurement x intervention) analysis of variance with repeated measures, with the significance level set at p ≤ 0.05. Significant p-values were found neither for the main effect of time of measurement (F(1,23) = .93, p = .36, f = .20) nor for the interaction of time of measurement x intervention (F(1,23) = 1.99, p = .17, f = 0.29). However, the effect sizes indicate a mean interaction effect, with a tendency toward greater pre-post improvements under the three-minute foam rolling condition. Changes in maximal force did not correlate with changes in EMG activity (r = .02, p = .95 in the short and r = -.11, p = .65 in the long rolling condition). 
Results support the findings of previous studies and suggest positive potential for use of the foam roller as a means of keeping muscle force at least at the same performance level while leading to an increase in flexibility.
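The pre/post comparison underlying each cell of the ANOVA can be sketched with a plain paired t statistic (the force values below are simulated, not the study's data):

```python
import numpy as np

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for pre/post measurements."""
    d = np.asarray(post, float) - np.asarray(pre, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Simulated maximal isometric force (N) for n = 24 subjects, with a small,
# noisy pre-to-post change, mimicking the null-ish result reported above.
rng = np.random.default_rng(2)
pre = rng.normal(1500, 120, 24)
post = pre + rng.normal(8, 40, 24)
t, df = paired_t(pre, post)
print(t, df)  # compare |t| against the critical value for df = 23
```

The repeated-measures ANOVA in the abstract generalizes this to both factors (time of measurement and intervention) at once, which is why the F tests carry df = (1, 23).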

Keywords: application volume differences, foam rolling, isometric maximal force, self-myofascial release

Procedia PDF Downloads 287
5832 Overall Function and Symptom Impact of Self-Applied Myofascial Release in Adult Patients With Fibromyalgia. A Seven-Week Pilot Study

Authors: Domenica Tambasco, Riina Bray, Sophia Jaworski, Gillian Grant, Celeste Corkery

Abstract:

Fibromyalgia is a chronic condition characterized by widespread musculoskeletal pain, fatigue, and reduced function. Management of symptoms includes medications, physical treatments, and mindfulness therapies. Myofascial release is a modality that has been successfully applied in various musculoskeletal conditions; however, to the authors' best knowledge, it is not yet recognized as a self-management therapy option in fibromyalgia. In this study, we investigated whether self-applied myofascial release (SMR) is associated with overall improved function and symptoms in fibromyalgia. Methods: Eligible adult patients with a confirmed diagnosis of fibromyalgia at Women's College Hospital were recruited to SMR. Sessions ran for 1 hour once a week for 7 weeks, led by the same two physiotherapists knowledgeable in this physical treatment modality. The main outcome measure was an overall impact score for function and symptoms based on the validated assessment tool for fibromyalgia, the Revised Fibromyalgia Impact Questionnaire (FIQR), measured pre- and post-intervention. Both descriptive and analytical methods were applied and reported. Results: We analyzed results using a paired t-test to determine whether there was a statistically significant difference in mean FIQR scores between initial (pre-intervention) and final (post-intervention) scores. A clinically significant difference in FIQR was defined as a reduction in score of 10 or more points. Conclusions: Our pilot study showed that SMR appeared to be a safe and effective intervention for our fibromyalgia participants, and the overall impact on function and symptoms occurred in only 7 weeks. Further studies with larger sample sizes comparing SMR to other physical treatment modalities (such as stretching) in an RCT are recommended.
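The clinical-significance rule stated above is straightforward to operationalize (the FIQR scores below are invented examples, not study data):

```python
# FIQR reduction of 10 or more points counts as clinically significant.
def clinically_significant(pre_fiqr, post_fiqr, threshold=10):
    return (pre_fiqr - post_fiqr) >= threshold

# Invented (pre, post) FIQR scores for three hypothetical participants.
participants = [(62, 48), (55, 51), (70, 58)]
responders = [clinically_significant(pre, post) for pre, post in participants]
print(responders)  # [True, False, True]
```

Counting responders by this rule complements the paired t-test, which only assesses the mean change across the group.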

Keywords: fibromyalgia, myofascial release, physical therapy, FIQR

Procedia PDF Downloads 76
5831 Using ANN in Emergency Reconstruction Projects Post Disaster

Authors: Rasha Waheeb, Bjorn Andersen, Rafa Shakir

Abstract:

Purpose: The purpose of this study is to avoid the delays that occur in emergency reconstruction projects, especially in post-disaster circumstances, whether natural or man-made, given their particular national and humanitarian importance. We present theoretical and practical concepts of project management in the construction industry that deal with a range of global and local trends. This study aims to identify the effective delay factors in construction projects in Iraq that affect time, cost, and quality, and to find the best solutions to address these delays by setting parameters that restore balance. Thirty projects in different areas of construction were selected as the sample for this study. Design/methodology/approach: This study discusses reconstruction strategies and the delays in time and cost caused by different delay factors in selected projects in Iraq (with Baghdad as a case study). A case study approach was adopted, with thirty construction projects of different types and sizes selected from the Baghdad region. Project participants provided data about the projects through a data collection instrument distributed as a survey. A mixed-methods approach was applied. Mathematical data analysis was used to construct models that predict delays in the time and cost of projects before they start, with artificial neural network (ANN) analysis selected as the mathematical approach. These models are mainly intended to help decision makers in construction projects find solutions to these delays before they cause any inefficiency in the project being implemented, and to remove obstacles thoroughly in order to develop this industry in Iraq. This approach was practiced using the data collected through the survey and questionnaire. 
Findings: The most important delay factors identified as leading to schedule overruns were contractor failure, redesign of plans and change orders, security issues, selection of low-price bids, weather factors, and owner failures. Some of these are quite in line with findings from similar studies in other countries/regions, but some are unique to the Iraqi project sample, such as security issues and low-price bid selection. Originality/value: We selected ANN analysis because ANNs have rarely been used in project management and had never been used in Iraq to find solutions for problems in the construction industry. This methodology can also be used for complicated problems for which there is no ready interpretation or solution: in some cases statistical analysis was conducted, and in other cases the problem did not follow a linear equation or showed only weak correlations, so we suggested using ANNs, which are suited to nonlinear problems, to find the relationship between input and output data; this proved very supportive.
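A minimal sketch of the ANN approach: a tiny feedforward network trained by batch gradient descent on invented delay-factor scores (the real models were built from the survey data and factor definitions, which are not reproduced here):

```python
import numpy as np

# Invented data: 60 projects, 4 normalized delay-factor scores each (e.g.,
# contractor risk, security risk); target is a normalized delay measure.
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (60, 4))
true_w = np.array([0.5, 0.3, 0.8, 0.1])
y = X @ true_w + rng.normal(0, 0.02, 60)

# One hidden tanh layer, linear output.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

for _ in range(2000):  # plain batch gradient descent via backpropagation
    h, pred = forward(X)
    err = pred - y
    gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    for p, g in ((W2, gW2), (b2, gb2), (W1, gW1), (b1, gb1)):
        p -= 0.1 * g

_, pred = forward(X)
rmse_final = float(np.sqrt(np.mean((pred - y) ** 2)))
print(rmse_final)  # training RMSE should be small after fitting
```

A trained model of this kind is what lets a decision maker score a planned project's delay risk from its factor profile before work starts.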

Keywords: construction projects, delay factors, emergency reconstruction, innovation, ANN, post disasters, project management

Procedia PDF Downloads 165
5830 Clustering Ethno-Informatics of Naming Village in Java Island Using Data Mining

Authors: Atje Setiawan Abdullah, Budi Nurani Ruchjana, I. Gede Nyoman Mindra Jaya, Eddy Hermawan

Abstract:

Ethnoscience views culture from a scientific perspective, which may help explain how people develop various forms of knowledge and belief, initially focusing on ecology and history. One of the areas studied in ethnoscience is ethno-informatics, the application of informatics to culture. In this study, the branch of informatics used is data mining: the automatic extraction of knowledge from large databases to obtain interesting patterns. The cultural application is described through a database of village names on the island of Java, obtained from the Geospatial Information Agency of Indonesia (BIG) in 2014. The purpose of this study is, first, to classify the village names of Java by word structure, including the prefix of the word, the syllables contained, and the complete word; second, to classify the meaning of the village names into specific categories and their role in community behavioural characteristics; and third, to visualize the village names on a location map to see the similarity of naming across provinces. We developed two theorems: an intersection theorem, which collects village names shared between provinces on the island of Java, and a set-composition theorem over the provinces of Java, used to identify the peculiarities of a study location. The methodology follows the Knowledge Discovery in Databases (KDD) process of data mining, comprising preprocessing, data mining, and post-processing. The results show that Javanese communities prioritize merit in how they live, work hard to achieve a more prosperous life, and value water and environmental sustainability.
Village names in adjacent provinces show a high degree of similarity and influence each other. Naming in Central Java, East Java, and West Java-Banten shows high cultural similarity, whereas Jakarta-Yogyakarta shows low similarity. The research yields a cultural character of communities embedded in the meaning of village names on the island of Java, a character that is expected to serve as a guide to people's daily behaviour on the island.
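The first classification step the abstract describes, grouping names by the structure of the word, can be sketched as grouping village names by a leading morpheme. The prefixes and village names below are illustrative Javanese toponym elements chosen for the example, not drawn from the BIG dataset, and the real study also classified by syllables and complete words.

```python
from collections import defaultdict

# Hypothetical prefix inventory for grouping Javanese village names.
PREFIXES = ("karang", "sido", "suka", "banyu")

def cluster_by_prefix(names):
    """Group names by the first matching prefix; unmatched names go to 'other'."""
    clusters = defaultdict(list)
    for name in names:
        key = next((p for p in PREFIXES if name.lower().startswith(p)), "other")
        clusters[key].append(name)
    return dict(clusters)

villages = ["Karanganyar", "Sidomulyo", "Sukamaju", "Banyuwangi",
            "Sidoarjo", "Sukabumi", "Cilacap"]
print(cluster_by_prefix(villages))
```

Applied per province, cluster memberships like these are what the intersection theorem would compare to measure naming similarity between adjacent provinces.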

Keywords: ethnoscience, ethno-informatics, data mining, clustering, Java island culture

Procedia PDF Downloads 283
5829 The Effect of Using EMG-Based Luna Neurorobotics for Strengthening of the Affected Side in Chronic Stroke Patients - A Retrospective Study

Authors: Surbhi Kaura, Sachin Kandhari, Shahiduz Zafar

Abstract:

Chronic stroke, characterized by persistent motor deficits, often necessitates comprehensive rehabilitation interventions to improve functional outcomes and mitigate long-term dependency. Luna neurorobotic devices, integrated with EMG feedback systems, provide an innovative platform for facilitating neuroplasticity and functional improvement in stroke survivors. This retrospective study investigates the impact of EMG-based Luna neurorobotic interventions on strengthening the affected side in chronic stroke patients. Stroke is a debilitating condition that, when not effectively treated, can result in significant deficits and lifelong dependency, and neglect of the affected limbs commonly leads to weakness in chronic cases. In rehabilitation, active patient participation activates the sensorimotor network during motor control far more than passive movement does. The study therefore assesses how electromyographically triggered (EMG-triggered) robotic treatment affects walking and ankle muscle force after an ischemic stroke, as well as the coactivation of agonist and antagonist muscles, which contributes to neuroplasticity through robotic biofeedback. Methods: EMG-based robotic techniques were used for the daily rehabilitation of chronic stroke patients, offering feedback and monitoring progress. Each patient received one session per day for two weeks; the intervention group underwent 45 minutes of robot-assisted training and exercise at the hospital, while the control group performed exercises at home. Eight participants with impaired motor function and gait after stroke were involved in the study.
EMG-based biofeedback exercises were administered with the LUNA neurorobotic machine, progressing from trigger-and-release mode to trigger-and-hold, and later to dynamic mode. Assessments at baseline and after two weeks included the Timed Up and Go (TUG) test, the 10-meter walk test, the Berg Balance Scale (BBS), gait parameters such as cadence and step length, upper-limb strength measured as the EMG threshold in microvolts, and force in newton-meters. Results: Motor strength and balance scores illustrate the benefits of EMG biofeedback following LUNA robotic therapy. In the left-hemiparetic group, strength increased post-rehabilitation: the mean TUG time fell from 72.4 seconds pre-rehabilitation to 42.4 ± 0.039 seconds post-rehabilitation, a significant difference (p < 0.05) reflecting a shorter task-completion time. Similarly, in the force-based task, dynamic knee-extension force increased from 18.2 Nm pre-rehabilitation to 31.26 Nm post-rehabilitation (paired t-test, p = 0.026), indicating increased knee-extensor strength after LUNA robotic rehabilitation. Finally, the EMG value for left ankle dorsiflexion rose from 5.11 µV at baseline to 43.4 ± 0.06 µV post-rehabilitation, indicating a higher threshold and an improved ability to recruit motor units. Conclusion: This study evaluated the impact of EMG- and dynamic-force-based rehabilitation devices on walking and on the strength of the affected side in chronic stroke patients, without nominal data comparisons among stroke patients. It also provides insight into including EMG-triggered neurorehabilitation robots in patients' daily rehabilitation.
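The pre/post comparison reported above rests on a paired t-test. As a minimal sketch of that analysis, the code below computes the paired t statistic by hand for hypothetical TUG times of an eight-patient group; the individual values are invented for illustration, since the abstract reports only group means.

```python
import math

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for pre/post measurements."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Hypothetical TUG times in seconds for eight patients, before and after therapy.
pre_tug = [75.0, 70.2, 68.9, 74.5, 71.3, 73.8, 69.5, 76.0]
post_tug = [44.1, 41.0, 40.2, 43.9, 42.5, 43.0, 41.8, 42.7]

t_stat, df = paired_t(pre_tug, post_tug)
print(round(t_stat, 1), df)  # a large positive t indicates times fell after therapy
```

A large t statistic at 7 degrees of freedom corresponds to a p-value well below 0.05, matching the kind of significant pre/post difference the study reports.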

Keywords: neurorehabilitation, robotic therapy, stroke, strength, paralysis

Procedia PDF Downloads 62
5828 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier

Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh

Abstract:

This study investigates combining Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Advanced NLP techniques preprocess the textual data of radiology reports to extract key features, which are then merged with image-derived data. The enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on identifying health issues and estimating case urgency. The findings show that combining NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advances over conventional diagnostic techniques. The results underscore the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering faster and more precise diagnostic estimates.
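The abstract does not specify the feature-extraction pipeline, but the preprocessing step it describes, turning free-text radiology reports into features a Random Forest can consume, can be sketched as keyword counting. The vocabulary and report below are invented for illustration and are not drawn from the Indiana University dataset; a real pipeline would use richer NLP features (and, per the study, LLM-derived ones).

```python
import re
from collections import Counter

# Hypothetical finding vocabulary for a radiology-report feature extractor.
FINDING_TERMS = {"opacity", "effusion", "cardiomegaly", "nodule", "normal"}

def report_features(text):
    """Count occurrences of each finding term, in sorted-term order."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t in FINDING_TERMS)
    return [counts.get(term, 0) for term in sorted(FINDING_TERMS)]

report = "Mild cardiomegaly with small left pleural effusion. No nodule."
print(report_features(report))
```

Vectors like these, concatenated with image-derived features, are the kind of merged input a Random Forest classifier would then be trained on to predict findings and urgency.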

Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems

Procedia PDF Downloads 46