Search results for: 3D models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6743

1103 Pathologies in the Left Atrium Reproduced Using a Low-Order Synergistic Numerical Model of the Cardiovascular System

Authors: Nicholas Pearce, Eun-jin Kim

Abstract:

Pathologies of the cardiovascular (CV) system remain a serious and deadly health problem for human society. Computational modelling provides a relatively accessible tool for diagnosis, treatment, and research into CV disorders. However, numerical models of the CV system have largely focused on the function of the ventricles, frequently overlooking the behaviour of the atria. Furthermore, in studying the pressure-volume relationship of the heart, a key diagnostic for cardiovascular pathologies, previous works often invoke the popular yet questionable time-varying elastance (TVE) method, which imposes the pressure-volume relationship instead of calculating it consistently. Despite the convenience of the TVE method, there have been various indications of its limitations and of the need to check its validity in different scenarios. A model of the combined left ventricle (LV) and left atrium (LA) is presented, which consistently accounts for various feedback mechanisms in the heart without resorting to the TVE method. Specifically, a synergistic model of the left ventricle is extended and modified to include the function of the LA. The synergy of the original model is preserved by modelling the electro-mechanical and chemical functions of the micro-scale myofibers of the LA and integrating them with the micro-scale and macro-organ-scale heart dynamics of the left ventricle and CV circulation. The atrioventricular node function is included and forms the conduction pathway for electrical signals between the atria and ventricle. The model reproduces the essential features of LA behaviour, such as the two-phase pressure-volume relationship and the classic figure-of-eight pressure-volume loops. Using this model, disorders in the internal cardiac electrical signalling are investigated by recreating the mechano-electric feedback (MEF), which is impossible when the time-varying elastance method is used. The effects of AV node block and slow conduction are then investigated in the presence of an atrial arrhythmia. It is found that electrical disorders and arrhythmia in the LA degrade the CV system by reducing cardiac output, power, and heart rate.
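For contrast with the consistent approach described above, a minimal lumped-parameter sketch of the conventional TVE formulation the authors argue against is shown below: the pressure-volume relation P = E(t)(V - V0) is imposed rather than emerging from myofiber electro-mechanics. All parameter values and the elastance waveform are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (illustrative only): a time-varying elastance (TVE) left
# ventricle coupled to a 2-element Windkessel afterload. The P-V relation
# is *imposed* via E(t); parameters are assumptions, not the paper's values.
import numpy as np
from scipy.integrate import solve_ivp

T = 0.8                        # cardiac period (s), assumed
def elastance(t):              # double-Hill-style activation shape, assumed
    tn = (t % T) / T
    act = (tn / 0.3) ** 1.9 / (1 + (tn / 0.3) ** 1.9) / (1 + (tn / 0.45) ** 21.9)
    return 0.08 + (2.5 - 0.08) * act   # mmHg/mL between E_min and E_max

def rhs(t, y):
    V, Pa = y                             # LV volume (mL), arterial pressure (mmHg)
    Plv = elastance(t) * (V - 10.0)       # imposed P-V relationship (V0 = 10 mL)
    q_in = max(0.0, (8.0 - Plv) / 0.05)   # mitral inflow when atrial P > LV P
    q_out = max(0.0, (Plv - Pa) / 0.01)   # aortic outflow through the valve
    dV = q_in - q_out
    dPa = (q_out - Pa / 1.0) / 1.5        # 2-element Windkessel: R = 1, C = 1.5
    return [dV, dPa]

sol = solve_ivp(rhs, (0, 10 * T), [120.0, 80.0], max_step=1e-3)
V = sol.y[0]
print(f"stroke volume ~ {V[-2000:].max() - V[-2000:].min():.1f} mL")
```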

Keywords: cardiovascular system, left atrium, numerical model, MEF

Procedia PDF Downloads 115
1102 Targeting Mre11 Nuclease Overcomes Platinum Resistance and Induces Synthetic Lethality in Platinum Sensitive XRCC1 Deficient Epithelial Ovarian Cancers

Authors: Adel Alblihy, Reem Ali, Mashael Algethami, Ahmed Shoqafi, Michael S. Toss, Juliette Brownlie, Natalie J. Tatum, Ian Hickson, Paloma Ordonez Moran, Anna Grabowska, Jennie N. Jeyapalan, Nigel P. Mongan, Emad A. Rakha, Srinivasan Madhusudan

Abstract:

Platinum resistance is a clinical challenge in ovarian cancer. Platinating agents induce DNA damage, which activates Mre11 nuclease-directed DNA damage signalling and response (DDR). Upregulation of DDR may promote chemotherapy resistance. Here we have comprehensively evaluated Mre11 in epithelial ovarian cancers. In a clinical cohort that received platinum-based chemotherapy (n=331), Mre11 protein overexpression was associated with an aggressive phenotype and poor progression-free survival (PFS) (p=0.002). In The Cancer Genome Atlas (TCGA) ovarian cancer cohort (n=498), Mre11 gene amplification was observed in a subset of serous tumours (5%) and correlated highly with Mre11 mRNA levels (p<0.0001). Altered Mre11 levels were linked with genome-wide alterations that can influence platinum sensitivity. At the transcriptomic level (n=1259), Mre11 overexpression was associated with poor PFS (p=0.003). ROC analysis showed an area under the curve (AUC) of 0.642 for response to platinum-based chemotherapy. Pre-clinically, Mre11 depletion by gene knockdown or blockade by a small-molecule inhibitor (Mirin) reversed platinum resistance in ovarian cancer cells and in 3D spheroid models. Importantly, Mre11 inhibition was synthetically lethal in platinum-sensitive XRCC1-deficient ovarian cancer cells and 3D spheroids. Selective cytotoxicity was associated with DNA double-strand break (DSB) accumulation, S-phase cell cycle arrest, and increased apoptosis. We conclude that pharmaceutical development of Mre11 inhibitors is a viable clinical strategy for platinum sensitization and synthetic lethality in ovarian cancer.

Keywords: MRE11, XRCC1, ovarian cancer, platinum sensitization, synthetic lethality

Procedia PDF Downloads 129
1101 A Review of Benefit-Risk Assessment over the Product Lifecycle

Authors: M. Miljkovic, A. Urakpo, M. Simic-Koumoutsaris

Abstract:

Benefit-risk assessment (BRA) is a valuable tool that takes place at multiple stages during a medicine's lifecycle, and this assessment can be conducted in a variety of ways. The aim was to summarize current BRA methods used during approval decisions and in post-approval settings, and to identify possible future directions. Relevant reviews, recommendations, and guidelines published in the medical literature and by regulatory agencies over the past five years have been examined. BRA involves the review of two dimensions: the dimension of benefits (determined mainly by therapeutic efficacy) and the dimension of risks (comprising the safety profile of a drug). Regulators, industry, and academia have developed various approaches, ranging from descriptive textual (qualitative) to decision-analytic (quantitative) models, to facilitate the BRA of medicines during the product lifecycle (from Phase I trials, through the authorization procedure, to post-marketing surveillance and health technology assessment for inclusion in public formularies). These approaches can be classified into the following categories: stepwise structured approaches (frameworks); measures for benefits and risks that are usually endpoint-specific (metrics); simulation techniques and meta-analysis (estimation techniques); and utility survey techniques to elicit stakeholders’ preferences (utilities). All these approaches share two common goals: to assist the analysis and to improve the communication of decisions, but each is subject to its own specific strengths and limitations. Before using any method, its utility, complexity, the extent to which it is established, and the ease of interpreting its results should be considered. Despite widespread and long-standing use, BRA is subject to debate, suffers from a number of limitations, and is still under development. The use of formal, systematic structured approaches to BRA for regulatory decision-making, and of quantitative methods to support BRA during the product lifecycle, is a standard practice in medicine that is subject to continuous improvement and modernization, not only in methodology but also in cooperation between organizations.

Keywords: benefit-risk assessment, benefit-risk profile, product lifecycle, quantitative methods, structured approaches

Procedia PDF Downloads 155
1100 Application of Response Surface Methodology in Optimizing Chitosan-Argan Nutshell Beads for Radioactive Wastewater Treatment

Authors: F. F. Zahra, E. G. Touria, Y. Samia, M. Ahmed, H. Hasna, B. M. Latifa

Abstract:

The presence of radioactive contaminants in wastewater poses a significant environmental and health risk, necessitating effective treatment solutions. This study investigates the optimization of chitosan-Argan nutshell beads for the removal of radioactive elements from wastewater, utilizing Response Surface Methodology (RSM) to enhance the treatment efficiency. Chitosan, known for its biocompatibility and adsorption properties, was combined with Argan nutshell powder to form composite beads. These beads were then evaluated for their capacity to remove radioactive contaminants from synthetic wastewater. The Box-Behnken design (BBD) under RSM was employed to analyze the influence of key operational parameters, including initial contaminant concentration, pH, bead dosage, and contact time, on the removal efficiency. Experimental results indicated that all tested parameters significantly affected the removal efficiency, with initial contaminant concentration and pH showing the most substantial impact. The optimized conditions, as determined by RSM, were found to be an initial contaminant concentration of 50 mg/L, a pH of 6, a bead dosage of 0.5 g/L, and a contact time of 120 minutes. Under these conditions, the removal efficiency reached up to 95%, demonstrating the potential of chitosan-Argan nutshell beads as a viable solution for radioactive wastewater treatment. Furthermore, the adsorption process was characterized by fitting the experimental data to various isotherm and kinetic models. The adsorption isotherms conformed well to the Langmuir model, indicating monolayer adsorption, while the kinetic data were best described by the pseudo-second-order model, suggesting chemisorption as the primary mechanism. This study highlights the efficacy of chitosan-Argan nutshell beads in removing radioactive contaminants from wastewater and underscores the importance of optimizing treatment parameters using RSM. The findings provide a foundation for developing cost-effective and environmentally friendly treatment technologies for radioactive wastewater.
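As an accessible illustration of the model-fitting step named above, the sketch below fits the Langmuir isotherm and the pseudo-second-order kinetic model to adsorption data with non-linear least squares. The data arrays are made-up placeholders, not the study's measurements.

```python
# Hedged sketch: fitting the Langmuir isotherm and pseudo-second-order (PSO)
# kinetic model mentioned in the abstract. All data values are invented.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5, 10, 20, 35, 50, 80])      # equilibrium conc. (mg/L), assumed
qe = np.array([8, 14, 21, 26, 29, 32])      # adsorbed amount (mg/g), assumed

def langmuir(Ce, qm, KL):                   # monolayer adsorption isotherm
    return qm * KL * Ce / (1 + KL * Ce)

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[40, 0.05])
print(f"Langmuir: qm={qm:.1f} mg/g, KL={KL:.3f} L/mg")

t = np.array([5, 15, 30, 60, 90, 120])      # contact time (min), assumed
qt = np.array([10, 18, 24, 28, 30, 31])     # uptake over time (mg/g), assumed

def pso(t, qe_k, k2):                       # pseudo-second-order kinetics
    return k2 * qe_k**2 * t / (1 + k2 * qe_k * t)

(qe_k, k2), _ = curve_fit(pso, t, qt, p0=[32, 0.01])
print(f"PSO: qe={qe_k:.1f} mg/g, k2={k2:.4f} g/(mg*min)")
```

A good Langmuir fit supports the monolayer-adsorption interpretation, while a good PSO fit points to chemisorption, as reported in the abstract.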

Keywords: adsorption, argan nutshell, beads, chitosan, mechanism, optimization, radioactive wastewater, response surface methodology

Procedia PDF Downloads 32
1099 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving

Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries

Abstract:

Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact pose a constant challenge for manufacturers of weaving machines. Current technological developments aim at low energy costs, low environmental impact, high productivity, and constant product quality. The method's high energy consumption can be ascribed to its large demand for compressed air. An energy efficiency method is applied to the air jet weaving technology. The method identifies and classifies the main relevant energy consumers and processes from the exergy point of view, leading to the identification of energy efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is treated as the central requirement to be satisfied. The initial phase of the method consists of an analysis of the state of the art of the main weft insertion components in order to prioritize the most energy-demanding components and processes. The identified major components are investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles within the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools, such as FEM analysis, CFD simulation models, and experimental analysis, are used to arrive at a more energy-efficient design of the components involved in filling insertion. A new concept for the metal strip of the profiled reed is developed, which reduces the machine's energy consumption. Based on a parametric and aerodynamic study, the designed reed transmits higher values of flow power to the filling yarn. The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.

Keywords: air jet weaving, aerodynamic simulation, energy efficiency, experimental validation, weft insertion

Procedia PDF Downloads 197
1098 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models

Authors: Ravi Ande, Mousumi Hazari

Abstract:

One of the common geophysical techniques for mapping subsurface geo-electrical structures, for extensive hydro-geological research, and for engineering and environmental geophysics applications is the use of time-domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings. A large loop TEM system consists of a large transmitter loop for energising the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at an arbitrary point inside or outside the source loop. In general, data can be acquired with a large loop source in one of several configurations: with the receiver at the center point of the loop (central-loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the expressions for the EM fields, as compared to the in-loop and offset-loop systems, the central-loop system (for ground surveys) and the coincident-loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping groundwater contamination caused by hazardous waste, and for estimating the thickness of permafrost layers. Because a proper analytical expression for the TEM response over a layered earth model does not exist for the large loop TEM system, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. The forward computation is carried out in the frequency domain using the EMLCLLER algorithm, which modifies the forward calculation scheme in NLSTCI to compute frequency-domain responses before converting them to the time domain with Fourier cosine and/or sine transforms.
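The frequency-to-time conversion step can be illustrated numerically. The sketch below is not the EMLCLLER/NLSTCI code; it only demonstrates the cosine-transform identity for a causal response, f(t) = (2/pi) * integral of Re[F(w)] cos(wt) dw, on a first-order system whose time-domain answer, exp(-t/tau)/tau, is known analytically.

```python
# Minimal numeric sketch of frequency-to-time conversion via a Fourier
# cosine transform, verified against an analytic first-order response.
import numpy as np

tau = 1e-3                                   # decay constant (s), assumed
w = np.linspace(0.5, 1e6, 2_000_000)         # angular-frequency grid (rad/s)
dw = w[1] - w[0]
ReF = 1.0 / (1.0 + (w * tau) ** 2)           # Re part of F(w) = 1/(1 + i*w*tau)

for t in (0.5e-3, 1e-3, 2e-3):
    f_num = (2 / np.pi) * np.sum(ReF * np.cos(w * t)) * dw   # rectangle rule
    f_ref = np.exp(-t / tau) / tau                           # analytic answer
    print(f"t={t:.1e}s  numeric={f_num:9.2f}  analytic={f_ref:9.2f}")
```

In the actual inversion scheme, Re[F(w)] would be the layered-earth frequency response computed by the forward code rather than an analytic expression.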

Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine transform

Procedia PDF Downloads 92
1097 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities

Authors: Kung-Jen Tu, Danny Vernatha

Abstract:

To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing the ‘BIM based Energy Management Support System’ (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment, (2) electricity sub-meters (constantly logging lighting, HVAC, and socket electricity consumption of each room), (3) BIM models of all rooms within individual departments’ facilities, (4) a data warehouse (for storing occupancy status and logged electricity consumption data), (5) a building energy management system that provides energy managers with various energy management functions, and (6) an energy simulation tool (such as eQuest) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency evaluated. Through the building energy management system, the energy manager is able to (a) obtain a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) receive energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (a data mining technique is employed to analyze the relation between the space occupancy pattern and the current equipment settings to indicate an anomaly, such as when appliances are on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical profiles. The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and the implementation results are presented to illustrate how it can assist individual departments within universities in their energy management tasks.
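A hedged sketch of the anomaly-detection idea in component (c) is given below: flag rooms whose logged consumption deviates from the simulated standard while the occupancy sensors report the room empty. The field names and the tolerance factor are hypothetical, not the BIM-EMSS schema.

```python
# Hedged sketch: flag "appliances on without occupancy" anomalies by
# comparing sub-metered actual consumption with simulated standard values.
import pandas as pd

log = pd.DataFrame({
    "room":         ["A101", "A101", "A102", "A102"],
    "occupied":     [False,  True,   False,  False],
    "actual_kwh":   [1.8, 2.1, 0.1, 1.5],   # from electricity sub-meters
    "standard_kwh": [0.2, 2.0, 0.1, 0.2],   # from the energy simulation tool
})

TOLERANCE = 1.5  # flag when actual exceeds standard by this factor, assumed
log["anomaly"] = (~log["occupied"]) & (log["actual_kwh"] > TOLERANCE * log["standard_kwh"])
print(log[log["anomaly"]])   # rooms consuming energy while unoccupied
```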

Keywords: database, electricity sub-meters, energy anomaly detection, sensor

Procedia PDF Downloads 307
1096 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management

Authors: Berk Ecer, Ebru Akcapinar Sezer

Abstract:

Traditional intersection management models, such as unsignalized or signalized intersections, are not the most effective way of passing vehicles through intersections if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In their AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management is investigated further. We extend their work by adding a potential-based lane organization layer. In order to distribute vehicles evenly across lanes, this layer triggers vehicles to analyze nearby lanes and change lanes when another lane offers an advantage. This behavior can be observed in real life: drivers change lanes based on intuition, and the basic intuition for selecting the correct lane is to choose a less crowded one in order to reduce delay. We model that behavior without any change to the AIM workflow (a sketch of the mechanism is given below). Experiment results show that intersection performance is directly connected with the distribution of vehicles across the lanes of the roads entering the intersection. We see the advantage of handling lane management with a potential-based approach in performance metrics such as average intersection delay and average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management; our study draws attention to this parameter and suggests a solution for it. We observed that regulating AIM's inputs (the vehicles in each lane) was an effective contribution to intersection management. The PLO-AIM model outperforms AIM in evaluation metrics such as average intersection delay and average travel time for reasonable traffic rates, between 600 and 1,300 vehicles/hour per lane. The proposed model reduced the average travel time by 0.2%-17.3% and the average intersection delay by 1.6%-17.1% in the 4-lane and 6-lane scenarios.
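The sketch below illustrates a potential-based lane-change rule of the kind described above: each approaching vehicle evaluates a "potential" for its own and adjacent lanes (here, simply queue length weighted by distance to the stop line) and switches when a neighbour offers a clear advantage. The exact potential function of PLO-AIM is not reproduced; this only demonstrates the mechanism.

```python
# Hedged sketch of potential-based lane organization: vehicles move toward
# lower-potential (less crowded) adjacent lanes to reduce delay.
from typing import List

def lane_potential(queue: List[float]) -> float:
    # queue holds each queued vehicle's distance to the stop line (m);
    # nearer and longer queues yield higher potential (less attractive lane)
    return sum(1.0 / max(d, 1.0) for d in queue)

def choose_lane(current: int, lanes: List[List[float]], threshold: float = 0.2) -> int:
    best = current
    for cand in (current - 1, current + 1):          # only adjacent lanes
        if 0 <= cand < len(lanes):
            gain = lane_potential(lanes[current]) - lane_potential(lanes[cand])
            if gain > threshold and lane_potential(lanes[cand]) < lane_potential(lanes[best]):
                best = cand
    return best

lanes = [[5, 12, 20], [8], [3, 6, 9, 15]]   # queued-vehicle distances per lane
print(choose_lane(current=2, lanes=lanes))   # -> 1 (least crowded neighbour)
```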

Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach

Procedia PDF Downloads 139
1095 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures

Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk

Abstract:

In recent years, the global financial crisis, economic instability, and large fluctuations in agricultural commodity prices have led to increased concerns about volatility transmission among commodities. The problem is exacerbated when the volatility of one commodity is driven by the price fluctuations of others, making hedging decisions both costly and ineffective. This paper therefore analyzes the volatility spillover effects among major agricultural commodities, including corn, soybeans, wheat, and rice, to help commodity suppliers hedge their portfolios and manage their risk and co-volatility. We provide a switching-regime approach to analyzing volatility spillovers under different economic conditions, namely economic upturns and downturns. In particular, we investigate the relationships and volatility transmission between these commodities under the different conditions. We propose a copula-based multivariate Markov-switching GARCH model with two regimes that depend on economic conditions, and perform a simulation study to check the accuracy of the proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families: two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation to estimate our models and compare the performance of these copulas using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the application to agricultural commodities, weekly data from 4 January 2005 to 1 September 2016, covering 612 observations, are used. The empirical results indicate that the volatility spillover effects among cereal futures differ in response to different economic conditions. In addition, the results on hedge effectiveness suggest optimal cross-hedge strategies for the different economic conditions, especially upturns and downturns.
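The copula-selection step can be sketched compactly. Below, one of the Archimedean families named above (Clayton) is fitted by maximum likelihood and scored with AIC; in the actual study the pseudo-observations would come from GARCH-filtered returns, whereas here they are simulated placeholders.

```python
# Hedged sketch: MLE fit of a Clayton copula and its AIC, the selection
# criterion used in the paper. Data are simulated, not commodity returns.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 612                                   # matches the paper's sample size
z = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=n)
u, v = (np.argsort(np.argsort(z, axis=0), axis=0) + 0.5).T / n   # ranks -> (0,1)

def neg_loglik(theta):                    # Clayton copula log-density, theta > 0
    s = u ** -theta + v ** -theta - 1.0
    ll = (np.log1p(theta) - (theta + 1) * (np.log(u) + np.log(v))
          - (2 + 1 / theta) * np.log(s))
    return -ll.sum()

res = minimize_scalar(neg_loglik, bounds=(0.01, 20.0), method="bounded")
aic = 2 * 1 + 2 * res.fun                 # k = 1 parameter
print(f"theta_hat={res.x:.2f}, AIC={aic:.1f}")
```

Repeating this for each of the six families and keeping the lowest AIC/BIC mirrors the comparison described in the abstract.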

Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach

Procedia PDF Downloads 202
1094 Using a Train-the-Trainer Model to Deliver Post-Partum Haemorrhage Simulation in Rural Uganda

Authors: Michael Campbell, Malaz Elsaddig, Kevin Jones

Abstract:

Background: Despite encouraging progress, global maternal mortality has remained stubbornly high since the declaration of the Millennium Development Goals. Sub-Saharan Africa accounts for well over half of maternal deaths, with post-partum haemorrhage (PPH) being the leading cause. ‘In house’ simulation training delivered by local doctors may be a sustainable approach for improving emergency obstetric care. The aim of this study was to evaluate the use of a Train-the-Trainer (TtT) model in a rural Ugandan hospital to ascertain whether it can feasibly improve practitioners’ management of PPH. Methods: Three Ugandan doctors underwent a training course to enable them to design and deliver simulation training. These doctors used MamaNatalie® models to simulate PPH scenarios for midwives, nurses, and medical students. The main outcome was improvement in participants’ knowledge and confidence, assessed using self-reported scores on a 10-point scale. Results: The TtT model produced significant improvements in the confidence and knowledge scores of the ten participants. The mean confidence score rose significantly (p=0.0005) from 6.4 to 8.6 following the simulation training. There was also a significant increase in the mean knowledge score, from 7.2 to 9.0 (p=0.04). Medical students demonstrated the greatest overall increase in confidence scores, whilst increases in knowledge scores were largest amongst nurses. Conclusions: This study demonstrates that a TtT model can be used in a low-resource setting to improve healthcare professionals’ confidence and knowledge in managing obstetric emergencies. The Train-the-Trainer model represents a sustainable approach to addressing skill deficits in low-resource settings. We believe that its expansion across healthcare institutions in Sub-Saharan Africa will help to reduce the region’s high maternal mortality rate and move a step closer to achieving the ambitions of the Millennium Development Goals.

Keywords: low resource setting, post-partum haemorrhage, simulation training, train the trainer

Procedia PDF Downloads 177
1093 Sparse Representation Based Spatiotemporal Fusion Employing Additional Image Pairs to Improve Dictionary Training

Authors: Dacheng Li, Bo Huang, Qinjin Han, Ming Li

Abstract:

Remotely sensed imagery with both high spatial and high temporal resolution, which is hard to acquire with current land observation satellites, has been considered a key resource for monitoring environmental changes at both global and local scales. Given the limited availability of high spatial-resolution observations, a challenging line of research called spatiotemporal fusion has developed, which generates high-spatiotemporal-resolution images by exploiting auxiliary low spatial-resolution data available at high observation frequency. However, a majority of spatiotemporal fusion approaches suffer from restrictive assumptions, empirical but unstable parameters, low accuracy, or inefficient performance. Although spatiotemporal fusion based on sparse representation theory has advantages in capturing reflectance changes, stability, and execution efficiency (even more so when overcomplete dictionaries have been pre-trained), obtaining a high-accuracy dictionary and characterizing its effect on fusion results are still open issues. In this paper, we introduce additional image pairs (here each image-pair comprises a Landsat Operational Land Imager and a Moderate Resolution Imaging Spectroradiometer acquisition covering part of Baotou, China) only into the coupled dictionary training process based on the K-SVD (K-means Singular Value Decomposition) algorithm, and attempt to improve the fusion results of two existing sparse-representation-based fusion models (utilizing one and two available image-pairs, respectively). The results show that additional eligible image pairs tend to produce a more accurate overcomplete dictionary, which generally indicates a better image representation and in turn contributes to effective fusion performance, provided the added image-pair has seasonal aspects and spatial structure features similar to those of the original image-pair. It is, therefore, reasonable to construct a multi-dictionary training pattern for generating a series of high spatial-resolution images from limited acquisitions.
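The enlarged-training-set idea can be sketched with off-the-shelf tools. Below, patches pooled from several images train an overcomplete dictionary; note that scikit-learn's online dictionary learning is used as a stand-in for K-SVD (the paper's algorithm), and the random "images" are placeholders for Landsat/MODIS acquisitions.

```python
# Hedged sketch: train an overcomplete dictionary on patches pooled from
# multiple image-pairs. MiniBatchDictionaryLearning is a stand-in for K-SVD.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.RandomState(0)
image_pairs = [rng.rand(64, 64) for _ in range(3)]   # stand-ins for acquisitions

patches = np.vstack([
    extract_patches_2d(img, (7, 7), max_patches=500, random_state=0).reshape(-1, 49)
    for img in image_pairs                            # pooling extra pairs enlarges
])                                                    # the training set
patches -= patches.mean(axis=1, keepdims=True)        # remove patch DC component

dico = MiniBatchDictionaryLearning(n_components=128,  # overcomplete: 128 > 49
                                   alpha=1.0, random_state=0)
D = dico.fit(patches).components_
print(D.shape)                                        # (128, 49) dictionary atoms
```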

Keywords: spatiotemporal fusion, sparse representation, K-SVD algorithm, dictionary learning

Procedia PDF Downloads 261
1092 Dao Din Student Activists: From Hope to Victims under the Thai Society of Darkness

Authors: Siwach Sripokangkul, Autthapon Muangming

Abstract:

The Dao Din group is a gathering of students from the Faculty of Law, Khon Kaen University, a leading university in the northeast of Thailand. It has been one of the most prominent student movements of the past four decades, since the bloody massacre of the 6th of October 1976. Since 2009, the group has gathered to oppose and protest against various capitalist-run projects that have damaged the environment. The students became heroes in Thai society and received support from various groups, especially the middle class, who regarded the students as role models for the youth; subsequently, the Dao Din group received numerous awards between 2011 and 2013. However, the group opposed the military coup d’état of 2014 and the subsequent military junta. Under the military dictatorship regime (2014-present), security officials have hunted, insulted, arrested, and jailed members of the group many times, amidst silence from most of the middle class. Therefore, this article asks why the Dao Din group, once the hero and hope of Thai society, became a political victim in only a few years. The study methods used are the analysis of documentaries and news articles, and interviews with representatives of the Dao Din group. The author argues that Thailand’s middle class previously demonstrated a positive perception of the Dao Din group precisely because the group had earlier opposed policies of the elected Yingluck Shinawatra government, which most of the middle class already despised. However, once the Dao Din group began to protest against the anti-Yingluck military government, the middle class turned to harsh criticism of it. It can thus be concluded that the Thai middle class tends to put its partisan interests ahead of a civil society group that has been critical of elected as well as military administrations, leading the middle class to support the demolition of Thai democracy. Such a middle-class characteristic not only provides a strong bulwark for the perpetuation of military rule but also destroys a civil society group of young people who should be the future hope of the nation, leaving them victims of the Thai society of darkness.

Keywords: Dao Din student activists, the military coup d’état of 2014, Thai politics, human rights violations

Procedia PDF Downloads 224
1091 Risk Management in Islamic Micro Finance Credit System for Poverty Alleviation from Qualitative Perspective

Authors: Liyu Adhi Kasari Sulung

Abstract:

Poverty has been a major problem in Indonesia. Islamic microfinance (IMF) institutions named Baitul Maal wat Tamwil (BMT) play a prominent role in eradicating it. Indonesia, as the biggest Muslim country, has many successfully applied products, such as the widely adopted group-based lending approach, flexible financing for farmers, and gold pawning. The problems related to these models lie in operational risk management and the internal control system (ICS). A proper ICS helps an organization prevent bad financing by detecting errors and irregularities in its operations. This study aims to identify a proper risk management scheme for the credit system in BMT and to rank the internal control system at every stage. Risk management variables were obtained in initial In-Depth Interviews (IDI) and Focus Group Discussions (FGD) with Shariah supervisory boards, boards of directors, and operational managers. A survey was then conducted covering nationwide data: West Java, South Sulawesi, and West Nusa Tenggara. Moreover, content analysis is employed to build the relationships among these variables. The findings show that risk management in the Indonesian credit system involves ex-ante, credit-process, and ex-post strategies. Ex-ante control consists of Shariah compliance, surveys, group leader references, and Islamic forming orientation. The credit process involves saving, collateral, joint liability, loan repayment, and credit installment controlling. Finally, ex-post control includes Shariah evaluation, credit evaluation, grace periods, and low installment provisions. In addition, the internal control order ranks the three stages by priority: the credit process first, ex-post control second, and ex-ante control last.

Keywords: internal control system, islamic micro finance, poverty, risk management

Procedia PDF Downloads 408
1090 Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models

Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan

Abstract:

Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing the hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between hearing- and speech-impaired communities and those who do not understand sign language. Due to increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past 10 years. Technological advancements in the fields of Artificial Intelligence and Machine Learning have made it more practical and feasible to create accurate SLR systems. This paper presents a distinct approach to SLR by framing it as a video classification problem using Deep Learning (DL), whereby a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) is used. This research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features of each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions. These features are then fed into an RNN that learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with more videos from the healthcare domain, and the model is evaluated on it. Our model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with the CNN-RNN architecture. This approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies.
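A hedged sketch of the CNN-RNN video-classification architecture described above is shown below (not the authors' exact network): a small CNN is applied to every frame via TimeDistributed, and an LSTM models the temporal dependencies. The input shape and class count are placeholders, not the INCLUDE dataset specification.

```python
# Hedged sketch: per-frame CNN features fed into an LSTM for video
# classification, mirroring the CNN-RNN design described in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES, FRAMES, H, W, C = 50, 16, 64, 64, 3     # assumed dimensions

cnn = models.Sequential([                             # per-frame spatial features
    layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

model = models.Sequential([
    layers.TimeDistributed(cnn, input_shape=(FRAMES, H, W, C)),
    layers.LSTM(128),                                 # temporal dependencies
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one output per sign
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```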

Keywords: sign language recognition, deep learning, convolutional neural network, recurrent neural network

Procedia PDF Downloads 27
1089 Building Education Leader Capacity through an Integrated Information and Communication Technology Leadership Model and Tool

Authors: Sousan Arafeh

Abstract:

Educational systems and schools worldwide are increasingly reliant on information and communication technology (ICT). Unfortunately, most educational leadership development programs do not offer formal curricular and/or field experiences that prepare students for managing ICT resources, personnel, and processes. The result is a steep learning curve for the leader and his/her staff and dissipated organizational energy that compromises desired outcomes. To address this gap in education leaders’ development, Arafeh’s Integrated Technology Leadership Model (AITLM) was created. It is a conceptual model and tool that educational leadership students can use to better understand the ICT ecology that exists within their schools. The AITL Model consists of six 'infrastructure types' where ICT activity takes place: technical infrastructure, communications infrastructure, core business infrastructure, context infrastructure, resources infrastructure, and human infrastructure. These six infrastructures are further divided into 16 key areas that need management attention. The AITL Model was created by critically analyzing existing technology/ICT leadership models and working to make something more authentic and comprehensive regarding school leaders’ purview and experience. The AITL Model then served as a tool when it was distributed to over 150 educational leadership students who were asked to review it and qualitatively share their reactions. Students said the model presented crucial areas of consideration that they had not been exposed to before and that the exercise of reviewing and discussing the AITL Model as a group was useful for identifying areas of growth that they could pursue in the leadership development program and in their professional settings. While development in all infrastructures and key areas was important for students’ understanding of ICT, they noted that they were least aware of the importance of the intangible area of the resources infrastructure. The AITL Model will be presented and session participants will have an opportunity to review and reflect on its impact and utility. Ultimately, the AITL Model is one that could have significant policy and practice implications. At the very least, it might help shape ICT content in educational leadership development programs through curricular and pedagogical updates.

Keywords: education leadership, information and communications technology, ICT, leadership capacity building, leadership development

Procedia PDF Downloads 116
1088 Modelling Dengue Disease With Climate Variables Using Geospatial Data For Mekong River Delta Region of Vietnam

Authors: Thi Thanh Nga Pham, Damien Philippon, Alexis Drogoul, Thi Thu Thuy Nguyen, Tien Cong Nguyen

Abstract:

The Mekong River Delta region of Vietnam is recognized as one of the areas most vulnerable to climate change, owing to flooding and sea-level rise, and therefore faces an increased burden of climate-related diseases. Changes in temperature and precipitation are likely to alter the incidence and distribution of vector-borne diseases such as dengue fever. In this region, the dengue epidemic peaks around July to September, during the rainy season, and climate is believed to be an important factor in dengue transmission. This study aims to enhance the capacity for dengue prediction by modelling the relationship of dengue incidence with climate and environmental variables for the Mekong River Delta of Vietnam during 2005-2015. Mathematical models of vector-host infectious disease, covering larvae, mosquitoes, and humans, were used to calculate the impacts of climate on dengue transmission, incorporating geospatial data as model input. Monthly dengue incidence data were collected at the provincial level. Precipitation data were extracted from the GSMaP (Global Satellite Mapping of Precipitation) satellite observations; land surface temperature and land cover data were taken from MODIS. The seasonal reproduction number was estimated to evaluate the potential, severity, and persistence of dengue infection, while the final infected number was derived to check for dengue outbreaks. The results show that dengue infection depends on the seasonal variation of climate variables, with a peak during the rainy season, and the predicted dengue incidence follows this dynamic well across the whole studied region. However, the largest outbreak, in 2007, was not captured by the model, reflecting nonlinear dependences of transmission on climate. Other possible effects are discussed to address the limitations of the model. This suggests the need to consider both climate variables and other sources of variability across temporal and spatial scales.
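A hedged sketch of a vector-host transmission model of the kind described above follows: an SIR human population coupled to an SI mosquito population, with a temperature-modulated biting rate standing in for the climate forcing. All rates are illustrative assumptions, not the paper's calibrated values.

```python
# Hedged sketch: climate-forced vector-host (human SIR + mosquito SI) ODEs.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    Sh, Ih, Rh, Sv, Iv = y
    temp = 27 + 3 * np.sin(2 * np.pi * t / 365)        # seasonal temperature (C)
    a = 0.3 * max(0.0, 1 - ((temp - 28) / 6) ** 2)     # biting rate peaks near 28C
    b_hv, b_vh = 0.4, 0.4                              # transmission probabilities
    mu_v, g = 1 / 14, 1 / 7                            # mosquito death, human recovery
    Nh, lam = Sh + Ih + Rh, mu_v * (Sv + Iv)           # constant vector births
    return [-a * b_hv * Sh * Iv / Nh,
            a * b_hv * Sh * Iv / Nh - g * Ih,
            g * Ih,
            lam - a * b_vh * Sv * Ih / Nh - mu_v * Sv,
            a * b_vh * Sv * Ih / Nh - mu_v * Iv]

y0 = [9_990, 10, 0, 50_000, 100]                       # people and mosquitoes
sol = solve_ivp(rhs, (0, 730), y0, dense_output=True)  # two years, daily scale
print(f"peak human infections ~ {sol.y[1].max():.0f}")
```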

Keywords: infectious disease, dengue, geospatial data, climate

Procedia PDF Downloads 383
1087 Heterogeneous-Resolution and Multi-Source Terrain Builder for CesiumJS WebGL Virtual Globe

Authors: Umberto Di Staso, Marco Soave, Alessio Giori, Federico Prandi, Raffaele De Amicis

Abstract:

The increasing availability of information about earth surface elevation (Digital Elevation Models, DEMs) generated from different sources (remote sensing, aerial images, Lidar) poses the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to the high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the earth. Hence the need to merge large-scale height maps, which are typically made available for free at the worldwide level, with very specific high-resolution datasets. On the other hand, the third dimension improves the user experience and the quality of data representation, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environment monitoring, etc. Open-source 3D virtual globes, a trending topic in Geovisual Analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. Typically, however, 3D virtual globes such as CesiumJS do not offer an open-source tool for generating a terrain elevation data structure from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called “Terrain Builder”. This tool is able to merge heterogeneous-resolution datasets and to provide a multi-resolution worldwide terrain service fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-in.
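A hedged sketch of the heterogeneous-resolution merge at the core of such a Terrain Builder is shown below, using rasterio's merge to give the finer DEM precedence and fall back to the coarser worldwide DEM elsewhere. The file names and target resolution are placeholders, not the paper's pipeline.

```python
# Hedged sketch: merge a high-resolution local DEM with a coarse global DEM.
import rasterio
from rasterio.merge import merge

with rasterio.open("lidar_dem_1m.tif") as fine, \
     rasterio.open("global_dem_30m.tif") as coarse:
    # method="first": where the datasets overlap, keep the first (finer) values
    mosaic, transform = merge([fine, coarse], res=fine.res, method="first")
    profile = fine.profile
    profile.update(height=mosaic.shape[1], width=mosaic.shape[2],
                   transform=transform)

with rasterio.open("merged_dem.tif", "w", **profile) as dst:
    dst.write(mosaic)
# The mosaic would then be cut into multi-resolution TMS tiles (e.g., the
# heightmap terrain format) for streaming to CesiumJS.
```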

Keywords: Terrain Builder, WebGL, Virtual Globe, CesiumJS, Tiled Map Service, TMS, Height-Map, Regular Grid, Geovisual Analytics, DTM

Procedia PDF Downloads 426
1086 Associations between Sharing Bike Usage and Characteristics of Urban Street Built Environment in Wuhan, China

Authors: Miao Li, Mengyuan Xu

Abstract:

As a low-carbon travel mode, bicycling has drawn increasing political interest in the contemporary Chinese urban context, and public sharing bikes have become the most popular form of bike usage in China. This research aims to explore the spatial-temporal relationship between sharing bike usage and different characteristics of the urban street built environment. Street segments, delimited by street intersections, were used as the analytic unit of the street built environment. The sharing bike usage data comprise a total of 2.64 million samples: the entire sharing bike distribution data recorded on two days in 2018 within a neighborhood of 185.4 hectares in the city of Wuhan, China. These data are assigned to the 97 urban street segments in this area based on their geographic location. The built environment variables used in this research are categorized into three sections: 1) street design characteristics, such as street width, street greenery, and types of bicycle lanes; 2) the condition of other public transportation, such as the availability of metro stations; 3) street function characteristics, described by the categories and density of points of interest (POIs) along the segments. Spatial Lag Models (SLM) were used to reveal the relationships between specific built environment characteristics of urban streets and the likelihood of sharing bike usage, overall and in different periods of the day. The results show: 1) there is spatial autocorrelation in the sharing bike usage of urban streets in the case area, in general, on non-working days, on working days, and in each period of a day, presenting a clustering pattern in street space; 2) there is a statistically strong association between bike sharing usage and several built environment characteristics, such as POI density, types of bicycle lanes, and street width; 3) the pattern by which bike sharing usage is influenced by built environment characteristics depends on the period of the day. These findings could be useful for policymakers and urban designers to better understand the factors affecting bike sharing systems and thus propose guidance and strategies for urban street planning and design in order to promote the use of sharing bikes.
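A hedged sketch of the Spatial Lag Model step follows: segment-level usage is regressed on built-environment variables with a spatially lagged dependent variable. The coordinates, variables, and data below are synthetic placeholders for the 97 Wuhan segments, and the PySAL toolchain is an assumed implementation choice.

```python
# Hedged sketch: maximum-likelihood Spatial Lag Model with PySAL/spreg.
import numpy as np
from libpysal import weights
from spreg import ML_Lag

rng = np.random.default_rng(1)
n = 97
coords = rng.uniform(0, 1000, size=(n, 2))            # segment centroids (m)
X = np.column_stack([rng.uniform(10, 40, n),          # street width
                     rng.integers(0, 2, n),           # metro station nearby
                     rng.uniform(0, 50, n)])          # POI density
y = (0.5 * X[:, 2] + 5 * X[:, 1] + rng.normal(0, 2, n)).reshape(-1, 1)

w = weights.KNN.from_array(coords, k=4)               # spatial weights
w.transform = "r"                                     # row-standardized

model = ML_Lag(y, X, w=w, name_y="bike_usage",
               name_x=["width", "metro", "poi_density"])
print(model.summary)          # rho captures the spatial-autocorrelation effect
```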

Keywords: big data, sharing bike usage, spatial statistics, urban street built environment

Procedia PDF Downloads 145
1085 Adaptor Protein APPL2 Could Be a Therapeutic Target for Improving Hippocampal Neurogenesis and Attenuating Depressant Behaviors and Olfactory Dysfunctions in Chronic Corticosterone-induced Depression

Authors: Jiangang Shen

Abstract:

Olfactory dysfunction is a common symptom accompanied by anxiety- and depressive-like behaviors in depressive patients. Chronic stress triggers hormone responses and inhibits the proliferation and differentiation of neural stem cells (NSCs) in the hippocampus and the subventricular zone (SVZ)-olfactory bulb (OB), contributing to depressive behaviors and olfactory dysfunction. However, the cellular signaling molecules that regulate chronic-stress-mediated olfactory dysfunction are largely unclear. Adaptor proteins containing the pleckstrin homology domain, phosphotyrosine binding domain, and leucine zipper motif (APPLs) are multifunctional adaptor proteins. Herein, we tested the hypothesis that APPL2 could inhibit hippocampal neurogenesis by affecting glucocorticoid receptor (GR) signaling, subsequently contributing to depressive and anxiety behaviors as well as olfactory dysfunctions. The major discoveries include: (1) APPL2 Tg mice had enhanced GR phosphorylation under basal conditions but showed no difference in plasma corticosterone (CORT) levels or GR phosphorylation under stress stimulation. (2) APPL2 Tg mice had impaired hippocampal neurogenesis and displayed depressive and anxiety behaviors. (3) The GR antagonist RU486 reversed the impaired hippocampal neurogenesis in the APPL2 Tg mice. (4) APPL2 Tg mice displayed higher GR activity and less capacity for neurogenesis in the olfactory system, with lower olfactory sensitivity than WT mice. (5) APPL2 negatively regulates olfactory functions by switching the fate commitments of NSCs in adult olfactory bulbs via interaction with Notch1 signaling. Furthermore, baicalin, a natural medicinal compound, was found to be a promising agent targeting APPL2/GR signaling and promoting adult neurogenesis in APPL2 Tg mice and chronic corticosterone-induced depression mouse models. Behavioral tests revealed that baicalin had antidepressant and olfactory-improving effects. Taken together, APPL2 is a critical therapeutic target for antidepressant treatment.

Keywords: APPL2, hippocampal neurogenesis, depressive behaviors and olfactory dysfunction, stress

Procedia PDF Downloads 76
1084 Computer Modeling and Plant-Wide Dynamic Simulation for Industrial Flare Minimization

Authors: Sujing Wang, Song Wang, Jian Zhang, Qiang Xu

Abstract:

Flaring emissions during abnormal operating conditions, such as plant start-ups, shut-downs, and upsets, are usually significant in the chemical process industries (CPI). Flare minimization can help CPI plants save raw material and energy and improve local environmental sustainability. In this paper, a systematic methodology based on plant-wide dynamic simulation is presented for CPI plant flare minimization under abnormal operating conditions. Since off-specification streams are inevitable during abnormal operating conditions, to significantly reduce flaring emissions in a CPI plant they must be either recycled to the upstream process for online reuse, or stored temporarily for future reprocessing once plant manufacturing returns to stable operation. Thus, the off-spec products can be reused instead of being flared. This can be achieved through the identification of viable design and operational strategies during normal and abnormal operations via plant-wide dynamic scheduling, simulation, and optimization. The proposed study includes three stages of simulation work: (i) developing and validating a steady-state model of a CPI plant; (ii) transitioning the obtained steady-state plant model to the dynamic modeling environment, then refining and validating the plant dynamic model; and (iii) developing flare minimization strategies for abnormal operating conditions of a CPI plant via the validated plant-wide dynamic model. This cost-effective methodology has two main merits: (i) it employs large-scale dynamic modeling and simulation for industrial flare minimization, involving various unit models for modeling hundreds of CPI plant facilities; (ii) it deals with critical abnormal operating conditions of CPI plants, such as plant start-up and shut-down. Two virtual case studies on flare minimization for the start-up operation (over 50% emission savings) and shut-down operation (over 70% emission savings) of an ethylene plant are employed to demonstrate the efficacy of the proposed methodology.

Keywords: flare minimization, large-scale modeling and simulation, plant shut-down, plant start-up

Procedia PDF Downloads 320
1083 Trip Reduction in Turbo Machinery

Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto

Abstract:

Industrial plant uptime is of utmost importance for reliable, profitable, and sustainable operation. Trips and failed starts have a major impact on plant reliability, and all plant operators focus their efforts on minimising them. The performance of these CTQs (critical-to-quality characteristics) is measured with two metrics: MTBT (mean time between trips) and SR (starting reliability). These metrics help to identify the top failure modes and the units that need more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips through: 1. Real-time machine operational parameters available remotely, capturing the signature of a malfunction including the related boundary conditions. 2. A real-time, analytics-based alerting system available remotely. 3. Remote access to trip logs and alarms from the control system to identify the cause of events. 4. Continuous support to field engineers through remote connection with subject matter experts. 5. Live tracking of key CTQs. 6. Benchmarking against the fleet. 7. Breaking each failure down to the component-level cause. 8. Investigating top contributors and identifying design and operational root causes. 9. Implementing corrective and preventive actions. 10. Assessing the effectiveness of implemented solutions using reliability growth models. 11. Developing analytics for predictive maintenance. With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for monitored units, with huge cost savings for plant operators. This presentation explains the approach with successful case studies, in particular where 12 LNG and pipeline operators with about 140 gas compression line-ups have adopted these techniques, significantly reducing the number of trips and improving MTBT.
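A hedged sketch of the reliability-modelling step (item 10 above) follows: a Weibull distribution is fitted to observed times between trips, from which MTBT is derived. The sample data are invented for illustration.

```python
# Hedged sketch: Weibull fit of times between trips and the resulting MTBT.
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

days_between_trips = np.array([12, 35, 8, 60, 22, 41, 90, 15, 33, 70])  # assumed

shape, loc, scale = weibull_min.fit(days_between_trips, floc=0)
mtbt = scale * gamma(1 + 1 / shape)   # Weibull mean = eta * Gamma(1 + 1/beta)
print(f"beta={shape:.2f}, eta={scale:.1f} days, MTBT={mtbt:.1f} days")
# beta > 1 suggests wear-out-driven trips; beta < 1 suggests early-life
# failures that corrective and preventive actions should target first.
```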

Keywords: reliability, availability, sustainability, digital infrastructure, Weibull, effectiveness, automation, trips, failed start

Procedia PDF Downloads 76
1082 To Identify the Importance of Telemedicine in Diabetes and Its Impact on HbA1c

Authors: Sania Bashir

Abstract:

A promising approach to healthcare delivery, telemedicine makes use of communication technology to reach out to remote regions of the world, allowing for beneficial interactions between diabetic patients and healthcare professionals as well as the provision of affordable and easily accessible medical care. The emergence of contemporary care models, fueled by the pervasiveness of mobile devices, is known as digital health: it provides better information, offers low cost with the best possible outcomes, and involves the integration of collected data using software and apps. The goal of this study is to assess how well telemedicine works for diabetic patients and how it impacts their HbA1c levels. A questionnaire-based survey covered 300 diabetics, with 150 patients each in the usual-care and telemedicine groups. A descriptive, observational study was conducted from September 2021 to May 2022. HbA1c was collected for both groups every three months. A remote monitoring tool was used to deliver telemedicine and continuing therapy in place of the customary three-monthly in-person consultations. The patients were 42.3 ± 18.3 years old on average. The 128 men were outnumbered by 172 women (57.3% of the total). Two hundred patients (66.6%) had type 2 diabetes, and 100 (33.3%) had type 1. Although the average baseline BMI was within the normal range at 23.4 kg/m², the mean baseline HbA1c (9.45 ± 1.20) indicates that glycemic control was poor at the time of enrollment. Patients who used telemedicine experienced a mean percentage change in HbA1c of 10.5, compared with 3.9 for those who visited the clinic. Changes in HbA1c depended on several factors, including improvements in BMI (61%) after 9 months of the study and compliance with healthy lifestyle recommendations for diet and activity; greater compliance was achieved in the telemedicine group. It is an undeniable fact that patient-physician communication is crucial for enhancing health outcomes and avoiding long-term complications. Telemedicine has shown its value in the management of diabetes and holds promise as a novel technique for improved clinician-patient communication in the twenty-first century.

Keywords: diabetes, digital health, mobile app, telemedicine

Procedia PDF Downloads 91
1081 Understanding the Classification of Rain Microstructure and Estimation of Z-R Relationship using a Micro Rain Radar in Tropical Region

Authors: Tomiwa, Akinyemi Clement

Abstract:

Tropical regions experience diverse and complex precipitation patterns, posing significant challenges for accurate rainfall estimation and forecasting. This study addresses the problem of effectively classifying tropical rain types and refining the Z-R (Reflectivity-Rain Rate) relationship to enhance rainfall estimation accuracy. Through a combination of remote sensing, meteorological analysis, and machine learning, the research aims to develop an advanced classification framework capable of distinguishing between different types of tropical rain based on their unique characteristics. This involves utilizing high-resolution satellite imagery, radar data, and atmospheric parameters to categorize precipitation events into distinct classes, providing a comprehensive understanding of tropical rain systems. Additionally, the study seeks to improve the Z-R relationship, a crucial aspect of rainfall estimation. One year of rainfall data was analyzed using a Micro Rain Radar (MRR) located at The Federal University of Technology Akure, Nigeria, measuring rainfall parameters from ground level to a height of 4.8 km with a vertical resolution of 0.16 km. Rain rates were classified into low (stratiform) and high (convective) based on various microstructural attributes such as rain rates, liquid water content, Drop Size Distribution (DSD), average fall speed of the drops, and radar reflectivity. By integrating diverse datasets and employing advanced statistical techniques, the study aims to enhance the precision of Z-R models, offering a more reliable means of estimating rainfall rates from radar reflectivity data. This refined Z-R relationship holds significant potential for improving our understanding of tropical rain systems and enhancing forecasting accuracy in regions prone to heavy precipitation.
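The Z-R estimation named above can be sketched compactly: the power law Z = a * R^b is linear in log space, so ordinary least squares on (log R, log Z) recovers the coefficients. The data below are synthetic, not MRR observations.

```python
# Hedged sketch: estimating the Z-R power law Z = a * R^b by log-log
# regression. Data are simulated around a classic Marshall-Palmer relation.
import numpy as np

rng = np.random.default_rng(7)
R = rng.uniform(0.5, 50, 200)                       # rain rate (mm/h)
Z_true = 200 * R ** 1.6                             # assumed underlying law
Z = Z_true * rng.lognormal(0, 0.15, R.size)         # measurement scatter

b, log_a = np.polyfit(np.log(R), np.log(Z), 1)      # slope = b, intercept = ln(a)
a = np.exp(log_a)
print(f"Z = {a:.0f} * R^{b:.2f}")                   # ~ 200 * R^1.60
# Fitting convective and stratiform samples separately yields the two
# class-specific Z-R relationships discussed above.
```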

Keywords: remote sensing, precipitation, drop size distribution, micro rain radar

Procedia PDF Downloads 34
1080 Utilizing Spatial Uncertainty of On-The-Go Measurements to Design Adaptive Sampling of Soil Electrical Conductivity in a Rice Field

Authors: Ismaila Olabisi Ogundiji, Hakeem Mayowa Olujide, Qasim Usamot

Abstract:

The main reasons for site-specific management of agricultural inputs are to increase the profitability of crop production, to protect the environment, and to improve product quality. Information about the variability of different soil attributes within a field is essential for the decision-making process. The lack of fast and accurate acquisition of soil characteristics remains one of the biggest limitations of precision agriculture, as conventional sampling is expensive and time-consuming. Adaptive sampling has proven to be an accurate and affordable technique for planning the site-specific management of agricultural inputs within a field. This study employed the spatial uncertainty of soil apparent electrical conductivity (ECa) estimates to identify adaptive re-survey areas in the field. The original dataset was split into validation and calibration groups, and the calibration group was sub-grouped into three sets with different measurement-pass intervals. A conditional simulation was performed on the field ECa to evaluate the spatial uncertainty of the ECa estimates using geostatistical techniques. High-uncertainty areas for each set were grouped using image segmentation in MATLAB, and areas of high and low values were then separated. Finally, an adaptive re-survey was carried out in the high-uncertainty areas. Adding the adaptive re-survey significantly reduced the time required compared with resampling the whole field and resulted in ECa estimates with minimal error. For the widest measurement-pass interval, the root mean square error (RMSE) of the initial crude sampling survey was reduced after the adaptive re-survey to a value close to that of the ECa obtained with an all-field re-survey. The estimated sampling time for the adaptive re-survey was 45% less than that of the all-field re-survey. The results indicate that designing adaptive sampling through spatial uncertainty models significantly mitigates sampling cost while preserving the accuracy of the observations.
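A hedged sketch of uncertainty-guided adaptive sampling follows. The paper uses geostatistical conditional simulation; here a Gaussian-process (kriging analogue) predictive standard deviation stands in as the uncertainty map, and the highest-uncertainty locations are selected for re-survey.

```python
# Hedged sketch: pick re-survey locations where the spatial uncertainty of
# the ECa estimate (GP predictive std, a kriging analogue) is highest.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
obs_xy = rng.uniform(0, 100, (40, 2))                 # crude on-the-go passes
eca = np.sin(obs_xy[:, 0] / 15) + 0.1 * rng.normal(size=40)   # ECa (mS/m)

gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(0.01))
gp.fit(obs_xy, eca)

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
_, std = gp.predict(grid, return_std=True)            # spatial uncertainty map

resurvey = grid[np.argsort(std)[-10:]]                # 10 most uncertain spots
print(resurvey)
```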

Keywords: soil electrical conductivity, adaptive sampling, conditional simulation, spatial uncertainty, site-specific management

Procedia PDF Downloads 132
1079 Corporate Digital Responsibility in Construction Engineering-Construction 4.0: Ethical Guidelines for Digitization and Artificial Intelligence

Authors: Weber-Lewerenz Bianca

Abstract:

Digitization is developing fast and has become a powerful tool for digital planning, construction, and operations. The transformation it drives bears high potential for companies, is critical for success, and thus requires responsible handling. This study assesses the calls, made in the United Nations Sustainable Development Goals (SDGs) and in white papers on AI by international institutions, the EU Commission, and the German government, for the consideration and protection of values and fundamental rights, for a careful demarcation between machine (artificial) and human intelligence, and for the careful use of such technologies. The study discusses digitization and the impacts of artificial intelligence (AI) in construction engineering from an ethical perspective, generating data through case studies and expert interviews as part of a qualitative method. This research critically evaluates the opportunities and risks revolving around corporate digital responsibility (CDR) in the construction industry. To the author's knowledge, no study has set out to investigate how CDR in construction could be conceptualized, especially in relation to digitization and AI, to support digital transformation in large, medium-sized, and small companies alike. No study has addressed the key research question: where can CDR be allocated, and how should an adequate ethical framework be designed to support digital innovation and make full use of the potential of digitization and AI? Now is the right time for constructive approaches that apply ethics-by-design in order to develop and implement safe and efficient AI. This is the first study in construction engineering to apply a holistic, interdisciplinary, inclusive approach that provides guidelines for orientation, examines the benefits of AI, and defines ethical principles as the key drivers of success, resource-cost-time efficiency, and sustainability when using digital technologies and AI to enhance digital transformation. Innovative corporate organizations starting new business models are more likely to succeed than those dominated by conservative, traditional attitudes.

Keywords: construction engineering, digitization, digital transformation, artificial intelligence, ethics, corporate digital responsibility, digital innovation

Procedia PDF Downloads 248
1078 Assessing the Survival Time of Hospitalized Patients in Eastern Ethiopia During 2019–2020 Using the Bayesian Approach: A Retrospective Cohort Study

Authors: Chalachew Gashu, Yoseph Kassa, Habtamu Geremew, Mengestie Mulugeta

Abstract:

Background and Aims: Severe acute malnutrition remains a significant health challenge, particularly in low- and middle-income countries. The aim of this study was to determine the survival time of under-five children with severe acute malnutrition. Methods: A retrospective cohort study was conducted on under-five children with severe acute malnutrition, covering 322 inpatients admitted to Chiro Hospital in Chiro, Ethiopia, between September 2019 and August 2020, whose data were obtained from medical records. Survival functions were analyzed using Kaplan-Meier plots and log-rank tests. Survival time was further analyzed using the Cox proportional hazards model and Bayesian parametric survival models, employing integrated nested Laplace approximation (INLA) methods. Results: Among the 322 patients, 118 (36.6%) died of severe acute malnutrition. The estimated median survival time for inpatients was 2 weeks. Model selection criteria favored the Bayesian Weibull accelerated failure time model, which showed that age, body temperature, pulse rate, nasogastric (NG) tube usage, hypoglycemia, anemia, diarrhea, dehydration, malaria, and pneumonia significantly influenced survival time. Conclusions: Children below 24 months, and those with altered body temperature or pulse rate, NG tube usage, hypoglycemia, or comorbidities such as anemia, diarrhea, dehydration, malaria, and pneumonia, had shorter survival times with severe acute malnutrition. To reduce the death rate of children under 5 years of age, community-based management of acute malnutrition should be designed to ensure early detection and to improve access and coverage for malnourished children.
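
For orientation, the sketch below reproduces the shape of this survival workflow (Kaplan-Meier estimate, log-rank test, Weibull AFT model) with the Python `lifelines` library. Note that `lifelines` fits a frequentist Weibull AFT, whereas the study used Bayesian estimation via INLA (typically through R-INLA); the toy data, column names, and covariate are illustrative only.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, WeibullAFTFitter
from lifelines.statistics import logrank_test

# Toy cohort: follow-up time in weeks, death indicator, and one covariate.
df = pd.DataFrame({
    "weeks":    [1, 2, 2, 3, 4, 5, 6, 8, 8, 10],
    "died":     [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],   # 1 = death observed
    "under24m": [1, 1, 1, 0, 0, 1, 0, 1, 1, 0],   # age below 24 months
})

# Kaplan-Meier estimate of the survival function and median survival time.
km = KaplanMeierFitter().fit(df["weeks"], event_observed=df["died"])
print("Median survival (weeks):", km.median_survival_time_)

# Log-rank test between age groups.
young, old = df[df["under24m"] == 1], df[df["under24m"] == 0]
res = logrank_test(young["weeks"], old["weeks"],
                   event_observed_A=young["died"], event_observed_B=old["died"])
print("Log-rank p-value:", res.p_value)

# Weibull accelerated failure time model with the age-group covariate.
aft = WeibullAFTFitter().fit(df, duration_col="weeks", event_col="died")
aft.print_summary()
```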

Keywords: Bayesian analysis, severe acute malnutrition, survival data analysis, survival time

Procedia PDF Downloads 47
1077 Design and Analysis for a 4-Stage Crash Energy Management System for Railway Vehicles

Authors: Ziwen Fang, Jianran Wang, Hongtao Liu, Weiguo Kong, Kefei Wang, Qi Luo, Haifeng Hong

Abstract:

A 4-stage crash energy management (CEM) system for subway rail vehicles operated by the Massachusetts Bay Transportation Authority (MBTA) in the USA is developed in this paper. The four stages of this new CEM system are 1) an energy-absorbing coupler (draft gear and shear bolts), 2) primary energy absorbers (aluminum honeycomb-structured boxes), 3) secondary energy absorbers (crush tubes), and 4) the collision post and corner post. A sliding anti-climber and a fixed anti-climber are designed at the front of the vehicle to work with the 4-stage CEM to maximize the energy absorbed and minimize harm to passengers and crew. To investigate the effectiveness of this CEM system, both finite element (FE) methods and crashworthiness testing were employed. The whole vehicle consists of three married pairs, i.e., six cars. In the FE approach, full-scale railway car models are developed and collision cases such as a single moving car impacting a rigid wall, two moving cars impacting a rigid wall, two moving cars impacting two stationary cars, and six moving cars impacting six stationary cars are investigated. The FE analysis results show that a railway vehicle incorporating this CEM system has superior crashworthiness performance. In the crashworthiness test, a simplified vehicle front end comprising the sliding anti-climber, the fixed anti-climber, the primary energy absorbers, the secondary energy absorber, the collision post and the corner post is built and impacted into a rigid wall. The same test configuration is also analyzed with FE, and results such as crushing force, stress and strain of critical components, and acceleration and velocity curves are compared; the FE results show very good agreement with the test results.
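
A back-of-envelope way to see the design logic of staged CEM is to compare the kinetic energy of an impact case against the absorption capacity of each stage engaged in sequence. The sketch below does this for a single car into a rigid wall; the car mass, impact speed, and per-stage capacities are illustrative assumptions, not MBTA or test values.

```python
# Kinetic energy that the CEM stages must absorb: KE = 1/2 * m * v^2.
mass_per_car_kg = 40_000.0           # assumed subway car mass
speed_mph = 15.0
speed_ms = speed_mph * 0.44704       # mph -> m/s

ke_kj = 0.5 * mass_per_car_kg * speed_ms**2 / 1e3
print(f"Kinetic energy of one car at {speed_mph} mph: {ke_kj:.0f} kJ")

# Assumed absorption capacity of each CEM stage, engaged in order.
stages_kj = {
    "1) coupler (draft gear + shear bolts)": 150.0,
    "2) primary absorbers (honeycomb box)":  400.0,
    "3) secondary absorbers (crush tube)":   600.0,
    "4) collision/corner posts":             800.0,
}

remaining = ke_kj
for stage, capacity in stages_kj.items():
    absorbed = min(remaining, capacity)
    remaining -= absorbed
    print(f"{stage}: absorbs {absorbed:.0f} kJ, {remaining:.0f} kJ remaining")
```

The sequential engagement means low-speed impacts are handled entirely by the sacrificial coupler and primary absorbers, reserving the structural posts for severe collisions.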

Keywords: railway vehicle collision, crash energy management design, finite element method, crashworthiness test

Procedia PDF Downloads 402
1076 Developing Medical Leaders: A Realistic Evaluation Study for Improving Patient Safety and Maximising Medical Engagement

Authors: Lisa Fox, Jill Aylott

Abstract:

There is a global need to identify ways to engage doctors in non-clinical matters such as medical leadership, service improvement and health system transformation. Using the core principles of Realistic Evaluation (RE), this study examined what works for doctors of different grades, specialities and experience in an acute NHS Hospital Trust in the UK. Realistic Evaluation is an alternative to more traditional cause-and-effect evaluation models; it seeks to understand the interdependencies of Context, Mechanism and Outcome, proposing that Context (C) + Mechanism (M) = Outcome (O). In this study, context, mechanism and outcome were examined within individual medical leaders to determine what enables medical engagement in a specific improvement project to reduce hospital inpatient mortality. Five qualitative case studies were undertaken with consultants who had regularly completed mortality reviews over a six-month period. The case studies involved semi-structured interviews to test the theory behind the drivers of medical engagement. The interviews were analysed using theory-driven thematic analysis to identify CMO configurations explaining what works, for whom, and in what circumstances. The findings showed that consultants with a longer length of service became more engaged when there were opportunities to be involved at the beginning of an improvement project, where they had more scope to shape its design. Those new to a consultant role were more engaged if they felt able to apply any learning directly to their own settings, or if they could use the project as an opportunity to understand more about the organisation they were working in. This study concludes that RE is a useful methodology for better understanding the complexities of motivation and consultant engagement in a trust-wide service improvement project. It suggests that training programmes should be differentiated and bespoke to maximise each individual doctor's propensity for medical engagement, and it identified different ways to ensure that doctors have the right skills to feel confident in service improvement projects.

Keywords: realistic evaluation, medical leadership, medical engagement, patient safety, service improvement

Procedia PDF Downloads 218
1075 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (Natural Language Processing) refers to the capacity of a word, phrase, sentence, or text to have multiple meanings, giving rise to lexical, syntactic, semantic, anaphoric and referential ambiguities. This study focuses on the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on raw words for training and testing, but this work uses lemma and Part of Speech (POS) tokens instead: the lemma adds generality, and the POS tag adds grammatical properties to the token. A novel method is designed to create an affinity matrix that captures the affinity between any pair of lemma_POS tokens (a token where the lemma and POS of a word are joined by an underscore) in the training set. Additionally, an algorithm is devised to create sense clusters of tokens using the affinity matrix under the hierarchy of the POS of the lemma. Furthermore, three different mechanisms are devised to predict the sense of a target word using the affinity/similarity values: each contextual token contributes some value to each sense of the target word, and the sense receiving the highest total value becomes the predicted sense. Because contextual tokens play the key role in creating sense clusters and predicting the sense of the target word, the model is named the Contextual SenSe Model (CSM). CSM is notably simple and easy to interpret, in contrast to contemporary deep learning models, which tend to be intricate, time-intensive, and hard to explain. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the simplicity of the method, it achieves promising results compared to the Most Frequent Sense (MFS) baseline.
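
The core prediction step, where context tokens vote on the target's sense by affinity, can be sketched as below. The affinity values and sense labels are toy stand-ins: the paper derives affinities from SemCor and builds sense clusters under a POS hierarchy, details the abstract does not fully specify.

```python
from collections import defaultdict

# affinity[(context_token, sense)] -> how strongly that lemma_POS token
# signals that sense of the target word (toy values for illustration).
affinity = defaultdict(float, {
    ("river_NOUN",   "bank%water"):   0.9,
    ("water_NOUN",   "bank%water"):   0.7,
    ("money_NOUN",   "bank%finance"): 0.8,
    ("deposit_VERB", "bank%finance"): 0.9,
})

def predict_sense(target_senses, context_tokens):
    """Pick the sense with the highest summed affinity over the context."""
    scores = {s: sum(affinity[(tok, s)] for tok in context_tokens)
              for s in target_senses}
    return max(scores, key=scores.get), scores

senses = ["bank%water", "bank%finance"]
context = ["river_NOUN", "water_NOUN"]   # lemma_POS tokens around the target
best, scores = predict_sense(senses, context)
print(best, scores)  # bank%water {'bank%water': 1.6, 'bank%finance': 0.0}
```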

Keywords: word sense disambiguation (wsd), contextual sense model (csm), most frequent sense (mfs), part of speech (pos), natural language processing (nlp), oov (out of vocabulary), lemma_pos (a token where lemma and pos of word are joined by underscore), information retrieval (ir), machine translation (mt)

Procedia PDF Downloads 107
1074 Prediction Model of Body Mass Index of Young Adult Students of Public Health Faculty of University of Indonesia

Authors: Yuwaratu Syafira, Wahyu K. Y. Putra, Kusharisupeni Djokosujono

Abstract:

Background/Objective: Body Mass Index (BMI) serves various purposes, including measuring the prevalence of obesity in a population and formulating a patient's diet at a hospital, and is calculated as BMI = body weight (kg) / body height (m)². However, BMI cannot always be measured directly for individuals who have difficulty bearing their own weight or standing up straight. The aim of this study was to build a prediction model for the BMI of young adult students of the Public Health Faculty of the University of Indonesia. Subject/Method: This study used a cross-sectional design with a total sample of 132 respondents, consisting of 58 males and 74 females aged 21-30. The dependent variable was BMI, and the independent variables were sex and anthropometric measurements, including ulna length, arm length, tibia length, knee height, mid-upper arm circumference, and calf circumference. Anthropometric information was measured and recorded in a single sitting. Simple and multiple linear regression analyses were used to create the prediction equation for BMI. Results: The male respondents had an average BMI of 24.63 kg/m² and the female respondents an average of 22.52 kg/m². A total of 17 variables were analysed for their correlation with BMI. Bivariate analysis showed that the variable with the strongest correlation with BMI was mid-upper arm circumference divided by the square root of ulna length (MUAC/√UL) (r = 0.926 for males and r = 0.886 for females). MUAC alone also had a very strong correlation with BMI (r = 0.913 for males and r = 0.877 for females). Prediction models formed from either MUAC/√UL or MUAC alone both produce highly accurate predictions of BMI; however, measuring MUAC/√UL is considered inconvenient, which may cause difficulties in the field. Conclusion: Based on its high accuracy and the convenience of measuring MUAC in the field, the prediction model considered most ideal is: male BMI (kg/m²) = 1.109 × MUAC (cm) − 9.202 and female BMI (kg/m²) = 0.236 + 0.825 × MUAC (cm).
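
The final equations above can be applied directly; the small helper below wraps them for field use. The coefficients are the study's own, while the function name and interface are illustrative.

```python
def predict_bmi(muac_cm: float, sex: str) -> float:
    """Estimate BMI (kg/m^2) from mid-upper arm circumference (cm),
    using the study's sex-specific regression equations."""
    if sex == "male":
        return 1.109 * muac_cm - 9.202
    if sex == "female":
        return 0.236 + 0.825 * muac_cm
    raise ValueError("sex must be 'male' or 'female'")

print(round(predict_bmi(30.0, "male"), 2))    # 24.07 kg/m^2
print(round(predict_bmi(27.0, "female"), 2))  # 22.51 kg/m^2
```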

Keywords: body mass index, mid-upper arm circumference, prediction model, ulna length

Procedia PDF Downloads 214