Search results for: modern analytical methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18979

12139 Polymer-Layered Gold Nanoparticles: Preparation, Properties and Uses of a New Class of Materials

Authors: S. M. Chabane sari S. Zargou, A.R. Senoudi, F. Benmouna

Abstract:

Immobilization of nanoparticles (NPs) is the subject of numerous studies pertaining to the design of polymer nanocomposites, supported catalysts, bioactive colloidal crystals, inverse opals for novel optical materials, latex-templated hollow inorganic capsules, immunodiagnostic assays, “Pickering” emulsion polymerization for making latex particles, film-forming composites and Janus particles, chemo- and biosensors, tunable plasmonic nanostructures, hybrid porous monoliths for separation science and technology, biocidal polymer/metal nanoparticle composite coatings, and so on. In recent years in particular, the literature has witnessed impressive progress in investigations of polymer coatings, grafts and particles as supports for anchoring nanoparticles. This is due to several factors: polymer chains are flexible and may contain a variety of functional groups that are able to efficiently immobilize nanoparticles and their precursors by dispersive (van der Waals), electrostatic, hydrogen or covalent bonds. We review methods to prepare polymer-immobilized nanoparticles through a plethora of strategies, with a view to developing systems for separation, sensing, extraction and catalysis. The emphasis is on methods that provide (i) polymer brushes and grafts; (ii) monoliths and porous polymer systems; (iii) natural polymers; and (iv) conjugated polymers as platforms for anchoring nanoparticles. The latter range from soft biomacromolecular species (proteins, DNA) to metallic, C60, semiconductor and oxide nanoparticles; they can be attached through electrostatic interactions or covalent bonding. It is clear that the physicochemical properties of polymers (e.g., sensing and separation) are enhanced by anchored nanoparticles, while polymers provide excellent platforms for dispersing nanoparticles for, e.g., high catalytic performance.
We thus anticipate that the synergistic role of polymeric supports and anchored particles will increasingly be exploited in designing unique hybrid systems with unprecedented properties.

Keywords: gold, layer, polymer, macromolecular

Procedia PDF Downloads 389
12138 The Observable Method for the Regularization of Shock-Interface Interactions

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique that is capable of regularizing shocks and sharp interfaces simultaneously in shock-interface interaction simulations. Direct numerical simulation of flows involving shocks has been investigated for many years, and many numerical methods have been developed to capture shocks; however, most of these methods rely on numerical dissipation to regularize them. Moreover, in high Reynolds number flows, the nonlinear terms in hyperbolic partial differential equations (PDEs) dominate, constantly generating small-scale features, which makes direct numerical simulation of shocks even harder. The same difficulty arises in two-phase flow with sharp interfaces, where the nonlinear terms in the governing equations keep sharpening the interfaces into discontinuities. The main idea of the proposed technique is to average out the small scales that are below the resolution (observable scale) of the computational grid by filtering the convective velocity in the nonlinear terms of the governing PDEs. This technique is named the “observable method” and results in a set of hyperbolic equations called observable equations, namely the observable Navier-Stokes or Euler equations. The observable method has been applied to flow simulations involving shocks, turbulence, and two-phase flows, and the results are promising. In the current paper, the observable method is examined on its ability to regularize shocks and interfaces at the same time in shock-interface interaction problems. Bubble-shock interactions and the Richtmyer-Meshkov instability are chosen as test cases. The observable Euler equations are solved numerically with a pseudo-spectral discretization in space and the third-order Total Variation Diminishing (TVD) Runge-Kutta method in time. Results are presented and compared with existing publications. The interface acceleration and deformation and the shock reflection are examined in particular.
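The time discretization named above can be sketched in a few lines. The following is an illustrative stand-in, not the paper's solver: the Shu-Osher third-order TVD (SSP) Runge-Kutta scheme applied to a linear advection toy problem with a pseudo-spectral spatial derivative; all function and grid names are hypothetical.

```python
import numpy as np

def rhs(u, k):
    """Pseudo-spectral right-hand side for u_t = -u_x: differentiate
    in Fourier space, as in a pseudo-spectral discretization."""
    return -np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

def tvd_rk3_step(u, dt, k):
    """One third-order TVD (SSP) Runge-Kutta step (Shu-Osher form)."""
    u1 = u + dt * rhs(u, k)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, k))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2, k))

# Advect a smooth periodic profile one full period; it should return intact.
n = 64
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers for period 2*pi
u = np.sin(x)
dt = 2 * np.pi / 200
for _ in range(200):
    u = tvd_rk3_step(u, dt, k)
```

The TVD (strong-stability-preserving) property matters here because each stage is a convex combination of forward Euler steps, so the scheme does not introduce new oscillations near the steep gradients that shocks and interfaces produce.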

Keywords: compressible flow simulation, inviscid regularization, Richtmyer-Meshkov instability, shock-bubble interactions

Procedia PDF Downloads 344
12137 Towards a Critical Disentanglement of the ‘Religion’ Nexus in the Global East

Authors: Daan F. Oostveen

Abstract:

‘Religion’ as a term is not native to the Global East. The concept ‘religion’ is understood both in the sense of ‘religious traditions’, commonly referring to the ‘World Religions’, and in the adjectival sense of ‘the religious’ or ‘religiosity’ as a separate domain of human culture, commonly contrasted with the secular. Though neither of these understandings is native to the historical worldviews of East Asia, their development in modern Western scholarship has had an enormous impact on the self-understanding of cultural diversity in the Global East as well. One example is the identification, and therefore elevation to the status of World Religion, of ‘Buddhism’, which connected formerly dispersed religious practices throughout the Global East and subsumed them under this powerful label. On the other hand, we see how popular religiosity, shamanism and hybrid cultural expressions have become excluded from genuine religion; this has had an immense impact on the sense of legitimacy of these practices, which were sometimes labeled as superstition or rejected as magic. Our theoretical frameworks on religion in the Global East do not always consider the complex power dynamics between religious actors, both elite and lay expressions of religion in everyday life, governments, and religious studies scholars. In order to get a clear image of how religiosity functions in the context of the Global East, we have to take these power dynamics into account. Particularly important is the issue of religious identity, or the absence of religious identity. The self-understanding of religious actors in the Global East is often very different from what scholars of religion observe: religious practice, from an etic perspective, is often unrelated to religious identification from an emic perspective. But we also witness the rise of Christian churches in the Global East, in which religious identity and belonging do play a pivotal role.
Finally, religion in the Global East has since the beginning of the 20th century been conceptualized as the ‘other’ of republicanism or Marxist-Maoist ideology. It is important not to deny the key role of colonial thinking in the process of religion formation in the Global East. In this paper, it is argued that religious realities emerge as a result of our theory of religion, and that these religious realities in turn inform our theory. Therefore, the relationship between the phenomenology of religion and the theory of religion can never be disentangled. In fact, we have to acknowledge that our conceptualizations of religious diversity are always already influenced by our valuation of those cultural expressions that we have come to call ‘religious’.

Keywords: global east, religion, religious belonging, secularity

Procedia PDF Downloads 130
12136 Developing Sustainable Rammed Earth Material Using Pulp Mill Fly Ash as Cement Replacement

Authors: Amin Ajabi, Chinchu Cherian, Sumi Siddiqua

Abstract:

Rammed earth (RE) is a traditional soil-based building material made by compressing a mixture of natural earth and binder ingredients, such as chalk or lime, in temporary formworks. Modern RE, however, uses 5 to 10% cement as a binder in order to meet the strength and durability requirements of standard specifications and guidelines. RE construction is considered an energy-efficient and environmentally friendly approach compared to conventional concrete systems, which use 20 to 30% cement. The present study aimed to develop RE mix designs utilizing non-hazardous wood-based fly ash, generated by pulp and paper mills, as a partial replacement for cement. Pulp mill fly ash (PPFA)-stabilized RE is considered a sustainable approach given the massive carbon footprint associated with cement production as well as the adverse environmental impacts of disposing of PPFA in landfills. For the experimental study, as-received PPFA as well as a PPFA-based geopolymer (synthesized by the alkaline activation method) were incorporated as cement substitutes in the RE mixtures. Initially, local soil was collected and characterized in terms of its index and engineering properties. The PPFA was procured from a pulp manufacturing mill, and its physicochemical, mineralogical and morphological characterization, as well as an environmental impact assessment, was conducted. Further, various mix designs of the RE material, incorporating local soil and different proportions of cement, PPFA, and alkaline activator (a mixture of sodium silicate and sodium hydroxide solutions), were developed. The compacted RE specimens were cured and tested for 7-day and 28-day unconfined compressive strength (UCS). Based on the UCS results, the optimum mix design corresponding to the maximum strength improvement was identified.
Further, the cured RE specimens were subjected to freeze-thaw cycle testing to evaluate their performance and durability as a sustainable construction technique under extreme climatic conditions.

Keywords: sustainability, rammed earth, stabilization, pulp mill fly ash, geopolymer, alkaline activation, strength, durability

Procedia PDF Downloads 96
12135 Threat Modeling Methodology for Supporting Industrial Control Systems Device Manufacturers and System Integrators

Authors: Raluca Ana Maria Viziteu, Anna Prudnikova

Abstract:

Industrial control systems (ICS) have received much attention in recent years due to the convergence of information technology (IT) and operational technology (OT), which has increased the interdependence of the safety and security issues to be considered. These issues require ICS-tailored solutions, which led to the need to create a methodology for supporting ICS device manufacturers and system integrators in carrying out threat modeling of embedded ICS devices in a way that guarantees the quality of the identified threats and minimizes subjectivity in the threat identification process. To investigate the possibility of creating such a methodology, a set of existing standards, regulations, papers, and publications related to threat modeling in the ICS sector and other sectors was reviewed to identify the various methodologies and methods used in threat modeling, and the most popular ones were tested in an exploratory phase on a specific PLC device. The outcome of this exploratory phase was used as a basis for defining specific characteristics of ICS embedded devices and their deployment scenarios, identifying the factors that introduce subjectivity into the threat modeling process for such devices, and defining metrics for evaluating the minimum quality requirements of identified threats associated with the deployment of the devices in existing infrastructures. The threat modeling methodology was then created based on the results of these steps. The usability of the methodology was evaluated through a set of standardized threat modeling requirements and a standardized comparison method for threat modeling methodologies; the outcomes of these verification methods confirm that the methodology is effective. The full paper includes the outcome of research on the different threat modeling methodologies that can be used in OT, their comparison, and the results of implementing each of them in practice on a PLC device.
This research is then used to build a threat modeling methodology tailored to OT environments, of which a detailed description is included. Moreover, the paper includes the results of an evaluation of the created methodology based on a set of parameters specifically created to rate threat modeling methodologies.

Keywords: device manufacturers, embedded devices, industrial control systems, threat modeling

Procedia PDF Downloads 75
12134 Modification of Newton Method in Two Points Block Differentiation Formula

Authors: Khairil Iskandar Othman, Nadhirah Kamal, Zarina Bibi Ibrahim

Abstract:

Block methods for solving stiff systems of ordinary differential equations (ODEs) are based on backward differentiation formulas (BDF) with PE(CE)2 and the Newton method. In this paper, we introduce a modified Newton method as a new strategy to obtain more efficient results. The derivation of the block backward differentiation formula (BBDF) using the modified block Newton method is presented. This new block method with predictor-corrector gives more accurate results when compared to the existing BBDF.
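As a rough illustration of the kind of iteration involved (not the authors' derivation), the sketch below applies a modified Newton method, in which the iteration matrix is formed and inverted once and then reused, to a single plain (non-block) BDF2 step on a stiff scalar test problem. It is a simpler stand-in for the two-point block formula; all names are hypothetical.

```python
import numpy as np

def modified_newton_bdf2(f, jac, y_prev, y_prevprev, h, tol=1e-10, max_iter=20):
    """One BDF2 step  y_{n+1} - 4/3 y_n + 1/3 y_{n-1} = 2/3 h f(y_{n+1}),
    solved with a *modified* Newton iteration: the iteration matrix is
    evaluated (and inverted) once at the predictor and then reused."""
    c = (4.0 * y_prev - y_prevprev) / 3.0
    y = y_prev.copy()                            # predictor: previous value
    n = y.size
    M = np.eye(n) - (2.0 / 3.0) * h * jac(y)     # frozen iteration matrix
    M_inv = np.linalg.inv(M)                     # stand-in for an LU factorization
    for _ in range(max_iter):
        r = y - (2.0 / 3.0) * h * f(y) - c       # residual F(y)
        dy = -M_inv @ r
        y = y + dy
        if np.linalg.norm(dy) < tol:
            break
    return y

# Stiff linear test problem y' = -50 y, exact solution exp(-50 t)
f = lambda y: -50.0 * y
jac = lambda y: np.array([[-50.0]])
h = 0.01
y0, y1 = np.array([1.0]), np.array([np.exp(-50 * h)])  # seed with exact values
y2 = modified_newton_bdf2(f, jac, y1, y0, h)
```

For this linear problem the frozen matrix equals the exact Jacobian, so the iteration converges in one step; for nonlinear stiff systems the saving comes from reusing one factorization across several iterations (and possibly steps) instead of refactoring every iteration.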

Keywords: modified Newton, stiff, BBDF, Jacobian matrix

Procedia PDF Downloads 371
12133 Improvising Grid Interconnection Capabilities through Implementation of Power Electronics

Authors: Ashhar Ahmed Shaikh, Ayush Tandon

Abstract:

The swift depletion of fossil fuels has created a crucial need for alternative energy sources to meet the vital demand for energy. It is essential to boost alternative energy sources to cover the continuously increasing demand while minimizing negative environmental impacts. Solar energy is one reliable source: it is freely available in nature, completely eco-friendly, and considered among the most promising power-generating sources due to its easy availability and other advantages for local power generation. This paper reviews the implementation of power electronic devices through the Solar Energy Grid Integration System (SEGIS) to increase efficiency. It also concentrates on future grid infrastructure and various other applications aimed at making the grid smart. The development and implementation of power electronic devices such as PV inverters and power controllers play an important role in power supply in the modern energy economy. SEGIS opens pathways to promising solutions for new electronic and electrical components, such as advanced innovative inverter/controller topologies and their functions, economical energy management systems, innovative energy storage systems equipped with advanced control algorithms, advanced maximum-power-point tracking (MPPT) suited to all PV technologies, and protocols and the associated communications. In addition to advanced grid interconnection capabilities and features, the new hardware design results in small size, less maintenance, and higher reliability. SEGIS systems will help the 'advanced integrated system' and 'smart grid' evolutionary processes run more smoothly. Over the last few years, there has been major development in the field of power electronics, leading to more efficient systems and a reduction in the cost per kilowatt.
Inverters have become more efficient, reaching efficiencies in excess of 98%, and commercial solar modules have reached almost 21% efficiency.
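To give the MPPT mention above some concrete shape, here is a minimal sketch of the classic perturb-and-observe algorithm, one common MPPT technique (the abstract does not specify which algorithm SEGIS uses). The PV power curve and the `measure_pv` interface are purely hypothetical.

```python
def perturb_and_observe(measure_pv, step=0.5, v_init=20.0, iterations=100):
    """Classic perturb-and-observe MPPT: nudge the operating voltage and keep
    moving in the direction that increases power. `measure_pv` maps a voltage
    command to measured power (a hypothetical plant interface)."""
    v = v_init
    p_prev = measure_pv(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = measure_pv(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy PV curve with a maximum power point near 30 V
pv_power = lambda v: max(0.0, -0.5 * (v - 30.0) ** 2 + 200.0)
v_mpp = perturb_and_observe(pv_power)
```

Once converged, the operating point oscillates around the maximum power point within one perturbation step; a smaller step reduces this ripple at the cost of slower tracking, which is the basic trade-off the more advanced MPPT schemes mentioned above try to improve on.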

Keywords: solar energy grid integration systems, smart grid, advanced integrated system, power electronics

Procedia PDF Downloads 180
12132 Optimizing the Use of Google Translate in Translation Teaching: A Case Study at Prince Sultan University

Authors: Saadia Elamin

Abstract:

The quasi-universal use of smartphones with an internet connection available at all times makes it a reflex action for translation undergraduates, once they encounter the slightest translation problem, to turn to the freely available web resource: Google Translate. As with other translator resources and aids, the use of Google Translate needs to be moderated in such a way that it contributes to developing translation competence. Here, instead of interfering with students’ learning by providing ready-made solutions, which might not always fit the contexts of use, it can help to consolidate the skills of analysis and transfer which students have already acquired. One way to do so is by training students to adhere to the basic principles of translation work. The most important of these is that analyzing the source text for comprehension comes first and foremost, before jumping into the search for target-language equivalents. Another basic principle is that certain translator aids and tools can be used for comprehension, while others are to be confined to the phase of re-expressing the meaning in the target language. The present paper reports on the experience of making a measured and reasonable use of Google Translate in translation teaching at Prince Sultan University (PSU), Riyadh. First, it traces the development that has taken place in the field of translation in this age of information technology, be it in translation teaching and translator training or in the real-world practice of the profession. Second, it describes how, with the aim of reflecting this development in the way translation is taught, senior students, after being trained in post-editing machine translation output, are authorized to use Google Translate in classwork and assignments.
Third, the paper elaborates on the findings of this case study, which has demonstrated that Google Translate, if used at the appropriate levels of training, can help to enhance students’ ability to perform different translation tasks. This help extends from the search for terms and expressions to the tasks of drafting the target text, revising its content and finally editing it. In addition, using Google Translate in this way fosters a reflective and critical attitude towards web resources in general, thus maximizing the benefit gained from them in preparing students to meet the requirements of the modern translation job market.

Keywords: Google Translate, post-editing machine translation output, principles of translation work, translation competence, translation teaching, translator aids and tools

Procedia PDF Downloads 467
12131 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption, and GDP for Turkey: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap in comprehensive country-level case studies using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions and gross domestic product (GDP) for Turkey, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Phillips–Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. All the variables in this study show very strong significant effects on GDP in the long term. The long-run equilibrium in the VECM suggests negative long-run causalities from the consumption of petroleum products and the direct combustion of crude oil, coal and natural gas to GDP. Conversely, positive impacts of CO2 emissions and electricity consumption on GDP are found to be significant in Turkey during the period. In the short run, there exists a bidirectional relationship between electricity consumption and natural gas consumption: a positive unidirectional causality runs from electricity consumption to natural gas consumption, while a negative unidirectional causality runs from natural gas consumption to electricity consumption. Moreover, GDP has a negative effect on electricity consumption in Turkey in the short run.
Overall, the results support arguments that there are relationships among environmental quality, energy use and economic output, but that the associations can differ by energy source in the case of Turkey over the period 1980-2010.
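To illustrate the error-correction idea behind the VECM, the following numpy sketch runs a simpler Engle-Granger-style two-step estimation on a synthetic cointegrated pair. This is a deliberately reduced stand-in for the Johansen/VECM procedure used in the paper, and the data are simulated, not the Turkish series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cointegrated pair: x is a random walk, y tracks 2x plus noise
n = 500
x = np.cumsum(rng.normal(size=n))
y = 2.0 * x + rng.normal(size=n)

# Step 1: long-run (cointegrating) regression  y = a + b*x
X = np.column_stack([np.ones(n), x])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]
ect = y - (a + b * x)                 # error-correction term (residual)

# Step 2: short-run regression  dy_t = c + gamma*ect_{t-1} + phi*dx_t + eps_t
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([np.ones(n - 1), ect[:-1], dx])
c, gamma, phi = np.linalg.lstsq(Z, dy, rcond=None)[0]
# A significantly negative gamma indicates adjustment back to the long-run
# equilibrium, i.e. evidence of cointegration and long-run causality.
```

In the full Johansen/VECM setting the same logic applies system-wide: the sign and significance of each adjustment coefficient is what underlies the "negative long-run causality to GDP" statements in the abstract.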

Keywords: CO2 emissions, energy consumption, GDP, Turkey, time series analysis

Procedia PDF Downloads 506
12130 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency band, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and make them more efficient without compromising classification accuracy. The proposal is based on representing the EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of each sub-band as scores, organizes them into a single vector, and uses that vector to train an SVM global classifier.
Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (it has a 68% smaller dimension than the original signal), the resulting FFT matrix retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of more than 10 percentage points and the reduction in computational cost indicate the potential of the FFT for EEG signal filtering in the context of MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
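The FFT-based sub-band decomposition and the CSP step can be sketched as follows. This is a simplified illustration under assumed shapes and parameters, not the authors' implementation: their 33-band layout, LDA scoring and Bayesian/SVM meta-classification are omitted, and the synthetic signals stand in for real EEG trials.

```python
import numpy as np

def fft_subbands(trials, fs, bands):
    """Split EEG trials (n_trials, n_channels, n_samples) into sub-band
    signals by zeroing FFT coefficients outside each band and inverting.
    A simplified stand-in for the paper's FFT coefficient matrix."""
    n = trials.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(trials, axis=-1)
    return [np.fft.irfft(spectra * ((freqs >= lo) & (freqs < hi)), n=n, axis=-1)
            for lo, hi in bands]

def csp_filters(class_a, class_b, n_pairs=1):
    """Common Spatial Patterns: extreme eigenvectors of the generalized
    eigenproblem between the two class-average covariance matrices."""
    cov = lambda trials: np.mean(
        [x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    ca, cb = cov(class_a), cov(class_b)
    evals, evecs = np.linalg.eig(np.linalg.solve(ca + cb, ca))
    order = np.argsort(evals.real)
    keep = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return evecs.real[:, keep].T

# Tiny synthetic example: class A has more variance on channel 0, class B on 1
rng = np.random.default_rng(1)
a = rng.normal(size=(10, 2, 128)) * np.array([3.0, 1.0])[:, None]
b = rng.normal(size=(10, 2, 128)) * np.array([1.0, 3.0])[:, None]
sub = fft_subbands(a, 128.0, [(0.0, 8.0), (8.0, 16.0)])  # two toy sub-bands
w = csp_filters(list(a), list(b))                        # spatial filters
```

In an SBCSP pipeline, `csp_filters` would be fitted per sub-band and the log-variance of each filtered trial used as the feature vector passed on to the per-band classifiers.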

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 126
12129 A Mixed-Methods Design and Implementation Study of ‘the Attach Project’: An Attachment-Based Educational Intervention for Looked after Children in Northern Ireland

Authors: Hannah M. Russell

Abstract:

‘The Attach Project’ (TAP) is an educational intervention aimed at improving educational and socio-emotional outcomes for children who are looked after. TAP is underpinned by Attachment Theory and is adapted from Dyadic Developmental Psychotherapy (DDP), a treatment for children and young people impacted by complex trauma and disorders of attachment. TAP has been implemented in primary schools in Northern Ireland throughout the 2018/19 academic year. During this time, a design and implementation study has been conducted to assess the promise of effectiveness for the future dissemination and ‘scaling-up’ of the programme for a larger randomised controlled trial. TAP has been designed specifically for implementation in a school setting and comprises a whole-school element and a more individualised Key Adult-Key Child pairing. This design and implementation study utilises a mixed-methods research design consisting of quantitative, qualitative, and observational measures, with stakeholder input and involvement considered an integral component. The use of quantitative measures, such as self-report questionnaires administered prior to and eight months following the implementation of TAP, enabled analysis of the strength and direction of relations between the various components of the programme, as well as the influence of implementation factors. The use of qualitative measures, incorporating semi-structured interviews and focus groups, enabled the assessment of implementation factors, the identification of implementation barriers, and potential methods of addressing these issues. Observational measures facilitated the continual development and improvement of ‘TAP training’ for school staff. Preliminary findings have provided evidence of promise for the effectiveness of TAP and indicate the potential benefits of introducing this type of attachment-based intervention across other educational settings.
This type of intervention could benefit not only children who are looked after but all children who may be impacted by complex trauma or disorders of attachment. Furthermore, findings from this study demonstrate that it is possible for children to form a secondary attachment relationship with a significant adult in school. However, various implementation factors which should be addressed were identified throughout the study, such as the necessity of protected time to facilitate the development of a positive Key Adult-Key Child relationship. Furthermore, additional ‘re-cap’ training is required in future dissemination of the programme to maximise ‘attachment-friendly practice’ across the whole staff team. Qualitative findings also indicated a general opinion across school staff that this type of Key Adult-Key Child pairing could be more effective if it were introduced as soon as children begin primary school. This research has provided ample evidence of the need to introduce relationally based interventions in schools, to help ensure that children who are looked after, or who are impacted by complex trauma or disorders of attachment, can thrive in the school environment. In addition, this research has facilitated the identification of important implementation factors and barriers to implementation, which can be addressed prior to the ‘scaling-up’ of TAP for a robust randomised controlled trial.

Keywords: attachment, complex trauma, educational interventions, implementation

Procedia PDF Downloads 186
12128 Maturity Classification of Oil Palm Fresh Fruit Bunches Using Thermal Imaging Technique

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Reza Ehsani, Hawa Ze Jaffar, Ishak Aris

Abstract:

Ripeness estimation of oil palm fresh fruit is an important process that affects the profitability and salability of oil palm fruits. The maturity or ripeness of the fruits influences the quality of the palm oil. The conventional procedure involves physical grading of Fresh Fruit Bunch (FFB) maturity by counting the number of loose fruits per bunch. This physical classification of oil palm FFB is costly and time consuming, and the results are prone to human error. Hence, many researchers have tried to develop methods for ascertaining the maturity of oil palm fruits, and thereby, indirectly, the oil content of individual palm fruits, without the need for laborious oil extraction and analysis. This research investigates the potential of infrared (thermal) images as a predictor for classifying oil palm FFB ripeness. A total of 270 oil palm fresh fruit bunches of the most common cultivar, Nigrescens, were collected according to three maturity categories: under ripe, ripe and over ripe. Each sample was scanned with the FLIR E60 and FLIR T440 thermal imaging cameras. The average temperature of each bunch was calculated using image processing in the FLIR Tools and FLIR ThermaCAM Researcher Pro 2.10 software environments. The results show that temperature decreased from immature to over-mature oil palm FFBs. An overall analysis-of-variance (ANOVA) test proved that this predictor gave significant differences between the under-ripe, ripe and over-ripe maturity categories, showing that temperature can be a good indicator for classifying oil palm FFB. Classification analysis was performed using the temperature of the FFB as the predictor through Linear Discriminant Analysis (LDA), Mahalanobis Discriminant Analysis (MDA), Artificial Neural Network (ANN) and K-Nearest Neighbor (KNN) methods. The highest overall classification accuracy, 88.2%, was obtained using the Artificial Neural Network.
This research proves that thermal imaging combined with a neural network method can be used to predict oil palm maturity classification.
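As a toy illustration of classifying maturity from mean bunch temperature, the sketch below uses a nearest-centroid rule rather than the LDA/MDA/ANN/KNN models evaluated in the paper; the temperature values are purely hypothetical, chosen only to mirror the reported cooler-with-maturity trend.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mean bunch temperatures in degrees C: cooler as maturity
# increases, mirroring the reported trend; values are illustrative only.
class_means = {"under_ripe": 30.5, "ripe": 29.0, "over_ripe": 27.5}
train = {k: rng.normal(mu, 0.4, size=30) for k, mu in class_means.items()}

# Nearest-centroid classifier on the single temperature feature
centroids = {k: v.mean() for k, v in train.items()}

def classify(temp):
    """Assign the maturity class whose training centroid is closest."""
    return min(centroids, key=lambda k: abs(centroids[k] - temp))

label = classify(29.1)
```

With a single well-separated feature even this trivial rule performs well; the paper's point is that on real, overlapping temperature distributions the ANN extracted the class boundaries most accurately (88.2%).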

Keywords: artificial neural network, maturity classification, oil palm FFB, thermal imaging

Procedia PDF Downloads 353
12127 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning

Authors: Pei Yi Lin

Abstract:

Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. After the occurrence, it is easy to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the intensive care unit of internal medicine was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the mortality rate is about 2.22% in the past three years. Therefore, it is expected to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This study is a retrospective study, using the artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients and let the machine learn. The study included patients aged over 20 years old who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding GCS assessment <4 points, admission to ICU for less than 24 hours, and CAM-ICU evaluation. The CAMICU delirium assessment results every 8 hours within 30 days of hospitalization are regarded as an event, and the cumulative data from ICU admission to the prediction time point are extracted to predict the possibility of delirium occurring in the next 8 hours, and collect a total of 63,754 research case data, extract 12 feature selections to train the model, including age, sex, average ICU stay hours, visual and auditory abnormalities, RASS assessment score, APACHE-II Score score, number of invasive catheters indwelling, restraint and sedative and hypnotic drugs. 
After feature data cleaning and processing, with missing values supplemented by KNN interpolation, a total of 54,595 case events were available for machine learning analysis. Events from May 1 to November 30, 2022, served as the model training data, of which 80% formed the training set and 20% the internal validation set; ICU events from December 1 to December 31, 2022, formed the external validation set. Model inference and performance evaluation were then carried out, and the models were retrained with adjusted parameters. Results: Four machine learning models were analyzed and compared: XGBoost, Random Forest, Logistic Regression, and Decision Tree. In internal validation, Random Forest achieved the highest average accuracy (AUC = 0.86); in external validation, Random Forest and XGBoost tied for the highest (AUC = 0.86); and in cross-validation, Random Forest had the highest average accuracy (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with real-time assessment of ICU patients, so objective and continuous monitoring data are not available to help clinical staff identify and predict delirium more accurately. It is hoped that predictive models built through machine learning can detect delirium early and immediately, support clinical decisions at the best time, and, together with PADIS delirium care measures, provide individualized non-drug interventions that maintain patient safety and improve the quality of care.
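As a rough illustration of the modelling pipeline described above, the sketch below trains a Random Forest on synthetic data with an 80/20 train/validation split and reports a validation AUC; the data, the two-feature risk signal, and all parameter choices are invented stand-ins, not the study's actual database or tuned models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the 12 selected features (age, sex, ICU hours,
# RASS score, APACHE-II score, catheter count, etc.) -- invented data.
n = 2000
X = rng.normal(size=(n, 12))
# Hypothetical risk signal: two features (say age and APACHE-II) drive delirium odds.
logits = 1.2 * X[:, 0] + 0.8 * X[:, 4] - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# 80/20 split into training and internal validation sets, as in the study design.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC = {auc:.2f}")
```

In practice, the external validation set drawn from a later period (here, December) would be scored with the same fitted model rather than re-split from the training data.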

Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model

Procedia PDF Downloads 66
12126 A Strategic Approach in Utilising Limited Resources to Achieve High Organisational Performance

Authors: Collen Tebogo Masilo, Erik Schmikl

Abstract:

The demand for the DataMiner product has presented a great challenge for its vendor, Skyline Communications, in deploying its limited resources, in the form of human resources, financial resources, and office space, to achieve high organisational performance across all its international operations. Owing to its rapid growth, the organisation, with only about one hundred employees, has been unable to efficiently support its existing customers across the globe and provide services to new customers. A combined descriptive and explanatory case study design was selected, using a survey questionnaire distributed to a sample of 100 respondents, of whom 89 responded. Non-probability convenience sampling was employed, and frequency analysis and correlations between the subscales (the four themes) were used to interpret the data. The investigation examined mechanisms that can be deployed to balance the high demand for products against the limited production capacity of the company’s Belgian operations across four aspects: demand management strategies, capacity management strategies, communication methods that can be used to align the sales management department, and reward systems used to improve employee performance. The conclusions derived from the theme ‘demand management strategies’ are that the company is fully aware of the future market demand for its products; however, there is no evidence of proper demand forecasting within the organisation. The conclusions derived from the theme ‘capacity management strategies’ are that employees always have a great deal of work to complete during office hours and often need help from colleagues with urgent tasks, indicating that employees frequently work on unplanned tasks and multiple projects.
The conclusions derived from the theme ‘communication methods used to align the sales management department with operations’ are that communication is poor throughout the organisation: information often stays with management and does not reach non-management employees, and there is a lack of the expected synergy and of good communication between the sales department and the projects office. This has a direct impact on the delivery of projects to customers by the operations department. The conclusions derived from the theme ‘employee reward systems’ are that employees are motivated and feel that they add value in their current functions; however, there are currently no measures in place to identify unhappy employees, nor proper reward systems linked to a performance management system. The research contributes to the body of knowledge by exploring the impact of the four sub-variables, and their interaction, on the challenges of organisational productivity, in particular where an organisation experiences a capacity problem during its growth stage under tough economic conditions. Recommendations were made which, if implemented by management, could further enhance the organisation’s sustained competitive operations.

Keywords: high demand for products, high organisational performance, limited production capacity, limited resources

Procedia PDF Downloads 141
12125 Assessing the Survival Time of Hospitalized Patients in Eastern Ethiopia During 2019–2020 Using the Bayesian Approach: A Retrospective Cohort Study

Authors: Chalachew Gashu, Yoseph Kassa, Habtamu Geremew, Mengestie Mulugeta

Abstract:

Background and Aims: Severe acute malnutrition remains a significant health challenge, particularly in low- and middle-income countries. The aim of this study was to determine the survival time of under-five children with severe acute malnutrition. Methods: A retrospective cohort study was conducted focusing on under-five children with severe acute malnutrition. The study included 322 inpatients admitted to the Chiro hospital in Chiro, Ethiopia, between September 2019 and August 2020, whose data were obtained from medical records. Survival functions were analyzed using Kaplan-Meier plots and log-rank tests. Survival time was further analyzed using the Cox proportional hazards model and Bayesian parametric survival models, employing integrated nested Laplace approximation methods. Results: Among the 322 patients, 118 (36.6%) died as a result of severe acute malnutrition. The estimated median survival time for inpatients was 2 weeks. Model selection criteria favored the Bayesian Weibull accelerated failure time model, which showed that age, body temperature, pulse rate, nasogastric (NG) tube usage, hypoglycemia, anemia, diarrhea, dehydration, malaria, and pneumonia significantly influenced survival time. Conclusions: This study revealed that children below 24 months and those with altered body temperature or pulse rate, NG tube usage, hypoglycemia, or comorbidities such as anemia, diarrhea, dehydration, malaria, or pneumonia had shorter survival times with severe acute malnutrition. To reduce the death rate of children under 5 years of age, community-based management of acute malnutrition should be designed to ensure early detection and to improve access and coverage for malnourished children.
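The Kaplan-Meier survival estimates mentioned above can be sketched with a minimal estimator; the follow-up times and censoring flags below are illustrative values, not the study's data.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up time for each child; events: 1 = death observed, 0 = censored."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv, out_t, out_s = 1.0, [], []
    for t in np.unique(times):
        d = events[times == t].sum()   # deaths observed at time t
        n = (times >= t).sum()         # number still at risk just before t
        if d > 0:
            surv *= (1 - d / n)
            out_t.append(int(t))
            out_s.append(surv)
    return out_t, out_s

# Illustrative follow-up times in weeks (synthetic, not study data).
t = [1, 1, 2, 2, 3, 4, 4, 5, 6, 8]
e = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]
times, survival = kaplan_meier(t, e)
print(list(zip(times, [round(s, 3) for s in survival])))
```

The median survival time reported in the study corresponds to the first time point at which the estimated survival curve drops to 0.5 or below.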

Keywords: Bayesian analysis, severe acute malnutrition, survival data analysis, survival time

Procedia PDF Downloads 38
12124 Religion, Health and Ageing: A Geroanthropological Study on Spiritual Dimensions of Well-Being among the Elderly Residing in Old Age Homes in Jallandher Punjab, India

Authors: A. Rohit Kumar, B. R. K. Pathak

Abstract:

Background: Geroanthropology, or the anthropology of ageing, is a term which can be understood in terms of the anthropology of old age, old age within anthropology, and the anthropology of age. India is known as the land of spirituality and philosophy and is the birthplace of four major world religions, namely Hinduism, Buddhism, Jainism, and Sikhism. The most dominant religion in India today is Hinduism; about 80% of Indians are Hindus, and Hinduism is a religion with a large number of gods and goddesses. Religion in India plays an important role at all life stages, i.e., at birth, in adulthood, and particularly in old age. India is the second largest country in the world, with 72 million persons above 60 years of age in 2001, compared to China's 127 million. The very concept of old age homes in India is new: elderly people staying away from their homes and their children, or being left to themselves, is not considered a happy situation. This paper deals with the anthropology of ageing, religion, and spirituality among the elderly residing in old age homes and tries to explain how religion plays a vital role in the health of the elderly during old age. Methods: The data for the present paper were collected through both qualitative and quantitative methods. Old age homes located in Jallandher (Punjab) were selected for the present study, with age sixty as the cut-off. Narratives and case studies were collected from 100 respondents residing in old age homes. The dominant religions in Punjab were found to be Sikhism and Hinduism, while Jainism and Buddhism were in the minority. It was found that religiosity increases as one grows older; religiosity and spirituality were found to be directly proportional to ageing, and religiosity and health were therefore connected. Results and Conclusion: Religion was found to be a coping mechanism during ill health. The elderly living in old age homes were purposely selected for the study because they receive medical attention provided only by the old age home authorities; moreover, the residents were of low socio-economic status and could not afford medical attention on their own. It was found that elderly people who firmly believed in religion were more satisfied with their health than those who did not believe in religion at all. Belief in a particular religion, god, or goddess had an impact on the health of the elderly.

Keywords: ageing, geroanthropology, religion, spirituality

Procedia PDF Downloads 337
12123 Numerical Modelling of Hydrodynamic Drag and Supercavitation Parameters for Supercavitating Torpedoes

Authors: Sezer Kefeli, Sertaç Arslan

Abstract:

In this paper, supercavitation phenomena and parameters are explained, and hydrodynamic design approaches are investigated for supercavitating torpedoes. In addition, drag force calculation methods for supercavitating vehicles are presented. Conventional heavyweight torpedoes reach up to ~50 knots with classic hydrodynamic techniques; supercavitating torpedoes, on the other hand, may theoretically reach up to ~200 knots. To reach such high speeds, however, hydrodynamic viscous forces have to be reduced or eliminated completely. This necessity revived the supercavitation phenomenon, which is now implemented in conventional torpedoes. Supercavitation is a type of cavitation that is more stable and continuous than other cavitation types. The general principle of supercavitation is to separate the underwater vehicle from the water phase by surrounding the vehicle with cavitation bubbles. This allows the torpedo to operate at high speeds through the water within a fully developed cavity. Conventional torpedoes are termed supercavitating torpedoes when the torpedo moves in a cavity envelope generated by a cavitator in the nose section and a solid fuel rocket engine in the rear section. There are two types of supercavitation phase, natural and artificial. In this study, natural cavitation over disk cavitators is investigated using numerical methods. Once the supercavitation characteristics and drag reduction of natural cavitation are studied on a CFD platform, the results are verified against empirical equations. The supercavitation parameters investigated and compared with empirical results are the cavitation number (σ), the pressure distribution along the axial axis, the drag coefficient (C_d) and drag force (D), the cavity wall velocity (U_c), and the dimensionless cavity shape parameters: cavity length (L_c/d_c), cavity diameter (d_m/d_c), and cavity fineness ratio (L_c/d_m). This paper serves as a feasibility study comparing numerical solutions of the supercavitation phenomena with empirical equations.
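As a hedged illustration of the parameters discussed above, the sketch below evaluates the cavitation number, a commonly quoted disk-cavitator drag fit, and Garabedian-type asymptotic estimates of cavity size. The coefficient 0.82 and the asymptotic forms are textbook approximations whose exact values vary across sources; the operating point is invented, and none of this reproduces the paper's CFD results.

```python
import math

def cavitation_number(p_inf, p_c, rho, V):
    """sigma = (p_inf - p_c) / (0.5 * rho * V^2)."""
    return (p_inf - p_c) / (0.5 * rho * V**2)

def disk_drag_coeff(sigma, cd0=0.82):
    """Commonly used fit for a disk cavitator: C_d ~= C_d0 * (1 + sigma)."""
    return cd0 * (1.0 + sigma)

def cavity_shape(sigma, d_c):
    """Garabedian-type asymptotic estimates, valid for small sigma:
    d_m/d_c ~= sqrt(C_d/sigma),  L_c/d_m ~= sqrt(ln(1/sigma)/sigma)."""
    cd = disk_drag_coeff(sigma)
    d_m = d_c * math.sqrt(cd / sigma)                       # max cavity diameter
    L_c = d_m * math.sqrt(math.log(1.0 / sigma) / sigma)    # cavity length
    return d_m, L_c

# Invented operating point: 100 m/s at 10 m depth, vapour cavity (p_c ~ 2.3 kPa).
rho, V = 1025.0, 100.0
p_inf = 101325.0 + rho * 9.81 * 10.0
sigma = cavitation_number(p_inf, 2300.0, rho, V)
d_c = 0.05                                                  # 5 cm disk cavitator
d_m, L_c = cavity_shape(sigma, d_c)
drag = 0.5 * rho * V**2 * (math.pi * d_c**2 / 4) * disk_drag_coeff(sigma)
print(f"sigma={sigma:.4f}  d_m={d_m:.2f} m  L_c={L_c:.1f} m  D={drag/1000:.1f} kN")
```

The small cavitation number obtained at this speed and depth is what places the flow in the supercavitating regime, with a cavity several times longer than its maximum diameter.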

Keywords: CFD, cavity envelope, high speed underwater vehicles, supercavitating flows, supercavitation, drag reduction, supercavitation parameters

Procedia PDF Downloads 165
12122 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods

Authors: Matthew D. Baffa

Abstract:

Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures and emissivity values for various components of the exterior above-grade wall assemblies were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology from the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. 
Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between 2% and 33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair degree of accuracy.
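The energy balance described above, in which conduction through the wall is equated to the exterior convective plus radiative losses, can be sketched as follows; the temperature readings, emissivity, and convective coefficient h_c are illustrative values, not the study's field measurements.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def u_value(t_in, t_out, t_surface, t_reflective, emissivity, h_c):
    """Estimate the overall U-value (W/m^2 K) from exterior-surface IR readings.
    Conduction through the wall is equated to the exterior surface's
    convective + radiative heat loss, divided by the indoor-outdoor delta-T."""
    q_conv = h_c * (t_surface - t_out)
    q_rad = emissivity * SIGMA * (t_surface**4 - t_reflective**4)
    return (q_conv + q_rad) / (t_in - t_out)

# Illustrative winter readings in kelvin; h_c would come from a wind-speed correlation.
U = u_value(t_in=294.15, t_out=268.15, t_surface=271.15,
            t_reflective=266.15, emissivity=0.90, h_c=3.0)
print(f"U = {U:.2f} W/m^2 K")
```

Repeating this calculation over many thermogram pixels and measurement sessions, then averaging, is one way such field estimates are typically compared against a nominal U-value computed from the wall's material layers.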

Keywords: emissivity, heat loss, infrared thermography, thermal conductance

Procedia PDF Downloads 307
12121 A Review of Critical Framework Assessment Matrices for Data Analysis on Overheating in Buildings Impact

Authors: Martin Adlington, Boris Ceranic, Sally Shazhad

Abstract:

In an effort to reduce carbon emissions, changes in UK regulations, such as Part L (Conservation of fuel and power), dictate improved thermal insulation and enhanced air tightness. These changes were a direct response to the UK Government's commitment to achieving its carbon targets under the Climate Change Act 2008, whose goal is to reduce emissions by at least 80% by 2050. Factors such as climate change are likely to exacerbate the problem of overheating, as this phenomenon is expected to increase the frequency of extreme heat events exemplified by stagnant air masses and successive high minimum overnight temperatures. However, climate change is not the only concern relevant to overheating: as research indicates, location, design, occupation, construction type, and layout can also play a part. Because of this growing problem, research shows that health effects on building occupants could become an issue. Increases in temperature can have a direct impact on the human body's ability to maintain thermoregulation, so heat-related illnesses such as heat stroke, heat exhaustion, heat syncope, and even death can be imminent. This review paper presents a comprehensive evaluation of the current literature on the causes and health effects of overheating in buildings and examines the differing assessment approaches used to measure the concept. First, an overview of the topic is presented, followed by an examination of overheating research from the last decade. These papers form the body of the article and are grouped into a framework matrix summarizing the source material and identifying the differing methods of analysis of overheating. Cross-case evaluation has identified systematic relationships between different variables within the matrix. Key areas of focus include building type and country, occupant behaviour, health effects, simulation tools, and computational methods.

Keywords: overheating, climate change, thermal comfort, health

Procedia PDF Downloads 348
12120 The Impact of Artificial Intelligence on Legislations and Laws

Authors: Keroles Akram Saed Ghatas

Abstract:

The near future will bring significant changes in modern organizations and management due to the growing role of intangible assets and knowledge workers. The area of copyright, intellectual property, digital (intangible) assets, and media redistribution appears to be one of the greatest challenges facing business and society in general, and management sciences and organizations in particular. The proposed article examines the views and perceptions of fairness in digital media sharing among Harvard Law School's LL.M. students, based on 50 qualitative interviews and 100 surveys. The researcher took an ethnographic approach, joining the Harvard LL.M. intake of 2016 through a Facebook group that allows people to connect naturally and attend in-person and private events more easily. After listening to numerous students, the researcher conducted a quantitative survey among 100 respondents to assess their perceptions of fairness in digital file sharing in various contexts (based on the price of the media, its availability, regional licenses, the status of the copyright holder, and so on). Based on the survey results, the researcher conducted long-term, open-ended, and loosely structured ethnographic interviews (50 interviews) to further deepen the understanding of the results. The most important finding of the study is that Harvard lawyers generally support digital piracy in certain contexts, despite having the best possible legal and professional knowledge. Interestingly, they are also more accepting of working for the government than for the private sector. The results of this study provide a better understanding of how 'fairness' is perceived by the younger generation of lawyers and pave the way for a more rational application of licensing laws.

Keywords: piracy, digital sharing, perception of fairness, legal profession

Procedia PDF Downloads 59
12119 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Human motion recognition has expanded greatly in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward; this modification avoids the misclassifications that can occur when recognizing similar motions. Two experiments were conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods on the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
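The two-direction DHMM classification idea can be sketched with a scaled forward algorithm: each class scores a sequence with one model on the forward sequence and one on the reversed sequence, and the class with the highest combined log-likelihood wins. The toy models and gesture labels below are invented for illustration and do not reproduce the paper's trained parameters.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM.
    pi: initial state probs (S,), A: transitions (S, S), B: emissions (S, V)."""
    alpha = pi * B[:, obs[0]]
    log_l = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_l += np.log(s)
        alpha /= s
    return log_l

def classify(obs, models):
    """Score a motion under each class's forward/backward DHMM pair."""
    scores = {}
    for name, (fwd, bwd) in models.items():
        scores[name] = log_likelihood(obs, *fwd) + log_likelihood(obs[::-1], *bwd)
    return max(scores, key=scores.get)

# Two toy 2-state models over a 3-symbol alphabet (illustrative parameters only).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B_wave = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])   # favours symbols 0/1
B_turn = np.array([[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]])   # favours symbols 2/1
models = {"waving":    ((pi, A, B_wave), (pi, A, B_wave)),
          "turn_left": ((pi, A, B_turn), (pi, A, B_turn))}
print(classify([0, 1, 0, 1, 0], models))
```

In a real system the forward and backward models would each be trained (e.g. with Baum-Welch) on the original and reversed training sequences of their class, rather than sharing parameters as in this toy.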

Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model

Procedia PDF Downloads 202
12118 Relationship of Macro-Concepts in Educational Technologies

Authors: L. R. Valencia Pérez, A. Morita Alexander, Peña A. Juan Manuel, A. Lamadrid Álvarez

Abstract:

This research reflects on and identifies explanatory variables, and the relationships among them, that are involved with educational technology, all encompassed in the macro-concepts of cognitive inequality, economy, food, and language. These give the guideline for a more detailed knowledge of educational systems, communication and equipment, physical space, and teachers; all of them, interacting with each other, give rise to what is called educational technology management. These elements contribute to a very specific knowledge of communications equipment, networks and computer equipment, systems, and content repositories. The aim is to establish the importance of knowing the global environment in the transfer of knowledge to poor countries, so that it does not diminish their capacity to be authentic and to preserve their cultures, their languages or dialects, their hierarchies, and their real needs; in short, to respect the customs of the different towns, villages, or cities that are intended to be reached through the use of internationally agreed professional educational technologies. The methodology used in this research is analytical-descriptive, which allows each of the variables that, in our opinion, must be taken into account to be explained, in order to achieve an optimal incorporation of educational technology in a model that gives results in the medium term. The idea is that, in an encompassing way, the concepts will be integrated with others of greater coverage until reaching macro-concepts that have national coverage and serve as elements of conciliation in the different federal and international reforms. At the center of the model is educational technology, which is directly related to the concepts contained in factors such as the educational system, communication and equipment, spaces, and teachers, all globally immersed in the macro-concepts of cognitive inequality, economy, food, and language. One of the major contributions of this article is to express this idea as an algorithm that allows the indicator to be evaluated as impartially as possible, since other indicators are taken from international reference entities such as the OECD in the area of education systems, so that they are not influenced by particular political or interest-driven pressures. This work opens the way for relationships between the entities involved, whether conceptual, procedural, or human, to clearly identify the convergence of their impact on the problem of education and how these relationships can contribute to an improvement; it also shows the possibility of reaching a comprehensive education reform for all.

Keywords: macro-concept relationships, cognitive inequality, economy, food and language

Procedia PDF Downloads 195
12117 Infusing Social Business Skills into the Curriculum of Higher Learning Institutions with Special Reference to Albukhari International University

Authors: Abdi Omar Shuriye

Abstract:

A social business is a business designed to address socio-economic problems and enhance the welfare of the communities involved. Lately, social business, with its focus on innovative ideas, has been capturing the interest of educational institutions, governments, and non-governmental organizations. Social business uses a business model to achieve a social goal, and in the last few decades the idea of imbuing social business into the education system of higher learning institutions has spurred much excitement, owing to the belief that it will lead to job creation and increased social resilience. One higher learning institution which has invested immensely in the idea is Albukhari International University, a private education institution on a state-of-the-art campus providing an advantageous learning ecosystem. The niche area of this institution is social business, and it graduates job creators, not job seekers; this Malaysian institution is unique and one of a kind. The objective of this paper is to develop a work plan, direction, and milestones, as well as the focus areas, for the infusion of social business into higher learning institutions, with special reference to Albukhari International University. The purpose is to develop a prototype and full-scale model to enable higher education institutions to construct the desired curriculum infused with social business. With this model, major predicaments faced by these institutions could be overcome. The paper sets forth an educational plan, spells out the basic tenets of social business, focusing on the nature and implementational aspects of the curriculum, and evaluates the mechanisms applied by these educational institutions. Currently, since research in this area remains scarce, institutions experiment with various methods to find the best way to reach the desired result. The author is of the opinion that social business in education is the main tool for educating holistic future leaders; hence educational institutions should inspire students in the classroom to start up their own businesses by adopting creative and proactive teaching methods. This proposed model is a contribution in that direction.

Keywords: social business, curriculum, skills, university

Procedia PDF Downloads 84
12116 Renewable Energy Storage Capacity Rating: A Forecast of Selected Load and Resource Scenario in Nigeria

Authors: Yakubu Adamu, Baba Alfa, Salahudeen Adamu Gene

Abstract:

As the drive towards clean, renewable, and sustainable energy generation is gradually reshaped by growing renewable penetration, energy storage has become an optimal solution for utilities looking to reduce transmission and capacity costs. Capacity resources therefore need to be adjusted so that renewable energy storage has the opportunity to substitute for retiring conventional energy systems with higher capacity factors. In the Nigerian scenario, where over 80% of current primary energy consumption is met by petroleum, electricity demand is set to more than double by mid-century relative to 2025 levels. With renewable energy penetration rapidly increasing, in particular biomass, hydropower, solar, and wind energy, renewables are expected to account for the largest share of power output in the coming decades. Despite this rapid growth, the imbalance between load and resources has hindered the development of energy storage capacity; forecasting energy storage capacity will therefore play an important role in maintaining the balance between load and resources, including supply and demand. The degree to which this might occur, its timing, and, more importantly, its sustainability are the subject matter of the current research. Here, we forecast future energy storage capacity ratings and evaluate the load and resource scenario in Nigeria. In doing so, we used scenario-based International Energy Agency models; the projected energy demand and supply structure of the country through 2030 is presented and analysed. Overall, this shows that in high renewable (solar) penetration scenarios in Nigeria, energy storage with 4-6 h duration can obtain over 86% capacity rating, with storage comprising about 24% of peak load capacity.
The general takeaway from the current study is that most power systems currently in use have the potential to support fairly large penetrations of 4-6 hour storage as capacity resources before a substantial reduction in capacity ratings occurs. The data presented in this paper are a crucial eye-opener for relevant government agencies in developing these energy resources to tackle the present energy crisis in Nigeria. However, if the transformation of the Nigerian power system continues primarily through the expansion of renewable generation, then longer-duration energy storage will be needed to qualify as a capacity resource. Hence, the analytical task of the current survey will help determine whether and when long-duration storage becomes an integral component of the capacity mix expected in Nigeria by 2030.
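The notion of a storage capacity rating can be illustrated with a simplified single-day peak-shaving check: a storage unit of a given duration earns capacity credit equal to the largest peak reduction it can sustain. The hourly load profile and the one-charge-cycle assumption below are invented simplifications, not the study's IEA-based model.

```python
def max_peak_shaving(load, duration):
    """Largest power reduction r (same units as load) a storage unit with the
    given duration (hours) can sustain over this profile, assuming one charge
    cycle: the energy above the shaved threshold must fit in r * duration."""
    peak = max(load)

    def feasible(r):
        energy_above = sum(max(0.0, l - (peak - r)) for l in load)
        return energy_above <= r * duration

    lo, hi = 0.0, peak
    for _ in range(60):            # bisection on the shaved amount
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative hourly load (MW) with a sharp evening peak.
load = [60] * 17 + [80, 95, 100, 90, 70] + [60, 60]
r4 = max_peak_shaving(load, duration=4)
print(f"4-hour storage capacity credit = {r4:.1f} MW "
      f"({100 * r4 / max(load):.0f}% of peak)")
```

Sharper, shorter peaks yield higher capacity ratings for short-duration storage; as storage is asked to cover broader peaks, the rating falls, which is the saturation effect the abstract describes.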

Keywords: capacity, energy, power system, storage

Procedia PDF Downloads 31
12115 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions

Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri

Abstract:

Increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus will be taken into consideration, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus will be illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty in terms both of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse and will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. In the present paper, special attention will be dedicated to the structural mark-up, parts of speech annotation, and tagging of discursive traits, that are the innovational parts of the project being the result of a thorough study to find the best solution to suit the analytical needs of the data. Several aspects will be addressed, with special attention to the tagging of the speakers’ identity, the communicative events, and anthropophagic. Prominence will be given to the annotation of question/answer exchanges to investigate the interlocutors’ choices and how such choices impact communication. 
Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information, as well as how interviewees provide their answers to fulfill their respective communicative aims. A detailed description of the aforementioned elements will be given using the InterDiplo-Covid19 pilot corpus. The results of our preliminary analysis will highlight the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, tagging system, and discursive-pragmatic annotation, to be included via the Oxygen editor.
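The speaker-identity and question/answer tagging described above can be sketched in a minimal form; note that the element and attribute names used here (`exchange`, `utterance`, `role`, `qa`) are illustrative assumptions, not the actual InterDiplo tag set, which is not detailed in the abstract:

```python
import xml.etree.ElementTree as ET

# Hypothetical mark-up for one question/answer exchange, in the spirit of
# the annotation described (speaker identity, role, Q/A function).
exchange = ET.Element("exchange", type="question-answer")
q = ET.SubElement(exchange, "utterance",
                  speaker="journalist", role="interviewer", qa="question")
q.text = "How has the pandemic changed diplomatic practice?"
a = ET.SubElement(exchange, "utterance",
                  speaker="diplomat", role="interviewee", qa="answer")
a.text = "It has moved much of our negotiation online."

# Serialize, then query the annotation as an analysis tool might,
# e.g. extracting all question utterances.
xml_string = ET.tostring(exchange, encoding="unicode")
root = ET.fromstring(xml_string)
questions = [u.text for u in root.iter("utterance") if u.get("qa") == "question"]
print(questions)
```

Annotating the communicative function of each utterance in this way is what makes the automated pairing of questions with their expected answers tractable downstream.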

Keywords: spoken corpus, diplomats’ interviews, tagging system, discursive-pragmatic annotation, English linguistics

Procedia PDF Downloads 178
12114 Molecular Detection of Acute Virus Infection in Children Hospitalized with Diarrhea in North India during 2014-2016

Authors: Ali Ilter Akdag, Pratima Ray

Abstract:

Background: Acute gastroenteritis viruses such as rotavirus, astrovirus, and adenovirus are mainly responsible for diarrhea in children under 5 years of age. Molecular detection of these viruses is crucially important for understanding the disease and developing effective treatments. This study aimed to determine the prevalence of these common viruses in children under 5 years of age presenting with diarrhea at the Lala Lajpat Rai Memorial Medical College (LLRM) centre (Meerut), North India. Methods: A total of 312 fecal samples were collected over 3 years from children under 5 years of age who presented with acute diarrhea at the LLRM centre: 118 in 2014, 128 in 2015, and 66 in 2016. All samples were first screened by EIA/RT-PCR for rotavirus, adenovirus, and astrovirus. Results: Among the 312 samples from children with acute diarrhea, rotavirus A was the most frequent virus identified (57 cases; 18.2%), followed by astrovirus in 28 cases (8.9%) and adenovirus in 21 cases (6.7%). Mixed infections were found in 14 cases, all of which presented with acute diarrhea (14/312; 4.48%). Conclusions: These viruses are a major cause of diarrhea in children under 5 years of age in North India, with rotavirus A the most common etiological agent, followed by astrovirus. This surveillance is important for vaccine development. Year-to-year variation in virus detection reflects differences in sampling season, sampling method, hygiene conditions, socioeconomic level of the population, enrolment criteria, and virus detection methods. Astrovirus detection was higher than rotavirus in 2015, but over the three-year study rotavirus A was mainly responsible for severe diarrhea in children under 5 years of age in North India. This emphasizes the need for cost-effective diagnostic assays for rotaviruses, which would help to determine the disease burden.

Keywords: adenovirus, astrovirus, hospitalized children, rotavirus

Procedia PDF Downloads 135
12113 Generalization of Tau Approximant and Error Estimate of Integral Form of Tau Methods for Some Class of Ordinary Differential Equations

Authors: A. I. Ma’ali, R. B. Adeniyi, A. Y. Badeggi, U. Mohammed

Abstract:

An error estimate for the integrated formulation of the Lanczos tau method for some classes of ordinary differential equations was previously reported. This paper is concerned with the generalization of tau approximants and their corresponding error estimates for the class of ordinary differential equations (ODEs) characterized by m + s = 3 (i.e., for m = 1, s = 2; m = 2, s = 1; and m = 3, s = 0), where m and s are the order of the differential equation and the number of overdetermination conditions, respectively. The general results obtained were validated with some numerical examples.
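For context, the Lanczos tau method seeks a polynomial that satisfies the given ODE exactly up to a small structured perturbation; a standard formulation (the notation below is the textbook convention, assumed rather than taken from this paper) is:

```latex
% Tau method: for the m-th order linear problem
%   L y(x) = f(x),  L \equiv \sum_{k=0}^{m} p_k(x) \frac{d^k}{dx^k},
% seek a degree-n polynomial y_n satisfying the perturbed equation
%   L y_n(x) = f(x) + H_n(x),
% where the perturbation is a combination of Chebyshev polynomials
%   H_n(x) = \sum_{i=1}^{m+s} \tau_i \, T_{n-i+1}(x),
% with m + s free tau parameters fixed by the boundary and
% overdetermination conditions.
```

The error estimate then follows from bounding the difference y − y_n in terms of the computed τ parameters, which is why the classification by m + s (here m + s = 3) determines the form of both the approximant and its estimate.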

Keywords: approximant, error estimate, tau method, overdetermination

Procedia PDF Downloads 602
12112 From Text to Data: Sentiment Analysis of Presidential Election Political Forums

Authors: Sergio V Davalos, Alison L. Watkins

Abstract:

User generated content (UGC) such as website posts has data associated with it: time of the post, gender, location, type of device, and number of words. The text entered in UGC can provide a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words). In addition to the number of words per post, the frequency of each term is determined per post and as the sum of occurrences across all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the US presidential elections of 2012 and 2016. Sentiment analysis derives data from the text, which enables the subsequent application of data analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis, yielding a sentiment score for each post. Based on the sentiment scores of the posts, there are significant differences between the content and sentiment of the 2012 and 2016 presidential election forums. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment. In the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. There were also changes in sentiment over time: for both elections, as the election drew closer, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidate; in Trump’s case the most numerous posts were negative, exceeding in number Clinton’s most numerous posts, which were positive. KNIME topic modeling was used to derive topics from the posts. There were also changes in topics and keyword emphasis over time: initially the political parties were the most referenced, and as the election drew closer the emphasis shifted to the candidates.
The SASA method predicted sentiment better than four other methods in the SentiBench benchmark. The research derived sentiment data from text; in combination with other data, the sentiment data provided insight and discovery about user sentiment in the US presidential elections of 2012 and 2016.
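The pipeline described above (posts as collections of terms, per-post sentiment scores, term frequencies summed over all posts) can be sketched with a simple lexicon-based scorer; this is a minimal illustration of the general approach, not the SASA model itself, and the word list and example posts are invented:

```python
from collections import Counter

# Tiny illustrative lexicon; a real analyzer such as SASA uses a far
# richer model rather than a hand-written word list.
LEXICON = {"great": 1, "win": 1, "hope": 1, "bad": -1, "lies": -1, "worst": -1}

def sentiment_score(post: str) -> int:
    """Sum the lexicon scores of the terms in one post."""
    return sum(LEXICON.get(t, 0) for t in post.lower().split())

def term_frequencies(posts: list[str]) -> Counter:
    """Frequency of each term summed over all posts."""
    counts = Counter()
    for p in posts:
        counts.update(p.lower().split())
    return counts

posts = ["great debate hope he can win", "worst candidate lies again"]
scores = [sentiment_score(p) for p in posts]
print(scores)  # one signed score per post
```

Summing or cumulating such per-post scores over time is what makes trends like the pre-election drift toward negative sentiment visible as data rather than as raw text.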

Keywords: sentiment analysis, text mining, user generated content, US presidential elections

Procedia PDF Downloads 183
12111 Status Quo Bias: A Paradigm Shift in Policy Making

Authors: Divyansh Goel, Varun Jain

Abstract:

Classical economics works on the principle that people are rational and analytical in their decision making and that their choices fall in line with the most suitable option according to the dominant strategy in a standard game theory model. This model has failed on many occasions to estimate the behavior and dealings of rational people, giving proof of other underlying heuristics and cognitive biases at work. This paper probes into the study of these factors, which fall under the umbrella of behavioral economics, and through them explores the solution to a problem that many nations presently face. There has long been a wide disparity between the number of people holding favorable views on organ donation and the number of people actually signing up for it. This paper, in its entirety, is an attempt to shape public policy so as to increase the number of organ donations and close the gap between the people who believe in signing up for organ donation and the ones who actually do. The key assumption is that in cases of cognitive dissonance, where people hold conflicting views, they tend to go with the default choice. This tendency is a well-documented cognitive bias known as the status quo bias. The research involves an assay of mandated-choice models of organ donation with two case studies: first, the opt-in system of Germany (where people must explicitly sign up for organ donation), and second, the opt-out system of Austria (where every citizen is an organ donor at birth and must explicitly sign up for refusal). Additionally, a detailed analysis is presented of the experiment performed by Eric J. Johnson and Daniel G. Goldstein; their research, as well as independent experiments such as that by Tsvetelina Yordanova of the University of Sofia, yields similar results.
The conclusion is that the general population by and large has no rigid stand on organ donation and is susceptible to the status quo bias, which in turn can determine whether a large majority of people consent to organ donation or not. Thus, in our paper, we throw light on how governments can use the status quo bias to drive positive social change by making policies in which everyone is, by default, marked an organ donor, which will in turn save the lives of people who die on organ transplantation waitlists and save the economy countless hours of productivity.

Keywords: behavioral economics, game theory, organ donation, status quo bias

Procedia PDF Downloads 295
12110 IT Investment Decision Making: Case Studies on the Implementation of Contactless Payments in Commercial Banks of Kazakhstan

Authors: Symbat Moldabekova

Abstract:

This research explores the practice of decision-making in commercial banks in Kazakhstan. It focuses on recent technologies, such as contactless payments and QR codes, and uses interviews with bank executives and industry practitioners to gain an understanding of how decisions are made and of the role of financial assessment methods. The aims of the research are (1) to study the importance of financial techniques for evaluating IT investments; (2) to understand the role of different expert groups; (3) to explore how market trends and industry features affect decisions on IT; and (4) to build a model that describes the actual practice of decision-making on IT in commercial banks in Kazakhstan. The theoretical framework suggests that decision-making on IT is a socially constructed process in which actor groups with different backgrounds interact and negotiate with each other to develop a shared understanding of IT and to make more effective decisions. Theory and observation suggest that the more parties are involved in the process of decision-making, the higher the possibility of disagreement between them. As each actor group has its own view of the rational decision on an IT project, it is worth exploring how the final decision is made in practice. Initial findings show that financial assessment methods are used as a guideline and do not play a big role in the final decision. The commercial banks of Kazakhstan tend to study the experience of neighboring countries before adopting an innovation. Implementing contactless payments is widely regarded as a key success factor due to increasing competition in the market. First-to-market innovations are treated as priorities; therefore, such decisions can be made while excluding certain actor groups from the process. Customers play a significant role and participate in testing demo versions of products before an innovation is brought to market.
The study will identify the viewpoints of actors in the banking sector on what constitutes a rational decision, and the ways decision-makers from a variety of disciplines interact with each other in order to make a decision on IT in retail banks.

Keywords: actor groups, decision making, technology investment, retail banks

Procedia PDF Downloads 119