Search results for: agile methods

12540 A Mixed-Method Exploration of the Interrelationship between Corporate Governance and Firm Performance

Authors: Chen Xiatong

Abstract:

The study explores the interrelationship between corporate governance factors and firm performance in Mainland China using a mixed-method approach, with three aims: to clarify the current effectiveness of corporate governance, to uncover the complex interrelationships between governance factors and firm performance, and to enhance understanding of corporate governance strategies in Mainland China. Quantitative data will be gathered through surveys and sampling methods, focusing on governance factors and firm performance indicators, and analyzed using statistical, mathematical, and computational techniques. Qualitative data will be collected through policy research, case studies, and interviews with staff members, and analyzed through thematic analysis and interpretation of policy documents, case study findings, and interview responses. The study addresses the effectiveness of corporate governance in Mainland China, the interrelationship between governance factors and firm performance, and staff members' perceptions of corporate governance strategies. The research contributes to the literature on corporate governance by providing insights into the effectiveness of governance practices in Mainland China, offering suggestions for companies to enhance their governance practices, and contributing to the fields of business management and human resources management.

Keywords: corporate governance, business management, human resources management, board of directors

Procedia PDF Downloads 55
12539 The Role of Home Composting in Waste Management Cost Reduction

Authors: Nahid Hassanshahi, Ayoub Karimi-Jashni, Nasser Talebbeydokhti

Abstract:

Due to the economic and environmental benefits of producing less waste, the US Environmental Protection Agency (EPA) promotes source reduction as one of the most important means of dealing with the problems caused by increased landfills and pollution. Waste reduction involves all waste management methods, including source reduction, recycling, and composting, that reduce waste flow to landfills or other disposal facilities. Source reduction of waste can be studied from two perspectives: avoiding waste production, or reducing per capita waste production, and waste diversion, which indicates the reduction of waste transferred to landfills. The present paper investigates home composting as a managerial solution for reducing waste transfer to landfills. Home composting has many benefits. Using household waste to produce compost results in a much smaller amount of waste being sent to landfills, which in turn reduces the costs of waste collection, transportation, and burial. Reducing the volume of waste for disposal and using it to produce compost and plant fertilizer helps recycle the material in a shorter time and use it effectively to preserve the environment and reduce contamination. Producing compost at home requires a very small piece of land for preparation and recycling compared with other methods. The final product of home-made compost is valuable and helps to grow crops and garden plants; it is also used for improving soil structure and maintaining soil moisture. Food waste transferred to landfills spoils and produces leachate after a while, and also releases methane and other greenhouse gases. Composting these materials at home is the best way to manage degradable materials, use them efficiently, and reduce environmental pollution. Studies have shown that the benefits from the sale of the compost produced and the reduced costs of collecting, transporting, and burying waste can cover the costs of purchasing a home composting machine and the related training. Moreover, the process of producing home compost may become profitable within 4 to 5 years and, as a result, can play a major role in reducing waste management costs.
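To make the payback arithmetic above concrete, the short sketch below computes a simple payback period in Python; all monetary figures are hypothetical placeholders rather than data from the study.

# Simple payback-period sketch for a home composting unit.
# All figures are hypothetical placeholders, not data from the study.

machine_cost = 300.0        # one-time purchase of a home composting machine
training_cost = 50.0        # one-time cost of related training

compost_sales = 40.0        # yearly revenue from selling surplus compost
avoided_waste_cost = 35.0   # yearly savings on collection, transport, burial

annual_benefit = compost_sales + avoided_waste_cost
payback_years = (machine_cost + training_cost) / annual_benefit
print(f"Payback period: {payback_years:.1f} years")  # ~4.7 years with these figures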

Keywords: compost, home compost, reducing waste, waste management

Procedia PDF Downloads 428
12538 Investigating a Crack in Care: Assessing Long-Term Impacts of Child Abuse and Neglect

Authors: Remya Radhakrishnan, Hema Perinbanathan, Anukriti Rath, Reshmi Ramachandran, Rohith Thazhathuvetil Sasindrababu, Maria Karizhenskaia

Abstract:

Childhood adversities have lasting effects on health and well-being. This abstract explores the connection between adverse childhood experiences (ACEs) and health consequences, including substance abuse and obesity. Understanding the impact of childhood trauma, and emphasizing culturally sensitive treatments and focused interventions, helps to mitigate these effects. Research consistently shows a strong link between ACEs and poor health outcomes. Our team conducted a comprehensive literature review of depression and anxiety in Canadian children and youth, exploring diverse treatment methods, including medical treatment, psychotherapy, and alternative therapies such as art and music therapy. We searched Medline, Google Scholar, and the St. Lawrence College Library. Only original research papers that were peer-reviewed, published between 2012 and 2023, and reporting on childhood adversities affecting health and their treatment in children and youth in Canada were considered. We focused on their significance in treating depression and anxiety. According to the study's findings, the prevalence of ACEs remains a significant concern. In Canada, 40% of people report having had multiple ACEs, and 78% report having had at least one ACE, highlighting the persistence of childhood adversity and indicating that the issue is unlikely to fade away in the near future. Likewise, findings revealed that individuals who experienced abuse, neglect, or violence during childhood are more likely to engage in harmful behaviors such as polydrug use, suicidal ideation, and victimization, and to suffer from mental health problems such as depression and post-traumatic stress disorder (PTSD).

Keywords: adverse childhood experiences (ACEs), obesity, post-traumatic stress disorder (PTSD), resilience, substance abuse, trauma-informed care

Procedia PDF Downloads 121
12537 Exploring the Influence of Climate Change on Food Behavior in Medieval France: A Multi-Method Analysis of Human-Animal Interactions

Authors: Unsain Dianne, Roussel Audrey, Goude Gwenaëlle, Magniez Pierre, Storå Jan

Abstract:

This paper investigates the changes in husbandry practices and meat consumption during the transition from the Medieval Climate Anomaly to the Little Ice Age in the South of France. More precisely, we investigate breeding strategies, animal size and health status, carcass exploitation strategies, and the impact of socioeconomic status on human-environment interactions. For that purpose, we will analyze faunal remains from ten sites equally distributed between the two periods. These sites include consumers from different socio-economic backgrounds (peasants, city dwellers, soldiers, lords, and the Popes). The research employs different methods used in zooarchaeology: comparative anatomy, biometry, pathology analysis, traceology, and utility indices, as well as experimental archaeology, to reconstruct and understand the changes in animal breeding and consumption practices. Their analysis will allow the determination of modifications in the animal production chain, whether in the composition of the flocks (species, size), their management (age, sex, health status), culinary practices (strategies for the exploitation of carcasses, cooking, tastes), or the importance of trade (butchers, sales of processed animal products). The focus will also be on the social background of consumers. The aim will be to determine whether climate change had a greater impact on the most modest groups (such as peasants), whether the consequences were global and also affected the highest levels of society, or whether social and economic factors were sufficient to balance out the climatic hazards, leading to no significant changes. This study will contribute to our understanding of the impact of climate change on breeding and consumption strategies in medieval society from a historical and social point of view. It combines various research methods to provide a comprehensive analysis of the changes in human-animal interactions during different climatic periods.

Keywords: archaeology, animal economy, cooking, husbandry practices, climate change, France

Procedia PDF Downloads 59
12536 Ethnobotanical Medicines for Treating Snakebites among the Indigenous Maya Populations of Belize

Authors: Kerry Hull, Mark Wright

Abstract:

This paper brings to light ethnobotanical medicines used by the Maya of Belize to treat snake bites. The varying ecological zones of Belize boast over fifty species of snakes, nine of which are venomous and dangerous to humans. Two distinct Maya groups occupy neighboring regions of Belize, the Q’eqchi’ and the Mopan. With Western medical care often far from their villages, what traditional methods are used to treat venomous snake bites? Based primarily on data gathered from native consultants during the authors’ fieldwork with both groups, this paper details the ethnobotanical resources used by Q’eqchi’ and Mopan traditional healers. The Q’eqchi’ and Mopan most commonly rely on traditional ‘bush doctors’ (ilmaj in Mopan), both male and female, and specialized ‘snake doctors’ to heal bites from venomous snakes. First, this paper presents each plant employed by healers for bites from the nine venomous snakes of Belize, along with the specific botanical recipes and methods of application for each remedy. The individual chemical and therapeutic qualities of some of these plants are investigated in an effort to explain their possible medicinal value for different toxins or the symptoms caused by those toxins. In addition, this paper explores mythological associations with certain snakes that inform local understanding of which plants are considered efficacious in each case, arguing that numerous oral traditions (recorded by the authors) help to link botanical medicines to episodes within their mythic traditions. Finally, the use of plants to counteract snakebites brought about through sorcery is discussed, inasmuch as some snakes are seen as ‘helpers’ of sorcerers. Snake bites given under these circumstances can only be cured by those who know both the proper corresponding plant(s) and ritual prayer(s). This paper provides detailed documentation of traditional ethnomedicines and practices from the dying art of traditional Maya healers and argues for multi-faceted diagnostic techniques to determine toxin severity, the presence or absence of sorcery, and the appropriate botanical remedy.

Keywords: ethnobotany, Maya, medicine, snake bites

Procedia PDF Downloads 237
12535 Review of Concepts and Tools Applied to Assess Risks Associated with Food Imports

Authors: A. Falenski, A. Kaesbohrer, M. Filter

Abstract:

Introduction: Risk assessments can be performed in various ways and at different degrees of complexity. In order to assess risks associated with imported foods, additional information needs to be taken into account compared to a risk assessment on regional products. The present review is an overview of currently available best-practice approaches and data sources used for food import risk assessments (IRAs). Methods: A literature review was performed. PubMed was searched for articles about food IRAs published in the years 2004 to 2014 (English and German texts only, search string “(English [la] OR German [la]) (2004:2014 [dp]) import [ti] risk”). Titles and abstracts were screened for import risks in the context of IRAs. The finally selected publications were analysed according to a predefined questionnaire extracting the following information: risk assessment guidelines followed, modelling methods used, data and software applied, and the existence of an analysis of uncertainty and variability. IRAs cited in these publications were also included in the analysis. Results: The PubMed search returned 49 publications, 17 of which contained information about import risks and risk assessments. Within these, 19 cross-references were identified as being of interest for the present study. These included original articles, reviews, and guidelines. At least one of the guidelines of the World Organisation for Animal Health (OIE) or the Codex Alimentarius Commission was referenced in each of the IRAs, for the import of animals or for imports concerning foods, respectively. Interestingly, a combination of both was also used to assess the risk associated with the import of live animals serving as the source of food. Methods ranged from fully quantitative IRAs using probabilistic models and dose-response models to qualitative IRAs in which decision trees or severity tables were set up using parameter estimates based on expert opinion. Calculations were done using @Risk, R, or Excel. Most heterogeneous was the type of data used, ranging from general information on imported goods (food, live animals) to pathogen prevalence in the country of origin. These data were either publicly available in databases or lists (e.g., OIE WAHID and Handistatus II, FAOSTAT, Eurostat, TRACES), accessible at a national level (e.g., herd information), or open only to a small group of people (flight passenger import data at a national airport customs office). In the IRAs, an uncertainty analysis was mentioned in some cases, but calculations were performed in only a few. Conclusion: The current state of the art in the assessment of risks of imported foods is characterized by great heterogeneity in general methodology and data used. Often, information is gathered on a case-by-case basis and reformatted by hand in order to perform the IRA. This analysis therefore illustrates the need for a flexible, modular framework supporting the connection of existing data sources with data analysis and modelling tools. Such an infrastructure could pave the way to IRA workflows applicable ad hoc, e.g., in a crisis situation.

Keywords: import risk assessment, review, tools, food import

Procedia PDF Downloads 302
12534 Polymer-Layered Gold Nanoparticles: Preparation, Properties and Uses of a New Class of Materials

Authors: S. M. Chabane sari S. Zargou, A.R. Senoudi, F. Benmouna

Abstract:

Immobilization of nanoparticles (NPs) is the subject of numerous studies pertaining to the design of polymer nanocomposites, supported catalysts, bioactive colloidal crystals, inverse opals for novel optical materials, latex-templated hollow inorganic capsules, immunodiagnostic assays, “Pickering” emulsion polymerization for making latex particles and film-forming composites or Janus particles, chemo- and biosensors, tunable plasmonic nanostructures, hybrid porous monoliths for separation science and technology, biocidal polymer/metal nanoparticle composite coatings, and so on. In recent years in particular, the literature has witnessed impressive progress in investigations of polymer coatings, grafts, and particles as supports for anchoring nanoparticles. This is due to several factors: polymer chains are flexible and may contain a variety of functional groups that are able to efficiently immobilize nanoparticles and their precursors by dispersive (van der Waals), electrostatic, hydrogen, or covalent bonds. We review methods to prepare polymer-immobilized nanoparticles through a plethora of strategies in view of developing systems for separation, sensing, extraction, and catalysis. The emphasis is on methods that provide (i) polymer brushes and grafts; (ii) monoliths and porous polymer systems; (iii) natural polymers; and (iv) conjugated polymers as platforms for anchoring nanoparticles. The latter range from soft biomacromolecular species (proteins, DNA) to metallic, C60, semiconductor, and oxide nanoparticles; they can be attached through electrostatic interactions or covalent bonding. It is very clear that the physicochemical properties of polymers (e.g., sensing and separation) are enhanced by anchored nanoparticles, while polymers provide excellent platforms for dispersing nanoparticles for, e.g., high catalytic performance. We thus anticipate that the synergetic roles of polymeric supports and anchored particles will increasingly be exploited in view of designing unique hybrid systems with unprecedented properties.

Keywords: gold, layer, polymer, macromolecular

Procedia PDF Downloads 391
12533 The Observable Method for the Regularization of Shock-Interface Interactions

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique that is capable of regularizing shocks and sharp interfaces simultaneously in shock-interface interaction simulations. The direct numerical simulation of flows involving shocks has been investigated for many years, and many numerical methods have been developed to capture shocks. However, most of these methods rely on numerical dissipation to regularize the shocks. Moreover, in high Reynolds number flows, the nonlinear terms in hyperbolic Partial Differential Equations (PDEs) dominate, constantly generating small-scale features. This makes direct numerical simulation of shocks even harder. The same difficulty occurs in two-phase flows with sharp interfaces, where the nonlinear terms in the governing equations keep sharpening the interfaces into discontinuities. The main idea of the proposed technique is to average out the small scales that are below the resolution (observable scale) of the computational grid by filtering the convective velocity in the nonlinear terms of the governing PDE. This technique is named the “observable method”, and it results in a set of hyperbolic equations called observable equations, namely, the observable Navier-Stokes or Euler equations. The observable method has been applied to flow simulations involving shocks, turbulence, and two-phase flows, and the results are promising. In the current paper, the observable method is examined on its performance in regularizing shocks and interfaces at the same time in shock-interface interaction problems. Bubble-shock interactions and the Richtmyer-Meshkov instability are particularly chosen for study. The observable Euler equations are solved numerically with pseudo-spectral discretization in space and the third-order Total Variation Diminishing (TVD) Runge-Kutta method in time. Results are presented and compared with existing publications. The interface acceleration and deformation and the shock reflection are particularly examined.
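As a rough illustration of the filtering idea, the sketch below applies an observable-style regularization to the 1D inviscid Burgers equation: the convective velocity is low-pass filtered with a Helmholtz-type filter before multiplying the gradient, and time integration uses a pseudo-spectral discretization with third-order TVD Runge-Kutta. The model equation, grid size, and observable scale alpha are illustrative assumptions, not the authors' Euler setup.

import numpy as np

# Minimal sketch of the observable (inviscid) regularization on 1D Burgers':
# the convective velocity is filtered at the observable scale alpha before it
# multiplies the gradient. Grid size, alpha, and the Helmholtz-type filter
# choice are illustrative assumptions.

N = 256                                       # grid points
L = 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi    # angular wavenumbers
alpha = 4 * (L / N)                           # observable scale (~4 grid cells, assumed)

def filtered(u):
    # Helmholtz filter (1 - alpha^2 d^2/dx^2)^-1 applied in Fourier space
    return np.real(np.fft.ifft(np.fft.fft(u) / (1 + (alpha * k) ** 2)))

def rhs(u):
    # observable Burgers: u_t = -ubar * u_x, with the convective velocity filtered
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return -filtered(u) * ux

def tvd_rk3_step(u, dt):
    # third-order TVD (SSP) Runge-Kutta of Shu and Osher
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3 + 2 / 3 * (u2 + dt * rhs(u2))

u = np.sin(x)                                 # initial condition that steepens into a shock
dt = 0.4 * (L / N)                            # CFL-limited step (max |u| ~ 1)
for _ in range(int(1.5 / dt)):                # integrate past the shock-formation time
    u = tvd_rk3_step(u, dt)
print(f"max |u| after shock formation time: {np.abs(u).max():.3f}")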

Keywords: compressible flow simulation, inviscid regularization, Richtmyer-Meshkov instability, shock-bubble interactions

Procedia PDF Downloads 349
12532 Threat Modeling Methodology for Supporting Industrial Control Systems Device Manufacturers and System Integrators

Authors: Raluca Ana Maria Viziteu, Anna Prudnikova

Abstract:

Industrial control systems (ICS) have received much attention in recent years due to the convergence of information technology (IT) and operational technology (OT), which has increased the interdependence of the safety and security issues to be considered. These issues require ICS-tailored solutions, which led to the need to create a methodology for supporting ICS device manufacturers and system integrators in carrying out threat modeling of embedded ICS devices in a way that guarantees the quality of the identified threats and minimizes subjectivity in the threat identification process. To research the possibility of creating such a methodology, a set of existing standards, regulations, papers, and publications related to threat modeling in the ICS sector and other sectors was reviewed to identify the various methodologies and methods used in threat modeling. Furthermore, the most popular ones were tested in an exploratory phase on a specific PLC device. The outcome of this exploratory phase was used as a basis for defining specific characteristics of ICS embedded devices and their deployment scenarios, identifying the factors that introduce subjectivity in the threat modeling process of such devices, and defining metrics for evaluating the minimum quality requirements of identified threats associated with the deployment of the devices in existing infrastructures. The threat modeling methodology was then created based on the results of the previous steps. The usability of the methodology was evaluated through a set of standardized threat modeling requirements and a standardized comparison method for threat modeling methodologies. The outcomes of these verification methods confirm that the methodology is effective. The full paper includes the outcome of research on the different threat modeling methodologies that can be used in OT, their comparison, and the results of implementing each of them in practice on a PLC device. This research is further used to build a threat modeling methodology tailored to OT environments; a detailed description is included. Moreover, the paper includes the results of an evaluation of the created methodology based on a set of parameters specifically created to rate threat modeling methodologies.

Keywords: device manufacturers, embedded devices, industrial control systems, threat modeling

Procedia PDF Downloads 80
12531 Modification of Newton Method in Two Points Block Differentiation Formula

Authors: Khairil Iskandar Othman, Nadhirah Kamal, Zarina Bibi Ibrahim

Abstract:

Block methods for solving stiff systems of ordinary differential equations (ODEs) are based on backward differentiation formulas (BDF) with a PE(CE)2 mode and the Newton method. In this paper, we introduce a modified Newton method as a new strategy to obtain more efficient results. The derivation of the block BDF (BBDF) using the modified block Newton method is presented. This new block method with a predictor-corrector scheme gives more accurate results when compared to the existing BBDF.
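To illustrate the role of the Newton iteration inside an implicit BDF step, the sketch below solves a scalar stiff test ODE with the standard (non-block) BDF2 formula; the paper's 2-point block BBDF and its modified Newton strategy differ in detail, and the test problem, predictor, and tolerance here are illustrative.

import numpy as np

lam = -1000.0
def f(t, y):
    # stiff test ODE: y' = lam*(y - cos(t)) - sin(t); exact solution y = cos(t)
    return lam * (y - np.cos(t)) - np.sin(t)

def df_dy(t, y):
    # Jacobian (scalar derivative) of f with respect to y
    return lam

def bdf2_step(t2, y0, y1, h):
    # solve y2 - (4/3)y1 + (1/3)y0 = (2/3)h f(t2, y2) by Newton iteration
    rhs_const = (4.0 / 3.0) * y1 - (1.0 / 3.0) * y0
    y2 = y1                                    # simple predictor: last value
    for _ in range(10):
        g = y2 - rhs_const - (2.0 / 3.0) * h * f(t2, y2)
        dg = 1.0 - (2.0 / 3.0) * h * df_dy(t2, y2)
        delta = g / dg
        y2 -= delta
        if abs(delta) < 1e-12:
            break
    return y2

h, t = 0.01, 0.0
ys = [1.0, 1.0]                                # y(0)=1; crude second starting value
while t < 1.0:
    t += h
    ys.append(bdf2_step(t, ys[-2], ys[-1], h))
print(f"y(1) ~= {ys[-1]:.6f}  (exact cos(1) = {np.cos(1.0):.6f})")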

Keywords: modified Newton, stiff, BBDF, Jacobian matrix

Procedia PDF Downloads 378
12530 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERD/ERS), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variations of the filtered signal and extract features that characterize the imagined motion. CSP effectiveness depends on the subject's discriminative frequency band, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on representing EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is then used as a training vector for a global SVM classifier. Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact, with a dimension 68% smaller than the original signal, the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of more than 10 percentage points and the reduction in computational cost denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
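A minimal sketch of the proposed FFT-based filtering stage is given below: a single FFT per channel replaces the per-sub-band IIR filters, with sub-bands obtained by masking coefficient bins. The sampling rate, epoch length, and band layout are assumptions for illustration, not the exact 33-band configuration of the paper.

import numpy as np

fs = 250                                 # sampling rate in Hz (assumed)
epoch = np.random.randn(22, 2 * fs)      # 22 channels, 2-second epoch (dummy data)

freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
spectrum = np.fft.rfft(epoch, axis=1)    # one FFT per channel

def subband_signal(lo, hi):
    # band-limited time signal obtained by zeroing bins outside [lo, hi) Hz
    masked = np.where((freqs >= lo) & (freqs < hi), spectrum, 0.0)
    return np.fft.irfft(masked, n=epoch.shape[1], axis=1)

# Example layout: 4 Hz wide sub-bands, overlapping by 2 Hz, over 0-40 Hz.
bands = [(lo, lo + 4) for lo in range(0, 38, 2)]
subbands = [subband_signal(lo, hi) for lo, hi in bands]
# Each entry would then feed its own CSP filter and LDA classifier, whose
# scores a Bayesian meta-classifier combines for a global SVM.
print(len(subbands), subbands[0].shape)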

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 128
12529 A Mixed-Methods Design and Implementation Study of ‘the Attach Project’: An Attachment-Based Educational Intervention for Looked after Children in Northern Ireland

Authors: Hannah M. Russell

Abstract:

‘The Attach Project’ (TAP) is an educational intervention aimed at improving educational and socio-emotional outcomes for children who are looked after. TAP is underpinned by Attachment Theory and is adapted from Dyadic Developmental Psychotherapy (DDP), a treatment for children and young people impacted by complex trauma and disorders of attachment. TAP was implemented in primary schools in Northern Ireland throughout the 2018/19 academic year. During this time, a design and implementation study was conducted to assess the promise of effectiveness for the future dissemination and ‘scaling-up’ of the programme for a larger, randomised control trial. TAP has been designed specifically for implementation in a school setting and is comprised of a whole-school element and a more individualised Key Adult-Key Child pairing. This design and implementation study utilises a mixed-methods research design consisting of quantitative, qualitative, and observational measures, with stakeholder input and involvement considered an integral component. The use of quantitative measures, such as self-report questionnaires prior to and eight months following the implementation of TAP, enabled analysis of the strength and direction of relations between the various components of the programme, as well as the influence of implementation factors. The use of qualitative measures, incorporating semi-structured interviews and focus groups, enabled the assessment of implementation factors, the identification of implementation barriers, and potential methods of addressing these issues. Observational measures facilitated the continual development and improvement of ‘TAP training’ for school staff. Preliminary findings have provided evidence of promise for the effectiveness of TAP and indicate the potential benefits of introducing this type of attachment-based intervention across other educational settings. This type of intervention could benefit not only children who are looked after but all children who may be impacted by complex trauma or disorders of attachment. Furthermore, findings from this study demonstrate that it is possible for children to form a secondary attachment relationship with a significant adult in school. However, various implementation factors that should be addressed were identified throughout the study, such as the necessity of protected time to facilitate the development of a positive Key Adult-Key Child relationship. Furthermore, additional ‘re-cap’ training is required in future dissemination of the programme to maximise ‘attachment-friendly practice’ across the whole staff team. Qualitative findings also indicated a general opinion across school staff that this type of Key Adult-Key Child pairing could be more effective if introduced as soon as children begin primary school. This research has provided ample evidence of the need to introduce relationally based interventions in schools, to help ensure that children who are looked after, or who are impacted by complex trauma or disorders of attachment, can thrive in the school environment. In addition, this research has facilitated the identification of important implementation factors and barriers to implementation, which can be addressed prior to the ‘scaling-up’ of TAP for a robust, randomised controlled trial.

Keywords: attachment, complex trauma, educational interventions, implementation

Procedia PDF Downloads 194
12528 Maturity Classification of Oil Palm Fresh Fruit Bunches Using Thermal Imaging Technique

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Reza Ehsani, Hawa Ze Jaffar, Ishak Aris

Abstract:

Ripeness estimation of oil palm fresh fruit is an important process that affects the profitability and salability of oil palm fruits. The maturity or ripeness of the oil palm fruits influences the quality of palm oil. The conventional procedure involves physical grading of Fresh Fruit Bunch (FFB) maturity by counting the number of loose fruits per bunch. This physical classification of oil palm FFB is costly, time-consuming, and prone to human error. Hence, many researchers have tried to develop methods for ascertaining the maturity of oil palm fruits and thereby, indirectly, the oil content of distinct palm fruits without the need for exhaustive oil extraction and analysis. This research investigates the potential of infrared (thermal) images as a predictor for classifying oil palm FFB ripeness. A total of 270 oil palm fresh fruit bunches from the most common oil palm cultivar, Nigrescens, were collected according to three maturity categories: under ripe, ripe, and over ripe. Each sample was scanned with the FLIR E60 and FLIR T440 thermal imaging cameras. The average temperature of each bunch was calculated using image processing in the FLIR Tools and FLIR ThermaCAM Researcher Pro 2.10 software environments. The results show that temperature decreased from immature to over-mature oil palm FFBs. An overall analysis-of-variance (ANOVA) test proved that this predictor gave significant differences between the under ripe, ripe, and over ripe maturity categories. This shows that temperature can be a good indicator for classifying oil palm FFB. Classification analysis was performed using the temperature of the FFB as the predictor with Linear Discriminant Analysis (LDA), Mahalanobis Discriminant Analysis (MDA), Artificial Neural Network (ANN), and K-Nearest Neighbor (KNN) methods. The highest overall classification accuracy, 88.2%, was obtained with the Artificial Neural Network. This research proves that thermal imaging combined with a neural network method can be used to predict oil palm maturity classification.
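As an illustration of the classification step, the sketch below trains a K-Nearest Neighbor classifier on mean bunch temperature alone; the synthetic temperatures merely mimic the reported trend of decreasing temperature with maturity and are not the study's measurements.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Predict maturity class from the mean bunch temperature extracted from a
# thermal image. Temperatures and class means below are synthetic placeholders.

rng = np.random.default_rng(0)
classes = ["under ripe", "ripe", "over ripe"]
means = {"under ripe": 30.0, "ripe": 28.5, "over ripe": 27.0}  # assumed trend:
# temperature decreases from immature to over-mature bunches, as reported

X = np.concatenate([rng.normal(means[c], 0.6, 90) for c in classes]).reshape(-1, 1)
y = np.repeat(classes, 90)          # 3 x 90 = 270 bunches, as in the study

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"held-out accuracy: {knn.score(X_te, y_te):.2f}")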

Keywords: artificial neural network, maturity classification, oil palm FFB, thermal imaging

Procedia PDF Downloads 361
12527 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning

Authors: Pei Yi Lin

Abstract:

Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium, in order to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days; the mortality rate has been about 2.22% over the past three years. Therefore, this study aims to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This is a retrospective study that uses an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years old who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment < 4 points, those admitted to the ICU for less than 24 hours, and those without a CAM-ICU evaluation. Each CAM-ICU delirium assessment, performed every 8 hours within 30 days of hospitalization, is regarded as an event, and the cumulative data from ICU admission to the prediction time point are extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 research case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay in hours, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, restraint use, and sedative and hypnotic drugs. After feature data cleaning and processing, with missing values supplemented by KNN interpolation, a total of 54,595 research case events were available for machine learning analysis. The research events from May 1 to November 30, 2022, were used as the model training data, of which 80% formed the training set for model training and 20% the validation set for internal verification. The ICU research events from December 1 to December 31, 2022, served as the external verification set. Finally, model inference and performance evaluation were performed, and the model was then retrained with adjusted parameters. Results: In this study, XGBoost, Random Forest, Logistic Regression, and Decision Tree models were analyzed and compared. The average accuracy in internal verification was highest for Random Forest (AUC = 0.86); in external verification, Random Forest and XGBoost tied for the highest average accuracy (AUC = 0.86); and the average cross-validation accuracy was highest for Random Forest (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist in the real-time assessment of ICU patients, so more objective and continuous monitoring data cannot be provided to help clinical staff accurately identify and predict the occurrence of delirium. It is hoped that the development of predictive models through machine learning can predict delirium early and immediately, enabling clinical decisions at the best time and, in cooperation with PADIS delirium care measures, providing individualized non-drug interventional care to maintain patient safety and improve the quality of care.
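A minimal sketch of the modeling pipeline described above is shown below, combining KNN imputation with a Random Forest evaluated by AUC; the feature names and the synthetic data are placeholders, not the study's clinical records.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a few of the 12 clinical features; names and values
# are illustrative placeholders.
rng = np.random.default_rng(42)
n = 2000
X = pd.DataFrame({
    "age": rng.integers(20, 95, n).astype(float),
    "rass_score": rng.integers(-5, 5, n).astype(float),
    "apache_ii": rng.integers(0, 40, n).astype(float),
    "n_catheters": rng.integers(0, 6, n).astype(float),
    "sedatives": rng.integers(0, 2, n).astype(float),
})
X.loc[rng.random(n) < 0.05, "apache_ii"] = np.nan   # simulate missing values
y = (rng.random(n) < 1 / (1 + np.exp(-(X["apache_ii"].fillna(20) - 20) / 5))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(KNNImputer(n_neighbors=5),          # KNN interpolation of gaps
                      RandomForestClassifier(n_estimators=300, random_state=0))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"internal validation AUC: {auc:.2f}")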

Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model

Procedia PDF Downloads 76
12526 A Strategic Approach in Utilising Limited Resources to Achieve High Organisational Performance

Authors: Collen Tebogo Masilo, Erik Schmikl

Abstract:

The demand for the DataMiner product by customers has presented a great challenge for the vendor, Skyline Communications, in deploying its limited resources, in the form of human resources, financial resources, and office space, to achieve high organisational performance across its international operations. The rapidly growing organisation has been unable to efficiently support its existing customers across the globe and provide services to new customers, due to its limited workforce of approximately one hundred employees. Combined descriptive and explanatory case study research methods were selected as the research design, making use of a survey questionnaire that was distributed to a sample of 100 respondents. A sample return of 89 respondents was achieved. The sampling method employed was non-probability sampling, using the convenience sampling method. Frequency analysis and correlation between the subscales (the four themes) were used for statistical analysis to interpret the data. The investigation examined mechanisms that can be deployed to balance the high demand for products against the limited production capacity of the company’s Belgian operations across four aspects: demand management strategies, capacity management strategies, communication methods that can be used to align the sales management department with operations, and reward systems in use to improve employee performance. The conclusions derived from the theme ‘demand management strategies’ are that the company is fully aware of the future market demand for its products; however, there seems to be no evidence that proper demand forecasting is conducted within the organisation. The conclusions derived from the theme ‘capacity management strategies’ are that employees always have a lot of work to complete during office hours and often need help from colleagues with urgent tasks. This indicates that employees often work on unplanned tasks and multiple projects. The conclusions derived from the theme ‘communication methods used to align the sales management department with operations’ are that communication is poor throughout the organisation. Information often stays with management and does not reach non-management employees. There is also a lack of the expected smooth synergy and poor communication between the sales department and the projects office, which has a direct impact on the delivery of projects to customers by the operations department. The conclusions derived from the theme ‘employee reward systems’ are that employees are motivated and feel that they add value in their current functions. There are currently no measures in place to identify unhappy employees, and there are also no proper reward systems in place that are linked to a performance management system. The research contributes to the body of knowledge by exploring the impact of the four sub-variables and their interaction on the challenges of organisational productivity, in particular where an organisation experiences a capacity problem during its growth stage under tough economic conditions. Recommendations were made which, if implemented by management, could further enhance the organisation’s sustained competitive operations.

Keywords: high demand for products, high organisational performance, limited production capacity, limited resources

Procedia PDF Downloads 144
12525 Assessing the Survival Time of Hospitalized Patients in Eastern Ethiopia During 2019–2020 Using the Bayesian Approach: A Retrospective Cohort Study

Authors: Chalachew Gashu, Yoseph Kassa, Habtamu Geremew, Mengestie Mulugeta

Abstract:

Background and Aims: Severe acute malnutrition remains a significant health challenge, particularly in low- and middle-income countries. The aim of this study was to determine the survival time of under-five children with severe acute malnutrition. Methods: A retrospective cohort study was conducted at a hospital, focusing on under-five children with severe acute malnutrition. The study included 322 inpatients admitted to Chiro Hospital in Chiro, Ethiopia, between September 2019 and August 2020, whose data were obtained from medical records. Survival functions were analyzed using Kaplan–Meier plots and log-rank tests. The survival time of severe acute malnutrition was further analyzed using the Cox proportional hazards model and Bayesian parametric survival models, employing integrated nested Laplace approximation methods. Results: Among the 322 patients, 118 (36.6%) died as a result of severe acute malnutrition. The estimated median survival time for inpatients was found to be 2 weeks. Model selection criteria favored the Bayesian Weibull accelerated failure time (AFT) model, which demonstrated that age, body temperature, pulse rate, nasogastric (NG) tube usage, hypoglycemia, anemia, diarrhea, dehydration, malaria, and pneumonia significantly influenced the survival time of severe acute malnutrition. Conclusions: This study revealed that, among children under five affected by severe acute malnutrition, those below 24 months, those with altered body temperature or pulse rate, NG tube usage, or hypoglycemia, and those with comorbidities such as anemia, diarrhea, dehydration, malaria, and pneumonia had shorter survival times. To reduce the death rate of children under 5 years of age, community management of acute malnutrition should be designed to ensure early detection and to improve access and coverage for malnourished children.
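To illustrate the model form, the sketch below fits a Kaplan-Meier curve and a Weibull accelerated failure time model with the lifelines library; note the study fitted the Weibull AFT in a Bayesian framework via integrated nested Laplace approximation, whereas this frequentist fit on synthetic data is purely illustrative.

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, WeibullAFTFitter

# Synthetic placeholder data: time in weeks, event indicator = death.
rng = np.random.default_rng(1)
n = 322
df = pd.DataFrame({
    "age_months": rng.integers(1, 60, n),
    "anemia": rng.integers(0, 2, n),
})
scale = np.exp(1.2 + 0.01 * df["age_months"] - 0.4 * df["anemia"])
df["time_weeks"] = rng.weibull(1.5, n) * scale   # Weibull-distributed survival times
df["died"] = rng.integers(0, 2, n)               # event indicator (placeholder)

km = KaplanMeierFitter().fit(df["time_weeks"], event_observed=df["died"])
print(f"median survival: {km.median_survival_time_:.1f} weeks")

aft = WeibullAFTFitter().fit(df, duration_col="time_weeks", event_col="died")
aft.print_summary()   # acceleration factors for age and anemia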

Keywords: Bayesian analysis, severe acute malnutrition, survival data analysis, survival time

Procedia PDF Downloads 50
12524 Religion, Health and Ageing: A Geroanthropological Study on Spiritual Dimensions of Well-Being among the Elderly Residing in Old Age Homes in Jallandher Punjab, India

Authors: A. Rohit Kumar, B. R. K. Pathak

Abstract:

Background: Geroanthropology, or the anthropology of ageing, is a term which can be understood in terms of the anthropology of old age, old age within anthropology, and the anthropology of age. India is known as the land of spirituality and philosophy and is the birthplace of four major religions of the world, namely Hinduism, Buddhism, Jainism, and Sikhism. The most dominant religion in India today is Hinduism; about 80% of Indians are Hindus. Hinduism is a religion with a large number of gods and goddesses. Religion in India plays an important role at all life stages, i.e., at birth, in adulthood, and particularly during old age. India is the second largest country in the world, with 72 million persons above 60 years of age in 2001, compared to China's 127 million. The very concept of old age homes in India is new. Elderly people staying away from their homes, away from their children, or being left in such homes is not considered a happy situation. This paper deals with the anthropology of ageing, religion, and spirituality among the elderly residing in old age homes and tries to explain how religion plays a vital role in the health of the elderly during old age. Methods: The data for the present paper were collected through both qualitative and quantitative methods. Old age homes located in Jallandher (Punjab) were selected for the present study. Age sixty was considered the cut-off age. Narratives and case studies were collected from 100 respondents residing in old age homes. The dominant religions in Punjab were found to be Sikhism and Hinduism, while Jainism and Buddhism were in the minority. It was found that religiosity increases as one grows older. Religiosity and spirituality were found to be directly proportional to ageing; therefore, religiosity and health were found to be connected. Results and Conclusion: Religion was found to be a coping mechanism during ill health. The elderly living in old age homes were purposely selected for the study, as they receive medical attention provided only by the old age home authorities. Moreover, the inmates of the old age homes were of low socio-economic status and could not afford medical attention on their own. It was found that the elderly who firmly believed in religion were more satisfied with their health compared to the elderly who did not believe in religion at all. Belief in a particular religion, god, or goddess had an impact on the health of the elderly.

Keywords: ageing, geroanthropology, religion, spirituality

Procedia PDF Downloads 342
12523 Numerical Modelling of Hydrodynamic Drag and Supercavitation Parameters for Supercavitating Torpedoes

Authors: Sezer Kefeli, Sertaç Arslan

Abstract:

In this paper, supercavitation phenomena and parameters are explained, and hydrodynamic design approaches for supercavitating torpedoes are investigated. In addition, drag force calculation methods for supercavitating vehicles are presented. Conventional heavyweight torpedoes reach up to ~50 knots using classic hydrodynamic techniques; supercavitating torpedoes, on the other hand, may theoretically reach up to ~200 knots. However, in order to reach such high speeds, hydrodynamic viscous forces have to be reduced or eliminated completely. This necessity revived the supercavitation phenomenon, which is now implemented in conventional torpedoes. Supercavitation is a type of cavitation that is more stable and continuous than other cavitation types. The general principle of supercavitation is to separate the underwater vehicle from the water phase by surrounding the vehicle with cavitation bubbles. This allows the torpedo to operate at high speeds through the water within a fully developed cavity. Conventional torpedoes qualify as supercavitating torpedoes when the torpedo moves in a cavity envelope created by a cavitator in the nose section and a solid-fuel rocket engine in the rear section. There are two types of supercavitation, natural and artificial. In this study, natural cavitation on disk cavitators is investigated using numerical methods. Once the supercavitation characteristics and drag reduction of natural cavitation are studied on a CFD platform, the results are verified against empirical equations. The supercavitation parameters investigated and compared with empirical results are the cavitation number (σ), the pressure distribution along the axial axis, the drag coefficient (C_d) and drag force (D), the cavity wall velocity (U_c), and the dimensionless cavity shape parameters, namely the cavity length (L_c/d_c), cavity diameter (d_m/d_c), and cavity fineness ratio (L_c/d_m). This paper has the character of a feasibility study, carrying out numerical solutions of the supercavitation phenomena and comparing them with empirical equations.
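Two of the basic relations behind the parameters listed above can be sketched as follows: the cavitation number and the widely used disk-cavitator drag law C_d = C_d0(1 + σ), with C_d0 ≈ 0.82; the operating-point numbers in the sketch are illustrative, not the paper's CFD conditions.

import math

# Cavitation number and disk-cavitator drag for an assumed operating point.
rho = 1025.0                            # seawater density, kg/m^3
p_inf = 101_325 + rho * 9.81 * 10.0     # ambient pressure at 10 m depth, Pa
p_c = 2_300.0                           # cavity pressure ~ water vapor pressure, Pa
V = 100.0                               # vehicle speed, m/s (~194 knots)
d_c = 0.05                              # disk cavitator diameter, m (assumed)

q = 0.5 * rho * V**2                    # dynamic pressure
sigma = (p_inf - p_c) / q               # cavitation number
C_d = 0.82 * (1.0 + sigma)              # empirical disk drag law, C_d0 ~ 0.82
A = math.pi * d_c**2 / 4.0              # frontal area of the disk
D = q * A * C_d                         # cavitator drag force, N

print(f"sigma = {sigma:.4f}, C_d = {C_d:.3f}, drag = {D:.0f} N")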

Keywords: CFD, cavity envelope, high speed underwater vehicles, supercavitating flows, supercavitation, drag reduction, supercavitation parameters

Procedia PDF Downloads 173
12522 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods

Authors: Matthew D. Baffa

Abstract:

Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies, such as the locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of the exterior above-grade walls provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of the convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures, together with emissivity values for various components of the wall assemblies, were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from the materials and dimensions detailed in the architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home, as well as through various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between ±2% and ±33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair amount of accuracy.
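A minimal sketch of the underlying energy balance is given below: conduction through the wall is equated to the convective plus radiative losses at the exterior surface. The convective coefficient, emissivity, and temperatures are assumed illustrative values, not measurements from the study home.

# U-value from the exterior-surface energy balance:
# U*(T_in - T_out) = h_conv*(T_surf - T_out) + eps*SIGMA*(T_surf^4 - T_refl^4)

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)

def wall_u_value(t_in, t_out, t_surf, t_refl, emissivity, h_conv):
    # all temperature arguments in Celsius; radiation term needs kelvin
    ts, tr = t_surf + 273.15, t_refl + 273.15
    q_conv = h_conv * (t_surf - t_out)                # convective loss, W/m^2
    q_rad = emissivity * SIGMA * (ts**4 - tr**4)      # radiative loss, W/m^2
    return (q_conv + q_rad) / (t_in - t_out)

u = wall_u_value(t_in=21.0, t_out=-5.0, t_surf=-4.3, t_refl=-7.0,
                 emissivity=0.90, h_conv=15.0)        # assumed exterior coefficient
print(f"estimated U-value: {u:.2f} W/(m^2*K)")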

Keywords: emissivity, heat loss, infrared thermography, thermal conductance

Procedia PDF Downloads 313
12521 A Review of Critical Framework Assessment Matrices for Data Analysis on Overheating in Buildings Impact

Authors: Martin Adlington, Boris Ceranic, Sally Shazhad

Abstract:

In an effort to reduce carbon emissions, changes in UK regulations, such as Part L (Conservation of fuel and power), dictate improved thermal insulation and enhanced air tightness. These changes were a direct response to the UK Government's full commitment to achieving its carbon targets under the Climate Change Act 2008. The goal is to reduce emissions by at least 80% by 2050. Factors such as climate change are likely to exacerbate the problem of overheating, as this phenomenon is expected to increase the frequency of extreme heat events, exemplified by stagnant air masses and successive high minimum overnight temperatures. However, climate change is not the only concern relevant to overheating; as research signifies, location, design, and occupation, as well as construction type and layout, can also play a part. Because of this growing problem, research indicates possible health effects on the occupants of buildings. Increases in temperature can have a direct impact on the human body's ability to maintain thermoregulation, and heat-related illnesses such as heat stroke, heat exhaustion, heat syncope, and even death can follow. This review paper presents a comprehensive evaluation of the current literature on the causes and health effects of overheating in buildings and examines the differing assessment approaches used to measure the concept. Firstly, an overview of the topic is presented, followed by an examination of overheating research from the last decade. These papers form the body of the article and are grouped into a framework matrix summarizing the source material and identifying the differing methods of analysis of overheating. Cross-case evaluation has identified systematic relationships between different variables within the matrix. Key areas of focus include building types and country, occupant behavior, health effects, simulation tools, and computational methods.

Keywords: overheating, climate change, thermal comfort, health

Procedia PDF Downloads 351
12520 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Human motion recognition has received extensive attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, and content-based video compression and retrieval. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem that requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification avoids the misclassification that can happen when recognizing similar motions. Two experiments were conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we built a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods evaluated on the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
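To make the classification scheme concrete, the sketch below scores a discrete observation sequence under per-class HMMs with the scaled forward algorithm and combines a forward-read and a reversed-read score; for brevity the same toy parameters score both directions, whereas the paper trains a separate backward model per class, and all parameters here are placeholders.

import numpy as np

def forward_loglik(obs, pi, A, B):
    # log-likelihood of a discrete observation sequence under an HMM
    # (pi: initial probs, A: transition matrix, B: emission probs per state),
    # computed with the scaled forward algorithm to avoid underflow
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

def classify(obs, models):
    # pick the gesture class whose direction-combined score is highest
    scores = {}
    for name, (pi, A, B) in models.items():
        fwd = forward_loglik(obs, pi, A, B)
        bwd = forward_loglik(obs[::-1], pi, A, B)   # reversed-sequence score
        scores[name] = fwd + bwd
    return max(scores, key=scores.get)

# Toy 2-state, 3-symbol models for two gesture classes.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])   # class "wave"
B2 = np.array([[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]])   # class "stop"
models = {"wave": (pi, A, B1), "stop": (pi, A, B2)}
print(classify(np.array([0, 0, 1, 2, 2]), models))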

Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model

Procedia PDF Downloads 207
12519 Infusing Social Business Skills into the Curriculum of Higher Learning Institutions with Special Reference to Albukhari International University

Authors: Abdi Omar Shuriye

Abstract:

A social business is a business designed to address socio-economic problems and enhance the welfare of the communities involved. Lately, social business, with its focus on innovative ideas, has been capturing the interest of educational institutions, governments, and non-governmental organizations. Social business uses a business model to achieve a social goal, and in the last few decades, the idea of imbuing social business into the education system of higher learning institutions has spurred much excitement. This is due to the belief that it will lead to job creation and increased social resilience. One of the higher learning institutions that has invested immensely in the idea is Albukhari International University, a private educational institution on a state-of-the-art campus providing an advantageous learning ecosystem. The niche area of this institution is social business, and it graduates job creators, not job seekers; this Malaysian institution is unique and one of a kind. The objective of this paper is to develop a work plan, direction, and milestones, as well as the focus areas, for the infusion of social business into higher learning institutions, with special reference to Albukhari International University. The purpose is to develop a prototype and full-scale model to enable higher education institutions to construct the desired curriculum, infused with social business. With this model, major predicaments faced by these institutions could be overcome. The paper sets forth an educational plan and spells out the basic tenets of social business, focusing on the nature and implementational aspects of the curriculum. It also evaluates the mechanisms applied by these educational institutions. Currently, since research in this area remains scarce, institutions experiment with various methods to find the best way to reach the desired result. The author is of the opinion that social business in education is the main tool for educating holistic future leaders; hence, educational institutions should inspire students in the classroom to start up their own businesses by adopting creative and proactive teaching methods. The proposed model is a contribution in that direction.

Keywords: social business, curriculum, skills, university

Procedia PDF Downloads 91
12518 Molecular Detection of Acute Virus Infection in Children Hospitalized with Diarrhea in North India during 2014-2016

Authors: Ali Ilter Akdag, Pratima Ray

Abstract:

Background: Acute gastroenteritis viruses such as rotavirus, astrovirus, and adenovirus are mainly responsible for diarrhea in children under 5 years of age. Molecular detection of these viruses is crucially important for understanding the disease and developing effective treatments. This study aimed to determine the prevalence of these common viruses in children under 5 years of age who presented with diarrhea at the Lala Lajpat Rai Memorial Medical College (LLRM) centre, Meerut, North India. Methods: A total of 312 fecal samples were collected over 3 years (2014, n = 118; 2015, n = 128; 2016, n = 66) from children under 5 years of age who presented with acute diarrhea at the LLRM centre. All samples were first screened by EIA/RT-PCR for rotavirus, adenovirus, and astrovirus. Results: Among the 312 samples from children with acute diarrhea, rotavirus A was the most frequent virus identified (57 cases; 18.2%), followed by astrovirus (28 cases; 8.9%) and adenovirus (21 cases; 6.7%). Mixed infections were found in 14 cases, all of which presented with acute diarrhea (14/312; 4.48%). Conclusions: These viruses are a major cause of diarrhea in children under 5 years old in North India. Rotavirus A is the most common etiological agent, followed by astrovirus. This surveillance is important for vaccine development for the whole population. Year-to-year variation in virus detection reflects differences in the season of sampling, sampling method, hygiene conditions, socioeconomic level of the population, enrolment criteria, and virus detection methods. Astrovirus was detected more frequently than rotavirus in 2015, but over the whole three-year study, rotavirus A was mainly responsible for severe diarrhea in children under 5 in North India. This emphasizes the need for cost-effective diagnostic assays for rotaviruses, which would help determine the disease burden.

Keywords: adenovirus, Astrovirus, hospitalized children, Rotavirus

Procedia PDF Downloads 141
12517 Generalization of Tau Approximant and Error Estimate of Integral Form of Tau Methods for Some Class of Ordinary Differential Equations

Authors: A. I. Ma’ali, R. B. Adeniyi, A. Y. Badeggi, U. Mohammed

Abstract:

An error estimate for the integrated formulation of the Lanczos tau method for some classes of ordinary differential equations was previously reported. This paper is concerned with the generalization of tau approximants and their corresponding error estimates for the class of ordinary differential equations (ODEs) characterized by m + s = 3 (i.e., m = 1, s = 2; m = 2, s = 1; and m = 3, s = 0), where m and s are the order of the differential equation and the number of overdetermination conditions, respectively. The general results obtained were validated with some numerical examples.
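
For orientation, the following is a standard sketch of the Lanczos-Ortiz tau setup (the paper's exact integrated form may differ): the n-th degree tau approximant y_n(x) satisfies a perturbed version of the problem L y(x) = f(x) exactly, where L is the differential operator and T_k denotes the Chebyshev polynomials:

    \[
      L\,y_n(x) \;=\; f(x) + H_n(x),
      \qquad
      H_n(x) \;=\; \sum_{r=1}^{m+s} \tau_r \, T_{n-m+r}(x).
    \]

Every case with m + s = 3 therefore carries exactly three tau parameters (tau_1, tau_2, tau_3), which is what permits the three cases (m, s) = (1, 2), (2, 1), and (3, 0) to be handled within a single generalized framework.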

Keywords: approximant, error estimate, tau method, overdetermination

Procedia PDF Downloads 606
12516 From Text to Data: Sentiment Analysis of Presidential Election Political Forums

Authors: Sergio V Davalos, Alison L. Watkins

Abstract:

User-generated content (UGC) such as a website post has data associated with it: time of the post, gender, location, type of device, and number of words. The text entered in UGC can provide a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words). In addition to the number of words per post, the frequency of each term is determined per post and as the sum of occurrences across all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the US presidential elections of 2012 and 2016. Sentiment analysis derives data from the text, which enables the subsequent application of data analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis, yielding a sentiment score for each post. Based on these scores, there are significant differences between the content and sentiment of the 2012 and 2016 presidential election forums. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment. In the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. There were also changes in sentiment over time: for both elections, as the election drew closer, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidates; in the case of Trump, negative posts outnumbered Clinton's largest number of posts, which were positive. KNIME topic modeling was used to derive topics from the posts. Topics and keyword emphasis also changed over time: initially, the political parties were the most referenced, and as the election drew closer, the emphasis shifted to the candidates. The SASA method predicted sentiment better than four other methods in SentiBench. The research derived sentiment data from text; in combination with other data, the sentiment data provided insight and discovery about user sentiment in the US presidential elections of 2012 and 2016.
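
A minimal sketch of the cumulative-score bookkeeping described above (SASA itself is not reproduced here; the per-post sentiment scores in [-1, 1] are assumed to have already been produced by the analyzer):

    from collections import defaultdict
    from datetime import date

    def cumulative_sentiment(posts):
        # posts: iterable of (date, score) pairs, one per user post.
        # Returns the running cumulative sentiment score by day.
        daily = defaultdict(float)
        for day, score in posts:
            daily[day] += score
        total, series = 0.0, []
        for day in sorted(daily):
            total += daily[day]
            series.append((day, total))
        return series

    # Toy data: sentiment turning negative as the election approaches.
    posts = [(date(2016, 9, 1), 0.4),
             (date(2016, 10, 1), -0.6),
             (date(2016, 11, 1), -0.7)]
    print(cumulative_sentiment(posts))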

Keywords: sentiment analysis, text mining, user generated content, US presidential elections

Procedia PDF Downloads 192
12515 IT Investment Decision Making: Case Studies on the Implementation of Contactless Payments in Commercial Banks of Kazakhstan

Authors: Symbat Moldabekova

Abstract:

This research explores the practice of decision-making in commercial banks in Kazakhstan. It focuses on recent technologies, such as contactless and QR-code payments, and uses interviews with bank executives and industry practitioners to gain an understanding of how decisions are made and the role of financial assessment methods. The aims of the research are (1) to study the importance of financial techniques for evaluating IT investments; (2) to understand the role of different expert groups; (3) to explore how market trends and industry features affect decisions on IT; and (4) to build a model that describes the real practice of decision-making on IT in commercial banks in Kazakhstan. The theoretical framework suggests that decision-making on IT is a socially constructed process in which actor groups with different backgrounds interact and negotiate with each other to develop a shared understanding of IT and to make more effective decisions. Theory and observation suggest that the more parties are involved in the decision-making process, the higher the possibility of disagreement between them. As each actor group has its own view of what a rational decision on an IT project looks like, it is worth exploring how the final decision is made in practice. Initial findings show that financial assessment methods are used as a guideline and do not play a big role in the final decision. Commercial banks in Kazakhstan tend to study the experience of neighboring countries before adopting an innovation. Implementing contactless payments is widely regarded as a key success factor due to increasing competition in the market. First-to-market innovations are treated as priorities; therefore, such decisions can be made while exempting certain actor groups from the process. Customers play a significant role, participating in the testing of demo versions of products before an innovation is brought to the market. The study will identify the viewpoints of actors in the banking sector on what constitutes a rational decision, and the ways decision-makers from a variety of disciplines interact with each other in order to make decisions on IT in retail banks.

Keywords: actor groups, decision making, technology investment, retail banks

Procedia PDF Downloads 122
12514 Testing and Validation of Stochastic Models in Epidemiology

Authors: Snigdha Sahai, Devaki Chikkavenkatappa Yellappa

Abstract:

This study outlines approaches for testing and validating stochastic models used in epidemiology, focusing on the integration and functional testing of simulation code. It details methods for combining simple functions into comprehensive simulations, distinguishing between deterministic and stochastic components, and applying tests to ensure robustness. Techniques include isolating stochastic elements, utilizing large sample sizes for validation, and handling special cases. Practical examples are provided using R code to demonstrate integration testing, handling of incorrect inputs, and special cases. The study emphasizes the importance of both functional and defensive programming to enhance code reliability and user-friendliness.
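
As a minimal illustration of the pattern described (the study's examples are in R; this Python analogue, with an assumed binomial-draw SIR step, is a sketch rather than the authors' code), the stochastic element is isolated, guarded by defensive checks, and validated against its analytic expectation with a large sample:

    import random

    def sir_step(s, i, r, beta, gamma, rng):
        # One stochastic SIR step with per-individual Bernoulli draws.
        # Defensive programming: reject inputs the model cannot handle.
        if min(s, i, r) < 0:
            raise ValueError("compartment counts must be non-negative")
        if not (0.0 <= beta <= 1.0 and 0.0 <= gamma <= 1.0):
            raise ValueError("beta and gamma must lie in [0, 1]")
        n = s + i + r
        if n == 0:          # special case: empty population
            return 0, 0, 0
        p_inf = beta * i / n
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        return s - new_inf, i + new_inf - new_rec, r + new_rec

    def test_large_sample_mean():
        # Validate the stochastic component with a large sample: the mean
        # number of new infections should approach s * beta * i / n.
        rng = random.Random(42)
        s0, i0, r0, beta = 1000, 100, 0, 0.3
        expected = s0 * beta * i0 / (s0 + i0 + r0)
        draws = [s0 - sir_step(s0, i0, r0, beta, 0.0, rng)[0]
                 for _ in range(2000)]
        assert abs(sum(draws) / len(draws) - expected) < 1.0

    def test_rejects_bad_input():
        # Functional test of the defensive checks (incorrect inputs).
        try:
            sir_step(-1, 0, 0, 0.1, 0.1, random.Random(0))
        except ValueError:
            return
        raise AssertionError("negative counts must raise ValueError")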

Keywords: computational epidemiology, epidemiology, public health, infectious disease modeling, statistical analysis, health data analysis, disease transmission dynamics, predictive modeling in health, population health modeling, quantitative public health, random sampling simulations, randomized numerical analysis, simulation-based analysis, variance-based simulations, algorithmic disease simulation, computational public health strategies, epidemiological surveillance, disease pattern analysis, epidemic risk assessment, population-based health strategies, preventive healthcare models, infection dynamics in populations, contagion spread prediction models, survival analysis techniques, epidemiological data mining, host-pathogen interaction models, risk assessment algorithms for disease spread, decision-support systems in epidemiology, macro-level health impact simulations, socioeconomic determinants in disease spread, data-driven decision making in public health, quantitative impact assessment of health policies, biostatistical methods in population health, probability-driven health outcome predictions

Procedia PDF Downloads 8
12513 Global Supply Chain Tuning: Role of National Culture

Authors: Aleksandr S. Demin, Anastasiia V. Ivanova

Abstract:

Purpose: The current economy tends to increase the influence of digital technologies and diminish the human role in management. However, it is impossible to deny that a person still leads a business with his or her own set of values and priorities. This article aims to incorporate the peculiarities of national culture into the characteristics of the supply chain, using the quantitative values of national culture obtained by scholars of comparative management (Hofstede, House, and others). Design/Methodology/Approach: The research is based on secondary data in the field of cross-country comparison obtained by Prof. Hofstede and in the GLOBE project. These data are used to design different aspects of the supply chain at both the cross-functional and inter-organizational levels. The connection between a range of supply chain principles (role assignment, customer service prioritization, coordination of supply chain partners) and comparative management principles (acknowledgment of the national peculiarities of the country in which the company operates) is expressed through economic and mathematical models, mainly linear programming models (a toy example is sketched below). Findings: The combination of the team management wheel concept, the business processes of the global supply chain, and national culture characteristics lets a transnational corporation form a supply chain crew balanced in costs, functions, and personality. To develop an effective customer service policy and logistics strategy for the distribution of goods and services in the country under review, two approaches are offered. The first approach relies exclusively on the customer's interest in the place of operation, while the second takes into account the position of the transnational corporation and its previous experience in order to reconcile both organizational and national cultures. The effect of integration practice on the achievement of a specific supply chain goal in a specific location is assessed via the type of correlation (positive, negative, or none) and the value of the national culture indices. Research Limitations: The models developed are intended to be used by transnational companies and business firms located in several nationally distinct areas. Some of the inputs used to illustrate the methods offered are simulated, so the numerical measurements should be used with caution. Practical Implications: The research can be of great interest to supply chain managers who are responsible for engineering global supply chains in a transnational corporation and for subsequent business activities in the international arena. The methods, tools, and approaches suggested can also be used by top managers searching for new sources of competitiveness and are suitable for all staff members interested in national culture traits. Originality/Value: The elaborated methods of decision-making with regard to the national environment provide a mathematical and economic basis for finding a comprehensive solution.
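
The toy example below is an illustrative sketch, not the authors' model: a small transportation linear program in which the objective coefficients are adjusted by a hypothetical per-market culture index (e.g., uncertainty avoidance rescaled to [0, 1]), so that markets that penalize uncertainty more heavily steer flow away from service-risky routes. All numbers are invented for illustration.

    import numpy as np
    from scipy.optimize import linprog

    # Unit shipping costs from two plants to two markets, flattened as
    # (p1->m1, p1->m2, p2->m1, p2->m2), plus an added cost representing
    # the service risk of each route.
    base_cost = np.array([4.0, 6.0, 5.0, 3.0])
    unreliability_penalty = np.array([0.0, 2.0, 1.0, 0.0])

    # Hypothetical culture index per market; higher values weight the
    # service-risk cost more heavily in that market.
    ua = {"m1": 0.85, "m2": 0.35}
    weights = np.array([ua["m1"], ua["m2"], ua["m1"], ua["m2"]])
    c = base_cost + weights * unreliability_penalty  # culture-adjusted cost

    A_ub = [[1, 1, 0, 0],   # plant 1 supply limit
            [0, 0, 1, 1]]   # plant 2 supply limit
    b_ub = [60, 60]
    A_eq = [[1, 0, 1, 0],   # market 1 demand
            [0, 1, 0, 1]]   # market 2 demand
    b_eq = [50, 40]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 4, method="highs")
    print(res.x, res.fun)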

Keywords: logistics integration, logistics services, multinational corporation, national culture, team management, service policy, supply chain management

Procedia PDF Downloads 106
12512 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)

Authors: Tarek Duzan

Abstract:

Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and, later, in the production of any field, because fractures can completely change how a specific field should be evaluated, planned, and produced. From the structural point of view, all reservoirs are fractured to some extent, and the North Gialo field is thought to be a naturally fractured reservoir. Historically, naturally fractured reservoirs are more complicated in terms of exploration and production efforts, and many geologists tend to dismiss fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that evaluation and planning can be done properly and efficiently from day one. The challenge in this field is that there are not enough data or straightforward well tests to make us completely comfortable with the idea of fracturing; however, we cannot ignore fractures completely. Logging images, the available well tests, and limited core studies are our tools at this stage for evaluating, modeling, and predicting possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the North Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, production from this field follows a two-phase plan: self-depletion of oil, followed by a gas-injection period for pressure maintenance and an increased ultimate recovery factor. A good understanding of the fracture network is therefore essential before proceeding with this plan. New analytical methods will lead to a more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well-test and seismic interpretations and that can readily be used in reservoir simulators.

Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data

Procedia PDF Downloads 93
12511 Rapid Atmospheric Pressure Photoionization-Mass Spectrometry (APPI-MS) Method for the Detection of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans in Real Environmental Samples Collected within the Vicinity of Industrial Incinerators

Authors: M. Amo, A. Alvaro, A. Astudillo, R. Mc Culloch, J. C. del Castillo, M. Gómez, J. M. Martín

Abstract:

Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) comprise a range of highly toxic compounds that may exist as particulates within the air or accumulate within water supplies, soil, or vegetation. They may be created naturally within the environment as a product of forest fires or volcanic eruptions. Since the industrial revolution, however, it has become necessary to closely monitor their generation as a byproduct of manufacturing and combustion processes, in an effort to mitigate widespread contamination events. The environmental concentrations of these toxins are expected to be extremely low; therefore, highly sensitive and accurate methods are required for their determination. Since ionization of non-polar compounds by electrospray and atmospheric pressure chemical ionization (APCI) is difficult and inefficient, we evaluate the performance of a novel low-flow Atmospheric Pressure Photoionization (APPI) source for the trace detection of various dioxins and furans using rapid mass spectrometry workflows. Air, soil, and biota (vegetable matter) samples were collected monthly during one year from various locations within the vicinity of an industrial incinerator in Spain. Analytes were extracted with toluene by Soxhlet extraction and concentrated by rotary evaporation and nitrogen flow. Several ionization methods, including electrospray (ES) and APCI, were evaluated; however, only the low-flow APPI source was capable of providing the sensitivity required for detecting all targeted analytes. In total, 10 analytes, including 2,3,7,8-tetrachlorodibenzodioxin (TCDD), were detected and characterized using the APPI-MS method. Both PCDDs and PCDFs were detected most efficiently in negative ionization mode. The most abundant ion always corresponded to the loss of a chlorine and the addition of an oxygen, yielding [M-Cl+O]- ions (the fragment-ion arithmetic is sketched below). MRM methods were created in order to provide selectivity for each analyte. No chromatographic separation was employed; however, matrix effects were determined to have a negligible impact on analyte signals. Triple-quadrupole mass spectrometry was chosen because of its potential for high sensitivity and selectivity; the mass spectrometer used was a Sciex QTRAP 3200 operating in negative Multiple Reaction Monitoring (MRM) mode. Typical mass detection limits were near the 1-pg level. The APPI-MS2 technology applied to the detection of PCDD/Fs allows fast and reliable atmospheric analysis, considerably reducing operational times and costs with respect to other available technologies. In addition, the limit of detection can easily be improved by using a more sensitive mass spectrometer, since the background in the analysis channel is very low. The APPI source developed by SEADM allows ionization of polar and non-polar compounds with high efficiency and repeatability.
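
A quick sketch of the [M-Cl+O]- arithmetic for TCDD, using standard monoisotopic atomic masses (the instrument's exact calibrated values may differ slightly):

    # Monoisotopic atomic masses (Da) and the electron mass.
    MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915, "Cl": 34.968853}
    ELECTRON = 0.000549

    def monoisotopic(formula):
        # formula: element -> atom count.
        return sum(MASS[el] * n for el, n in formula.items())

    tcdd = {"C": 12, "H": 4, "Cl": 4, "O": 2}   # 2,3,7,8-TCDD, C12H4Cl4O2
    m = monoisotopic(tcdd)                      # ~319.8965 Da
    # [M-Cl+O]- : lose one Cl, gain one O, and gain one electron.
    mz = m - MASS["Cl"] + MASS["O"] + ELECTRON
    print(f"M = {m:.4f} Da, [M-Cl+O]- m/z = {mz:.4f}")   # ~300.9231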

Keywords: atmospheric pressure photoionization-mass spectrometry (APPI-MS), dioxin, furan, incinerator

Procedia PDF Downloads 208