Search results for: forensic autopsy data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25358

22238 Mulberry Leaves: An Efficient and Economical Adsorbent for Remediation of Arsenic (V) and Arsenic (III) Contaminated Water

Authors: Saima Q. Memon, Mazhar I. Khaskheli

Abstract:

The aim of the present study was to investigate the efficiency of mulberry leaves for the removal of both arsenic (III) and arsenic (V) from aqueous medium. Batch equilibrium studies were carried out to optimize various parameters such as pH of the metal ion solution, volume of sorbate, sorbent dose, agitation speed, and agitation time. The maximum sorption efficiencies of mulberry leaves for As (III) and As (V) at optimum conditions were 2818 μg·g⁻¹ and 4930 μg·g⁻¹, respectively. The experimental data fit the Freundlich and D-R adsorption isotherms well. The energy of adsorption was found to be in the range of 3-6 kJ/mol, suggesting the physical nature of the process. Kinetic data followed the first-order rate and Morris-Weber equations. The developed method was applied to remove arsenic from real water samples.
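A Freundlich fit of the kind reported above can be sketched numerically via the linearized isotherm; the equilibrium values below are illustrative placeholders, not the paper's measurements:

```python
import numpy as np

# Hypothetical equilibrium data: Ce (residual As concentration, ug/L)
# and qe (amount sorbed, ug/g) -- illustrative values only.
Ce = np.array([10.0, 25.0, 60.0, 150.0, 400.0])
qe = np.array([310.0, 520.0, 880.0, 1500.0, 2600.0])

# Linearized Freundlich isotherm: ln(qe) = ln(Kf) + (1/n) * ln(Ce)
slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
Kf = np.exp(intercept)   # Freundlich capacity constant
n = 1.0 / slope          # Freundlich intensity parameter (n > 1: favorable)

print(f"Kf = {Kf:.1f}, n = {n:.2f}")
```

A slope between 0 and 1 (n > 1) indicates favorable adsorption under this model.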

Keywords: arsenic removal, mulberry, adsorption isotherms, kinetics of adsorption

Procedia PDF Downloads 275
22237 Using RASCAL Code to Analyze the Postulated UF6 Fire Accident

Authors: J. R. Wang, Y. Chiang, W. S. Hsu, S. H. Chen, J. H. Yang, S. W. Chen, C. Shih, Y. F. Chang, Y. H. Huang, B. R. Shen

Abstract:

In this research, the RASCAL code was used to simulate and analyze a postulated UF6 fire accident which may occur at the Institute of Nuclear Energy Research (INER). There were four main steps in this research. In the first step, the UF6 data of INER were collected. In the second step, the RASCAL analysis methodology and model were established using these data. Third, this RASCAL model was used to perform the simulation and analysis of the postulated UF6 fire accident; three cases were simulated and analyzed in this step. Finally, the analysis results of RASCAL were compared with the hazardous levels of the chemicals. According to the compared results of the three cases, Case 3 poses the greatest danger to human health.

Keywords: RASCAL, UF₆, safety, hydrogen fluoride

Procedia PDF Downloads 222
22236 Financial Fraud Prediction for Russian Non-Public Firms Using Relational Data

Authors: Natalia Feruleva

Abstract:

The goal of this paper is to develop a fraud risk assessment model based on both relational and financial data and to test the impact of relationships between Russian non-public companies on the likelihood of financial fraud. Relationships mean various linkages between companies, such as parent-subsidiary relationships and person-related relationships; these linkages may provide additional opportunities for committing fraud. Person-related relationships appear when firms share a director, or when the director owns another firm. To measure the relationships, the number of companies belonging to the CEO or managed by the CEO, as well as the number of subsidiaries, was calculated. Moreover, a dummy variable describing the existence of a parent company was also included in the model. Control variables such as financial leverage and return on assets were also included because they describe the motivating factors of fraud. To check the hypotheses about the influence of the chosen parameters on the likelihood of financial fraud, information about person-related relationships between companies, the existence of a parent company and subsidiaries, profitability, and the level of debt was collected. The resulting sample consists of 160 Russian non-public firms: 80 fraudsters and 80 non-fraudsters operating in 2006-2017. The dependent variable is dichotomous and takes the value 1 if the firm is engaged in financial crime, otherwise 0. Employing a probit model, it was revealed that the number of companies which belong to the CEO of the firm or are managed by the CEO has a significant impact on the likelihood of financial fraud. The results obtained indicate that the more companies are affiliated with the CEO, the higher the likelihood that the company will be involved in financial crime. The forecast accuracy of the model is about 80%. Thus, a model based on both relational and financial data gives a high level of forecast accuracy.
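The binary-choice estimation described above can be sketched as follows; a logistic model stands in here for the paper's probit (both are standard binary-choice models), and every variable and value is a synthetic placeholder, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 160  # sample size matching the study: 80 fraudsters, 80 non-fraudsters

# Hypothetical features: CEO-affiliated company count, subsidiaries,
# parent-company dummy, leverage, ROA -- synthetic data for illustration.
ceo_firms    = rng.poisson(2, n)
subsidiaries = rng.poisson(1, n)
has_parent   = rng.integers(0, 2, n)
leverage     = rng.uniform(0, 1, n)
roa          = rng.normal(0.05, 0.1, n)
X = np.column_stack([ceo_firms, subsidiaries, has_parent, leverage, roa])

# Simulate the reported effect: more CEO-affiliated firms -> more fraud risk
logits = 0.8 * ceo_firms - 1.6 + 0.5 * leverage
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficient for CEO-affiliated firms:", model.coef_[0][0])
print("in-sample accuracy:", model.score(X, y))
```

A positive coefficient on the CEO-affiliation count mirrors the study's finding that CEO-affiliated firms raise fraud likelihood.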

Keywords: financial fraud, fraud prediction, non-public companies, regression analysis, relational data

Procedia PDF Downloads 119
22235 Design and Development of an Algorithm to Predict Fluctuations of Currency Rates

Authors: Nuwan Kuruwitaarachchi, M. K. M. Peiris, C. N. Madawala, K. M. A. R. Perera, V. U. N Perera

Abstract:

Dealing with the foreign market has always held a special place in a country’s economy. Political and social factors come into play, making currency rates fluctuate rapidly. Currency rate prediction has become an important factor for larger international businesses, since large amounts of money are exchanged between countries. This research focuses on comparing the accuracy of three main models: Autoregressive Integrated Moving Average (ARIMA), Artificial Neural Networks (ANN), and Support Vector Machines (SVM). A series of import, export, and USD exchange-rate data with respect to LKR was selected for training the above-mentioned algorithms. After training on the data set and comparing each algorithm, it was found that the SVM predictions performed better than the other models. Accuracy was improved further by combining the SVM and SVR models.
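A minimal sketch of the SVM-based forecasting step, using scikit-learn's SVR on a synthetic exchange-rate series; the sliding-window scheme and all parameters are assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Synthetic USD/LKR-like series (illustrative random walk, not real data)
rate = 150 + np.cumsum(rng.normal(0, 0.5, 200))

# Build lagged features: predict rate[t] from the previous 5 observations
window = 5
X = np.array([rate[i:i + window] for i in range(len(rate) - window)])
y = rate[window:]

split = 150
model = SVR(kernel="rbf", C=100, epsilon=0.1).fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"test RMSE: {rmse:.3f}")
```

In practice the series would be the real import/export and exchange-rate data, and RMSE (one of the paper's keywords) compared across ARIMA, ANN, and SVM.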

Keywords: ARIMA, ANN, FFNN, RMSE, SVM, SVR

Procedia PDF Downloads 212
22234 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association

Authors: Jacky Liu

Abstract:

This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies in sports outcome prediction possess considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both time series and non-time series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, it offers an annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but also have critical practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets.
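The point-spread regression step can be sketched as follows; scikit-learn's GradientBoostingRegressor stands in for XGBoost here, and all features and data are synthetic placeholders, not NBA data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 500
# Hypothetical pre-game features: rolling scoring margin, pace, rest days
margin = rng.normal(0, 5, n)
pace   = rng.normal(100, 5, n)
rest   = rng.integers(0, 4, n)
X = np.column_stack([margin, pace, rest])
# Synthetic point spread driven mainly by the scoring margin
y = 1.5 * margin + 0.5 * rest + rng.normal(0, 4, n)

# Train on the first 400 games, evaluate on the remaining 100
model = GradientBoostingRegressor(random_state=0).fit(X[:400], y[:400])
mae = np.mean(np.abs(model.predict(X[400:]) - y[400:]))
print(f"test MAE (points): {mae:.2f}")
```

Predicting the spread as a continuous target, rather than classifying the winner, is what lets the predictions be compared directly against bookmaker lines.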

Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation

Procedia PDF Downloads 102
22233 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing

Authors: Abdullah Bal, Sevdenur Bal

Abstract:

This paper proposes a new type of hardware application for training cellular neural networks (CNNs) using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage than during the test process. Since optoelectronic hardware offers the possibility of parallel, high-speed processing for 2D data applications, the CNN training algorithm can be realized using Fourier optics techniques. JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each iteration of training, the JTC inherently carries most of the computational burden, and the rest of the mathematical computation is realized digitally. Bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. The overlapping properties of JTC are then utilized for the summation of two cross-correlations, which reduces the computation required for the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation, or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques in the same optical system.
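The core optical operation, cross-correlation computed through Fourier transforms, can be emulated digitally; this numpy sketch shows the correlation peak recovering a known image shift (sizes and data are illustrative):

```python
import numpy as np

# Digital emulation of the JTC principle: cross-correlation of a
# reference and target image via the Fourier domain.
rng = np.random.default_rng(3)
ref = rng.random((32, 32))
# Target = reference cyclically shifted by (5, 7); the correlation
# peak should recover this shift.
target = np.roll(ref, shift=(5, 7), axis=(0, 1))

# Cross-correlation theorem: corr = IFFT( conj(FFT(ref)) * FFT(target) )
corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(target)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("detected shift:", peak)
```

Optically, the squared Fourier intensity of the joint input plays the role of the product above; the digital version makes the correlation-peak behavior easy to verify.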

Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware

Procedia PDF Downloads 506
22232 Preservation Model to Process 'La Bomba Del Chota' as a Living Cultural Heritage

Authors: Lucia Carrion Gordon, Maria Gabriela Lopez Yanez

Abstract:

This project focuses on heritage concepts and their importance in an ever-evolving and changing Digital Era, where system solutions have to be sustainable, efficient, and suited to basic needs. The prototype has to cover the principal requirements of the case studies. How to preserve the sociological ideas of dances in Ecuador, like ‘La Bomba’, is the best example of, and challenge in, preserving intangible data. The same idea is applicable to books and music. History, and how to keep it, is the principal mission of heritage preservation. The dance of La Bomba is rooted in a specific movement system whose main part is the sideward hip movement. La Bomba’s movement system is the surface manifestation of a whole system of knowledge whose principal characteristics are the historical relation of Choteños with their land and their families.

Keywords: digital preservation, heritage, IT management, data, metadata, ontology, serendipity

Procedia PDF Downloads 387
22231 Exploring Teachers’ Beliefs about Diagnostic Language Assessment Practices in a Large-Scale Assessment Program

Authors: Oluwaseun Ijiwade, Chris Davison, Kelvin Gregory

Abstract:

In Australia, like other parts of the world, the debate on how to enhance teachers’ use of assessment data to inform the teaching and learning of English as an Additional Language (EAL, Australia) or English as a Foreign Language (EFL, United States) has occupied the centre of academic scholarship. Traditionally, this approach was conceptualised as ‘Formative Assessment’ and, in recent times, ‘Assessment for Learning (AfL)’. The central problem is that teacher-made tests are limited in providing data that can inform teaching and learning due to the variability of classroom assessments, which is driven by teachers’ characteristics and assessment literacy. To address this concern, scholars in language education and testing have proposed a uniform large-scale computer-based assessment program to meet the needs of teachers and promote AfL in language education. In Australia, for instance, the Victoria state government commissioned a large-scale project called 'Tools to Enhance Assessment Literacy (TEAL) for Teachers of English as an additional language'. As part of the TEAL project, a tool called ‘Reading and Vocabulary assessment for English as an Additional Language (RVEAL)’, as a diagnostic language assessment (DLA), was developed by language experts at the University of New South Wales for teachers in Victorian schools to guide EAL pedagogy in the classroom. Therefore, this study aims to provide qualitative evidence for understanding beliefs about the diagnostic language assessment (DLA) among EAL teachers in primary and secondary schools in Victoria, Australia. To realize this goal, this study raises the following questions: (a) How do teachers use large-scale assessment data for diagnostic purposes? (b) What skills do language teachers think are necessary for using assessment data for instruction in the classroom? and (c) What factors, if any, contribute to teachers’ beliefs about diagnostic assessment in a large-scale assessment?
A semi-structured interview method was used to collect data from at least 15 professional teachers selected through purposeful sampling. The findings from the resulting thematic analysis provide an understanding of teachers’ beliefs about DLA in a classroom context and identify how these beliefs are crystallised in language teachers. The discussion shows how the findings can be used to inform professional development processes for language teachers, as well as highlighting the important factor of teacher cognition in the pedagogic processes of language assessment. This, hopefully, will help test developers and testing organisations to align the outcomes of this study with their test development processes, in order to design assessments that can enhance AfL in language education.

Keywords: beliefs, diagnostic language assessment, English as an additional language, teacher cognition

Procedia PDF Downloads 199
22230 Evaluating Models Through Feature Selection Methods Using Data Driven Approach

Authors: Shital Patil, Surendra Bhosale

Abstract:

Cardiac diseases are the leading cause of mortality and morbidity in the world, and over the past few decades, accounting for a large number of deaths, they have emerged as the most life-threatening disorder globally. Machine learning and artificial intelligence have been playing a key role in predicting heart diseases. A relevant set of features can be very helpful in predicting the disease accurately. In this study, we propose a comparative analysis of 4 different feature selection methods and evaluate their performance with both the raw (unbalanced) and sampled (balanced) datasets. The publicly available Z-Alizadeh Sani dataset has been used for this study. Four feature selection methods are used: data analysis, minimum Redundancy maximum Relevance (mRMR), Recursive Feature Elimination (RFE), and chi-squared. These methods are tested with 8 different classification models to get the best accuracy possible. Using balanced and unbalanced datasets, the study shows promising results in terms of various performance metrics in accurately predicting heart disease. Experimental results obtained by the proposed method with the raw data reach a maximum AUC of 100%, a maximum F1 score of 94%, a maximum recall of 98%, and a maximum precision of 93%, while with the balanced dataset the obtained results are a maximum AUC of 100%, an F1-score of 95%, a maximum recall of 95%, and a maximum precision of 97%.
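The RFE and chi-squared selection steps can be sketched with scikit-learn; the dataset here is synthetic, standing in for the Z-Alizadeh Sani data, and the choice of 5 features is arbitrary:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a tabular clinical dataset
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

# Recursive Feature Elimination driven by a linear model
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)
print("RFE-selected features:", np.where(rfe.support_)[0])

# Chi-squared selection requires non-negative features, so shift the data
chi = SelectKBest(chi2, k=5).fit(X - X.min(), y)
print("Chi2-selected features:", np.where(chi.get_support())[0])
```

Each selected subset would then be fed to the 8 classifiers and compared on AUC, F1, recall, and precision as in the study.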

Keywords: cardio vascular diseases, machine learning, feature selection, SMOTE

Procedia PDF Downloads 118
22229 Device to Alert and Fire Prevention through Temperature Monitoring and Gas Detection

Authors: Dêivisson Alves Anjos, Blenda Fonseca Aires Teles, Queitiane Castro Costa

Abstract:

Fire is one of the biggest dangers for factories, warehouses, mills, and similar places, causing unimaginable damage, because besides the material damage it also directly affects the lives of workers, who are likely to suffer death or very serious consequences. The protection of these people's lives should be taken seriously, always seeking safety. Thus, investment in security and monitoring equipment must be high, so that a possible fire can be prevented or its impacts reduced. Our device, built around a PIC microcontroller, monitors the temperature and the presence of gas in the environment; it sends the data via Bluetooth to an interface developed in LabVIEW, which saves these data continuously and raises an alert if the temperature exceeds the allowed limit or a gas is detected. Currently, the device is in operation and can perform several tests, and it can be used in different areas that need fire protection.
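The alert logic can be sketched on the host side as follows; the threshold value and function names are hypothetical, since the actual firmware runs on a PIC and the interface in LabVIEW:

```python
# Host-side sketch of the alert logic (illustrative only).
TEMP_LIMIT_C = 60.0  # hypothetical allowed temperature threshold

def check_reading(temperature_c, gas_detected):
    """Return an alert message if the reading indicates fire risk."""
    if gas_detected:
        return "ALERT: gas detected"
    if temperature_c > TEMP_LIMIT_C:
        return f"ALERT: temperature {temperature_c:.1f} C exceeds limit"
    return "OK"

print(check_reading(25.0, False))
print(check_reading(72.5, False))
print(check_reading(30.0, True))
```

Gas detection is checked first because combustible gas warrants an alert regardless of the current temperature.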

Keywords: PIC, Bluetooth, fire, temperature, gas, LabVIEW

Procedia PDF Downloads 532
22228 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method

Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari

Abstract:

The present study is concerned with the optimal design of functionally graded plates using the particle swarm optimization (PSO) algorithm. In this study, the meshless local Petrov-Galerkin (MLPG) method is employed to obtain the functionally graded (FG) plate’s natural frequencies. The effects of two parameters, thickness-to-height ratio and volume fraction index, on the natural frequencies and total mass of the plate are studied using the MLPG results. Then the first natural frequency of the plate, for conditions where MLPG data are not available, is predicted by an artificial neural network (ANN) approach trained by the back-error propagation (BEP) technique. The ANN results show that the predicted data are in good agreement with the actual data. To maximize the first natural frequency and minimize the mass of the FG plate simultaneously, the weighted-sum optimization approach and the PSO algorithm are used. Overall, the proposed optimization process can provide the designers of FG plates with useful data.
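A minimal PSO sketch of the weighted-sum step; the surrogate frequency and mass functions below stand in for the ANN prediction and the plate mass model, and every coefficient, weight, and bound is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(x):
    """x = [thickness ratio, volume fraction index]; surrogate models only."""
    freq = 100 - (x[0] - 0.3) ** 2 * 200 - (x[1] - 2.0) ** 2 * 5
    mass = 10 + 20 * x[0] + 2 * x[1]
    return -(0.7 * freq - 0.3 * mass)   # weighted sum, minimized

n, dims = 30, 2
lo, hi = np.array([0.05, 0.0]), np.array([0.5, 5.0])
pos = rng.uniform(lo, hi, (n, dims))
vel = np.zeros((n, dims))
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    # Standard PSO update: inertia + cognitive + social terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best design:", gbest)
```

The weighted sum collapses the two objectives (frequency up, mass down) into one scalar, which is what lets a single-objective PSO handle the trade-off.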

Keywords: optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization

Procedia PDF Downloads 368
22227 Clustering of Extremes in Financial Returns: A Comparison between Developed and Emerging Markets

Authors: Sara Ali Alokley, Mansour Saleh Albarrak

Abstract:

This paper investigates the dependency, or clustering, of extremes in financial returns data by estimating the extremal index value θ ∈ [0,1]; the smaller the value of θ, the more clustering there is. Here we apply the method of Ferro and Segers (2003) to estimate the extremal index for a range of threshold values. We compare the dependency structure of extremes in developed and emerging markets, using the financial returns of the stock market indices in the developed markets of the US, UK, France, Germany, and Japan and the emerging markets of Brazil, Russia, India, China, and Saudi Arabia. We expect that more clustering occurs in the emerging markets. This study will help to understand the dependency structure of financial returns data.
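The Ferro and Segers (2003) intervals estimator used above can be sketched as follows; the series and threshold choice here are illustrative (an i.i.d. series, for which θ should be close to 1):

```python
import numpy as np

def extremal_index(x, threshold):
    """Ferro & Segers (2003) intervals estimator of the extremal index."""
    exceed = np.flatnonzero(x > threshold)
    t = np.diff(exceed)                  # interexceedance times
    m = len(t)
    if m == 0:
        return np.nan
    if t.max() <= 2:
        theta = 2 * t.sum() ** 2 / (m * (t ** 2).sum())
    else:
        theta = 2 * ((t - 1).sum()) ** 2 / (m * ((t - 1) * (t - 2)).sum())
    return min(1.0, theta)

rng = np.random.default_rng(5)
iid = rng.standard_normal(5000)          # independent data: theta near 1
u = np.quantile(iid, 0.95)               # 95% threshold
print(f"theta (iid): {extremal_index(iid, u):.2f}")
```

Applied to actual return series, values of θ well below 1 at high thresholds would indicate the clustering of extremes the paper looks for.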

Keywords: clustering, extremes, returns, dependency, extremal index

Procedia PDF Downloads 405
22226 The Antecedent Factor Affecting the Entrepreneurs’ Decision Making for Using Accounting Office Service in Chiang Mai Province

Authors: Nawaporn Thongnut

Abstract:

The objective was to study the process of, and approach to, preparing the accounts of Thai temples, and to study the performance and quality of the temples' accounting preparation in accordance with the regulation. The population was the accountants and individuals involved in the accounting preparation of 17 temples in suburban Bangkok. The measurement used in this study was a questionnaire, and the statistics used in the analysis were descriptive statistics. The data are presented in the form of percentage tables describing the demographic characteristics. The study found that temple wardens were responsible for the accounting and reporting of the temples, while abbots checked the accuracy of the accounts in the monasteries. In most cases, the monasteries' accounts were not audited externally. When receiving income, most monasteries had been keeping financial documents in an orderly manner.

Keywords: corporate social responsibility, creating shared value, management accountant’s roles, stock exchange of Thailand

Procedia PDF Downloads 231
22225 A Study on Spatial Morphological Cognitive Features of Lidukou Village Based on Space Syntax

Authors: Man Guo, Wenyong Tan

Abstract:

By combining space syntax with data obtained from field visits, this paper interprets the internal relationship between spatial morphology and spatial cognition in Lidukou Village. By comparing the obtained data, it is recognized that the spatial integration degree of Lidukou Village is positively correlated with the spatial cognitive intention of local villagers. The parts of the village with a higher degree of spatial cognition are distributed along the axis mainly composed of Shuxiang Road. The accessibility of the historical relics is weak, and there is no systematic relationship between them. To address the morphological problems of Lidukou Village, optimization strategies are proposed from multiple perspectives, such as optimizing spatial mechanisms and shaping spatial nodes.
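The integration measure behind these observations can be computed from an axial-line graph; the toy adjacency below is hypothetical (only Shuxiang Road is named in the abstract), with lower mean depth indicating higher integration:

```python
from collections import deque

# Toy axial map of a village street network (hypothetical adjacency):
# each key is an axial line, values are the lines it intersects.
axial = {
    "Shuxiang Rd": ["Lane A", "Lane B", "Lane C"],
    "Lane A": ["Shuxiang Rd", "Lane B"],
    "Lane B": ["Shuxiang Rd", "Lane A"],
    "Lane C": ["Shuxiang Rd", "Relic Alley"],
    "Relic Alley": ["Lane C"],
}

def mean_depth(graph, root):
    """Average topological distance from root to all other lines (BFS)."""
    dist = {root: 0}
    q = deque([root])
    while q:
        node = q.popleft()
        for nb in graph[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                q.append(nb)
    others = [d for n, d in dist.items() if n != root]
    return sum(others) / len(others)

for line in axial:
    print(f"{line}: mean depth {mean_depth(axial, line):.2f}")
```

In this toy map the main axis has the lowest mean depth (highest integration), while the relic's alley is the deepest — the same pattern of weak relic accessibility the abstract describes.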

Keywords: traditional villages, space syntax, spatial integration degree, morphological problems

Procedia PDF Downloads 52
22224 Oryzanol Recovery from Rice Bran Oil: Adsorption Equilibrium Models Through Kinetics Data Approaches

Authors: A.D. Susanti, W. B. Sediawan, S.K. Wirawan, Budhijanto, Ritmaleni

Abstract:

Oryzanol, a component of rice bran oil (RBO), naturally has high antioxidant activity. It is reported to have several health-promoting properties and is of high interest in pharmacy, cosmetics, and nutrition. Because of the low concentration of oryzanol in crude RBO (0.9-2.9%), it needs to be further processed for practical usage, for example via adsorption. In this study, investigation and adjustment of adsorption equilibrium models were conducted through a kinetic data approach. A mathematical model of the kinetics of batch adsorption for oryzanol separation from RBO was set up and then applied to the equilibrium results. The adsorbent particles used in this case are relatively small, so the concentration within the adsorbent is assumed to be uniform. Hence, the adsorption rate is controlled by the rate of oryzanol mass transfer from the bulk fluid of the RBO to the surface of the silica gel. In this approach, the rate of mass transfer is assumed to be proportional to the deviation of the concentration from its equilibrium state. The equilibrium models applied were the Langmuir, distribution coefficient, and Freundlich models, with the values of their parameters obtained from the equilibrium results. It turned out that the model set up can quantitatively describe the experimental kinetics data, and that adjusting the values of the equilibrium isotherm parameters significantly improves the accuracy of the model. The value of the mass transfer coefficient per unit adsorbent mass (kca) is then obtained by curve fitting.
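The kinetic model described, a first-order mass-transfer driving force toward Langmuir equilibrium, can be sketched by simple Euler integration; all parameter values below are illustrative placeholders, not fitted values from the study:

```python
# Sketch of the kinetic model: oryzanol uptake rate proportional to the
# deviation from Langmuir equilibrium. Parameter values are illustrative.
qm, b = 50.0, 0.1          # Langmuir capacity (mg/g) and affinity (L/mg)
kca = 0.05                 # mass-transfer coefficient per adsorbent mass (1/min)
C0, dose = 20.0, 0.2       # initial concentration (mg/L), adsorbent dose (g/L)

dt, steps = 0.1, 3000
q = 0.0
for _ in range(steps):
    C = C0 - dose * q                  # solution-phase mass balance
    q_eq = qm * b * C / (1 + b * C)    # Langmuir equilibrium loading
    q += kca * (q_eq - q) * dt         # first-order driving force

print(f"loading after {dt * steps:.0f} min: q = {q:.2f} mg/g")
```

Fitting kca would amount to repeating this integration for trial values and minimizing the deviation from the measured uptake curve, as the abstract's curve-fitting step describes.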

Keywords: adsorption equilibrium, adsorption kinetics, oryzanol, rice bran oil

Procedia PDF Downloads 323
22223 Investigation of Various Variabilities of Social Anxiety Levels of Physical Education and Sports School Students

Authors: Turan Cetinkaya

Abstract:

The aim of this study is to determine the relation of the level of social anxiety to various variables among students in physical education and sports departments. 229 students studying in the departments of physical education and sports teaching, sports management, and coaching at Ahi Evran University, College of Physical Education and Sports, participated in the research. A personal information form and a social anxiety scale consisting of 30 items were used as data collection tools. Distribution, frequency, t-tests, and ANOVA tests were used in the comparison of the related data. As a result of the statistical analysis, social anxiety levels do not differ according to gender, income level, sport type, or national player status.

Keywords: social anxiety, undergraduates, sport, university

Procedia PDF Downloads 429
22222 Effect of Sand Particle Distribution in Oil and Gas Pipeline Erosion

Authors: Christopher Deekia Nwimae, Nigel Simms, Liyun Lao

Abstract:

Erosion in pipe bends caused by particles is a major obstacle in oil and gas fields and can cause the breakdown of production equipment. This work studied the effects of flow velocity and solid particle diameter on erosion in an elbow; the erosion rate was verified against experimental data using the computational fluid dynamics (CFD) approach. A two-way coupled Euler-Lagrange discrete phase model was employed to calculate the air/solid particle flow in the elbow. One erosion model and three particle rebound models were used to predict the erosion rate on 90° elbows. A generic erosion model was used in the CFD-based erosion model, and comparison with the experimental data showed agreement with the CFD-based predictions.
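A generic erosion-rate correlation of the kind used in such CFD studies can be sketched as follows; the constants and the impact-angle function are placeholders, not the paper's model:

```python
import math

# Illustrative generic erosion-rate model of the form ER = K * V^n * f(theta),
# where V is impact velocity and theta the impact angle. K, n, and f are
# placeholders chosen only to show the structure.
K, n = 2.0e-9, 2.6

def erosion_rate(velocity, theta_deg):
    """Mass loss per unit mass of impacting particles (illustrative units)."""
    t = math.radians(theta_deg)
    return K * velocity ** n * math.sin(2 * t)   # toy angle function

for v in (10, 20, 30):   # impact velocities, m/s
    print(f"V={v} m/s, 30 deg: ER = {erosion_rate(v, 30):.3e}")
```

The strong velocity exponent (typically 2-3 in published correlations) is why the flow velocity dominates the predicted erosion at the elbow.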

Keywords: erosion, prediction, elbow, computational fluid dynamics

Procedia PDF Downloads 157
22221 Arabic Text Classification: Review Study

Authors: M. Hijazi, A. Zeki, A. Ismail

Abstract:

An enormous amount of valuable human knowledge is preserved in documents. The rapid growth in the number of machine-readable documents for public or private access requires the use of automatic text classification. Text classification can be defined as assigning or structuring documents into a defined set of classes known in advance. Arabic text classification methods have emerged as a natural result of the existence of a massive amount of varied textual information written in the Arabic language on the web. This paper presents a review of published research on Arabic text classification using classical data representation, bag of words (BoW), and conceptual data representation based on semantic resources such as Arabic WordNet and Wikipedia.
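The classical BoW representation can be sketched with a tiny pipeline; the corpus below is a toy illustration (two topics), not a benchmark dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative Arabic corpus: sport vs. economy
docs = ["فاز الفريق في المباراة",      # the team won the match
        "سجل اللاعب هدفين",            # the player scored two goals
        "ارتفعت أسعار النفط",          # oil prices rose
        "انخفضت قيمة العملة"]          # the currency value fell
labels = ["sport", "sport", "economy", "economy"]

# Bag of words: each document becomes a vector of word counts
clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(docs, labels)
print(clf.predict(["خسر الفريق المباراة"]))  # "the team lost the match"
```

Conceptual representations replace or enrich these surface word counts with concepts from resources like Arabic WordNet, which is the distinction the review draws.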

Keywords: Arabic text classification, Arabic WordNet, bag of words, conceptual representation, semantic relations

Procedia PDF Downloads 426
22220 Heliport Remote Safeguard System Based on Real-Time Stereovision 3D Reconstruction Algorithm

Authors: Ł. Morawiński, C. Jasiński, M. Jurkiewicz, S. Bou Habib, M. Bondyra

Abstract:

With the development of optics, electronics, and computers, vision systems are increasingly used in various areas of life, science, and industry. Vision systems have a huge number of applications: they can be used in quality control, object detection, data reading (e.g., QR codes), etc. A large part of them is used for measurement purposes, and some make it possible to obtain a 3D reconstruction of the tested objects or measurement areas. 3D reconstruction algorithms are mostly based on creating depth maps from data that can be acquired by active or passive methods. Due to the specific application in airfield technology, only passive methods are applicable because of other existing systems working on the site, which could be blinded on most spectral levels. Furthermore, the reconstruction is required to work at long distances, ranging from hundreds of meters to tens of kilometers, with low loss of accuracy even in harsh conditions such as fog, rain, or snow. In response to those requirements, HRESS (Heliport REmote Safeguard System) was developed, whose main part is a rotational head with a two-camera stereovision rig gathering images around the head in 360 degrees, along with stereovision 3D reconstruction and point cloud combination. The sub-pixel analysis introduced in the HRESS system makes it possible to obtain increased distance measurement resolution and an accuracy of about 3% for distances over one kilometer. Ultimately, this leads to more accurate and reliable measurement data in the form of a point cloud. Moreover, the program algorithm introduces operations enabling the filtering of erroneously collected data in the point cloud. All activities on the programming, mechanical, and optical sides are aimed at obtaining the most accurate 3D reconstruction of the environment in the measurement area.
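The depth-from-disparity relation underlying stereovision reconstruction, and why sub-pixel disparity matters at long range, can be sketched as follows; the calibration values are illustrative, not HRESS parameters:

```python
# Depth from stereo disparity: Z = f * B / d, with focal length f in
# pixels, baseline B in meters, and disparity d in pixels.
focal_px = 8000.0    # hypothetical focal length (long-range lens)
baseline_m = 2.0     # hypothetical distance between the two cameras

def depth(disparity_px):
    return focal_px * baseline_m / disparity_px

# At long range, a 0.1 px disparity error already shifts the depth by
# several meters -- hence the need for sub-pixel analysis.
for d in (16.0, 15.9):
    print(f"disparity {d:4.1f} px -> depth {depth(d):8.1f} m")
```

Because depth varies inversely with disparity, depth error grows roughly with the square of the distance for a fixed disparity error, which is why accuracy at kilometer range hinges on sub-pixel matching.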

Keywords: airfield monitoring, artificial intelligence, stereovision, 3D reconstruction

Procedia PDF Downloads 125
22219 An Examination of Changes in Natural Vegetation due to Charcoal Production Using Multi-Temporal Landsat Data

Authors: T. Garba, Y. Y. Babanyara, M. Isah, A. K. Muktari, R. Y. Abdullahi

Abstract:

The increased demand for fuel wood for heating, cooking, and sometimes bakery has continued to exert an appreciable impact on natural vegetation. This study focuses on the use of multi-temporal data from Landsat TM of 1986, Landsat ETM of 1999, and Landsat ETM of 2006 to investigate the changes in natural vegetation resulting from charcoal production activities. The three images were classified into bare soil, built-up areas, cultivated land, natural vegetation, rock outcrop, and water bodies. The classified Landsat TM image of 1986 shows the natural vegetation of the study area to be 308,941.48 hectares, equivalent to 50% of the area; it then reduced to 278,061.21 hectares (42.92%) in 1999, and again to 199,647.81 hectares in 2006, equivalent to 30.83% of the area. Consequently, cultivated land continued increasing, from 259,346.80 hectares (42%) in 1986 to 312,966.27 hectares (48.3%) in 1999 and then to 341,719.92 hectares (52.78%). These figures show that within the span of 20 years (1986 to 2006), the natural vegetation decreased by 109,293.67 hectares. This implies that if the menace is not controlled, the natural vegetation might be lost within another twenty years. This is because forest cleared for charcoal production is normally converted to farmland. The study therefore concluded that there is a need for alternative sources of domestic energy, such as biomass, which can be easily accessible and affordable to people. In addition, the study recommended strong policy enforcement for the protection of forest reserves.
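The reported change statistics can be reproduced directly from the classified areas; note that the 1986-2006 vegetation loss implied by the stated areas is 109,293.67 ha:

```python
# Natural vegetation area (hectares) from the three classified images
veg = {1986: 308_941.48, 1999: 278_061.21, 2006: 199_647.81}

years = sorted(veg)
for y1, y2 in zip(years, years[1:]):
    change = veg[y2] - veg[y1]
    rate = change / (y2 - y1)                  # average annual change
    print(f"{y1}-{y2}: {change:+,.2f} ha ({rate:+,.2f} ha/yr)")

loss_total = veg[1986] - veg[2006]
print(f"total 1986-2006 loss: {loss_total:,.2f} ha")
```

The accelerating loss rate in the 1999-2006 interval is what motivates the study's warning about the vegetation's possible disappearance.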

Keywords: charcoal, classification, data, images, land use, natural vegetation

Procedia PDF Downloads 365
22218 Assessing the Prevalence of Accidental Iatrogenic Paracetamol Overdose in Adult Hospital Patients Weighing <50kg: A Quality Improvement Project

Authors: Elisavet Arsenaki

Abstract:

Paracetamol overdose is associated with significant and possibly permanent consequences, including hepatotoxicity, acute and chronic liver failure, and death. This quality improvement project explores the prevalence of accidental iatrogenic paracetamol overdose in hospital patients with a low body weight, defined as <50kg, and assesses the impact of educational posters in trying to reduce it. The study included all adult inpatients on the admissions ward, a short stay ward for patients requiring 12-72 hours of treatment, and consisted of three cycles. Each cycle consisted of 3 days of data collection in a given month (data collection for cycle 1 occurred in January 2022, February 2022 for cycle 2, and March 2022 for cycle 3). All patients given paracetamol had their prescribed dose checked against their charted weight to identify the percentage of adult inpatients <50kg who were prescribed 1g of paracetamol instead of 500mg. In the first cycle of the audit, data were collected from 83 patients who were prescribed paracetamol on the admissions ward. Subsequently, four A4 educational posters were displayed across the ward, on two separate occasions and with a one-month interval between each poster display. The aim of this was to remind prescribing doctors of their responsibility to check patient body weight prior to prescribing paracetamol. Data were collected again one week after each round of poster display, from 72 and 70 patients respectively. Over the 3 cycles, with a cumulative 225 patients, 15 weighed <50kg (6.67%) and, of those, 5 were incorrectly prescribed 1g of paracetamol, yielding a 33.3% prevalence of accidental iatrogenic paracetamol overdose among adult inpatients weighing <50kg. In cycle 1 of the project, 3 out of 6 adult patients weighing <50kg were overdosed on paracetamol, meaning that 50% of low weight patients were prescribed the wrong dose of paracetamol for their weight.
In the second data collection cycle, 1 out of 5 <50kg patients were overdosed (20%), and in the third cycle, 1 out of 4 (25%). The use of educational posters resulted in a lower prevalence of accidental iatrogenic paracetamol overdose in low body weight adult inpatients; however, the differences observed were not statistically significant (p values 0.993 and 0.995, respectively). Educational posters did not induce a significant decrease in the prevalence of accidental iatrogenic paracetamol overdose, and more robust strategies need to be employed to further decrease paracetamol overdose in patients weighing <50kg.
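As a sketch of how the cycle-to-cycle proportions could be compared, here is a standard two-proportion z-test on the cycle 1 vs cycle 2 counts; this is an illustration only, not the authors' exact analysis (whose reported p values differ):

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p_value

# Cycle 1: 3/6 low-weight patients overdosed; cycle 2: 1/5
z, p = two_prop_z(3, 6, 1, 5)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With such small low-weight subgroups (6 vs 5 patients), no plausible difference reaches significance, which supports the study's call for more robust interventions rather than further poster audits alone.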

Keywords: iatrogenic, overdose, paracetamol, patient, safety

Procedia PDF Downloads 113
22217 The Impacts of Soft and Hard Enterprise Resource Planning to the Corporate Business Performance through the Enterprise Resource Planning Integrated System

Authors: Sautma Ronni Basana, Zeplin Jiwa Husada Tarigan, Widjojo Suprapto

Abstract:

Companies have already implemented the Enterprise Resource Planning (ERP) system to increase data integration so that they can improve their business performance. Although some companies have managed to implement ERP well, they still need to improve gradually so that the ERP functions can be optimized. To obtain faster and more accurate data, the key users and the IT department have to customize the process to suit the needs of the company. In reality, sustaining the ERP technology system requires both soft and hard ERP in order to improve the business performance of the company. Soft and hard ERP are needed to build a robust system that ensures the integration among departments runs smoothly. This research addresses three questions. First, does soft ERP have an impact on hard ERP and system integration? Second, does hard ERP have an impact on system integration? Finally, is the business performance of manufacturing companies affected by soft ERP, hard ERP, and system integration? Questionnaires were distributed to 100 manufacturing companies in East Java and collected from 90 companies that have implemented ERP, a response rate of 90%. From the data analysis using the PLS program, it is found that soft ERP has positive impacts on hard ERP and system integration. Hard ERP, in turn, has a positive impact on system integration. Finally, the business process performance of the manufacturing companies is affected by system integration, soft ERP, and hard ERP simultaneously.

Keywords: soft ERP, hard ERP, system integration, business performance

Procedia PDF Downloads 405
22216 Improving the Deficiencies in Entrepreneurship Training for Small Businesses in Emerging Markets

Authors: Eno Jah Tabogo

Abstract:

The aim of this research is to identify and examine current deficiencies in entrepreneurial training aimed at improving the performance of small businesses in sub-Saharan African economies. It does so by examining the course content, training methods, and profiles of trainers and trainees of small business service providers in sub-Saharan Africa (SSA) in order to identify training deficiencies. Data for the analysis were collected from a sample of four entrepreneurial training providers in SSA; these four providers served an average of 1,500 trainees. A questionnaire was used to collect data face-to-face and by telephone. Face validity was established by distributing the questionnaire among a group of colleagues, followed by a group discussion to strengthen the validity of the questionnaire. Interviews were also held with managers of training programs. Content analysis and descriptive statistics were used to analyse the data collected. The results indicated that only 25% of the training content was entrepreneurial. In terms of services provided, business, entrepreneurial, technical, and after-care services were all identified. It was also discovered that owners of training firms had no formal entrepreneurship background. The paper contributes by advocating a comprehensive entrepreneurship-training program for successful small business enterprises. Recommendations that could help sustain emerging small business enterprises and directions for further research are presented.

Keywords: entrepreneurship, emerging markets, small business, training

Procedia PDF Downloads 141
22215 Safety Culture, Mindfulness and Safe Behaviours of Students Residing in the Halls of Residence of Obafemi Awolowo University, Ile Ife, Nigeria

Authors: Olajumoke Adetoun Ojeleye

Abstract:

The study assessed the safety culture, mindfulness, and safe behaviours of students residing in the halls of residence of Obafemi Awolowo University (OAU), Ile Ife, Nigeria. The objectives of the study were to assess the level of safety mindfulness of students residing in the halls of residence of OAU, examine their safety culture, and establish whether these students are involved in unsafe practices. The study employed a cross-sectional research design, and the instrument used for data collection was a self-structured, self-administered questionnaire. The questionnaire was tested for validity and reliability, with a reliability coefficient of 0.71, before being used for data collection. Respondents were selected by a multi-stage sampling technique, and the sample size was 530. Data collection took 2 weeks, and the data were analysed using descriptive statistical techniques. Results showed that about half of the respondents (49.8%) were between the ages of 20-24 years. There were more males (56.2%) than females (43.8%). Although the data demonstrated that the majority (91.7%) of the respondents were highly safety-minded and the safety culture of an equally high proportion (83.4%) was adjudged fair, much improvement is needed in alerting management to impending dangers and in studying the hall handbook to internalize its contents. The study further showed that only 43.6% of respondents had good safety practices and behaviours, while the majority (56.4%) had fair safety practices and behaviours. An incidental finding of the study was that a considerable number of students 'squat' their counterparts, that is, unofficially share their rooms with non-resident students. The study recommended establishing a clearly written complaint procedure that is accessible and available to all hall residents, building more hostels with adequate facilities to address overcrowding, and putting systems in place to encourage residents to report incidents/accidents.

Keywords: safe behaviours, safety culture, safety mindfulness, student

Procedia PDF Downloads 263
22214 Numerical Investigation of Turbulent Inflow Strategy in Wind Energy Applications

Authors: Arijit Saha, Hassan Kassem, Leo Hoening

Abstract:

Ongoing climate change demands the increasing use of renewable energies. Wind energy plays an important role in this context since it can be exploited almost everywhere in the world. To reduce the costs of wind turbines and make them more competitive, simulations are very important, since experiments are often too costly, if possible at all. A wind turbine in a vast open area experiences turbulence generated by the atmosphere, so it was of utmost interest for this research to reproduce that turbulence in the computational simulation domain through inlet turbulence generation methods such as the precursor cyclic method and Kaimal Spectrum Exponential Coherence (KSEC). To validate computational fluid dynamics (CFD) simulations of wind turbines against experimental data, it is crucial to set up the conditions in the simulation as close to reality as possible. This work therefore aims at investigating the turbulent inflow strategy and boundary conditions of KSEC and providing a comparative analysis alongside the precursor cyclic method for Large Eddy Simulation within the context of wind energy applications. To generate the turbulent box with the KSEC method, constrained data were first collected from an auxiliary channel flow and then processed with the open-source tool PyConTurb, whereas for the precursor cyclic method, the data from the auxiliary channel alone were sufficient. The functionality of these methods was studied through statistical properties such as variance and turbulence intensity for different bulk Reynolds numbers, and a conclusion was drawn on the feasibility of the KSEC method. Furthermore, it was found necessary to verify the obtained data against a DNS case setup to establish its applicability to real-field CFD simulations.
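
The KSEC approach is built on the Kaimal spectral model. As a reference point only (not the authors' implementation), one common form of the Kaimal power spectral density, as standardized in IEC 61400-1, can be evaluated as follows; the parameter values shown are illustrative placeholders:

```python
import numpy as np

def kaimal_psd(f, sigma, L, U):
    """One common form of the Kaimal power spectral density (IEC 61400-1 style):
        S(f) = 4 * sigma^2 * (L/U) / (1 + 6 * f * L/U)**(5/3)
    f: frequency [Hz], sigma: turbulence standard deviation [m/s],
    L: integral length scale [m], U: mean wind speed [m/s]."""
    fr = f * L / U                                   # non-dimensional frequency
    return 4.0 * sigma**2 * (L / U) / (1.0 + 6.0 * fr) ** (5.0 / 3.0)

# Illustrative parameters: 1.2 m/s turbulence std dev, 340.2 m length scale,
# 10 m/s hub-height wind speed, frequencies from 0.001 to 10 Hz.
f = np.logspace(-3, 1, 200)
S = kaimal_psd(f, sigma=1.2, L=340.2, U=10.0)
```

The -5/3 exponent gives the expected inertial-subrange roll-off at high frequency, which is one of the statistical properties against which generated turbulent boxes are typically checked.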

Keywords: inlet turbulence generation, CFD, precursor cyclic, KSEC, large eddy simulation, PyConTurb

Procedia PDF Downloads 96
22213 Prediction of Temperature Distribution during Drilling Process Using Artificial Neural Network

Authors: Ali Reza Tahavvor, Saeed Hosseini, Nazli Jowkar, Afshin Karimzadeh Fard

Abstract:

Experimental and numerical study of the temperature distribution during the milling process is important for milling quality and tool life. In the present study, the milling cross-section temperature is determined using Artificial Neural Networks (ANN), based on the temperature at certain points of the workpiece, the specifications of those points, and the rotational speed of the milling blade. First, a three-dimensional model of the workpiece is built, and then, using Computational Heat Transfer (CHT) simulations, the temperatures at different nodes of the workpiece are obtained under steady-state conditions. Results obtained from CHT are used for training and testing the ANN approach. Using reverse engineering and setting the desired x, y, z coordinates and the milling blade rotational speed as input data to the network, the milling surface temperature determined by the neural network is presented as output. The temperatures at the desired points are obtained experimentally for different milling blade rotational speeds, the milling surface temperature is obtained by extrapolation, and a comparison is performed among the ANN predictions, the CHT results, and the experimental data. It is observed that the ANN code can be used efficiently to determine the temperature in a milling process.
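
The mapping described above — workpiece coordinates plus blade rotational speed in, temperature out — can be sketched with a small regression network. The snippet below is a minimal illustration on a synthetic temperature field; the functional form and all values are hypothetical stand-ins for the CHT training data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic stand-in for CHT nodal results: temperature at (x, y, z) for a
# given blade rotational speed, peaking near the cutting zone (the origin).
n = 2000
xyz = rng.uniform(-1, 1, (n, 3))
rpm = rng.uniform(1000, 4000, (n, 1))
r = np.linalg.norm(xyz, axis=1)
T = 25 + rpm[:, 0] / 100.0 * np.exp(-3 * r)   # hypothetical temperature field [C]

# Inputs: x, y, z and rotational speed; output: temperature at that node
X = np.hstack([xyz, rpm])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
model.fit(X, T)
r2 = model.score(X, T)
```

Standardizing the inputs matters here because coordinates (order 1) and rotational speed (order 1000) live on very different scales.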

Keywords: artificial neural networks, milling process, rotational speed, temperature

Procedia PDF Downloads 405
22212 Catchment Yield Prediction in an Ungauged Basin Using PyTOPKAPI

Authors: B. S. Fatoyinbo, D. Stretch, O. T. Amoo, D. Allopi

Abstract:

This study extends the use of the Drainage Area Regionalization (DAR) method to generating synthetic data and calibrating PyTOPKAPI stream yield for an ungauged basin at a daily time scale. The generation of runoff that determines a river's yield depends on various topographic and spatial meteorological variables, which together form the Catchment Characteristics Model (CCM). Many of the conventional CCM models adopted in Africa have been challenged by a paucity of adequate, relevant, and accurate data with which to parameterize and validate them. The purpose of generating synthetic flow is to test a hydrological model in a way that does not suffer from the impact of very low or very high flows, thus allowing a check of whether the model is structurally sound. The employed physically-based, watershed-scale hydrologic model (PyTOPKAPI) was parameterized with GIS pre-processing parameters and remotely sensed hydro-meteorological variables. Validation against the mean annual runoff ratio shows good graphical agreement between the observed and simulated discharge. Nash-Sutcliffe efficiency and coefficient of determination (R²) values of 0.704 and 0.739 indicate strong model efficiency. Given the impact of current climate variability, water planners now have a tool for flow quantification and sustainable planning purposes.
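
The two goodness-of-fit measures reported above are standard in hydrology. As a minimal sketch (with made-up flow values, not the study's data), the Nash-Sutcliffe efficiency can be computed as:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; values <= 0 mean the model predicts no better
    than the mean of the observed series."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical daily flows (illustrative only)
obs = np.array([12.0, 30.0, 55.0, 41.0, 18.0])   # observed discharge
sim = np.array([14.0, 27.0, 58.0, 38.0, 20.0])   # simulated discharge
nse = nash_sutcliffe(obs, sim)
```

Unlike R², the NSE penalizes systematic bias as well as scatter, which is why the two values reported in the abstract (0.704 and 0.739) need not coincide.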

Keywords: catchment characteristics model, GIS, synthetic data, ungauged basin

Procedia PDF Downloads 327
22211 Theoretical Studies on the Structural Properties of 2,3-Bis(Furan-2-Yl)Pyrazino[2,3-F][1,10]Phenanthroline Derivatives

Authors: Zahra Sadeghian

Abstract:

This paper reports the optimized geometrical parameters of the stationary point for 2,3-bis(furan-2-yl)pyrazino[2,3-f][1,10]phenanthroline. The calculations are performed using the density functional theory (DFT) method at the B3LYP/LanL2DZ level. We determined bond length and bond angle values for the compound and also calculated the degree of bond hybridization according to natural bond orbital (NBO) theory. The energies of the frontier orbitals (HOMO and LUMO) are computed. In addition, the calculated data are carefully compared with the experimental results. This comparison shows that our theoretical data are in reasonable agreement with the experimental values.
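
Frontier-orbital post-processing of the kind described is straightforward once the orbital energies are extracted from the DFT output. The sketch below assumes a closed-shell system with doubly occupied orbitals; the energies shown are hypothetical placeholders, not values from this paper:

```python
HARTREE_TO_EV = 27.2114  # conversion factor from hartree to electronvolts

def homo_lumo_gap(orbital_energies, n_electrons):
    """Return (HOMO, LUMO, gap) in eV for a closed-shell system,
    given orbital energies in hartree and the electron count.
    Assumes each occupied orbital holds two electrons."""
    energies = sorted(orbital_energies)
    n_occ = n_electrons // 2                  # number of doubly occupied orbitals
    homo = energies[n_occ - 1] * HARTREE_TO_EV
    lumo = energies[n_occ] * HARTREE_TO_EV
    return homo, lumo, lumo - homo

# Hypothetical example: four orbital energies (hartree), four electrons
homo, lumo, gap = homo_lumo_gap([-0.35, -0.22, -0.05, 0.10], n_electrons=4)
```

The HOMO-LUMO gap computed this way is the quantity typically used to discuss kinetic stability and charge-transfer behaviour of such phenanthroline derivatives.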

Keywords: 2,3-Bis(furan-2-yl)pyrazino[2,3-f][1,10]phenanthroline, density functional theory, theoretical calculations, LanL2DZ level, B3LYP level

Procedia PDF Downloads 371
22210 3D Writing on Photosensitive Glass-Ceramics

Authors: C. Busuioc, S. Jinga, E. Pavel

Abstract:

Optical lithography is a key technique in the development of sub-5 nm patterns for the semiconductor industry. We have already reported that the best results in the direct laser writing process on active media, such as glass-ceramics, are achieved only when the energy of the laser radiation is absorbed in discrete quantities. Further, we need to clarify the role of active-center concentration in the natural generation of silver nanocrystals, as well as in the formation of fluorescent rare-earth nanostructures. Consequently, samples with different compositions were prepared. SEM, AFM, TEM, and STEM investigations were employed to demonstrate that lines a few nm in width can be written on fluorescent photosensitive glass-ceramics, these being efficient absorbers. Moreover, we believe that the experimental data will lead to the best choice in terms of the amount of active centers, laser power, and glass-ceramic matrix.

Keywords: glass-ceramics, 3D laser writing, optical disks, data storage

Procedia PDF Downloads 298
22209 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industries. Typically, 20% of these parts are new work, which means that almost the entire product portfolio is replaced every five years in their low-series manufacturing environment. Consequently, a flexible production system is required, in which the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for estimating the individual setup periods. The first group collects product information such as the product name and number of items. The second group contains material data such as material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (or narrowest) tolerance. The fourth group contains setup data such as machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of tools applied is one of the key factors on which the industrial partner's previous estimations were based. The artificial neural network model was trained on several thousand real industrial data records.
The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time, the deviation of the prognosis was improved by 50%. Furthermore, the influence of the mentioned parameter groups with respect to the manufacturing order was also investigated. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events occur, and the related data are collected.
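
The four parameter groups above mix categorical fields (material type, machine) with numeric ones (item count, tolerance, tool count), which a neural network needs suitably encoded. A sketch of such a preprocessing-plus-ANN pipeline — with invented column names and synthetic setup times, not the partner's real MES/CAD data — might look like:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(2)

# Hypothetical stand-in for the four parameter groups described above
n = 500
df = pd.DataFrame({
    "n_items":   rng.integers(1, 200, n),                 # product group
    "material":  rng.choice(["PTFE", "POM", "PE"], n),    # material group
    "tolerance": rng.uniform(0.01, 0.5, n),               # quality group
    "machine":   rng.choice(["M1", "M2"], n),             # setup group
    "n_tools":   rng.integers(1, 12, n),
})
# Synthetic setup time [min], driven mainly by tool count and tolerance
y = 10 + 4 * df["n_tools"] + 20 * (0.5 - df["tolerance"]) + rng.normal(0, 2, n)

# Scale numeric columns, one-hot encode categorical ones, then regress
pre = ColumnTransformer([
    ("num", StandardScaler(), ["n_items", "tolerance", "n_tools"]),
    ("cat", OneHotEncoder(), ["material", "machine"]),
])
model = Pipeline([
    ("pre", pre),
    ("ann", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
])
model.fit(df, y)
r2 = model.score(df, y)
```

Keeping the encoding inside the pipeline means the same transformation is applied at quotation time, which matters in a shop where roughly 20% of parts are new work each year.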

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 245