Search results for: meteorological prediction data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25725

20895 Using RASCAL Code to Analyze the Postulated UF6 Fire Accident

Authors: J. R. Wang, Y. Chiang, W. S. Hsu, S. H. Chen, J. H. Yang, S. W. Chen, C. Shih, Y. F. Chang, Y. H. Huang, B. R. Shen

Abstract:

In this research, the RASCAL code was used to simulate and analyze a postulated UF6 fire accident that could occur at the Institute of Nuclear Energy Research (INER). There were four main steps in this research. In the first step, the UF6 data of INER were collected. In the second step, the RASCAL analysis methodology and model were established using these data. Third, this RASCAL model was used to perform the simulation and analysis of the postulated UF6 fire accident; three cases were simulated and analyzed in this step. Finally, the analysis results of RASCAL were compared with the hazardous levels of the chemicals. According to the comparison of the three cases, Case 3 poses the greatest danger to human health.

Keywords: RASCAL, UF₆, safety, hydrogen fluoride

Procedia PDF Downloads 204
20894 Characterising the Dynamic Friction in the Staking of Plain Spherical Bearings

Authors: Jacob Hatherell, Jason Matthews, Arnaud Marmier

Abstract:

Anvil Staking is a cold-forming process that is used in the assembly of plain spherical bearings into a rod-end housing. This process ensures that the bearing outer lip conforms to the chamfer in the matching rod end to produce a lightweight mechanical joint with sufficient strength to meet the pushout load requirement of the assembly. Finite Element (FE) analysis is being used extensively to predict the behaviour of metal flow in cold forming processes to support industrial manufacturing and product development. On-going research aims to validate FE models across a wide range of bearing and rod-end geometries by systematically isolating and understanding the uncertainties caused by variations in, material properties, load-dependent friction coefficients and strain rate sensitivity. The improved confidence in these models aims to eliminate the costly and time-consuming process of experimental trials in the introduction of new bearing designs. Previous literature has shown that friction coefficients do not remain constant during cold forming operations, however, the understanding of this phenomenon varies significantly and is rarely implemented in FE models. In this paper, a new approach to evaluate the normal contact pressure versus friction coefficient relationship is outlined using friction calibration charts generated via iterative FE models and ring compression tests. When compared to previous research, this new approach greatly improves the prediction of forming geometry and the forming load during the staking operation. This paper also aims to standardise the FE approach to modelling ring compression test and determining the friction calibration charts.

Keywords: anvil staking, finite element analysis, friction coefficient, spherical plain bearing, ring compression tests

Procedia PDF Downloads 200
20893 TNFRSF11B Gene Polymorphisms A163G and G1181C in Prediction of Osteoporosis Risk

Authors: I. Boroňová, J. Bernasovská, J. Kľoc, Z. Tomková, E. Petrejčíková, D. Gabriková, S. Mačeková

Abstract:

Osteoporosis is a complex disease characterized by low bone mineral density, which is determined by an interaction of genetics with metabolic and environmental factors. Current research in the genetics of osteoporosis focuses on the identification of responsible genes and polymorphisms. The TNFRSF11B gene plays a key role in bone remodeling. The aim of this study was to investigate the genotype and allele distribution of the A163G (rs3102735) osteoprotegerin gene promoter polymorphism and the G1181C (rs2073618) osteoprotegerin first exon polymorphism in a group of 180 unrelated postmenopausal women with diagnosed osteoporosis and 180 normal controls. Genomic DNA was isolated from peripheral blood leukocytes using standard methodology. Genotyping was performed using Custom TaqMan® SNP Genotyping Assays. Hardy-Weinberg equilibrium was tested for each SNP in both groups of participants using the chi-square (χ2) test. The distribution of the investigated genotypes in the group of patients with osteoporosis was as follows: AA (66.7%), AG (32.2%), GG (1.1%) for the A163G polymorphism; GG (19.4%), CG (44.4%), CC (36.1%) for the G1181C polymorphism. The distribution of genotypes in the normal controls was as follows: AA (71.1%), AG (26.1%), GG (2.8%) for the A163G polymorphism; GG (22.2%), CG (48.9%), CC (28.9%) for the G1181C polymorphism. For the A163G polymorphism, the variant G allele was more common among patients with osteoporosis: 17.2% versus 15.8% in normal controls. Likewise, for the G1181C polymorphism, the C allele occurred more frequently in the group of patients with osteoporosis (58.3% versus 53.3%). Genotype and allele distributions showed no significant differences (A163G: χ2=0.270, p=0.605; χ2=0.250, p=0.616; G1181C: χ2=1.730, p=0.188; χ2=1.820, p=0.177). Our results represent an initial study; further studies with larger samples and association analyses will be carried out. Knowing the distribution of genotypes is important for assessing the impact of these polymorphisms on various parameters associated with osteoporosis. Screening to identify "at-risk" women likely to develop osteoporosis and initiating early intervention appears to be the most effective strategy to substantially reduce the risk of osteoporosis.
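
A chi-square test of independence on the genotype counts reproduces the kind of case-control comparison reported above. The sketch below is illustrative only: the counts are reconstructed from the reported percentages (n=180 per group) and may differ slightly from the authors' exact tables.

```python
from scipy.stats import chi2_contingency

# A163G genotype counts (AA, AG, GG) reconstructed from the reported
# percentages of the 180 cases and 180 controls; approximate values.
cases = [120, 58, 2]       # 66.7%, 32.2%, 1.1%
controls = [128, 47, 5]    # 71.1%, 26.1%, 2.8%

chi2, p, dof, expected = chi2_contingency([cases, controls])
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```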

Keywords: osteoporosis, real-time PCR method, SNP polymorphisms

Procedia PDF Downloads 317
20892 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder

Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen

Abstract:

Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction; it is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on discriminative Encoder-Decoder architectures have achieved noticeable results. However, we find that these discriminative generators usually produce explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and it is easy to extract the logic information that is explicitly contained in the target explanation. Hence we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained with the guidance of explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark explanation-Stanford Natural Language Inference (e-SNLI) demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
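
A minimal sketch of such a model is given below: a sequence-to-sequence variational auto-encoder whose latent variable also feeds a logic-label classifier, mirroring the Logic Supervision idea. The layer sizes, GRU encoder-decoder choice, and three-way label set are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class VariationalEGSketch(nn.Module):
    """Encoder -> latent z -> decoder, with a logic classifier on z (sketch)."""
    def __init__(self, vocab=10000, emb=128, hid=256, zdim=64, n_logic=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.to_mu = nn.Linear(hid, zdim)
        self.to_logvar = nn.Linear(hid, zdim)
        self.logic_head = nn.Linear(zdim, n_logic)      # entail/neutral/contradict
        self.decoder = nn.GRU(emb + zdim, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, pair_ids, expl_ids):
        _, h = self.encoder(self.embed(pair_ids))       # encode premise-hypothesis pair
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        logic_logits = self.logic_head(z)               # Logic Supervision signal
        emb = self.embed(expl_ids)                      # teacher-forced explanation
        zrep = z.unsqueeze(1).expand(-1, emb.size(1), -1)
        dec, _ = self.decoder(torch.cat([emb, zrep], dim=-1))
        return self.out(dec), logic_logits, mu, logvar

# Training would minimize: reconstruction NLL + KL(q(z|x) || N(0, I))
# + cross-entropy(logic_logits, logic_label); the last term forces z to
# encode logic information and counteracts posterior collapse.
model = VariationalEGSketch()
logits, logic_logits, mu, logvar = model(torch.randint(0, 10000, (2, 20)),
                                         torch.randint(0, 10000, (2, 15)))
```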

Keywords: natural language inference, explanation generation, variational auto-encoder, generative model

Procedia PDF Downloads 139
20891 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing

Authors: Abdullah Bal, Sevdenur Bal

Abstract:

This paper proposes a new type of hardware for training cellular neural networks (CNN) using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage than during the test process. Since optoelectronic hardware offers parallel, high-speed processing capability for 2D data processing applications, the CNN training algorithm can be realized using Fourier optics techniques. JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each iteration of training, JTC inherently carries most of the computational burden, and the rest of the mathematical computation is realized digitally. The bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. Overlapping properties of JTC are then utilized for the summation of two cross-correlations, which further reduces the computation required for the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation, or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques in the same optical system.
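
The optical principle can be checked numerically: a joint transform correlator forms the joint power spectrum of two side-by-side inputs, and a second Fourier transform of that spectrum contains their cross-correlation as off-axis terms. A minimal numerical sketch, with a random test image standing in for the optical input plane:

```python
import numpy as np

def jtc_correlation(a, b):
    """Digital analogue of a joint transform correlator (sketch)."""
    h, w = a.shape
    joint = np.zeros((h, 2 * w))
    joint[:, :w], joint[:, w:] = a, b           # two inputs share one input plane
    jps = np.abs(np.fft.fft2(joint)) ** 2       # joint power spectrum (CCD capture)
    corr = np.fft.fftshift(np.fft.ifft2(jps))   # second transform: correlation plane
    return corr                                 # off-axis lobes = cross-correlation of a and b

rng = np.random.default_rng(0)
img = rng.random((64, 64))
plane = jtc_correlation(img, img)
print(f"peak correlation magnitude: {np.abs(plane).max():.1f}")
```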

Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware

Procedia PDF Downloads 499
20890 Preservation Model to Process 'La Bomba Del Chota' as a Living Cultural Heritage

Authors: Lucia Carrion Gordon, Maria Gabriela Lopez Yanez

Abstract:

This project focuses on heritage concepts and their importance in an ever-evolving and changing Digital Era, where system solutions have to be sustainable, efficient, and suited to basic needs. The prototype has to cover the principal requirements of the case studies. How to preserve the sociological ideas of dances in Ecuador, such as 'La Bomba', is the best example of and challenge in preserving intangible data; the same idea is applicable to books and music. History, and how to keep it, is the principal mission of heritage preservation. The dance of La Bomba is rooted in a specific movement system whose main part is the sideward hip movement. La Bomba's movement system is the surface manifestation of a whole system of knowledge whose principal characteristics are the historical relation of Choteños with their land and their families.

Keywords: digital preservation, heritage, IT management, data, metadata, ontology, serendipity

Procedia PDF Downloads 373
20889 Exploring Teachers’ Beliefs about Diagnostic Language Assessment Practices in a Large-Scale Assessment Program

Authors: Oluwaseun Ijiwade, Chris Davison, Kelvin Gregory

Abstract:

In Australia, as in other parts of the world, the debate on how to enhance teachers' use of assessment data to inform the teaching and learning of English as an Additional Language (EAL, Australia) or English as a Foreign Language (EFL, United States) has occupied the centre of academic scholarship. Traditionally, this approach was conceptualised as 'Formative Assessment' and, in recent times, 'Assessment for Learning (AfL)'. The central problem is that teacher-made tests are limited in providing data that can inform teaching and learning due to the variability of classroom assessments, which are hindered by teachers' characteristics and assessment literacy. To address this concern, scholars in language education and testing have proposed uniform large-scale computer-based assessment programs to meet the needs of teachers and promote AfL in language education. In Australia, for instance, the Victorian state government commissioned a large-scale project called 'Tools to Enhance Assessment Literacy (TEAL) for Teachers of English as an Additional Language'. As part of the TEAL project, a tool called 'Reading and Vocabulary assessment for English as an Additional Language (RVEAL)', a diagnostic language assessment (DLA), was developed by language experts at the University of New South Wales for teachers in Victorian schools to guide EAL pedagogy in the classroom. Therefore, this study aims to provide qualitative evidence for understanding beliefs about DLA among EAL teachers in primary and secondary schools in Victoria, Australia. To realize this goal, the study raises the following questions: (a) How do teachers use large-scale assessment data for diagnostic purposes? (b) What skills do language teachers think are necessary for using assessment data for instruction in the classroom? and (c) What factors, if any, contribute to teachers' beliefs about diagnostic assessment in a large-scale assessment? A semi-structured interview method was used to collect data from at least 15 professional teachers who were selected through purposeful sampling. The findings from the resulting thematic analysis provide an understanding of teachers' beliefs about DLA in a classroom context and identify how these beliefs are crystallised in language teachers. The discussion shows how the findings can be used to inform professional development processes for language teachers, as well as the important factor of teacher cognition in the pedagogic processes of language assessment. This will hopefully help test developers and testing organisations align the outcomes of this study with their test development processes to design assessments that can enhance AfL in language education.

Keywords: beliefs, diagnostic language assessment, English as an additional language, teacher cognition

Procedia PDF Downloads 192
20888 Prediction of Positive Cloud-to-Ground Lightning Striking Zones for Charged Thundercloud Based on Line Charge Model

Authors: Surajit Das Barman, Rakibuzzaman Shah, Apurv Kumar

Abstract:

Bushfire is known as one of the leading factors in creating pyrocumulus thunderclouds, which cause the ignition of new fires by pyrocumulonimbus (pyroCb) lightning strikes and create major losses of lives and property worldwide. A conceptual model-based risk planning approach would be beneficial for predicting the lightning striking zones on the surface of the earth underneath a pyroCb thundercloud. A pyroCb thundercloud can generate both positive cloud-to-ground (+CG) and negative cloud-to-ground (-CG) lightning, of which +CG tends to ignite more bushfires and cause massive damage to nature and infrastructure. In this paper, a simple line-charge-structured thundercloud model is constructed in 2-D coordinates using the method of image charges to predict the probable +CG lightning striking zones on the earth's surface for two conceptual thundercloud charge configurations: a tilted dipole and a conventional tripole structure with an excessive lower positive charge region, both of which lead to +CG lightning. The electric potential and surface charge density along the earth's surface are investigated for both structures by continuously adjusting the position and charge density of their charge regions. Simulation results for the tilted dipole structure confirm the down-shear extension of the upper positive charge region in the direction of the cloud's forward flank by 4 to 8 km, resulting in negative surface charge density; +CG lightning would be expected to strike within 7.8 km to 20 km around the earth periphery in the direction of the cloud's forward flank. On the other hand, the conceptual tripole charge structure with an enhanced lower positive charge region develops negative surface charge density on the earth's surface in the range |x| < 6.5 km beneath the thundercloud and highly favors the production of +CG lightning strikes.
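
For a horizontal line charge at height h above a grounded plane, the method of images gives the induced surface charge density sigma(x) = -lambda*h / (pi*((x - x0)^2 + h^2)) per unit length; superposing one such term per cloud charge region yields the kind of striking-zone map described above. A minimal sketch follows; the charge magnitudes, offsets, and heights are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Charge regions as (lambda [C/m], x-offset [m], height [m]); illustrative values.
charges = [(-1.0e-3, 0.0, 6000.0),      # main negative charge region
           (+1.2e-3, 6000.0, 8000.0)]   # down-sheared upper positive region

x = np.linspace(-20e3, 20e3, 4001)
sigma = np.zeros_like(x)
for lam, x0, h in charges:
    # image-charge result for each line charge above the grounded earth
    sigma += -lam * h / (np.pi * ((x - x0) ** 2 + h ** 2))

neg = x[sigma < 0]                      # negative density marks +CG-favoured zones
print(f"+CG-favoured zone roughly from {neg.min():.0f} m to {neg.max():.0f} m")
```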

Keywords: pyrocumulonimbus, cloud-to-ground lightning, charge structure, surface charge density, forward flank

Procedia PDF Downloads 102
20887 Evaluating Models Through Feature Selection Methods Using Data Driven Approach

Authors: Shital Patil, Surendra Bhosale

Abstract:

Cardiac diseases are the leading cause of mortality and morbidity in the world and, over the recent few decades, accounting for a large number of deaths, have emerged as the most life-threatening disorders globally. Machine learning and artificial intelligence have been playing a key role in predicting heart diseases, and a relevant set of features can be very helpful in predicting the disease accurately. In this study, we propose a comparative analysis of four different feature selection methods and evaluate their performance with both the raw (unbalanced) and the sampled (balanced) dataset. The publicly available Z-Alizadeh Sani dataset has been used for this study. Four feature selection methods are applied: data analysis, minimum Redundancy maximum Relevance (mRMR), Recursive Feature Elimination (RFE), and chi-squared. These methods are tested with 8 different classification models to get the best accuracy possible. Using the balanced and unbalanced datasets, the study shows promising results in terms of various performance metrics in accurately predicting heart disease. Experimental results obtained by the proposed method with the raw data reach a maximum AUC of 100%, a maximum F1 score of 94%, a maximum recall of 98%, and a maximum precision of 93%, while with the balanced dataset the results are a maximum AUC of 100%, an F1 score of 95%, a maximum recall of 95%, and a maximum precision of 97%.
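
A sketch of two of the named feature selection methods combined with SMOTE balancing is shown below, with a synthetic imbalanced dataset standing in for the Z-Alizadeh Sani data; the selector sizes and the logistic-regression classifier are illustrative choices, not the paper's exact models.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Stand-in for the Z-Alizadeh Sani dataset: an imbalanced binary problem.
X, y = make_classification(n_samples=300, n_features=20, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # balance training data

selectors = {
    "RFE": RFE(LogisticRegression(max_iter=1000), n_features_to_select=8),
    "chi2": SelectKBest(chi2, k=8),     # chi2 requires non-negative features
}
for name, sel in selectors.items():
    scaler = MinMaxScaler().fit(X_bal)  # maps features to [0, 1] for chi2
    Xb, Xt = scaler.transform(X_bal), scaler.transform(X_te)
    sel.fit(Xb, y_bal)
    clf = LogisticRegression(max_iter=1000).fit(sel.transform(Xb), y_bal)
    proba = clf.predict_proba(sel.transform(Xt))[:, 1]
    print(name, f"F1={f1_score(y_te, clf.predict(sel.transform(Xt))):.2f}",
          f"AUC={roc_auc_score(y_te, proba):.2f}")
```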

Keywords: cardiovascular diseases, machine learning, feature selection, SMOTE

Procedia PDF Downloads 108
20886 Device to Alert and Fire Prevention through Temperature Monitoring and Gas Detection

Authors: Dêivisson Alves Anjos, Blenda Fonseca Aires Teles, Queitiane Castro Costa

Abstract:

Fire is one of the biggest dangers for factories, warehouses, mills, and other such places, causing unimaginable damage, because besides the material damage it also directly affects the lives of workers, who are likely to suffer death or very serious consequences. The protection of these people's lives should be taken seriously, always seeking safety. Investment in security and monitoring equipment must therefore be high, so that a possible fire can be prevented or its impacts reduced. Our device, built around a PIC microcontroller, monitors the temperature and the presence of gas in the environment and sends the data via Bluetooth to an interface developed in LabVIEW, which saves the data continuously and raises an alert if the temperature exceeds the allowed limit or a gas is detected. The device is currently in operation, has undergone several tests, and can be used in different areas that need fire protection.
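
On the host side, the LabVIEW interface described above could be approximated by any serial listener; the sketch below is a hypothetical Python analogue, and the port name, baud rate, "temperature,gas" message format, and alert thresholds are all assumptions.

```python
import serial  # pyserial

PORT, BAUD = "/dev/rfcomm0", 9600   # assumed Bluetooth serial port settings
TEMP_LIMIT = 60.0                   # assumed alert threshold [degrees C]

with serial.Serial(PORT, BAUD, timeout=2) as ser, open("fire_log.csv", "a") as log:
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        temp, gas = (float(v) for v in line.split(","))  # assumed "temp,gas" format
        log.write(f"{temp},{gas}\n")                     # continuous logging
        if temp > TEMP_LIMIT or gas > 0:
            print(f"ALERT: temp={temp} C, gas={gas}")
```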

Keywords: PIC, Bluetooth, fire, temperature, gas, LabVIEW

Procedia PDF Downloads 516
20885 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method

Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari

Abstract:

The present study is concerned with the optimal design of functionally graded (FG) plates using a particle swarm optimization (PSO) algorithm. In this study, the meshless local Petrov-Galerkin (MLPG) method is employed to obtain the FG plate's natural frequencies. The effects of two parameters, the thickness-to-height ratio and the volume fraction index, on the natural frequencies and total mass of the plate are studied using the MLPG results. Then the first natural frequency of the plate, for conditions where MLPG data are not available, is predicted by an artificial neural network (ANN) approach trained by the back-error propagation (BEP) technique. The ANN results show that the predicted data are in good agreement with the actual data. To maximize the first natural frequency and minimize the mass of the FG plate simultaneously, the weighted-sum optimization approach and the PSO algorithm are used. The proposed optimization process can provide the designers of FG plates with useful data.
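
A minimal sketch of the weighted-sum PSO loop described above is given below; the two design variables, bounds, weights, and the placeholder functions standing in for the trained ANN surrogate and the mass model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ann_frequency(x):   # stand-in for the trained ANN frequency surrogate
    return 100.0 - 20.0 * x[..., 0] + 30.0 * np.exp(-x[..., 1])

def plate_mass(x):      # stand-in for the plate mass model
    return 50.0 * x[..., 0] * (1.0 + 0.2 * x[..., 1])

def objective(x):       # weighted sum: maximize frequency, minimize mass
    return -0.5 * ann_frequency(x) + 0.5 * plate_mass(x)

lo, hi = np.array([0.05, 0.0]), np.array([0.2, 5.0])   # [thickness ratio, volume fraction index]
pos = rng.uniform(lo, hi, (30, 2))                      # 30-particle swarm
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), objective(pos)
gbest = pbest[pbest_f.argmin()]

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)                    # keep particles in bounds
    f = objective(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("optimal design:", gbest)
```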

Keywords: optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization

Procedia PDF Downloads 358
20884 Clustering of Extremes in Financial Returns: A Comparison between Developed and Emerging Markets

Authors: Sara Ali Alokley, Mansour Saleh Albarrak

Abstract:

This paper investigates the dependency, or clustering, of extremes in financial returns data by estimating the extremal index value θ∈[0,1]; the smaller the value of θ, the more clustering there is. We apply the method of Ferro and Segers (2003) to estimate the extremal index for a range of threshold values and compare the dependency structure of extremes in developed and emerging markets. We use the financial returns of the stock market indices in the developed markets of the US, UK, France, Germany and Japan and the emerging markets of Brazil, Russia, India, China and Saudi Arabia. We expect more clustering to occur in the emerging markets. This study will help in understanding the dependency structure of financial returns data.
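
A sketch of the Ferro and Segers (2003) intervals estimator, as commonly stated, is given below; the heavy-tailed synthetic series stands in for actual index returns, and the threshold choice is illustrative.

```python
import numpy as np

def extremal_index(x, threshold):
    """Ferro-Segers intervals estimator of theta from interexceedance times."""
    exceed = np.flatnonzero(x > threshold)
    t = np.diff(exceed)                     # interexceedance times
    n = t.size
    if t.max() <= 2:
        num, den = 2 * t.sum() ** 2, n * (t ** 2).sum()
    else:
        num = 2 * ((t - 1).sum()) ** 2
        den = n * ((t - 1) * (t - 2)).sum()
    return min(1.0, num / den)              # theta is capped at 1

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=5000)   # heavy-tailed stand-in for returns
print(extremal_index(returns, np.quantile(returns, 0.95)))
```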

Keywords: clustering, extremes, returns, dependency, extremal index

Procedia PDF Downloads 395
20883 The Antecedent Factor Affecting the Entrepreneurs’ Decision Making for Using Accounting Office Service in Chiang Mai Province

Authors: Nawaporn Thongnut

Abstract:

The objective was to study the accounting preparation process of Thai temples and to examine the performance and quality of the temples' accounting in accordance with the regulations. The population comprised the accountants and individuals involved in the accounting preparation of 17 temples in suburban Bangkok. The measurement instrument used in this study was a questionnaire. Descriptive statistics were used in the analysis, and the data were presented in percentage tables describing the demographic characteristics. The study found that temple wardens were responsible for the accounting and reporting of the temples, while abbots checked the accuracy of the accounts in the monasteries. In most monasteries there was no external auditing of the accounts. When receiving income, most monasteries had been keeping financial documents in an orderly manner.

Keywords: corporate social responsibility, creating shared value, management accountant’s roles, stock exchange of Thailand

Procedia PDF Downloads 221
20882 A Study on Spatial Morphological Cognitive Features of Lidukou Village Based on Space Syntax

Authors: Man Guo, Wenyong Tan

Abstract:

By combining space syntax with data obtained from field visits, this paper interprets the internal relationship between spatial morphology and spatial cognition in Lidukou Village. A comparison of the obtained data shows that the spatial integration degree of Lidukou Village is positively correlated with the spatial cognitive intention of local villagers. The parts of the village with a higher degree of spatial cognition are distributed along the axis mainly composed of Shuxiang Road. The accessibility of historical relics is weak, and there is no systematic relationship among them. To address the morphological problems of Lidukou Village, optimization strategies are proposed from multiple perspectives, such as optimizing spatial mechanisms and shaping spatial nodes.
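
One common formulation of the integration measure referenced above derives it from mean depth on the axial graph, with relative asymmetry RA = 2(MD - 1)/(k - 2) and integration taken as 1/RA. A toy sketch follows; the graph is a hypothetical stand-in for Lidukou Village's street network, not the paper's data.

```python
import networkx as nx

# Hypothetical axial graph: nodes are axial lines, edges are intersections.
G = nx.Graph([("Shuxiang Rd", "Lane A"), ("Shuxiang Rd", "Lane B"),
              ("Lane A", "Lane C"), ("Lane B", "Lane C"),
              ("Lane C", "Relic Alley")])

k = G.number_of_nodes()
for node in G:
    depths = nx.single_source_shortest_path_length(G, node)
    md = sum(depths.values()) / (k - 1)     # mean depth from this axial line
    ra = 2 * (md - 1) / (k - 2)             # relative asymmetry
    print(f"{node:12s} integration = {1 / ra:.2f}")
```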

Keywords: traditional villages, spatial syntax, spatial integration degree, morphological problem

Procedia PDF Downloads 38
20881 Oryzanol Recovery from Rice Bran Oil: Adsorption Equilibrium Models Through Kinetic Data Approaches

Authors: A.D. Susanti, W. B. Sediawan, S.K. Wirawan, Budhijanto, Ritmaleni

Abstract:

Oryzanol, a component of rice bran oil (RBO), naturally has high antioxidant activity. It is reported to have several health-promoting properties and is of high interest in pharmacy, cosmetics, and nutrition. Because of the low concentration of oryzanol in crude RBO (0.9-2.9%), it needs to be further processed for practical use, for example via adsorption. In this study, adsorption equilibrium models were investigated and adjusted through a kinetic data approach. A mathematical model of the kinetics of batch adsorption for the separation of oryzanol from RBO was set up and then applied to the equilibrium results. The adsorbent particles used in this case are relatively small, so the concentration within the adsorbent is assumed to be uniform. Hence, the adsorption rate is controlled by the rate of oryzanol mass transfer from the bulk RBO to the surface of the silica gel. In this approach, the rate of mass transfer is assumed to be proportional to the concentration's deviation from the equilibrium state. The equilibrium models applied were the Langmuir, distribution coefficient, and Freundlich models, with parameter values obtained from the equilibrium results. It turned out that the models can quantitatively describe the experimental kinetics data, and adjusting the values of the equilibrium isotherm parameters significantly improves the accuracy of the model. The value of the mass transfer coefficient per unit adsorbent mass (kca) is then obtained by curve fitting.
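
Fitting the named isotherms to equilibrium data is a standard least-squares exercise; the sketch below uses illustrative placeholder data points, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qm, k):          # q = qm*k*c / (1 + k*c)
    return qm * k * c / (1.0 + k * c)

def freundlich(c, kf, n):        # q = kf * c**(1/n)
    return kf * c ** (1.0 / n)

# Placeholder equilibrium data: oryzanol concentration in RBO (%w)
# versus amount adsorbed on silica gel (mg/g).
c = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 2.5])
q = np.array([0.8, 1.6, 2.4, 2.9, 3.2, 3.4])

for model in (langmuir, freundlich):
    p, _ = curve_fit(model, c, q, p0=(1.0, 1.0))
    sse = ((model(c, *p) - q) ** 2).sum()
    print(f"{model.__name__}: params = {p}, SSE = {sse:.3f}")
```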

Keywords: adsorption equilibrium, adsorption kinetics, oryzanol, rice bran oil

Procedia PDF Downloads 310
20880 Investigation of Various Variabilities of Social Anxiety Levels of Physical Education and Sports School Students

Authors: Turan Cetinkaya

Abstract:

The aim of this study is to determine the relation of social anxiety levels to various variables among students in physical education and sports departments. A total of 229 students studying in the departments of physical education and sports teaching, sports management, and coaching at Ahi Evran University, College of Physical Education and Sports, participated in the research. A personal information form and a 30-item social anxiety scale were used as data collection tools. Distribution, frequency, t-tests and ANOVA tests were used in the comparison of the data. As a result of the statistical analysis, social anxiety levels did not differ according to gender, income level, sport type, or national player status.

Keywords: social anxiety, undergraduates, sport, university

Procedia PDF Downloads 415
20879 Arabic Text Classification: Review Study

Authors: M. Hijazi, A. Zeki, A. Ismail

Abstract:

An enormous amount of valuable human knowledge is preserved in documents. The rapid growth in the number of machine-readable documents for public or private access requires the use of automatic text classification, which can be defined as assigning or structuring documents into a predefined set of classes. Arabic text classification methods have emerged as a natural result of the existence of a massive amount of varied textual information written in the Arabic language on the web. This paper presents a review of published research on Arabic text classification using the classical data representation, bag of words (BoW), as well as conceptual data representations based on semantic resources such as Arabic WordNet and Wikipedia.
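
A minimal BoW pipeline of the classical kind reviewed above can be expressed in a few lines; the tiny two-class Arabic corpus below is an illustrative placeholder.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative two-class Arabic corpus (weather vs. sport).
docs = ["الطقس حار اليوم", "مباراة كرة القدم غدا",
        "درجة الحرارة مرتفعة", "فاز الفريق بالبطولة"]
labels = ["weather", "sport", "weather", "sport"]

# Bag-of-words counts feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(docs, labels)
print(model.predict(["الحرارة اليوم"]))   # expected: ['weather']
```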

Keywords: Arabic text classification, Arabic WordNet, bag of words, conceptual representation, semantic relations

Procedia PDF Downloads 414
20878 Heliport Remote Safeguard System Based on Real-Time Stereovision 3D Reconstruction Algorithm

Authors: Ł. Morawiński, C. Jasiński, M. Jurkiewicz, S. Bou Habib, M. Bondyra

Abstract:

With the development of optics, electronics, and computers, vision systems are increasingly used in various areas of life, science, and industry, and they have a huge number of applications. They can be used in quality control, object detection, data reading (e.g., QR codes), etc. A large number of them are used for measurement purposes, and some make it possible to obtain a 3D reconstruction of the tested objects or measurement areas. 3D reconstruction algorithms are mostly based on creating depth maps from data that can be acquired by active or passive methods. Due to the specific application in airfield technology, only passive methods are applicable, because other existing systems working on the site could be blinded on most spectral levels. Furthermore, the reconstruction is required to work over long distances, ranging from hundreds of meters to tens of kilometers, with low loss of accuracy even in harsh conditions such as fog, rain, or snow. In response to those requirements, HRESS (Heliport REmote Safeguard System) was developed; its main part is a rotational head with a two-camera stereovision rig gathering images around the head in 360 degrees, along with stereovision 3D reconstruction and point cloud combination. The sub-pixel analysis introduced in the HRESS system makes it possible to obtain an increased distance measurement resolution and an accuracy of about 3% for distances over one kilometer. Ultimately, this leads to more accurate and reliable measurement data in the form of a point cloud. Moreover, the program algorithm introduces operations enabling the filtering of erroneously collected data in the point cloud. All activities on the programming, mechanical, and optical sides are aimed at obtaining the most accurate 3D reconstruction of the environment in the measurement area.
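
The core passive pipeline (rectified stereo pair, disparity map, depth via Z = fB/d) can be sketched as below; the synthetic image pair, focal length, and baseline are placeholders, and the real system adds sub-pixel refinement, 360-degree acquisition, and point cloud filtering.

```python
import cv2
import numpy as np

# Synthetic rectified pair: the left image is the right one shifted 8 px,
# i.e. a uniform true disparity of about 8 px.
rng = np.random.default_rng(0)
right = rng.integers(0, 255, (240, 320), dtype=np.uint8)
left = np.roll(right, 8, axis=1)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0   # fixed-point to px

f_px, baseline = 800.0, 0.10            # assumed focal length [px] and baseline [m]
valid = disp > 0
depth = f_px * baseline / disp[valid]   # Z = f*B/d
print(f"median depth ~ {np.median(depth):.2f} m")   # ~10 m for 8 px disparity
```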

Keywords: airfield monitoring, artificial intelligence, stereovision, 3D reconstruction

Procedia PDF Downloads 110
20877 An Experimental Investigation of the Surface Pressure on Flat Plates in Turbulent Boundary Layers

Authors: Azadeh Jafari, Farzin Ghanadi, Matthew J. Emes, Maziar Arjomandi, Benjamin S. Cazzolato

Abstract:

The turbulence within the atmospheric boundary layer induces highly unsteady aerodynamic loads on structures. These loads, if not accounted for in the design process, can lead to structural failure and are therefore important for the design of structures. For an accurate prediction of wind loads, understanding the correlation between atmospheric turbulence and the aerodynamic loads is necessary. The aim of this study is to investigate the effect of turbulence within the atmospheric boundary layer on the surface pressure on a flat plate over a wide range of turbulence intensities and integral length scales. The flat plate is chosen as a fundamental geometry representing structures such as solar panels and billboards. Experiments were conducted at the University of Adelaide large-scale wind tunnel. Two wind tunnel boundary layers with different intensities and length scales of turbulence were generated using two sets of spires with different dimensions and a fetch of roughness elements. Average longitudinal turbulence intensities of 13% and 26% were achieved in the two boundary layers, and the longitudinal integral length scale was between 0.4 m and 1.22 m. The pressure distributions on a square flat plate at elevation angles between 30° and 90° were measured within the two boundary layers. It was found that the peak pressure coefficient on the flat plate increased with increasing turbulence intensity and integral length scale. For example, the peak pressure coefficient on a flat plate elevated at 90° increased from 1.2 to 3 as the turbulence intensity increased from 13% to 26%. Furthermore, both the mean and the peak pressure distributions on the flat plates varied with turbulence intensity and length scale. The results of this study can be used to provide a more accurate estimation of the unsteady wind loads on structures such as buildings and solar panels.
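
The pressure-coefficient reduction behind these results is Cp = (p - p_inf) / (0.5 * rho * U^2); a minimal sketch with a synthetic pressure record standing in for tap measurements follows.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, U = 1.225, 10.0                     # air density [kg/m^3], mean speed [m/s]
# Synthetic differential pressure record (p - p_inf) at one tap [Pa].
p = 40.0 + 25.0 * rng.standard_normal(10000)

q = 0.5 * rho * U ** 2                   # dynamic pressure
cp = p / q                               # pressure coefficient time series
print(f"mean Cp = {cp.mean():.2f}, peak Cp = {cp.max():.2f}")
```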

Keywords: atmospheric boundary layer, flat plate, pressure coefficient, turbulence

Procedia PDF Downloads 129
20876 An Examination of Changes in Natural Vegetation due to Charcoal Production Using Multi-Temporal Landsat Data

Authors: T. Garba, Y. Y. Babanyara, M. Isah, A. K. Muktari, R. Y. Abdullahi

Abstract:

The increased demand for fuel wood for heating, cooking, and sometimes baking has continued to exert an appreciable impact on natural vegetation. This study focuses on the use of multi-temporal data from Landsat TM of 1986, Landsat ETM of 1999, and Landsat ETM of 2006 to investigate the changes in natural vegetation resulting from charcoal production activities. The three images were classified into bare soil, built-up areas, cultivated land, natural vegetation, rock outcrop, and water bodies. The classified Landsat TM image of 1986 shows the natural vegetation of the study area to be 308,941.48 hectares, equivalent to 50% of the area; this was reduced to 278,061.21 hectares (42.92%) in 1999 and further to 199,647.81 hectares (30.83%) in 2006. Meanwhile, cultivated land continued to increase, from 259,346.80 hectares (42%) in 1986 to 312,966.27 hectares (48.3%) in 1999 and then to 341,719.92 hectares (52.78%) in 2006. These figures show that within the span of 20 years (1986 to 2006) the natural vegetation was depleted by 109,293.67 hectares. This implies that, if the menace is not controlled, the natural vegetation might be lost within another twenty years, because forest cleared for charcoal production is normally converted to farmland. The study therefore concluded that there is a need for alternative sources of domestic energy, such as biomass, that are easily accessible and affordable to people. In addition, the study recommended strong policy enforcement for the protection of forest reserves.
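
The post-classification area statistics quoted above follow from per-class pixel counts scaled by the pixel area; a sketch with random rasters standing in for the classified images, assuming the 30 m Landsat pixel size:

```python
import numpy as np

CLASSES = ["bare soil", "built-up", "cultivated", "vegetation", "rock", "water"]
PIX_HA = 30 * 30 / 10_000                    # one 30 m pixel = 0.09 ha

rng = np.random.default_rng(3)
map1986 = rng.integers(0, 6, (1000, 1000))   # placeholder classified rasters
map2006 = rng.integers(0, 6, (1000, 1000))

for k, name in enumerate(CLASSES):
    a86 = (map1986 == k).sum() * PIX_HA      # class area in 1986 [ha]
    a06 = (map2006 == k).sum() * PIX_HA      # class area in 2006 [ha]
    print(f"{name:11s} 1986: {a86:9.1f} ha  2006: {a06:9.1f} ha  change: {a06 - a86:+9.1f} ha")
```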

Keywords: charcoal, classification, data, images, land use, natural vegetation

Procedia PDF Downloads 357
20875 Assessing the Prevalence of Accidental Iatrogenic Paracetamol Overdose in Adult Hospital Patients Weighing <50kg: A Quality Improvement Project

Authors: Elisavet Arsenaki

Abstract:

Paracetamol overdose is associated with significant and possibly permanent consequences, including hepatotoxicity, acute and chronic liver failure, and death. This quality improvement project explores the prevalence of accidental iatrogenic paracetamol overdose in hospital patients with a low body weight, defined as <50kg, and assesses the impact of educational posters in trying to reduce it. The study included all adult inpatients on the admissions ward, a short-stay ward for patients requiring 12-72 hours of treatment, and consisted of three cycles. Each cycle consisted of 3 days of data collection in a given month (data collection for cycle 1 occurred in January 2022, February 2022 for cycle 2, and March 2022 for cycle 3). All patients given paracetamol had their prescribed dose checked against their charted weight to identify the percentage of adult inpatients <50kg who were prescribed 1g of paracetamol instead of 500mg. In the first cycle of the audit, data were collected from 83 patients who were prescribed paracetamol on the admissions ward. Subsequently, four A4 educational posters were displayed across the ward, on two separate occasions and with a one-month interval between each poster display. The aim was to remind prescribing doctors of their responsibility to check patient body weight prior to prescribing paracetamol. Data were collected again one week after each round of poster display, from 72 and 70 patients respectively. Over the 3 cycles, with a cumulative 225 patients, 15 weighed <50kg (6.67%) and of those, 5 were incorrectly prescribed 1g of paracetamol, yielding a 33.3% prevalence of accidental iatrogenic paracetamol overdose in low-weight adult inpatients. In cycle 1 of the project, 3 out of 6 adult patients weighing <50kg were overdosed on paracetamol, meaning that 50% of low-weight patients were prescribed the wrong dose of paracetamol for their weight. In the second data collection cycle, 1 out of 5 <50kg patients was overdosed (20%), and in the third cycle, 1 out of 4 (25%). The use of educational posters resulted in a lower prevalence of accidental iatrogenic paracetamol overdose in low-body-weight adult inpatients; however, the differences observed were not statistically significant (p values 0.993 and 0.995, respectively). Educational posters did not induce a significant decrease in the prevalence of accidental iatrogenic paracetamol overdose, and more robust strategies need to be employed to further decrease paracetamol overdose in patients weighing <50kg.
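
The cycle-to-cycle comparison involves very small counts, for which an exact test is the natural choice; the sketch below applies Fisher's exact test to the reported counts, though the authors' exact statistical procedure is not stated in the abstract.

```python
from scipy.stats import fisher_exact

cycle1 = (3, 3)   # cycle 1: overdosed, correctly dosed (<50 kg patients)
cycle2 = (1, 4)   # cycle 2, after the first poster round
odds, p = fisher_exact([cycle1, cycle2])
print(f"cycle 1 vs cycle 2: odds ratio = {odds:.2f}, p = {p:.3f}")
```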

Keywords: iatrogenic, overdose, paracetamol, patient, safety

Procedia PDF Downloads 106
20874 Deorbiting Performance of Electrodynamic Tethers to Mitigate Space Debris

Authors: Giulia Sarego, Lorenzo Olivieri, Andrea Valmorbida, Carlo Bettanini, Giacomo Colombatti, Marco Pertile, Enrico C. Lorenzini

Abstract:

International guidelines recommend removing any artificial body in Low Earth Orbit (LEO) within 25 years of mission completion. Among disposal strategies, electrodynamic tethers appear to be a promising option for LEO, thanks to their limited storage mass and minimal interface requirements to the host spacecraft. In particular, recent technological advances make it feasible to deorbit large objects with tether lengths of a few kilometers or less. To further investigate this innovative passive system, the European Union is currently funding the project E.T.PACK – Electrodynamic Tether Technology for Passive Consumable-less Deorbit Kit in the framework of the H2020 Future Emerging Technologies (FET) Open program. The project focuses on the design of an end-of-life disposal kit for LEO satellites. This kit aims to deploy a taped tether that can be activated at the spacecraft's end of life to perform an autonomous deorbit within the international guidelines. In this paper, the orbital performance of the E.T.PACK deorbiting kit is compared to other disposal methods. Besides, the orbital decay prediction is parametrized as a function of spacecraft mass and tether system performance. Different values of the length, width, and thickness of the tether are evaluated for various scenarios (i.e., different initial orbital parameters), and the results are compared to other end-of-life disposal methods with similar allocated resources. The performance of a more innovative configuration, in which the tape is coated with a low-work-function thermionic (LWT) material so that no active cathode component is required, is also briefly discussed. The results show that the electrodynamic tether option can be a competitive and performant solution for satellite disposal compared to other deorbit technologies.
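
As a toy illustration of why electrodynamic drag deorbits quickly, one can integrate the circular-orbit Gauss equation da/dt = 2F/(mn) with a constant Lorentz drag force F = ILB; all numbers below are rough illustrative assumptions, not E.T.PACK design values, and a real prediction would vary current, field, and attitude along the orbit.

```python
import numpy as np

MU = 3.986e14                     # Earth's gravitational parameter [m^3/s^2]
R_E = 6371e3                      # Earth radius [m]
m = 200.0                         # spacecraft mass [kg] (assumed)
F = 1.0 * 2000.0 * 3e-5           # F = I*L*B: 1 A, 2 km tether, 30 uT field [N]

a, t, dt = R_E + 800e3, 0.0, 100.0
while a > R_E + 300e3:            # integrate decay from 800 km to 300 km
    n = np.sqrt(MU / a ** 3)      # mean motion
    a -= 2 * F / (m * n) * dt     # Gauss equation for a tangential drag force
    t += dt
print(f"toy decay time ~ {t / 86400:.1f} days")
```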

Keywords: deorbiting performance, H2020, spacecraft disposal, space electrodynamic tethers

Procedia PDF Downloads 161
20873 The Impacts of Soft and Hard Enterprise Resource Planning to the Corporate Business Performance through the Enterprise Resource Planning Integrated System

Authors: Sautma Ronni Basana, Zeplin Jiwa Husada Tarigan, Widjojo Suprapto

Abstract:

Companies have implemented Enterprise Resource Planning (ERP) systems to increase data integration so that they can improve their business performance. Although some companies have managed to implement ERP well, they still need to improve gradually so that the ERP functions can be optimized. To obtain faster and more accurate data, the key users and the IT department have to customize the process to suit the needs of the company. In reality, sustaining the ERP technology system requires both soft and hard ERP, which enable the improvement of the company's business performance; soft and hard ERP are needed to build a robust system that ensures smooth integration among departments. This research asks three questions. First, does soft ERP affect hard ERP and system integration? Second, does hard ERP affect system integration? Finally, is the business performance of manufacturing companies affected by soft ERP, hard ERP, and system integration? Questionnaires were distributed to 100 manufacturing companies in East Java and collected from the 90 companies that have implemented ERP, a response rate of 90%. From the data analysis using the PLS program, it is found that soft ERP has positive impacts on hard ERP and system integration, and that hard ERP in turn has positive impacts on system integration. Finally, the business process performance of the manufacturing companies is affected by system integration, soft ERP, and hard ERP simultaneously.

Keywords: soft ERP, hard ERP, system integration, business performance

Procedia PDF Downloads 393
20872 Improving the Deficiencies in Entrepreneurship Training for Small Businesses in Emerging Markets

Authors: Eno Jah Tabogo

Abstract:

The aim of this research is to identify and examine current deficiencies in entrepreneurial training for improving the performance of small businesses in sub-Saharan African economies. The research achieves this by examining the course content, training methods, and profiles of trainers and trainees of small business service providers in sub-Saharan Africa (SSA) to identify training deficiencies. Data for the analysis were collected from a sample of four entrepreneurial training providers in SSA, which together served an average of 1,500 trainees. A questionnaire was used to collect data face-to-face and by telephone. Face validity was determined by distributing the questionnaire among a group of colleagues, followed by a group discussion to strengthen the validity of the questionnaire. Interviews were also held with the managers of the training programs. Content analysis and descriptive statistics were used to analyse the collected data. The results indicated that only 25% of the training content was entrepreneurial. In terms of services provided, business, entrepreneurial, technical, and after-care services were all identified. It was also discovered that the owners of the training firms had no formal entrepreneurship background. The paper contributes by advocating a comprehensive entrepreneurship-training program for successful small business enterprises. Recommendations that could help sustain emerging small business enterprises and directions for further research are presented.

Keywords: entrepreneurship, emerging markets, small business, training

Procedia PDF Downloads 133
20871 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations

Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward

Abstract:

A reliable, real-time, and non-invasive system that can identify patients at risk of hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capabilities of a real-time analytic from a single lead of an electrocardiograph to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, as well as to predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review, and blinded to the analytic's output, clinicians categorized each patient into H-RRT and NH-RRT cases, and the analytic output was compared with this categorization. The prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG) based analytic has the potential to provide clinical decision and monitoring support, helping caregivers identify at-risk patients within a clinically relevant timeframe and allowing for increased vigilance and early interventional support to reduce the chances of continued patient deterioration.
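
Analytics of this kind build on single-lead heart rate variability features; a sketch of two standard ones (SDNN and RMSSD) on a synthetic RR-interval series follows, noting that the study's proprietary analytic is not described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
rr = 800.0 + 50.0 * rng.standard_normal(300)    # synthetic RR intervals [ms]

sdnn = rr.std(ddof=1)                           # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))      # beat-to-beat variability
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```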

Keywords: critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team

Procedia PDF Downloads 139
20870 Safety Culture, Mindfulness and Safe Behaviours of Students Residing in the Halls of Residence of Obafemi Awolowo University, Ile Ife, Nigeria

Authors: Olajumoke Adetoun Ojeleye

Abstract:

The study assessed the safety culture, mindfulness, and safe behaviours of students residing in the halls of residence of Obafemi Awolowo University (OAU), Ile Ife, Nigeria. The objectives of the study were to assess the level of safety mindfulness of students residing in the halls of residence of OAU, examine their safety culture, and establish whether these students are involved in unsafe practices. The study employed a cross-sectional research design, and the instrument used for data collection was a self-structured, self-administered questionnaire. The questionnaire was tested for validity and reliability, with a reliability coefficient of 0.71, before being used for data collection. Respondents were selected by a multi-stage sampling technique, and the sample size was 530. Data collection took 2 weeks, and the data were analysed using descriptive statistical techniques. The results showed that about half of the respondents (49.8%) were between 20 and 24 years of age, and that there were more males (56.2%) than females (43.8%). Although the data demonstrated that the majority (91.7%) of the respondents were highly safety-minded and the safety culture of an equally high proportion (83.4%) was judged fair, much improvement is needed in the areas of alerting or informing management of impending dangers and studying the hall handbook to internalize its contents. The study further showed that only 43.6% of respondents had good safety practices and behaviours, while the majority (56.4%) had fair safety practices and behaviours. One incidental finding of the study is that quite a few of the students squat with their counterparts. The study recommended the establishment of a clearly written complaint procedure that is accessible and available to all hall residents, the building of more hostels with adequate facilities to address the issue of overcrowding, and the putting in place of systems to encourage residents to report incidents and accidents.

Keywords: safe behaviours, safety culture, safety mindfulness, student

Procedia PDF Downloads 248
20869 Numerical Investigation of Turbulent Inflow Strategy in Wind Energy Applications

Authors: Arijit Saha, Hassan Kassem, Leo Hoening

Abstract:

Ongoing climate change demands the increasing use of renewable energies. Wind energy plays an important role in this context, since it can be applied almost everywhere in the world. To reduce the costs of wind turbines and make them more competitive, simulations are very important, since experiments are often too costly, if possible at all. A wind turbine in a vast open area experiences the turbulence generated by the atmosphere, so it was of utmost interest for this research to generate that turbulence in the computational simulation domain through various inlet turbulence generation methods, such as the precursor cyclic and Kaimal Spectrum Exponential Coherence (KSEC) methods. To be able to validate computational fluid dynamics simulations of wind turbines against experimental data, it is crucial to set up the conditions in the simulation as close to reality as possible. The present work therefore aims at investigating the turbulent inflow strategy and boundary conditions of KSEC and providing a comparative analysis alongside the precursor cyclic method for Large Eddy Simulation in the context of wind energy applications. For the generation of the turbulent box with the KSEC method, constrained data were first collected from an auxiliary channel flow, and further processing was performed with the open-source tool PyconTurb, whereas for the precursor cyclic method the data from the auxiliary channel alone were sufficient. The functionality of these methods was studied through various statistical properties, such as variance and turbulence intensity, with respect to different bulk Reynolds numbers, and a conclusion was drawn on the feasibility of the KSEC method. Furthermore, it was found necessary to verify the obtained data against a DNS case setup for applicability to real-field CFD simulations.
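
The statistical verification mentioned above reduces to comparing moments of the synthesized inflow against targets; a sketch with a random field standing in for PyconTurb or precursor output, and illustrative target values:

```python
import numpy as np

rng = np.random.default_rng(5)
U_mean, ti_target = 8.0, 0.16                   # illustrative targets
# u(t, y, z): random placeholder for a synthesized turbulence box.
u = U_mean + U_mean * ti_target * rng.standard_normal((1024, 32, 32))

ti = u.std(axis=0) / u.mean(axis=0)             # turbulence intensity per grid point
print(f"target TI = {ti_target:.3f}, achieved TI = {ti.mean():.3f}, "
      f"mean variance = {u.var(axis=0).mean():.3f}")
```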

Keywords: inlet turbulence generation, CFD, precursor cyclic, KSEC, large eddy simulation, PyconTurb

Procedia PDF Downloads 83
20868 Theoretical Studies on the Structural Properties of 2,3-Bis(Furan-2-Yl)Pyrazino[2,3-F][1,10]Phenanthroline Derivatives

Authors: Zahra Sadeghian

Abstract:

This paper reports the optimized geometrical parameters of the stationary point of 2,3-Bis(furan-2-yl)pyrazino[2,3-f][1,10]phenanthroline. The calculations are performed using the density functional theory (DFT) method at the B3LYP/LanL2DZ level. We determined bond lengths and bond angles for the compound and also calculated the degree of bond hybridization according to natural bond orbital (NBO) theory. The energies of the frontier orbitals (HOMO and LUMO) are computed. In addition, the calculated data are carefully compared with the experimental results; this comparison shows that our theoretical data are in reasonable agreement with the experimental values.

Keywords: 2,3-Bis(furan-2-yl)pyrazino[2,3-f][1,10]phenanthroline, density functional theory, theoretical calculations, LanL2DZ level, B3LYP level

Procedia PDF Downloads 358
20867 3D Writing on Photosensitive Glass-Ceramics

Authors: C. Busuioc, S. Jinga, E. Pavel

Abstract:

Optical lithography is a key technique in the development of sub-5 nm patterns for the semiconductor industry. We have already reported that the best results for the direct laser writing process on active media, such as glass-ceramics, are achieved only when the energy of the laser radiation is absorbed in discrete quantities. Further, we need to clarify the role of the active center concentration in the natural generation of silver nanocrystals, as well as in the formation of fluorescent rare-earth nanostructures. As a consequence, samples with different compositions were prepared. SEM, AFM, TEM and STEM investigations were employed to demonstrate that lines a few nm in width can be written on fluorescent photosensitive glass-ceramics, these being efficient absorbers. Moreover, we believe that the experimental data will lead to the best choice in terms of active center amount, laser power, and glass-ceramic matrix.

Keywords: glass-ceramics, 3D laser writing, optical disks, data storage

Procedia PDF Downloads 289
20866 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited considering the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industry branches. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low-series manufacturing environment. Consequently, a flexible production system is required, in which the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. The first group collects product information such as the product name and the number of items. The second group contains material data such as material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (or narrowest) tolerance. The fourth group contains setup data such as machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousand real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was also improved by 50%. Furthermore, the mentioned parameter groups were also investigated with respect to the manufacturing order. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week more than 100 real industrial setup events occur, and the related data are collected.
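
A minimal sketch of such an estimator is shown below: the categorical setup features are one-hot encoded and fed to a small neural network regressor. The feature names, toy records, and network size are illustrative assumptions, not the industrial model.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy setup records: material, colour, machine, work shift (all categorical).
X = [["PVC",  "white", "mill-A", "day"],
     ["PTFE", "clear", "mill-B", "night"],
     ["PVC",  "black", "mill-A", "day"],
     ["POM",  "white", "mill-B", "day"]]
y = [35.0, 60.0, 30.0, 80.0]            # setup period lengths [min]

# One-hot encoding feeds a small ANN regressor, analogous to the BEP-trained
# network described above.
model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
).fit(X, y)
print(model.predict(X[:1]))             # estimated setup length for a new order
```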

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 236