Search results for: firm performance effectiveness
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16309

5809 The Effect of Nylon and Kevlar Stitching on the Mode I Fracture of Carbon/Epoxy Composites

Authors: Nisrin R. Abdelal, Steven L. Donaldson

Abstract:

Composite materials are widely used in the aviation industry due to their superior properties; however, they are susceptible to delamination. Through-thickness stitching is one technique to alleviate delamination. Kevlar is one of the most common stitching materials, but it is expensive and presents fabrication challenges; this study therefore compares the performance of Kevlar with that of an inexpensive and easy-to-use nylon fiber for stitching against delamination. Three laminates of unidirectional carbon fiber-epoxy composite were manufactured using the vacuum-assisted resin transfer molding process: one panel stitched with Kevlar, one with nylon, and one unstitched. Mode I interlaminar fracture tests were carried out on specimens from the three laminates, and the results were compared. Fractographic analysis using optical and scanning electron microscopy was conducted to reveal how stitching with Kevlar and nylon affects the internal microstructure of the composite and, in turn, the interlaminar fracture toughness values.
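
As a hedged illustration of the quantity compared here, the sketch below computes the Mode I interlaminar fracture toughness G_IC from double cantilever beam (DCB) data via modified beam theory (the standard ASTM D5528 reduction); all numbers are made-up placeholders, not values from the study:

```python
# Hypothetical illustration: Mode I interlaminar fracture toughness (G_IC)
# from double cantilever beam (DCB) data via modified beam theory.
# All numbers below are placeholders, not data from the study.

def g1c_mbt(load_n, displacement_m, width_m, crack_length_m, delta_m=0.0):
    """G_IC = 3*P*delta / (2*b*(a + |Delta|)), modified beam theory."""
    return (3.0 * load_n * displacement_m) / (
        2.0 * width_m * (crack_length_m + abs(delta_m))
    )

# Example: compare a stitched and an unstitched specimen (placeholder data).
for label, P, d, a in [("unstitched", 60.0, 0.004, 0.050),
                       ("stitched", 85.0, 0.005, 0.050)]:
    print(label, f"G_IC = {g1c_mbt(P, d, 0.025, a):.0f} J/m^2")
```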

Keywords: carbon, delamination, Kevlar, mode I, nylon, stitching

Procedia PDF Downloads 285
5808 Estimating Lost Digital Video Frames Using Unidirectional and Bidirectional Estimation Based on Autoregressive Time Model

Authors: Navid Daryasafar, Nima Farshidfar

Abstract:

In this article, we attempt to conceal errors in video, with an emphasis on the temporal use of autoregressive (AR) models. We assume that all information in one or more video frames is lost; the lost frames are then estimated using the temporal information of corresponding pixels in successive frames. After presenting autoregressive models and how they are applied to estimate lost frames, two general methods of using these models are presented. The first, the standard autoregressive method, estimates the lost frame in unidirectional form: information from previous frames is used to estimate the lost frame. In the second method, information from both the previous and the following frames is used, so it is known as bidirectional estimation. Finally, through a series of tests, the performance of each method is assessed in different modes, and the results are compared.
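
A minimal sketch of the pixel-wise AR estimation idea follows; it is an assumed implementation, not the authors' code. The unidirectional estimate predicts the lost frame from past frames, and the bidirectional variant would average this with the same procedure run backward from the following frames:

```python
# Minimal sketch (not the authors' code): estimating a lost frame from the
# pixel-wise time series of neighbouring frames with an AR(p) model.
import numpy as np

def fit_ar_coeffs(series, p):
    """Least-squares AR(p) coefficients for one pixel's intensity history."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_lost_frame(frames, p=3):
    """Unidirectional estimate: predict the next frame from the previous ones.
    frames: list of 2-D grayscale arrays preceding the lost frame."""
    stack = np.stack(frames).astype(float)          # shape (T, H, W)
    T, H, W = stack.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            c = fit_ar_coeffs(stack[:, i, j], p)
            out[i, j] = stack[-p:, i, j] @ c
    return out

# Bidirectional estimate: average forward and backward predictions, e.g.
# lost ~ 0.5 * predict_lost_frame(past) + 0.5 * predict_lost_frame(future[::-1])
```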

Keywords: error steganography, unidirectional estimation, bidirectional estimation, AR linear estimation

Procedia PDF Downloads 526
5807 Co-Development of an Assisted Manual Harvesting Tool for Peach Palm That Avoids the Harvest in Heights

Authors: Mauricio Quintero Angel, Alexander Pereira, Selene Alarcón

Abstract:

Harvesting is one of the most important elements of agricultural production, and it is associated with occupational health risks such as working at heights, transporting heavy materials, and applying excessive muscle strain that leads to musculoskeletal disorders. There is therefore an urgent need to develop and validate interventions that reduce harvesters' exposure and risk. This article describes the co-development, under an ergonomic analysis framework, of an assisted manual harvesting tool for peach palm designed to reduce the risk of death and accidents by avoiding harvesting at heights. The peach palm is a palm tree cultivated in Colombia, Peru, Brazil, and Costa Rica, among other countries; it reaches heights of over 20 m, with stipes covered in spines, and its fruits are drupes of variable size. To harvest peach palm in Colombia, farmers use the 'Marota' or 'Climber', a closed X-shaped wooden tool with two supports fitted to the stipe, which are raised alternately until the harvester is high enough to grab the bunch, which is then brought down using a rope. This is a high-risk activity, since it is carried out at height without any protection or safety measures. The Marota is alternated with a rod of variable height, between 5 and 12 meters, with a harness system at one end to hold the bunch, which is lowered with the whole system; the rod is used from the ground or from the Marota at height. As an alternative to these traditional tools, the Bajachonta was co-developed with farmers: a tool that modifies the traditional bamboo hook system so it can be held by a rope passing through a pulley. Once the bunch is hooked, the hook system is detached and remains attached to the peduncle of the palm; then, by pulling the rope toward the ground, the bunch comes loose and is lowered to the ground with the rope and pulley system, reducing the risk and effort of the operation. The Bajachonta was evaluated in three productive zones of Colombia with innovative farmers, where adoption is highly probable with some modifications to improve its efficiency and effectiveness, given that farmers perceive its advantage in reducing deaths and accidents by eliminating harvesting at heights.

Keywords: assisted harvesting, ergonomics, harvesting in high altitudes, participative design, peach palm

Procedia PDF Downloads 401
5806 A Weighted Approach to Unconstrained Iris Recognition

Authors: Yao-Hong Tsai

Abstract:

This paper presents a weighted approach to unconstrained iris recognition. Commercial systems are usually characterized by strong acquisition constraints that rely on the subject's cooperation, which is not always achievable in real daily-life scenarios. Researchers have therefore focused on reducing these constraints while maintaining system performance through new techniques. To cope with large environmental variation, the proposed iris recognition system incorporates two main improvements. First, to handle extremely uneven lighting conditions, statistics-based illumination normalization is applied to the eye region to increase the accuracy of iris features; detection of the iris image is based on the Adaboost algorithm. Second, a weighting scheme is designed using Gaussian functions of the distance to the center of the iris, and a local binary pattern (LBP) histogram is then applied to texture classification with these weights. Experiments showed that the proposed system provides users a more flexible and feasible way to interact with verification systems through iris recognition.
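
One plausible reading of the weighting scheme is sketched below: LBP codes are accumulated into a histogram with each pixel's contribution weighted by a Gaussian of its distance to the iris centre. This is an assumed implementation for illustration, not the paper's code:

```python
# Illustrative sketch (assumptions, not the paper's implementation): a Gaussian
# weight centred on the iris down-weights pixels far from the iris centre when
# accumulating a local binary pattern (LBP) histogram.
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for each interior pixel."""
    c = gray[1:-1, 1:-1]
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.int64)
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1+dy:gray.shape[0]-1+dy, 1+dx:gray.shape[1]-1+dx]
        code |= (nb >= c).astype(np.int64) << bit
    return code

def weighted_lbp_histogram(gray, center, sigma):
    """Histogram of LBP codes, each pixel weighted by a Gaussian of its
    distance to the iris centre (cy, cx)."""
    codes = lbp_image(gray)
    ys, xs = np.mgrid[1:gray.shape[0]-1, 1:gray.shape[1]-1]
    d2 = (ys - center[0])**2 + (xs - center[1])**2
    w = np.exp(-d2 / (2.0 * sigma**2))
    return np.bincount(codes.ravel(), weights=w.ravel(), minlength=256)
```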

Keywords: authentication, iris recognition, adaboost, local binary pattern

Procedia PDF Downloads 218
5805 Printed Thai Character Recognition Using Particle Swarm Optimization Algorithm

Authors: Phawin Sangsuvan, Chutimet Srinilta

Abstract:

This paper presents an application of the Particle Swarm Optimization (PSO) method to Thai optical character recognition (OCR). OCR consists of pre-processing, character recognition, and post-processing stages; before entering the recognition stage, each character must be prepared by pre-processing. PSO is an optimization method belonging to the swarm intelligence family, based on the imitation of the social behavior patterns of animals: the route of each particle is determined by its own data and that of neighboring particles, and this interaction with neighbors is what enables the swarm to find the best solution. PSO has therefore attracted many researchers working on difficult problems, including character recognition. This research used a projection histogram to extract printed-digit features and defined a simple fitness function for PSO. The results reveal that PSO achieves 67.73% accuracy on the testing dataset; future work could explore improving the fitness function to enhance PSO performance.
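
The generic PSO loop named above looks roughly as follows; this is a sketch of the technique only, with a toy sphere function standing in for the paper's projection-histogram fitness, and all hyperparameters are assumptions:

```python
# Generic particle swarm optimisation loop (a sketch of the technique named in
# the abstract; the paper's projection-histogram fitness is not reproduced here).
import numpy as np

def pso(fitness, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))             # velocities
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy usage: minimise a sphere function in place of the OCR fitness.
best, best_f = pso(lambda p: float((p**2).sum()), dim=5)
```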

Keywords: character recognition, histogram projection, particle swarm optimization, pattern recognition techniques

Procedia PDF Downloads 467
5804 Automatic Facial Skin Segmentation Using Possibilistic C-Means Algorithm for Evaluation of Facial Surgeries

Authors: Elham Alaee, Mousa Shamsi, Hossein Ahmadi, Soroosh Nazem, Mohammad Hossein Sedaaghi

Abstract:

The human face plays a fundamental role in the appearance of individuals, so the importance of facial surgeries is undeniable. There is thus a need for appropriate and accurate facial skin segmentation in order to extract different features. Since the Fuzzy C-Means (FCM) clustering algorithm does not work well for noisy images and outliers, in this paper we exploit the Possibilistic C-Means (PCM) algorithm to segment facial skin. For this purpose, facial images are first converted from RGB to YCbCr color space. To evaluate the performance of the proposed algorithm, the database of Sahand University of Technology, Tabriz, Iran was used. For a better understanding of the proposed algorithm, the FCM and Expectation-Maximization (EM) algorithms are also used for facial skin segmentation. The proposed method shows better results than the other segmentation methods, with a misclassification error of 0.032 and a region area error of 0.045.
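
For context, the core PCM update (the classical Krishnapuram-Keller rules) on per-pixel chrominance features could look like the rough sketch below; this is not the authors' exact code, and the scale parameters eta are assumed inputs:

```python
# Rough sketch of possibilistic c-means (PCM) iterations on chrominance
# features (classical Krishnapuram-Keller update rules); not the authors' code.
import numpy as np

def pcm(X, centers, eta, m=2.0, iters=20):
    """X: (n, d) feature vectors (e.g. Cb, Cr per pixel); centers: (c, d);
    eta: (c,) scale parameters controlling each cluster's zone of influence."""
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(axis=2)   # (n, c)
        t = 1.0 / (1.0 + (d2 / eta[None, :])**(1.0/(m - 1.0)))        # typicality
        tm = t**m
        centers = (tm.T @ X) / tm.sum(axis=0)[:, None]                # new centers
    return t, centers
# Skin pixels would then be those whose typicality for the skin cluster
# exceeds a threshold.
```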

Keywords: facial image, segmentation, PCM, FCM, skin error, facial surgery

Procedia PDF Downloads 581
5803 Analysis and Rule Extraction of Coronary Artery Disease Data Using Data Mining

Authors: Rezaei Hachesu Peyman, Oliyaee Azadeh, Salahzadeh Zahra, Alizadeh Somayyeh, Safaei Naser

Abstract:

Coronary artery disease (CAD) is a major cause of disability in adults and a main cause of death in developed countries. In this study, data mining techniques including decision trees, artificial neural networks (ANNs), and support vector machines (SVMs) were used to analyze CAD data. Data from 4948 patients who had suffered from heart disease were included in the analysis. CAD is the target variable, and 24 input (predictor) variables are used for the classification. The performance of these techniques is compared in terms of sensitivity, specificity, and accuracy. The most significant factor influencing CAD is chest pain, and elderly males (age > 53) have a high probability of being diagnosed with CAD. The SVM algorithm is the most useful for distinguishing CAD patients from non-CAD ones. Applying data mining techniques to coronary artery disease data is a good method for investigating the existing relationships between variables.
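
A minimal sketch of the evaluation protocol described (SVM classification scored by sensitivity, specificity, and accuracy) is shown below, using synthetic stand-in data rather than the 4948-patient dataset:

```python
# Hedged sketch: scoring a classifier on tabular CAD-like data with
# sensitivity, specificity, and accuracy (synthetic stand-in data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=1000, n_features=24, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = SVC().fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("accuracy", (tp + tn) / (tp + tn + fp + fn))
```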

Keywords: classification, coronary artery disease, data-mining, knowledge discovery, extract

Procedia PDF Downloads 654
5802 Improving School Design through Diverse Stakeholder Participation in the Programming Phase

Authors: Doris C. C. K. Kowaltowski, Marcella S. Deliberador

Abstract:

The architectural design process, in general, is becoming more complex as new technical, social, environmental, and economic requirements are imposed, and this scenario also holds for school buildings. The quality of a school building depends on known design criteria and professional knowledge, as well as feedback from building performance assessments. To attain high-performance school buildings, the design process should include a multidisciplinary team, through an integrated process, to ensure that the various specialists contribute to design solutions at an early stage. The participation of stakeholders is especially important in the programming phase, when the search for the most appropriate design solutions is underway. A multidisciplinary team should comprise specialists in education, design professionals, and consultants in various fields such as environmental comfort and psychology, sustainability, and safety and security, as well as administrators, public officials, and neighbourhood representatives. Users, or potential users (teachers, parents, students, school officials, and staff), should be involved. User expectations must be guided, however, toward a proper understanding of how design responds to needs, to avoid disappointment. In this context, appropriate tools should be introduced to organize such diverse participants and ensure a rich, focused response to needs and a productive outcome of programming sessions. In this paper, the different stakeholders in a school design process are discussed in relation to their specific contributions, and a tool in the form of a card game is described to structure the design debates and ensure a comprehensive decision-making process. The game is based on design patterns for school architecture as found in the literature and is adapted to a specific reality: state-run public schools in São Paulo, Brazil. In this state, school buildings are managed by a foundation called Fundação para o Desenvolvimento da Educação (FDE), which supervises new designs and is responsible for the maintenance of approximately 5,000 schools. The design process in this context was characterized, resulting in a recommendation to improve the programming phase. Card games can create a common environment to which all participants can relate and can therefore contribute to briefing debates on an equal footing. The cards of the game described here represent essential school design themes as found in the literature. The tool was tested with stakeholder groups and with architecture students. In both situations, the game proved to be an efficient tool for stimulating school design discussions and aiding the elaboration of a rich, focused, and thoughtful architectural program for a given demand. The game organizes the debates, and all participants were shown to contribute spontaneously, each in his or her own field of expertise, to the decision-making process. Although the game was based on a specific local school design process, it shows potential for other contexts, because its content is based on known facts, needs, and concepts of school design, which are global. A structured briefing phase with diverse stakeholder participation can enrich the design process and consequently improve the quality of school buildings.

Keywords: architectural program, design process, school building design, stakeholder

Procedia PDF Downloads 400
5801 The Relationship between Democracy, Freedom and Economic Development

Authors: Ugur Karakaya, Hasan Bulent Kantarcı

Abstract:

In this study, we first examine democratic ideas that directly or indirectly affect economic development, the interaction between authoritarian regimes and economic development, and the direction and channels of this interaction; we then try to determine how democracy affects economic development. It was concluded that the positive contributions of democracy to economic development are more decisive than its negative or restrictive effects. Compared to autocracy, democracy is more successful in managing social conflicts, ensuring political stability, and preventing social disasters such as famine, so it contributes more to economic development. Democracy also facilitates the delegation of authority, provides a stable investment environment, and accelerates the mobilization of resources in accordance with economic growth and development. Democracy leads to an increase in human capital accumulation and increases the growth rate by reducing income inequality. Democratic regimes can be said to be the most appropriate for increasing economic performance and supporting economic development, through their strong institutional structures and the assurance they provide for property rights.

Keywords: democracy, economic growth, economic freedom, autocratic regime

Procedia PDF Downloads 493
5800 Integrating Eye-Tracking Analysis to Enhance Web Usability Evaluation

Authors: Johanna Renny Octavia, Meliana Nurdin, Ignatius Kevin Kurniawan, Ricca Aksara

Abstract:

It is widely believed that usability evaluation is necessary to evaluate a website design for further improvement. Traditional methods of usability evaluation have given sufficient insight to reveal usability problems of websites. Eye-tracking analysis has been considered a useful method that adds a powerful dimension to web usability evaluation: it allows web designers and usability researchers to understand exactly what users do and do not see on a web page, thus disclosing more information on web usability and providing more complete insight into a website design. This paper elaborates on moving beyond traditional methods of web usability evaluation by integrating eye-tracking analysis to enhance the evaluation of website design, and presents three case studies to support this approach. In these case studies, eye movement metrics such as gaze plots and fixation-derived metrics, together with user performance data such as task completion times and number of errors, were recorded as objective measurements that can inform the necessity for website design improvements.
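
As one concrete example of a fixation-derived metric (an illustration only; the paper does not specify which detection algorithm was used), the classic dispersion-threshold (I-DT) method groups raw gaze samples into fixations:

```python
# Illustrative sketch (not from the paper): dispersion-threshold (I-DT)
# fixation detection on raw gaze samples, a common fixation-derived metric.
# Thresholds below are placeholder values in pixels and samples.
def fixations_idt(points, max_dispersion=25.0, min_samples=6):
    """points: list of (x, y) gaze samples; returns (start, end) index pairs."""
    def disp(a, b):
        xs = [p[0] for p in points[a:b]]
        ys = [p[1] for p in points[a:b]]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    out, i = [], 0
    while i + min_samples <= len(points):
        j = i + min_samples
        if disp(i, j) <= max_dispersion:
            # Grow the window while the samples stay tightly clustered.
            while j < len(points) and disp(i, j + 1) <= max_dispersion:
                j += 1
            out.append((i, j))
            i = j
        else:
            i += 1
    return out
```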

Keywords: design, eye-tracking, usability evaluation, website

Procedia PDF Downloads 296
5799 Sliding Mode MRAS Observer for Optimized Backstepping Control of Induction Motor

Authors: Chaouch Souad, Abdou Latifa, Larbi Chrifi Alaoui

Abstract:

This paper deals with sensorless backstepping control of an induction motor using the MRAS technique combined with a sliding mode approach. A high-order genetic algorithm structure is used to approximate a control law designed by the backstepping technique and to find the best globally optimized parameters. The basic backstepping control approach is unsuitable for high-performance applications because of the need for a speed sensor for increased accuracy and the absence of any error decay mechanism. In this paper, a nonlinear observer, obtained by combining a sliding mode structure and a model reference adaptive system (MRAS), is therefore designed for rotor flux and rotor speed estimation. To validate the proposed method, results are presented showing the improved drive characteristics and performance.

Keywords: backstepping control, induction motor, genetic algorithm, sliding mode observer

Procedia PDF Downloads 725
5798 Copula-Based Estimation of Direct and Indirect Effects in Path Analysis Model

Authors: Alam Ali, Ashok Kumar Pathak

Abstract:

Path analysis is a statistical technique used to evaluate the strength of the direct and indirect effects of variables. One or more structural regression equations are used to estimate a series of parameters in order to find a better fit to the data. Sometimes exogenous variables do not show significant direct and indirect effects when the assumptions of classical regression (ordinary least squares, OLS) are violated by the nature of the data. The main motivation of this article is to investigate the efficacy of the copula-based regression approach over the classical regression approach and to calculate the direct and indirect effects of variables when the data violate the OLS assumptions and the variables are linked through an elliptical copula. We perform this study using a well-organized numerical scheme. Finally, a real data application is presented to demonstrate the superior performance of the copula approach.
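
For readers unfamiliar with the quantities involved, the classical (OLS) baseline against which the copula approach is compared can be sketched on a simple X -> M -> Y path model, where the indirect effect is the product of the path coefficients; the data and coefficients below are synthetic:

```python
# Sketch of the classical (OLS) baseline: direct and indirect effects in a
# simple X -> M -> Y path model (synthetic data, for illustration only).
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.5, size=n)              # path a: X -> M
Y = 0.4 * M + 0.3 * X + rng.normal(scale=0.5, size=n)    # paths b (M->Y), c' (X->Y)

def ols(y, *regressors):
    A = np.column_stack([np.ones(n), *regressors])
    return np.linalg.lstsq(A, y, rcond=None)[0][1:]       # drop the intercept

a = ols(M, X)[0]             # effect of X on M
b, c_direct = ols(Y, M, X)   # effect of M on Y, direct effect of X on Y
print("direct effect:", c_direct, "indirect effect:", a * b)
```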

Keywords: path analysis, copula-based regression models, direct and indirect effects, k-fold cross validation technique

Procedia PDF Downloads 68
5797 Data-Centric Anomaly Detection with Diffusion Models

Authors: Sheldon Liu, Gordon Wang, Lei Liu, Xuefeng Liu

Abstract:

Anomaly detection, also referred to as one-class classification, plays a crucial role in identifying product images that deviate from the expected distribution. This study introduces Data-centric Anomaly Detection with Diffusion Models (DCADDM), presenting a systematic strategy for data collection and further diversifying the data via image generation with diffusion models. The algorithm addresses data collection challenges in real-world scenarios and points toward data augmentation through the integration of generative AI capabilities. The paper explores the generation of normal images using diffusion models. The experiments demonstrate that, with 30% of the original normal-image dataset size, unsupervised modeling with state-of-the-art approaches can achieve equivalent performance. With the addition of images generated via diffusion models (10% of the original dataset size), the proposed algorithm achieves better or equivalent anomaly localization performance.

Keywords: diffusion models, anomaly detection, data-centric, generative AI

Procedia PDF Downloads 78
5796 Main Chaos-Based Image Encryption Algorithm

Authors: Ibtissem Talbi

Abstract:

During the last decade, a variety of chaos-based cryptosystems have been investigated. Most of them are based on the structure of Fridrich, which follows the traditional confusion-diffusion architecture proposed by Shannon. Compared with traditional cryptosystems (DES, 3DES, AES, etc.), chaos-based cryptosystems are more flexible, more modular, and easier to implement, which makes them suitable for large-scale data encryption, such as images and videos. The heart of any chaos-based cryptosystem is the chaotic generator, so a large part of the system's efficiency (robustness, speed) depends on it. In this talk, we give an overview of the state of the art of chaos-based block ciphers and describe some of the schemes we have already proposed, focusing on the essential characteristics of the digital chaotic generator. The required performance of a chaos-based block cipher, in terms of security level and speed of calculation, depends on the considered application, and there is a compromise between security and speed. The security of these block ciphers will be analyzed.
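
To make the chaotic-generator idea concrete, here is a deliberately toy keystream built from a logistic map; it illustrates the principle only, since a real chaos-based cipher needs a cryptographically sound generator and the security analysis the abstract stresses:

```python
# Toy sketch of the confusion-diffusion idea with a logistic-map keystream.
# For illustration only; NOT a secure cipher.
def logistic_keystream(x0, r, n):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)           # logistic map iteration (chaotic for r~4)
        out.append(int(x * 256) % 256)  # quantise the state to a key byte
    return bytes(out)

def xor_cipher(data: bytes, x0=0.3567, r=3.99):
    ks = logistic_keystream(x0, r, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"image block"
enc = xor_cipher(msg)
assert xor_cipher(enc) == msg   # the XOR stream is its own inverse
```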

Keywords: chaos-based cryptosystems, chaotic generator, security analysis, structure of Fridrich

Procedia PDF Downloads 678
5795 Removal of Aromatic Fractions of Natural Organic Matter from Synthetic Water Using Aluminium Based Electrocoagulation

Authors: Tanwi Priya, Brijesh Kumar Mishra

Abstract:

The occurrence of aromatic fractions of natural organic matter (NOM) leads to the formation of carcinogenic disinfection by-products, such as trihalomethanes, in chlorinated water. In the present study, the efficiency of aluminium-based electrocoagulation in removing prominent aromatic groups, such as phenol, hydrophobic auxochromes, and carboxyl groups, from NOM-enriched synthetic water has been evaluated using various spectral indices. The effect of electrocoagulation on turbidity is also discussed, and the variation in coagulation performance as a function of pH has been studied. Our results suggest that electrocoagulation can be considered an appropriate remediation approach for reducing trihalomethane formation in water, as it effectively reduced the hydrophobic fractions in NOM-enriched, low-turbidity water. Charge neutralization and enmeshment of dispersed colloidal particles inside metallic hydroxides are the likely mechanisms at work in electrocoagulation.
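
One widely used spectral index of NOM aromaticity is SUVA254; the abstract does not list its indices explicitly, so the following calculation is an assumed illustration with placeholder values:

```python
# Illustrative calculation (assumed, not from the paper): SUVA254, a common
# spectral index of NOM aromaticity, before and after electrocoagulation.
def suva254(uv254_per_cm, doc_mg_per_l):
    """SUVA254 in L/(mg*m) = UV absorbance at 254 nm (1/cm) / DOC (mg/L) * 100."""
    return uv254_per_cm / doc_mg_per_l * 100.0

raw, treated = suva254(0.120, 4.0), suva254(0.030, 3.0)   # placeholder values
print(f"SUVA254 raw={raw:.1f}, treated={treated:.1f} L/(mg*m)")
print(f"aromaticity reduction: {(1 - treated/raw)*100:.0f}%")
```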

Keywords: aromatic fractions, electrocoagulation, natural organic matter, spectral indices

Procedia PDF Downloads 271
5794 A Strategy of Direct Power Control for PWM Rectifier Reducing Ripple in Instantaneous Power

Authors: T. Mohammed Chikouche, K. Hartani

Abstract:

Based on an analysis of basic direct torque control, this paper proposes a parallel master-slave control for four in-wheel permanent magnet synchronous motors (PMSMs) fed by two three-phase inverters in an electric vehicle. A conventional multi-inverter, multi-machine system comprises a three-phase inverter for each machine to be controlled; another approach consists of using only one three-phase inverter to supply several permanent magnet synchronous machines. A modified direct torque control (DTC) algorithm is used for the control of the bi-machine traction system. Simulation results show that the proposed control strategy is well suited to the synchronism of this system and provides good speed-tracking performance.

Keywords: electric vehicle, multi-machine single-inverter system, multi-machine multi-inverter control, in-wheel motor, master-slave control

Procedia PDF Downloads 214
5793 Energy Efficiency Analysis of Electrical Submersible Pump on Mature Oil Field Offshore Java Sea

Authors: Marda Vidrianto, Tania Surya Utami

Abstract:

The electrical submersible pump (ESP) is the artificial lift method of choice for producing oil in the offshore Java Sea, selected on the basis of production rate capacity and expected run life. ESP performance in a mature field is strongly affected by well conditions: the presence of sand, scale, and gas, together with low influx, creates unstable ESP operation, lowering the expected run life and the system efficiency. This paper reviews the current energy usage and efficiency of every part of the ESP system. The hydraulic and electrical losses, as well as the system efficiency, are calculated for each well to identify energy losses and possibilities for improvement. It is shown that high back pressure on the system and low-efficiency pumps are the major contributors to energy losses. It was found that optimizing the production rate and using advanced pump and motor technology could improve energy efficiency.
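
The efficiency bookkeeping described above reduces, at its simplest, to the ratio of delivered hydraulic power to electrical input; the sketch below uses placeholder numbers, not field data from the Java Sea wells:

```python
# Back-of-envelope sketch of ESP system efficiency (placeholder numbers,
# not field data). Hydraulic power = rho * g * Q * H.
RHO_G = 900.0 * 9.81   # assumed fluid density (kg/m^3) times gravity (m/s^2)

def esp_system_efficiency(rate_m3_per_day, head_m, electrical_kw):
    q = rate_m3_per_day / 86400.0                     # flow rate in m^3/s
    hydraulic_kw = RHO_G * q * head_m / 1000.0        # useful hydraulic power
    return hydraulic_kw / electrical_kw

eff = esp_system_efficiency(rate_m3_per_day=800.0, head_m=1200.0,
                            electrical_kw=250.0)
print(f"system efficiency ~ {eff:.0%}")               # ~39% with these inputs
```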

Keywords: advance technology, energy efficiency, ESP, mature field, production rate

Procedia PDF Downloads 335
5792 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a process for managing energy consumption with a view toward energy savings and energy efficiency. Non-intrusive load monitoring (NILM) is one load monitoring method used for disaggregation purposes: it identifies individual appliances by analysing whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation for extracting appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps, using unsupervised algorithms such as dynamic time warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand and then detecting the times at which each selected appliance changes state. To fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator (LPG) software, which has not previously been considered for NILM purposes in the literature; LPG is a numerical tool that simulates the behaviour of people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect and facilitates the extraction of the specific features used for general appliance modeling. The identification process also includes unsupervised techniques such as DTW; to the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For this, we compute confusion-matrix-based performance metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
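
The DTW matcher named above can be written in a few lines; the sketch below shows the standard dynamic-programming recurrence and a toy signature-matching usage (the power segments and signatures are invented for illustration):

```python
# Minimal dynamic time warping (DTW) distance, the unsupervised matcher named
# in the abstract, for comparing a detected power segment with stored
# appliance signatures (toy data, not the LPG/REDD features).
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Usage: assign a detected segment to the closest appliance signature.
segment = [0, 5, 120, 118, 121, 6, 0]                       # watts, 1/60 Hz
signatures = {"fridge": [0, 115, 120, 117, 0],
              "kettle": [0, 2000, 1990, 0]}
print(min(signatures, key=lambda k: dtw_distance(segment, signatures[k])))
```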

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 72
5791 Thermodynamic Analysis of Hydrogen Plasma Reduction of TiCl₄

Authors: Seok Hong Min, Tae Kwon Ha

Abstract:

With increasing demand for high-performance materials, intense interest has been focused on Ti. In particular, a low-cost Ti production process is strongly demanded by a wide range of industries. Titanium tetrachloride (TiCl₄) is produced in a fluidized bed using high-grade TiO₂ feedstock and is used as an intermediate product for the production of titanium metal sponge. Reduction of TiCl₄ is usually conducted by the Kroll process, using magnesium as the reducing agent and producing metallic Ti in the form of sponge. The process is batch type and takes a very long time, including the post-processing of the sponge. As an alternative reducing agent, hydrogen in the plasma state has long been strongly recommended, but experimental confirmation has not been completely reported yet, and a more rigorous analysis is required. In the present study, the hydrogen plasma reduction process has been analyzed thermodynamically, focusing on the effects of temperature, pressure, and concentration. All thermodynamic calculations were performed using the FactSage® thermochemical software.

Keywords: TiCl₄, titanium, hydrogen, plasma, reduction, thermodynamic calculation

Procedia PDF Downloads 321
5790 A Blockchain-Based Privacy-Preserving Physical Delivery System

Authors: Shahin Zanbaghi, Saeed Samet

Abstract:

The internet has transformed the way we shop. Previously, most of our purchases came in the form of shopping trips to a nearby store; now, it is as easy as clicking a mouse. But with great convenience comes great responsibility: we have to be constantly vigilant about our personal information. In this work, our proposed approach is to encrypt the information printed on physical packages, which includes personal information in plain text, using a symmetric encryption algorithm, and then to store that encrypted information in a blockchain network rather than in companies' centralized databases. We present, implement, and assess a blockchain-based system using Ethereum smart contracts, with detailed algorithms that explain the workings of our smart contract. We present the security, cost, and performance analysis of the proposed method. Our work indicates that the proposed solution is economically attainable and provides data integrity, security, transparency, and data traceability.
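
The off-chain half of this scheme is conceptually simple; the sketch below uses Fernet as a stand-in for whichever symmetric algorithm the system actually uses, with a fictitious address, and only comments on the on-chain step:

```python
# Sketch of the off-chain half of the scheme: symmetrically encrypt the
# delivery label before the ciphertext is written to the chain. Fernet is
# a stand-in for the paper's (unspecified) symmetric algorithm.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # shared secret between sender and courier
f = Fernet(key)

label = b"Jane Doe, 42 Example St, Springfield"   # fictitious address
token = f.encrypt(label)        # ciphertext: safe to store on-chain
# ...store `token` via the Ethereum smart contract; an authorised party
# holding the key can later recover the plaintext:
assert f.decrypt(token) == label
```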

Keywords: blockchain, Ethereum, smart contract, commit-reveal scheme

Procedia PDF Downloads 146
5789 Study on Filter for Semiconductor of Minimizing Damage by X-Ray Laminography

Authors: Chan Jong Park, Hye Min Park, Jeong Ho Kim, Ki Hyun Park, Koan Sik Joo

Abstract:

This research used the MCNPX simulation program to evaluate the utility of a filter developed to minimize damage to a semiconductor device during defect testing with X-rays. The X-ray generator was designed using the MCNPX code, and the X-ray absorption spectrum of the semiconductor device was obtained based on the designed generator code. To evaluate the utility of the filter, the X-ray absorption rates of the semiconductor device were calculated and compared for Ag, Rh, Mo, and V filters with thicknesses of 25 μm, 50 μm, and 75 μm. The results showed that the X-ray absorption rate varied with the type and thickness of the filter, ranging from 8.74% to 49.28%; the Rh filter showed the highest X-ray absorption rates of 29.8%, 15.18%, and 8.74% for the above-mentioned thicknesses. The characteristics of X-ray absorption with respect to the type and thickness of the filter were thus identified using MCNPX simulation, and with these results, both time and expense can be saved in producing the desired filter. In the future, this filter will be produced and its performance will be evaluated.
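
The physics behind the thickness dependence is simple Beer-Lambert attenuation, I/I0 = exp(-mu*t); the illustration below uses an assumed attenuation coefficient, not MCNPX output:

```python
# Hedged illustration of the physics behind the filter comparison:
# Beer-Lambert attenuation I/I0 = exp(-mu * t). The coefficient below is a
# placeholder assumption, not an MCNPX result.
import math

def transmitted_fraction(mu_per_um, thickness_um):
    return math.exp(-mu_per_um * thickness_um)

for t_um in (25, 50, 75):
    frac = transmitted_fraction(mu_per_um=0.02, thickness_um=t_um)  # assumed mu
    print(f"{t_um} um filter: {frac:.1%} transmitted, {1 - frac:.1%} absorbed")
```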

Keywords: X-ray, MCNPX, filter, semiconductor, damage

Procedia PDF Downloads 413
5788 Simulation Study on Vehicle Drag Reduction by Surface Dimples

Authors: S. F. Wong, S. S. Dol

Abstract:

Automotive designers have been trying to use dimples to reduce drag in vehicles. In this work, a car model is given a dimpled surface, and a parameter called the dimple ratio (DR), the ratio of the half-dimple depth to the dimple print diameter, is introduced and numerically simulated via the k-ε turbulence model to study aerodynamic performance with increasing dimple depth. The Ahmed body car model with a 25-degree slant angle is simulated at DR values of 0.05, 0.2, 0.3, 0.4, and 0.5, at a Reynolds number of 176,387 based on the frontal area of the car model. The geometry of the dimple changes the kinematics and dynamics of the flow, and the complex interaction between the turbulent fluctuating flow and the mean flow escalates the turbulence quantities. The maximum level of turbulent kinetic energy occurs at DR = 0.4. It can be concluded that the dimples generate extra turbulence energy at the surface, and as a result, the application of dimples reduces the drag coefficient of the car model compared to the model with a smooth surface.
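
For reference, the drag coefficient being compared is the usual normalisation Cd = 2F/(rho V² A); the numbers below are illustrative stand-ins, since the simulated forces are not listed in the abstract:

```python
# Quick sketch of the normalisation behind the reported drag coefficient
# (illustrative values; the simulated forces are not given in the abstract).
def drag_coefficient(drag_force_n, rho, velocity, frontal_area_m2):
    """Cd = 2F / (rho * V^2 * A)."""
    return 2.0 * drag_force_n / (rho * velocity**2 * frontal_area_m2)

# Ahmed-body-like placeholder numbers: air at 1.225 kg/m^3, 40 m/s, and the
# standard Ahmed body frontal area (~0.389 m x 0.288 m ~ 0.112 m^2).
print(drag_coefficient(drag_force_n=33.0, rho=1.225, velocity=40.0,
                       frontal_area_m2=0.112))   # ~0.30
```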

Keywords: aerodynamics, boundary layer, dimple, drag, kinetic energy, turbulence

Procedia PDF Downloads 312
5787 Model Updating-Based Approach for Damage Prognosis in Frames via Modal Residual Force

Authors: Gholamreza Ghodrati Amiri, Mojtaba Jafarian Abyaneh, Ali Zare Hosseinzadeh

Abstract:

This paper presents an effective model updating strategy for damage localization and quantification in frames by formulating damage detection as an optimization problem. A generalized version of the modal residual force (MRF) is employed to define a new damage-sensitive cost function. The grey wolf optimization (GWO) algorithm is then utilized to solve the resulting inverse problem, and the global extrema are reported as damage detection results. The applicability of the presented method is investigated by studying different damage patterns on the IASC-ASCE benchmark problem, as well as on a planar shear frame structure. The obtained results emphasize the good performance of the method, not only in noise-free cases but also when the input data are contaminated with different levels of noise.
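
The GWO solver named above follows a standard alpha/beta/delta leader update; the generic loop is sketched below on a toy cost, since the paper's modal-residual-force cost function is not reproduced here:

```python
# Generic grey wolf optimisation loop (the solver named in the abstract),
# shown on a toy cost standing in for the damage-index identification problem.
import numpy as np

def gwo(cost, dim, n_wolves=20, iters=200, lb=-1.0, ub=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        f = np.array([cost(x) for x in X])
        alpha, beta, delta = X[np.argsort(f)[:3]]     # three best wolves
        a = 2.0 * (1.0 - t / iters)                   # decreases from 2 to 0
        moves = []
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random((n_wolves, dim)) - a
            C = 2 * rng.random((n_wolves, dim))
            moves.append(leader - A * np.abs(C * leader - X))
        X = np.clip(sum(moves) / 3.0, lb, ub)         # average of the 3 pulls
    f = np.array([cost(x) for x in X])
    return X[f.argmin()], f.min()

best, best_f = gwo(lambda x: float((x**2).sum()), dim=4)
```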

Keywords: frame, grey wolf optimization algorithm, modal residual force, structural damage detection

Procedia PDF Downloads 381
5786 Testing of Gas Turbine KingTech with Biodiesel

Authors: Nicolas Lipchak, Franco Aiducic, Santiago Baieli

Abstract:

The present work is part of the research project 'Testing of gas turbine KingTech with biodiesel', carried out by the Department of Industrial Engineering of the National Technological University at Buenos Aires. The research group aims to experiment with biodiesel in a KingTech K-100 gas turbine to verify its correct operation. In this sense, tests have been developed to obtain real data on parameters inherent to the work cycle, to be used later for comparison and performance analysis. The first stage of the study consisted of testing the gas turbine with a fuel mixture of 50% biodiesel and 50% diesel; the parameters measured were compared with those of the turbine running on 100% diesel. In the second stage, the measured parameters were used to calculate the power generated and the thermal efficiency of the KingTech K-100 turbine. The turbine was also inspected to verify the condition of its internals after the use of biofuel. The conclusions empirically demonstrate that it is feasible to use biodiesel in this type of gas turbine without the fuel causing a loss of power or degradation of the internals.
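
The thermal-efficiency calculation mentioned above amounts to dividing the power generated by the heat input of the fuel; the figures below are placeholder assumptions (typical lower heating values, not the measured KingTech data):

```python
# Sketch of the performance bookkeeping implied above (placeholder figures,
# not the measured KingTech K-100 data).
def thermal_efficiency(power_kw, fuel_flow_kg_s, lhv_mj_per_kg):
    heat_input_kw = fuel_flow_kg_s * lhv_mj_per_kg * 1000.0
    return power_kw / heat_input_kw

# Assumed LHVs: diesel ~42.5 MJ/kg, biodiesel ~37.5 MJ/kg; 50/50 blend.
blend_lhv = 0.5 * 42.5 + 0.5 * 37.5
print(f"{thermal_efficiency(8.0, 0.004, blend_lhv):.1%}")  # ~5% for a micro turbine
```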

Keywords: biodiesel, efficiency, KingTech, turbine

Procedia PDF Downloads 239
5785 Errors in Selected Writings of EFL Students: A Study of Department of English, Taraba State University, Jalingo, Nigeria

Authors: Joy Aworookoroh

Abstract:

Writing is one of the active skills in language learning. Students of English as a foreign language are expected to write efficiently and proficiently in the language; however, there are usually challenges to optimal performance and competence in writing. Errors in a foreign-language learning situation, on the other hand, are more positive than negative, as they provide a basis for addressing the students' limitations. This paper investigates the situation in the Department of English, Taraba State University, Jalingo. Students across different levels of study were administered a descriptive writing test. The target students are multilingual, with an L1 of Kuteb, Hausa, or Junkun. The essays were assessed to identify the different kinds of errors in them and to classify those errors. Errors of correctness, clarity, engagement, and delivery were identified. The study found, however, that the degree of error decreases with the students' experience of and exposure to the EFL classroom.

Keywords: errors, writings, descriptive essay, multilingual

Procedia PDF Downloads 57
5784 Development of a Congestion Controller of Computer Network Using Artificial Intelligence Algorithm

Authors: Mary Anne Roa

Abstract:

Congestion in a network occurs when aggregate demand exceeds the accessible capacity of the resources. Network congestion will increase as network speeds increase, and new, effective congestion control methods are needed, especially for today's very high speed networks. To address this undeniably global issue, the study focuses on the development of a fuzzy-based congestion control model concerned with allocating the resources of a computer network so that the system can operate at an adequate performance level when demand exceeds or is near the capacity of the resources. Fuzzy-logic-based models have proven capable of accurately representing a wide variety of processes. The model built here is based on the bandwidth, the aggregate incoming traffic, and the waiting time. Theoretical analysis and simulation results show that the proposed algorithm provides not only good utilization but also low packet loss.
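
A toy sketch of the fuzzy idea follows: traffic load and waiting time are mapped through triangular memberships and combined with a weighted (Sugeno-style) average into a control action. The rule set and constants are illustrative assumptions, not the paper's rule base:

```python
# Toy sketch of a fuzzy congestion controller: triangular memberships over
# normalised load and waiting time, combined by a Sugeno-style weighted
# average into a packet-drop probability. Rule constants are assumptions.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def drop_probability(load, wait):
    """load, wait normalised to [0, 1]; returns a packet-drop probability."""
    rules = [
        (min(tri(load, -0.5, 0.0, 0.5), tri(wait, -0.5, 0.0, 0.5)), 0.0),  # low/low
        (min(tri(load, 0.0, 0.5, 1.0), tri(wait, 0.0, 0.5, 1.0)), 0.3),    # med/med
        (min(tri(load, 0.5, 1.0, 1.5), tri(wait, 0.5, 1.0, 1.5)), 0.9),    # high/high
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

print(drop_probability(load=0.8, wait=0.7))   # ~0.6 with these rules
```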

Keywords: congestion control, queue management, computer networks, fuzzy logic

Procedia PDF Downloads 391
5783 Online Think–Pair–Share in a Third-Age Information and Communication Technology Course

Authors: Daniele Traversaro

Abstract:

Problem: Senior citizens have faced a challenging reality as a result of the strict public health measures designed to protect people from the COVID-19 outbreak, including the risk of social isolation due to the difficulty the elderly have in integrating with technology. Never before have information and communication technology (ICT) skills been so essential for their everyday life. Although third-age ICT education and lifelong learning are widely supported by universities and governments, there is a lack of literature on which teaching strategy or methodology to adopt in an entirely online ICT course aimed at third-age learners. This contribution presents an application of the Think-Pair-Share (TPS) learning method in a third-age ICT virtual classroom, with an intergenerational approach to conducting online group labs and review activities. This collaborative strategy can help increase student engagement and promote active learning and online social interaction. Research Question: Is collaborative learning applicable and effective, in terms of student engagement and learning outcomes, for an entirely online third-age introductory ICT course? Methods: In the TPS strategy, a problem is posed by the teacher, students have time to think about it individually, and then they work in pairs (or small groups) to solve the problem and share their ideas with the entire class. We performed four experiments in the ICT course of the University of the Third Age of Genova (University of Genova, Italy) on the Microsoft Teams platform. The study cohort consisted of 26 students over the age of 45. Data were collected through two online questionnaires, one at the end of the first activity and another at the end of the course, consisting of five and three closed-ended questions, respectively. Answers were on a Likert scale (from 1 to 4), except for two questions (which asked for the number of correct answers given individually and in groups) and a field for free comments and suggestions. Results: The results show that groups perform better than individual students (with scores higher by an order of magnitude) and that most students found it helpful to work in groups and interact with their peers. Insights: From these early results, it appears that TPS is applicable to an online third-age ICT classroom and useful for promoting discussion and active learning. Nevertheless, our experimentation has a number of limitations; above all, the results highlight the need for more data to allow a statistical analysis that determines the effectiveness of this methodology in terms of student engagement and learning outcomes, which is a future direction.

Keywords: collaborative learning, information technology education, lifelong learning, older adult education, think-pair-share

Procedia PDF Downloads 186
5782 Using VR as a Training Tool in the Banking Industry

Authors: Bjørn Salskov, Nicolaj Bang, Charlotte Falko

Abstract:

Future labour markets demand employees who can carry out non-linear tasks that are still not possible for computers, which means that employees must have well-developed soft skills to perform at a high level in such a work environment. One of these soft skills is presenting a message effectively, and to present a message effectively, one needs practice. To practice effectively, the trainee needs feedback on their current performance. Here, VR environments can be used as a practice tool because they give the trainee a sense of presence and reality, and they are becoming a cost-effective training method since they do not require the presence of an expert to provide this feedback. The research articles analysed in this study suggest that VR environments can be used to provide the necessary feedback, which in turn helps the trainee become better at the task. The research analysed in this review does, however, show the need for a study with a larger sample size that runs over a longer period.

Keywords: training, presentation, presentation skills, VR training, VR as a training tool, VR and presentation

Procedia PDF Downloads 118
5781 Persistent Organic Pollutant Level in Challawa River Basin of Kano State, Nigeria

Authors: Abdulkadir Sarauta

Abstract:

Almost every type of industrial process involves the release of trace quantities of toxic organic and inorganic compounds that end up in receiving water bodies. This study was aimed at assessing the persistent organic pollutant level in the Challawa River basin of Kano State, Nigeria. The research set out to identify the presence of PCBs and PAHs in receiving water bodies in the study area, assess their concentrations in the receiving water body of the Challawa system, evaluate the concentration levels of PCBs and PAHs in fish and in crops irrigated in the study area, and compare the concentrations with the acceptable limits set by the Nigerian, EU, U.S., and WHO standards. Data were collected using reconnaissance surveys, site inspections, field surveys, laboratory experiments, and secondary data sources. A total of 78 samples were collected through stratified systematic random sampling (26 samples each of water, crops, and fish); three sampling points, designated A, B, and C, were chosen along the stretch of the river (upstream, midstream, and downstream) from Yan Danko Bridge to Tambirawa Bridge. The results show that polychlorinated biphenyls (PCBs) were not detected, while polycyclic aromatic hydrocarbons (PAHs) were detected in all samples analysed along the Challawa River basin, assessed in order to gauge the contribution of human activities to environmental pollution. The total ΣPAH concentration ranged between 0.001 and 0.087 mg/l in water samples (ΣPCB was 0.00 throughout), while crop samples ranged from 2.0 to 8.1 ppb and fish samples from 2.0 to 6.7 ppb. All the samples are polluted, because most of the parameters analysed exceed the threshold limits set by the WHO, Nigerian, U.S., and EU standards. The analytical results revealed that the concentrations of some chemicals in water, crops, and fish are significantly high at Zamawa village, which is very close to the Challawa industrial estate and the main effluent discharge point, and the drinking water around the study area is not potable. Analysis of variance was performed using Bartlett's test: there is a significant difference only in water (p < 0.05), while crop concentrations show no significant difference, and likewise the fish. This is of concern as a health hazard that may increase the incidence of tumour-related diseases such as skin, lung, bladder, and gastrointestinal cancers, and it shows a serious failure of pollution abatement measures in the area. In conclusion, industrial activities and effluent have an impact on the Challawa River basin and its environs, especially on those living in the immediate surroundings. Arising from the findings of this research, it is recommended that the industries treat their liquid effluent properly by installing modern treatment plants.

Keywords: Challawa River Basin, organic, persistent, pollutant

Procedia PDF Downloads 571
5780 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach

Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi

Abstract:

Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times, since it minimizes waiting times for passengers at stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information services to serve their passengers better and draw in more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. To solve this issue, different models have recently been developed for predicting bus travel times, but most of them focus on smaller road networks because of their relatively subpar performance on vast networks in high-density urban areas. This paper develops a deep learning-based architecture using a single-step, multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network, using heterogeneous bus transit data collected from the GTFS database. Data were gathered from multiple bus routes in Saint Louis, Missouri, over one week. In this study, a gated recurrent unit (GRU) neural network was used to predict the mean vehicle travel times for different hours of the day for multiple stations along multiple routes. The historical time steps and the prediction horizon were set to 5 and 1, respectively, meaning that five hours of historical average travel time data were used to predict the average travel time for the following hour. The spatial and temporal information and the historical average travel times were captured from the dataset as model input parameters: station distances and sequence numbers were used as adjacency matrices for the spatial inputs, and the time of day (hour) was considered for the temporal inputs. Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included to make the model more robust. The model's performance was evaluated with the mean absolute percentage error (MAPE). The observed prediction errors for various routes, trips, and stations remained consistent throughout the day. The results showed that the developed model predicts travel times more accurately during peak traffic hours, with a MAPE of around 14%, and less accurately during the latter part of the day. In the context of a complicated transportation network in high-density urban areas, the model showed its applicability to real-time travel time prediction for public transportation and ensured the high quality of its predictions.
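
A minimal sketch of the single-step setup described (five historical hourly steps in, one step out, GRU encoder, MAPE metric) is given below; the hidden size, feature width, and dummy batch are assumptions, not the paper's exact configuration:

```python
# Minimal sketch of the single-step forecasting setup described above:
# five historical hourly steps in, one step out, with a GRU encoder.
# Hyperparameters and feature width are assumptions.
import torch
import torch.nn as nn

class TravelTimeGRU(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, 5, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])    # predict the next hour

def mape(pred, target):
    return (torch.abs((target - pred) / target)).mean() * 100.0

model = TravelTimeGRU(n_features=8)
x = torch.randn(32, 5, 8)                  # dummy batch: 5 historical steps
y = torch.rand(32, 1) * 30 + 5             # dummy travel times (minutes)
print("MAPE on random init:", mape(model(x), y).item())
```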

Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction

Procedia PDF Downloads 67