Search results for: vector error correction model (VECM)
17631 Camera Model Identification for Mi Pad 4, Oppo A37f, Samsung M20, and Oppo f9
Authors: Ulrich Wake, Eniman Syamsuddin
Abstract:
The model for camera model identification is trained using the pretrained models ResNet34 and ResNet50. The dataset consists of 500 photos of each phone, divided into 1280 photos for training, 320 photos for validation, and 400 photos for testing. The model is trained using the One Cycle Policy method and tested using Test-Time Augmentation. Furthermore, the model is trained for 50 epochs using regularization techniques such as dropout and early stopping. The result is 90% accuracy on the validation set and above 85% with Test-Time Augmentation using ResNet50. Every model is also trained by slightly updating the pretrained model's weights.
Keywords: One Cycle Policy, ResNet34, ResNet50, Test-Time Augmentation
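As an illustration of the training recipe described above, the sketch below fine-tunes a pretrained ResNet34 with PyTorch's OneCycleLR scheduler. The batch size, learning-rate ceiling, and data loaders are assumptions for illustration, not values reported in the paper.

```python
import torch
from torch import nn
from torchvision import models

# Hypothetical setup: 4 phone classes, 1280 training photos at batch size 32.
NUM_CLASSES, EPOCHS, STEPS_PER_EPOCH = 4, 50, 40

model = models.resnet34(weights="IMAGENET1K_V1")       # pretrained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.OneCycleLR(       # One Cycle Policy
    optimizer, max_lr=1e-2, epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()   # learning rate rises, then anneals within the cycle
    return loss.item()
```

Because the whole backbone stays trainable, each step also slightly updates the pretrained weights, matching the fine-tuning described in the abstract.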
Procedia PDF Downloads 209
17630 Analysis of Surface Hardness, Surface Roughness and Near-Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process
Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.
Abstract:
In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In the development of the predictive model, the deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and the ball diameter are the significant factors for surface hardness, while the ball diameter and the number of tool passes are found to be significant for surface roughness. The predicted surface hardness and surface roughness values and the subsequent verification experiments under the optimal operating conditions confirmed the validity of the predicted model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the hardness is improved from 225 to 306 HV, an increase in the near-surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, in correlation with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy, and an X-ray diffractometer are used to characterize the modified surface layer.
Keywords: hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness
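A minimal sketch of the response-surface fitting step follows: a full second-order polynomial is regressed on the four process variables named in the abstract. The design points and hardness responses below are synthetic placeholders, since the actual central composite design data are not reproduced here.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Columns: rolling force (N), ball diameter (mm), initial roughness (um), passes.
X = rng.uniform([250, 8, 2.0, 1], [1250, 16, 6.0, 5], size=(30, 4))
# Toy response standing in for measured surface hardness (HV).
hardness = 220 + 0.05 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 3, 30)

quad = PolynomialFeatures(degree=2, include_bias=False)  # full second-order model
rsm = LinearRegression().fit(quad.fit_transform(X), hardness)

candidate = np.array([[900.0, 12.0, 4.84, 4.0]])
print(f"predicted hardness: {rsm.predict(quad.transform(candidate))[0]:.1f} HV")
```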
Procedia PDF Downloads 424
17629 A Compressor Map Optimizing Tool for Prediction of Compressor Off-Design Performance
Authors: Zhongzhi Hu, Jie Shen, Jiqiang Wang
Abstract:
A high-precision aeroengine model is needed when developing the engine control system. Compared with other main components, the axial compressor is the most challenging component to simulate. In this paper, a compressor map optimizing tool based on the introduction of a modifiable β function is developed for FWorks (FADEC Works). Three parameters (d, the density; f, the fitting coefficient; k₀, the slope of the line β = 0) are introduced to make the β function modifiable. The traditional β function and the modifiable β function are compared for a certain type of compressor. The interpolation errors show that both methods meet the modeling requirements, while the modifiable β function predicts compressor performance more accurately in the regions of the compressor map that are of particular interest to users.
Keywords: beta function, compressor map, interpolation error, map optimization tool
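The paper does not give the closed form of its modifiable β function, so the sketch below only illustrates the generic β-line lookup that such a map tool builds on: mass flow and pressure ratio are interpolated on a (corrected speed, β) grid. All grid values here are placeholders.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder compressor map tables indexed by corrected speed and beta.
speeds = np.linspace(0.6, 1.0, 5)        # normalized corrected speed
betas = np.linspace(0.0, 1.0, 6)         # auxiliary map coordinate
mass_flow = np.random.default_rng(0).uniform(20, 60, (5, 6))
pressure_ratio = np.random.default_rng(1).uniform(2, 12, (5, 6))

flow_of = RegularGridInterpolator((speeds, betas), mass_flow)
pr_of = RegularGridInterpolator((speeds, betas), pressure_ratio)

point = np.array([0.85, 0.4])            # an off-design operating point
print(flow_of(point), pr_of(point))      # interpolated flow and pressure ratio
```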
Procedia PDF Downloads 269
17628 Bit Error Rate Monitoring for Automatic Bias Control of Quadrature Amplitude Modulators
Authors: Naji Ali Albakay, Abdulrahman Alothaim, Isa Barshushi
Abstract:
The most common quadrature amplitude modulator (QAM) applies two Mach-Zehnder modulators (MZM) and one phase shifter to generate high-order modulation formats. The bias of an MZM drifts over time due to temperature, vibration, and aging factors. The change in biasing distorts the generated QAM signal, which degrades bit error rate (BER) performance. Therefore, it is critical to be able to lock the MZM's Q point to the required operating point for good performance. We propose a technique for automatic bias control (ABC) of a QAM transmitter using BER measurements and a gradient descent optimization algorithm. The proposed technique is attractive because it uses the pertinent metric, BER, which compensates for bias drift independently of other system variations such as laser source output power. The proposed scheme's performance and operating principles are simulated using OptiSystem simulation software for 4-QAM and 16-QAM transmitters.
Keywords: automatic bias control, optical fiber communication, optical modulation, optical devices
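The control idea can be sketched as a finite-difference gradient descent on the measured BER. The BER readout below is a stand-in function with an arbitrary optimum, and the step sizes are assumptions.

```python
def measure_ber(bias):
    # Stand-in for a hardware BER measurement at the given bias voltage;
    # a quadratic bowl mimics the penalty of drifting off the Q point.
    return 1e-4 + 5e-3 * (bias - 1.7) ** 2

def abc_step(bias, delta=0.01, rate=2.0):
    # Finite-difference estimate of dBER/dbias, then a descent update.
    grad = (measure_ber(bias + delta) - measure_ber(bias - delta)) / (2 * delta)
    return bias - rate * grad

bias = 1.0
for _ in range(200):
    bias = abc_step(bias)
print(f"converged bias ≈ {bias:.3f} V")  # approaches the 1.7 V optimum
```

Because BER itself is the cost being minimized, the loop keeps compensating regardless of what caused the drift, which is the property the abstract highlights.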
Procedia PDF Downloads 191
17627 Frequency of Consonant Production Errors in Children with Speech Sound Disorder: A Retrospective-Descriptive Study
Authors: Amulya P. Rao, Prathima S., Sreedevi N.
Abstract:
Speech sound disorders (SSD) are a major concern in the younger population of India, with the highest prevalence rate among the speech disorders. Children with SSD, if not identified and rehabilitated at the earliest, are at risk for academic difficulties. This necessitates early identification using screening tools that assess the frequently misarticulated speech sounds. The literature on frequently misarticulated speech sounds is ample in English and other western languages, targeting individuals with various communication disorders. Articulation is language-specific, and there are limited studies reporting the same in Kannada, a Dravidian language. Hence, the present study aimed to identify the frequently misarticulated consonants in Kannada and to examine the error types. A retrospective, descriptive study was carried out using secondary data analysis of 41 participants (34 phonetic type and 7 phonemic type) with SSD in the age range of 3 to 12 years. All the consonants of Kannada were analyzed by considering three words for each speech sound from the Kannada Diagnostic Photo Articulation Test (KDPAT). A picture naming task was carried out, and responses were audio recorded. The recorded data were transcribed using IPA 2018 broad transcription. A criterion of 2/3 or 3/3 error productions was set to consider a speech sound to be in error. The number of error productions was calculated for each consonant in each participant. Then, the percentage of participants meeting the criterion was documented for each consonant to identify the frequently misarticulated speech sounds. Overall results indicated that the velars /k/ (48.78%) and /g/ (43.90%) were frequently misarticulated, followed by the voiced retroflex /ɖ/ (36.58%) and the trill /r/ (36.58%). The lateral retroflex /ɭ/ was misarticulated by 31.70% of the children with SSD. Dentals (/t/, /n/), bilabials (/p/, /b/, /m/), and the labiodental /v/ were produced correctly by all the participants. The highly misarticulated velars /k/ and /g/ were frequently substituted by the dentals /t/ and /d/, respectively, or omitted. Participants with SSD-phonemic type had multiple substitutions for one speech sound, whereas those with SSD-phonetic type had consistent single-sound substitutions. Intra- and inter-judge reliability for 10% of the data using Cronbach's alpha revealed good reliability (0.8 ≤ α < 0.9). Analyzing a larger sample by replicating such studies will validate the present study's results.
Keywords: consonant, frequently misarticulated, Kannada, SSD
Procedia PDF Downloads 141
17626 Using Personalized Spiking Neural Networks, Distinct Techniques for Self-Governing
Authors: Brwa Abdulrahman Abubaker
Abstract:
Recently, there has been a lot of interest in the difficult task of applying reinforcement learning to autonomous mobile robots. Traditional reinforcement learning (TRL) techniques have many drawbacks, such as lengthy computation times, intricate control frameworks, a great deal of trial-and-error searching, and sluggish convergence. In this paper, a modified Spiking Neural Network (SNN) is used to offer a distinct method for autonomous mobile robot learning and control in unexpected surroundings. As a learning algorithm, the suggested model combines dopamine modulation with spike-timing-dependent plasticity (STDP). In order to create more computationally efficient, biologically inspired control systems that are adaptable to changing settings, this work uses the effective and physiologically credible Izhikevich neuron model. This study is primarily focused on creating an algorithm for target tracking in the presence of obstacles. Results show that the SNN trained with three obstacles yielded an impressive 96% success rate for our proposal, with collisions happening in about 4% of the 214 simulated seconds.
Keywords: spiking neural network, spike-timing-dependent plasticity, dopamine modulation, reinforcement learning
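The Izhikevich model mentioned above takes only a few lines to simulate; the sketch below uses the standard regular-spiking parameters (a, b, c, d), which are assumptions here since the paper's exact settings are not given.

```python
import numpy as np

def izhikevich(I, T=1000, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Membrane potential trace of an Izhikevich neuron under input current I."""
    v, u, spikes, trace = c, b * c, [], []
    for t in range(int(T / dt)):
        # Quadratic membrane dynamics plus a slow recovery variable u.
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike: reset v, bump the recovery variable
            spikes.append(t * dt)
            v, u = c, u + d
        trace.append(v)
    return np.array(trace), spikes

_, spike_times = izhikevich(I=10.0)
print(f"{len(spike_times)} spikes in 1 s of simulated time")
```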
Procedia PDF Downloads 23
17625 Influence of Tactile Symbol Size on Its Perceptibility in Consideration of Effect of Aging
Authors: T. Nishimura, K. Doi, H. Fujimoto, T. Wada
Abstract:
We conducted perception experiments on tactile symbols to elucidate the impact of symbol size on perceptibility. This study was based on the accessible design perspective and aimed at expanding the availability of tactile symbols for visually impaired people who are unable to read Braille characters. In particular, this study targeted people with acquired visual impairments as users of the tactile symbols. The subjects (young and elderly individuals) in this study had normal vision. They were asked to identify tactile symbols while unable to see their hand during the experiments. This study investigated the relation between the size and perceptibility of tactile symbols based on an examination using test pieces of the symbols in different sizes. The results revealed that the error rates for both young and elderly subjects converged to almost 0% when 12 mm tactile symbols were used. The findings also showed that the error rate was low and subjects could identify the symbols within 5 s when 16 mm tactile symbols were introduced.
Keywords: accessible design, tactile sense, tactile symbols, bioinformatics
Procedia PDF Downloads 353
17624 Performance of Coded Multi-Line Copper Wire for G.fast Communications in the Presence of Impulsive Noise
Authors: Israa Al-Neami, Ali J. Al-Askery, Martin Johnston, Charalampos Tsimenidis
Abstract:
In this paper, we focus on the design of a multi-line copper wire (MLCW) communication system. First, we construct our proposed MLCW channel and verify its characteristics based on the Kolmogorov-Smirnov test. In addition, we apply Middleton Class A impulsive noise (IN) to the copper channel for further investigation. Second, the MIMO G.fast system is adopted utilizing the proposed MLCW channel model and is compared to a single-line G.fast system. Third, the performance of the coded system is obtained utilizing a concatenated interleaved Reed-Solomon (RS) code with four-dimensional trellis-coded modulation (4D TCM) and compared to the single-line G.fast system. Simulations are obtained for the high quadrature amplitude modulation (QAM) constellations that are commonly used in G.fast communications. The results demonstrate that the bit error rate (BER) performance of the coded MLCW system shows an improvement compared to the single-line G.fast system.
Keywords: G.fast, Middleton Class A impulsive noise, mitigation techniques, copper channel model
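Middleton Class A noise can be drawn as a Poisson-weighted Gaussian mixture, as sketched below; the overlap index A and Gaussian-to-impulsive power ratio Γ are illustrative choices rather than the paper's values.

```python
import numpy as np

def middleton_class_a(n, A=0.1, gamma=0.1, sigma2=1.0, rng=None):
    """Draw n Middleton Class A noise samples.

    Each sample's variance depends on the Poisson-distributed number m of
    overlapping impulses: sigma_m^2 = sigma2 * (m / A + gamma) / (1 + gamma).
    """
    rng = rng or np.random.default_rng()
    m = rng.poisson(A, size=n)
    var = sigma2 * (m / A + gamma) / (1 + gamma)
    return rng.normal(0.0, np.sqrt(var))

noise = middleton_class_a(100_000)
print(f"empirical noise power: {np.mean(noise**2):.3f}")
```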
Procedia PDF Downloads 133
17623 Comparing the SALT and START Triage System in Disaster and Mass Casualty Incidents: A Systematic Review
Authors: Hendri Purwadi, Christine McCloud
Abstract:
Triage is a complex decision-making process that aims to categorize a victim's level of acuity and need for medical assistance. Two triage systems that have been widely used in mass casualty incidents (MCIs) and disaster situations are START (simple triage algorithm and rapid treatment) and SALT (sort, assess, lifesaving intervention, and treatment/transport). There is currently controversy regarding the effectiveness of SALT over the START triage system. This systematic review aims to investigate and compare the effectiveness of the SALT and START triage systems in disaster and MCI settings. The literature was searched via a systematic search strategy from 2009 until 2019 in PubMed, Cochrane Library, CINAHL, Scopus, ScienceDirect, Medlib, and ProQuest. This review included simulation-based and medical-record-based studies investigating the accuracy and applicability of the SALT and START triage systems in adult and child populations during MCIs and disasters. All types of studies were included. The Joanna Briggs Institute critical appraisal tools were used to assess the quality of the reviewed studies. Of the 1450 articles identified in the search, 10 articles were included. Four themes were identified by the review: accuracy, under-triage, over-triage, and time to triage per individual victim. The START triage system has a wide and inconsistent range of accuracy compared to the SALT triage system (44% to 94.2% for START compared to 70% to 83% for SALT). The under-triage error of the START triage system ranged from 2.73% to 20%, slightly lower than that of the SALT triage system (7.6% to 23.3%). The over-triage error of the START triage system was slightly greater than that of the SALT triage system (2% to 53% for START compared to 2% to 22% for SALT). The time to apply the START triage system was shorter than for the SALT triage system (70-72.18 seconds for START compared to 78 seconds for SALT). Consequently, the START triage system has a lower level of under-triage error and is faster than the SALT triage system in classifying victims of MCIs and disasters, whereas the SALT triage system is slightly more accurate and has a lower level of over-triage. However, the magnitude of these differences is relatively small, and therefore the effect on patient outcomes is not significant. Hence, regardless of the triage error, either the START or the SALT triage system is equally effective for triaging victims of disasters and MCIs.
Keywords: disaster, effectiveness, mass casualty incidents, START triage system, SALT triage system
Procedia PDF Downloads 134
17622 Impact of Marangoni Stress and Mobile Surface Charge on Electrokinetics of Ionic Liquids Over Hydrophobic Surfaces
Authors: Somnath Bhattacharyya
Abstract:
The mobile adsorbed surface charge on hydrophobic surfaces can modify the velocity slip condition as well as create a Marangoni stress at the interface. The functionalized hydrophobic walls of micro/nanopores, e.g., graphene nanochannels, may possess physisorbed ions. The lateral mobility of the physisorbed ions creates a friction force as well as an electric force, leading to a modification of the velocity slip condition at the hydrophobic surface. In addition, the non-uniform distribution of these surface ions creates a surface tension gradient, leading to a Marangoni stress. The impact of the mobile surface charge on the streaming potential and the electrochemical energy conversion efficiency in a pressure-driven flow of ionized liquid through the nanopore is addressed. Enhanced electro-osmotic flow through the hydrophobic nanochannel is also analyzed. The mean-field electrokinetic model is modified to take into account the short-range non-electrostatic steric interactions and the long-range Coulomb correlations. The steric interaction is modeled by considering the ions as charged hard spheres of finite radius suspended in the electrolyte medium. The electrochemical potential is modified by including the volume exclusion effect, which is modeled based on the BMCSL equation of state. The electrostatic correlation is accounted for in the ionic self-energy. The extremal of the self-energy leads to a fourth-order Poisson equation for the electric field. The ion transport is governed by the modified Nernst-Planck equation, which includes the ion steric interactions, the Born force arising from the spatial variation of the dielectric permittivity, and the dielectrophoretic force on the hydrated ions. This ion transport equation is coupled with the Navier-Stokes equation describing the flow of the ionized fluid and the fourth-order Poisson equation for the electric field. We numerically solve the coupled set of nonlinear governing equations along with the prescribed boundary conditions by adopting a control volume approach over a staggered grid arrangement. In the staggered grid arrangement, velocity components are stored at the midpoints of the cell faces to which they are normal, whereas the remaining scalar variables are stored at the center of each cell. The convection and electromigration terms are discretized at each interface of the control volumes using the total variation diminishing (TVD) approach to capture the strong convection resulting from the highly enhanced fluid flow under the modified model. In order to link pressure to the continuity equation, we adopt a pressure-correction-based iterative SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm, in which the discretized continuity equation is converted to a Poisson equation involving pressure correction terms. Our results show that the physisorbed ions on a hydrophobic surface create an enhanced slip velocity under streaming-potential conditions, which enhances the convection current. However, the electroosmotic flow attenuates due to the mobile surface ions.
Keywords: microfluidics, electroosmosis, streaming potential, electrostatic correlation, finite-sized ions
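For orientation, one common form of the electrostatic-correlation-modified (fourth-order) Poisson equation and of a steric-corrected Nernst-Planck flux is shown below; the correlation length and excess chemical potential notation is ours, chosen for illustration, and the paper's exact formulation may differ:

```latex
\varepsilon \left( \ell_c^{2}\,\nabla^{2} - 1 \right) \nabla^{2}\phi = \rho_e,
\qquad
\mathbf{J}_i = -D_i \left( \nabla c_i
  + \frac{z_i e\, c_i}{k_B T}\,\nabla\phi
  + \frac{c_i}{k_B T}\,\nabla \mu_i^{\mathrm{ex}} \right) + c_i\,\mathbf{u}
```

Here ρe is the free charge density, u the fluid velocity, ℓc the electrostatic correlation length, and μᵢᵉˣ collects the BMCSL steric and Born/dielectrophoretic contributions named in the abstract.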
Procedia PDF Downloads 72
17621 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS
Authors: Eunsu Jang, Kang Park
Abstract:
In developing an armored ground combat vehicle (AGCV), it is a very important step to analyze the vulnerability (or survivability) of the AGCV against an enemy's attack. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which causes damage to internal components or crew. The penetration equations are derived from penetration experiments, which require a long time and great effort. However, they usually hold only for the specific material of the target and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option to calculate penetration depth. However, it is very important to model the targets and select the input parameters properly in order to get an accurate penetration depth. This paper performed a sensitivity analysis of the ANSYS input parameters on the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary condition, material properties, and target diameter, were tested and selected to minimize the error between the simulated results and the experimental data from papers on the penetration equations. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to get optimized overall performance. The analysis found the following: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the target diameter decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a greater penetration depth than the one with both the side and rear surfaces fixed. Using the above findings, the input parameters can be tuned to minimize the error between simulation and experiment. With the simulation tool ANSYS and delicately tuned input parameters, penetration analysis can be done on a computer without actual experiments. Data from penetration experiments are usually hard to obtain for security reasons, and published papers provide them only for a limited range of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating the known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the modelling and simulation stage early in the AGCV design process.
Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis
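The tuning loop described above amounts to scoring each candidate parameter set by its RMS error against experimental depths; a minimal sketch with hypothetical depth values follows.

```python
import numpy as np

# Hypothetical experimental penetration depths (mm) and simulated depths
# produced by ANSYS runs at three candidate mesh sizes (mm).
experiment = np.array([31.0, 44.5, 58.2])
simulations = {0.9: np.array([27.8, 40.1, 52.9]),
               0.7: np.array([29.9, 43.0, 56.4]),
               0.5: np.array([30.7, 44.1, 57.8])}

def rmse(pred, ref):
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

best = min(simulations, key=lambda h: rmse(simulations[h], experiment))
print(f"mesh size {best} mm minimizes RMS error "
      f"({rmse(simulations[best], experiment):.2f} mm)")
```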
Procedia PDF Downloads 403
17620 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems
Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra
Abstract:
Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability with satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in data processing procedures, such that minor differences in defining the onset and end of the sprint can result in different FVP metric outcomes. Furthermore, in team sports there is a requirement for rapid analysis and feedback of results from multiple athletes; therefore, developing standardized and automated methods to improve the speed, efficiency, and reliability of this process is warranted. Thus, the purpose of this study was to compare different methods of sprint-end detection in the development of FVP profiles from 10 Hz GPS/GNSS data through goodness-of-fit and inter-trial reliability statistics. Seventeen national team female soccer players participated in the FVP protocol, which consisted of 2x40 m maximal sprints performed towards the end of a soccer-specific warm-up in a training session (1020 hPa, wind = 0, temperature = 30°C) on an open grass field. Each player wore a 10 Hz Catapult unit (Vector S7, Catapult Innovations) inserted in a vest pouch between the scapulae. All data were analyzed following common procedures. The variables computed and assessed were the model parameters, the estimated maximal sprint speed (MSS) and the acceleration constant τ, in addition to horizontal relative force (F₀), velocity at zero force (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint-end detection methods were: 1. the time when peak velocity (MSS) was achieved (zero acceleration); 2. the time after peak velocity drops by -0.4 m/s; 3. the time after peak velocity drops by -0.6 m/s; and 4. the time when the integrated distance from the GPS/GNSS signal reaches 40 m. The goodness of fit of each sprint-end detection method was determined using the residual sum of squares (RSS) to quantify the error of the FVP modeling with the sprint data from the GPS/GNSS system. Inter-trial reliability (from 2 trials) was assessed using intraclass correlation coefficients (ICC). For the goodness-of-fit results, the end-detection technique that used the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by the -0.4 and -0.6 m/s velocity decays, and the 40 m end had the highest RSS values. For inter-trial reliability, the end-of-sprint detection techniques defined as the time at (method 1) or shortly after (methods 2 and 3) when MSS was achieved had very large to near-perfect ICCs, and the time at the 40 m integrated distance (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02 m. Therefore, sport scientists should implement end-of-sprint detection either when peak velocity is determined or shortly after, to improve goodness of fit and achieve reliable between-trial FVP profile metrics. Nevertheless, more robust processing and modeling procedures should be developed in future research to improve sprint model fitting. This protocol was seamlessly integrated into usual training, which shows promise for sprint monitoring in the field with this technology.
Keywords: automated, biomechanics, team-sports, sprint
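The underlying sprint model is commonly the mono-exponential v(t) = MSS·(1 − e^(−t/τ)); the sketch below fits it with SciPy and reports the residual sum of squares used here as the goodness-of-fit statistic. The velocity trace is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def sprint_velocity(t, mss, tau):
    # Mono-exponential sprint model: v(t) = MSS * (1 - exp(-t / tau))
    return mss * (1.0 - np.exp(-t / tau))

t = np.arange(0, 5, 0.1)                       # 10 Hz samples over 5 s
rng = np.random.default_rng(7)
v = sprint_velocity(t, 8.2, 1.1) + rng.normal(0, 0.15, t.size)  # synthetic trace

(mss, tau), _ = curve_fit(sprint_velocity, t, v, p0=(8.0, 1.0))
rss = float(np.sum((v - sprint_velocity(t, mss, tau)) ** 2))
print(f"MSS = {mss:.2f} m/s, tau = {tau:.2f} s, RSS = {rss:.3f}")
```

Truncating the trace at different end points (methods 1-4 above) changes the samples entering curve_fit, which is exactly why the RSS and the derived F₀, V₀, and Pmax shift with the detection method.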
Procedia PDF Downloads 120
17619 Integrating Deterministic and Probabilistic Safety Assessment to Decrease Risk & Energy Consumption in a Typical PWR
Authors: Ebrahim Ghanbari, Mohammad Reza Nematollahi
Abstract:
Integrating deterministic and probabilistic safety assessment (IDPSA) is one of the most commonly used approaches in the safety analysis of power plant accidents. It is also recognized today that the role of human error in causing these accidents is no less than that of systemic errors, so human actions and system errors both need to be represented in the fault and event sequences. The integration of these analytical topics is reflected in the core damage frequency and also in the study of the use of water resources in an accident such as the loss of all electrical power of the plant. In this regard, the station blackout (SBO) accident was simulated for a pressurized water reactor as the deterministic analysis, and by analyzing the operator's behavior in controlling the accident, the results of the combined deterministic and probabilistic assessment were identified. The results showed that the best performance of the plant operator would reduce the risk of an accident by 10% and decrease the use of the plant's water sources by 6.82 liters/second.
Keywords: IDPSA, human error, SBO, risk
Procedia PDF Downloads 131
17618 Causal Relationship between Macro-Economic Indicators and Fund Unit Price Behaviour: Evidence from Malaysian Equity Unit Trust Fund Industry
Authors: Anwar Hasan Abdullah Othman, Ahamed Kameel, Hasanuddeen Abdul Aziz
Abstract:
In this study, an attempt has been made to investigate the causal relationship between the fund unit prices of Islamic equity unit trust funds, measured by fund NAV, and selected macro-economic variables of the Malaysian economy, using the VECM causality test and the Granger causality test. Monthly data from January 2006 to December 2012 have been used for all the variables. The findings of the study showed that the industrial production index, political elections, and the financial crisis are the only variables having a unidirectional causal relationship with the fund unit price, while global oil prices have a bidirectional causal relationship with fund NAV. Thus, it is concluded that the equity unit trust fund industry in Malaysia is an inefficient market with respect to the industrial production index, global oil prices, political elections, and the financial crisis. However, the market is approaching informational efficiency at least with respect to four macroeconomic variables: the treasury bill rate, money supply, foreign exchange rate, and corruption index.
Keywords: fund unit price, unit trust industry, Malaysia, macroeconomic variables, causality
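A minimal statsmodels sketch of the two causality tests named above follows. The CSV of monthly fund NAV and macro series, the column names, and the lag orders are all assumptions for illustration.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical monthly panel: fund NAV plus selected macro indicators.
data = pd.read_csv("malaysia_fund_macro.csv",
                   index_col="month", parse_dates=True)

# VECM with one cointegrating relation and two lagged differences.
vecm = VECM(data[["nav", "ipi", "oil", "m2"]],
            k_ar_diff=2, coint_rank=1, deterministic="co").fit()
print(vecm.summary())

# Pairwise Granger test: does the industrial production index cause NAV?
grangercausalitytests(data[["nav", "ipi"]], maxlag=4)
```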
Procedia PDF Downloads 471
17617 A Hybrid Data-Handler Module Based Approach for Prioritization in Quality Function Deployment
Authors: P. Venu, Joeju M. Issac
Abstract:
Quality Function Deployment (QFD) is a systematic technique that creates a platform where customer responses can be positively converted into design attributes. The accuracy of a QFD process heavily depends on the data that it handles, which are captured from customers or QFD team members. Customized computer programs that perform Quality Function Deployment within a stipulated time have been used by various companies across the globe. These programs rely heavily on the storage and retrieval of data in a common database. This database must act as a reliable source with minimal missing or erroneous values in order to perform the actual prioritization. This paper introduces a missing/error data handler module that uses a genetic algorithm and fuzzy numbers. The prioritization of customer requirements for sesame oil is illustrated, and a comparison is made between the proposed data-handler-module-based deployment and manual deployment.
Keywords: hybrid data handler, QFD, prioritization, module-based deployment
Procedia PDF Downloads 297
17616 Satellite Image Classification Using Firefly Algorithm
Authors: Paramjit Kaur, Harish Kundra
Abstract:
In recent years, the swarm-intelligence-based firefly algorithm has become a focus for researchers solving real-time optimization problems. Here, the firefly algorithm is applied to satellite image classification. For experimentation, the Alwar area is considered, containing multiple land features such as vegetation, barren, hilly, residential, and water surfaces. The Alwar dataset consists of seven-band satellite images. The firefly algorithm is based on the attraction of less bright fireflies towards brighter ones. For the evaluation of the proposed concept, accuracy assessment parameters are calculated using an error matrix. With the help of the error matrix, the kappa coefficient, the overall accuracy, and the feature-wise accuracy parameters of user's accuracy and producer's accuracy can be calculated. Overall results are compared with BBO, PSO, hybrid FPAB/BBO, hybrid ACO/SOFM, and hybrid ACO/BBO based on the kappa coefficient and overall accuracy parameters.
Keywords: image classification, firefly algorithm, satellite image classification, terrain classification
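The attraction rule the abstract refers to is the standard firefly move: each firefly steps toward every brighter one, with attractiveness decaying with distance. A minimal sketch with assumed coefficients is below.

```python
import numpy as np

def firefly_step(pos, brightness, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """One firefly-algorithm iteration over positions of shape (n, d)."""
    rng = rng or np.random.default_rng()
    new = pos.copy()
    for i in range(len(pos)):
        for j in range(len(pos)):
            if brightness[j] > brightness[i]:       # move toward brighter fireflies
                r2 = np.sum((pos[j] - pos[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with r^2
                new[i] += beta * (pos[j] - pos[i]) \
                          + alpha * (rng.random(pos.shape[1]) - 0.5)
    return new

positions = np.random.default_rng(3).random((10, 2))  # e.g., candidate cluster centers
light = -((positions - 0.5) ** 2).sum(axis=1)          # toy objective as brightness
positions = firefly_step(positions, light)
```

For classification, the brightness would be an accuracy measure of the cluster centers each firefly encodes; that objective is problem-specific and is not spelled out in the abstract.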
Procedia PDF Downloads 402
17615 Lexical-Semantic Processing by Chinese as a Second Language Learners
Authors: Yi-Hsiu Lai
Abstract:
The present study aimed to elucidate lexical-semantic processing in learners of Chinese as a second language (CSL). Twenty L1 speakers of Chinese and twenty CSL learners in Taiwan participated in a picture naming task and a category fluency task. Based on their Chinese proficiency levels, the CSL learners were further divided into two sub-groups: ten CSL learners at the elementary Chinese proficiency level and ten at the intermediate Chinese proficiency level. The instruments for the naming task were sixty black-and-white pictures: thirty-five object pictures and twenty-five action pictures. Object pictures were divided into two categories: living objects and non-living objects. Action pictures comprised two categories: action verbs and process verbs. As in the naming task, the category fluency task consisted of two semantic categories: objects (i.e., living and non-living objects) and actions (i.e., action and process verbs). Participants were asked to report as many items within a category as possible in one minute. Oral productions were tape-recorded and transcribed for further analysis. Both error types and error frequency were calculated, and statistical analysis was conducted to examine the error types and frequencies made by CSL learners. Additionally, category effects, pictorial effects, and L2 proficiency are discussed. The findings of the present study help characterize the lexical-semantic processing of Chinese naming in CSL learners of different Chinese proficiency levels and contribute to future Chinese vocabulary teaching and learning.
Keywords: lexical-semantic processing, Mandarin Chinese, naming, category effects
Procedia PDF Downloads 464
17614 Affordable Aerodynamic Balance for Instrumentation in a Wind Tunnel Using Arduino
Authors: Pedro Ferreira, Alexandre Frugoli, Pedro Frugoli, Lucio Leonardo, Thais Cavalheri
Abstract:
The teaching of fluid mechanics in engineering courses is, in general, a source of great learning difficulty. The use of experiments with didactic wind tunnels can facilitate the education of future professionals. The objective of this proposal is the development of a low-cost aerodynamic balance to be used in a didactic wind tunnel. The set comprises an Arduino microcontroller, programmed with open-source software, linked to load cells built by students from another project. The didactic wind tunnel is 5.0 m long, and the test area is 90.0 cm x 90.0 cm x 150.0 cm. The Weq® electric motor, model W-22 of 9.2 HP, moves a fan with nine blades, each blade 32.0 cm long. The Weq® frequency inverter, model WEG CFW 08 (vector inverter), is responsible for wind speed control and also for reversing the motor's rotational direction. A flat-convex airfoil prototype was tested by measuring the drag and lift forces for certain angles of attack; the airflow conditions remained constant, monitored by a Pitot tube connected to an EXTECH® Instruments digital differential pressure manometer, model HD755. The results indicate good agreement with theory. The choice of all the components of this proposal resulted in a low-cost product providing a high level of specific knowledge of fluid mechanics, which may be a good alternative for teaching in countries with scarce educational resources. The system also allows expansion to measure other parameters such as fluid velocity, temperature, and pressure, as well as the possibility of automating other functions.
Keywords: aerodynamic balance, wind tunnel, strain gauge, load cell, Arduino, low-cost education
Procedia PDF Downloads 451
17613 Common Orthodontic Indices and Classification in the United Kingdom
Authors: Ashwini Mohan, Haris Batley
Abstract:
An orthodontic index is used to rate or categorise an individual's occlusion using a numeric or alphanumeric score. The indexing of malocclusions and their correction is important in epidemiology, diagnosis, communication between clinicians as well as with their patients, and assessing treatment outcomes. Many useful indices have been put forward but, to the authors' best knowledge, no single method to this day appears to be equally suitable for use by epidemiologists, public health program planners, and clinicians. This article describes the common clinical orthodontic indices and classifications used in the United Kingdom.
Keywords: classification, indices, orthodontics, validity
Procedia PDF Downloads 154
17612 The Feasibility Evaluation of the Compressed Air Energy Storage System in the Porous Media Reservoir
Authors: Ming-Hong Chen
Abstract:
In this study, the mechanical and financial feasibility of a compressed air energy storage (CAES) system in a porous media reservoir in Taiwan is evaluated. By 2035, Taiwan aims to install 16.7 GW of wind power and 40 GW of photovoltaic (PV) capacity. However, renewable energy sources often generate more electricity than needed, particularly during winter. Consequently, Taiwan requires long-term, large-scale energy storage systems to ensure the security and stability of its power grid. Currently, the primary large-scale energy storage options are pumped hydro storage (PHS) and compressed air energy storage (CAES). Taiwan has not ventured into CAES-related technologies due to geological and cost constraints. However, with the imperative of achieving net-zero carbon emissions by 2050, there is a substantial need for the development of a considerable amount of renewable energy. PHS has matured, with an overall installed capacity of 4.68 GW. CAES, offering a similar scale and power generation duration to PHS, is now under consideration. Taiwan's geology, being a porous medium unlike salt caverns, introduces flow-field resistance affecting gas injection and extraction. This study employs a program analysis model to establish the system performance analysis capabilities of CAES. A finite volume model is then used to assess the impact of the porous media, and the findings are fed back into the system performance analysis as a correction. Subsequently, the financial implications are calculated and compared with the existing literature. For Taiwan, the strategic development of CAES technology is crucial, not only for meeting energy needs but also for decentralizing energy allocation, a feature of great significance in regions lacking alternative natural resources.
Keywords: compressed-air energy storage, efficiency, porous media, financial feasibility
Procedia PDF Downloads 68
17611 Modeling Studies on the Elevated Temperatures Formability of Tube Ends Using RSM
Authors: M. J. Davidson, N. Selvaraj, L. Venugopal
Abstract:
Elevated-temperature forming studies on the expansion of thin-walled tube ends have been carried out in the present work. The influence of the process parameters, namely the die angle, the die ratio, and the operating temperature, on the expansion of tube ends at elevated temperatures is investigated. The range of operating parameters has been identified by performing extensive simulation studies. The hot forming parameters have been evaluated for the AA2014 alloy in the simulation studies. An experimental matrix has been developed from the feasible range obtained from the simulation results. Design of experiments is used for the optimization of process parameters. Response surface methodology (RSM) with a Box-Behnken design (BBD) is used for developing the mathematical model for expansion. Analysis of variance (ANOVA) is used to analyze the influence of the process parameters on expansion. The effects of various process combinations on expansion are analyzed through graphical representations. The developed model is found to be appropriate, as the coefficient of determination is very high, equal to 0.9726. The predicted values are found to coincide well with the experimental results, within acceptable error limits.
Keywords: expansion, optimization, response surface methodology (RSM), ANOVA, BBD, residuals, regression, tube
Procedia PDF Downloads 510
17610 Improvement of Bone Scintigraphy Image Using Image Texture Analysis
Authors: Yousif Mohamed Y. Abdallah, Eltayeb Wagallah
Abstract:
Image enhancement allows the observer to see details in images that may not be immediately observable in the original image. Image enhancement is the transformation or mapping of one image to another, and the enhancement of certain features in images can be accompanied by undesirable effects. To achieve maximum image quality after denoising, a new low-order, locally adaptive Gaussian scale mixture model and a median filter were presented, together with a new nonlinear approach for contrast enhancement of bones in bone scan images using both gamma correction and negative transform methods. The usual assumption of gamma and Poisson statistics alone leads to overestimation of the noise variance in regions of low intensity and underestimation in regions of high intensity, and therefore to non-optimal results. The contrast enhancement results were obtained and evaluated using MATLAB on nuclear medicine images of the bones. The optimal number of bins, in particular the number of gray levels, is chosen automatically using entropy and the average distance between the histogram of the original gray-level distribution and the contrast enhancement function's curve.
Keywords: bone scan, nuclear medicine, MATLAB, image processing technique
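The two point operations named above are one-liners on an image normalized to [0, 1]; the gamma value in this sketch is an assumption for illustration.

```python
import numpy as np

def gamma_correct(img, gamma=0.6):
    """Power-law contrast enhancement on an image scaled to [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

def negative_transform(img):
    """Photographic negative: bright bone regions become dark, and vice versa."""
    return 1.0 - np.clip(img, 0.0, 1.0)

scan = np.random.default_rng(0).random((64, 64))  # stand-in for a bone scan slice
enhanced = negative_transform(gamma_correct(scan))
```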
Procedia PDF Downloads 511
17609 Position and Speed Tracking of DC Motor Based on Experimental Analysis in LabVIEW
Authors: Muhammad Ilyas, Awais Khan, Syed Ali Raza Shah
Abstract:
DC motors are widely used in industry to provide mechanical power in terms of speed and torque. The position and speed control of DC motors is attracting the interest of the scientific community in robotics, especially for the robotic arm, a flexible-joint manipulator. The current research work is based on position control of a DC motor using experimental investigations in LabVIEW. A linear control strategy is applied to track the position and speed of the DC motor, with comparative analysis in the LabVIEW platform and simulation analysis in MATLAB. The tracking error in the hardware setup based on LabVIEW programming is slightly greater than in the MATLAB simulation due to the inertial load of the motor during steady-state conditions. The controller output shows that the input voltage applied to the DC motor varies between 0 and 8 V to ensure minimal steady-state error while tracking the position and speed of the DC motor.
Keywords: DC motor, LabVIEW, proportional integral derivative control, position tracking, speed tracking
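A minimal discrete PID position loop on a crude first-order motor model is sketched below; the gains and motor constants are assumptions, not the tuned values from the LabVIEW rig. The 0-8 V clamp mirrors the actuator range reported in the abstract.

```python
# Discrete PID position control of a toy first-order DC motor model.
KP, KI, KD, DT = 8.0, 2.0, 0.4, 0.01   # assumed gains and a 100 Hz loop rate

def motor(position, velocity, voltage, k=2.0, damping=1.5):
    # Toy dynamics: acceleration proportional to voltage minus viscous drag.
    accel = k * voltage - damping * velocity
    velocity += accel * DT
    return position + velocity * DT, velocity

target, pos, vel, integral, prev_err = 1.0, 0.0, 0.0, 0.0, 0.0
for _ in range(1000):
    err = target - pos
    integral += err * DT
    derivative = (err - prev_err) / DT
    voltage = max(0.0, min(8.0, KP * err + KI * integral + KD * derivative))
    pos, vel = motor(pos, vel, voltage)
    prev_err = err
print(f"final position: {pos:.4f} rad (target {target} rad)")
```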
Procedia PDF Downloads 107
17608 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus Using U-Net Trained with Finite-Difference Time-Domain Simulation
Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang
Abstract:
Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue displacement in response to mechanical waves. The estimated metrics of tissue elasticity or stiffness have been shown to be valuable for monitoring the physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive due to either the inverse-problem nature or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches the feature characteristics of the training data but inevitably increases computation time and results in outcomes with patched patterns. In this study, simulated wave images generated by Finite-Difference Time-Domain (FDTD) simulation are used for network training, and a U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training offers the flexibility of customizing training datasets to match specific applications. The proposed method aims to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects were simulated, and their corresponding wave images were generated. The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative errors (root mean square error, or RMSE, divided by the averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the shear modulus estimated by the proposed method achieved a relative error of 4.91% ± 0.66%, substantially lower than the 78.20% ± 1.11% of LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method's performance on phantoms and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method showed much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation
Procedia PDF Downloads 69
17607 A Theoretical Hypothesis on Ferris Wheel Model of University Social Responsibility
Authors: Le Kang
Abstract:
According to the nature of the university, as a free and responsible academic community, USR is based on a different foundation: academic responsibility. The Pyramid and IC models of CSR therefore cannot fully explain the most distinctive feature of USR. This paper seeks to put forward a new model, the Ferris Wheel Model, to illustrate the nature of USR and the process of its achievement. The Ferris Wheel Model of USR shows that the university creates a balanced, fair, and neutral systemic structure to carry out its social responsibilities; this enables the organization to obtain a synergistic effect and to achieve the more extensive interests of stakeholders and wider social responsibilities.
Keywords: USR, achievement model, ferris wheel model, social responsibilities
Procedia PDF Downloads 725
17606 Artificial Intelligence in the Design of High-Strength Recycled Concrete
Authors: Hadi Rouhi Belvirdi, Davoud Beheshtizadeh
Abstract:
The increasing demand for sustainable construction materials has led to a growing interest in high-strength recycled concrete (HSRC). Utilizing recycled materials not only reduces waste but also minimizes the depletion of natural resources. This study explores the application of artificial intelligence (AI) techniques to model and predict the properties of HSRC. In the past two decades, production levels in various industries, and consequently the amount of waste, have increased significantly. Continuing this trend will undoubtedly cause irreparable damage to the environment. For this reason, engineers have been constantly seeking practical solutions for recycling industrial waste in recent years. This research utilized the 90-day compressive strength results of high-strength recycled concrete. The recycled concrete was produced by replacing sand with crushed glass and using glass powder instead of cement. Subsequently, a feedforward artificial neural network was employed to model the 90-day compressive strength results. The regression and error values obtained indicate that this network is suitable for modeling the compressive strength data.
Keywords: high-strength recycled concrete, feedforward artificial neural network, regression, construction materials
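A feedforward network of the kind described can be prototyped in a few lines with scikit-learn; the mix-proportion features, strength targets, and layer sizes below are all placeholders and assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Placeholder mix features: glass-sand ratio, glass-powder ratio, w/c ratio.
X = rng.uniform([0.0, 0.0, 0.3], [1.0, 0.3, 0.6], size=(120, 3))
y = 80 - 25 * X[:, 1] + 10 * X[:, 0] + rng.normal(0, 2, 120)  # toy strengths (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out mixes: {net.score(X_te, y_te):.3f}")
```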
Procedia PDF Downloads 17
17605 Seismic Vulnerability Analysis of Arch Dam Based on Response Surface Method
Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong
Abstract:
Earthquakes are one of the main loads threatening dam safety. Once a dam is damaged, it brings huge losses of life and property to the country and its people. Therefore, it is very important to research the seismic safety of dams. Due to the complex foundation conditions, high fortification intensity, and high scientific and technological content, it is necessary to adopt reasonable methods to evaluate the seismic safety performance of concrete arch dams built and under construction in strong earthquake areas. Structural seismic vulnerability analysis can predict the probability of structural failure at all levels under earthquakes of different intensities, which provides a scientific basis for reasonable seismic safety evaluation and decision-making. In this paper, the response surface method (RSM) is applied to the seismic vulnerability analysis of arch dams, which improves the efficiency of the vulnerability analysis. Based on the central composite test design method, the material-seismic intensity samples are established. The response surface model (RSM) with arch crown displacement as the performance index is obtained by finite element (FE) calculation of the samples, and the accuracy of the response surface model is then verified. To obtain the seismic vulnerability curves, the seismic intensity measure Sa(T1) is chosen to range from 0.1 g to 1.2 g, with an interval of 0.1 g and a total of 12 intensity levels. For each seismic intensity level, the arch crown displacement corresponding to 100 sets of different material samples can be calculated by algebraic operation of the response surface model, which avoids 1,200 nonlinear dynamic calculations of the arch dam; thus, the efficiency of the vulnerability analysis is greatly improved.
Keywords: high concrete arch dam, performance index, response surface method, seismic vulnerability analysis, vector-valued intensity measure
Procedia PDF Downloads 242
17604 Model Predictive Control of Three Phase Inverter for PV Systems
Authors: Irtaza M. Syed, Kaamran Raahemifar
Abstract:
This paper presents model predictive control (MPC) of a utility-interactive three-phase inverter (TPI) for a photovoltaic (PV) system at the commercial level. The proposed model uses a phase-locked loop (PLL) to synchronize the TPI with the power electric grid (PEG) and performs MPC in a dq reference frame. The TPI model consists of a boost converter (BC), maximum power point tracking (MPPT) control, and a three-leg voltage source inverter (VSI). An operational model of the VSI is used to synthesize the sinusoidal current and track the reference. The model is validated using a 35.7 kW PV system in Matlab/Simulink. Implementation and results show the simplicity and accuracy, as well as the reliability, of the model.
Keywords: model predictive control, three phase voltage source inverter, PV system, Matlab/Simulink
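MPC for a two-level VSI is often implemented as a finite-control-set search: predict the next-step current for each of the eight switch states and apply the one that minimizes the tracking error. The single-step sketch below is a simplified stationary-frame illustration, not the paper's dq-frame controller, and R, L, VDC, and TS are assumed values.

```python
import numpy as np

R, L, VDC, TS = 0.5, 10e-3, 700.0, 50e-6   # assumed plant and sampling values
A = np.exp(1j * 2 * np.pi / 3)

def leg_voltage(sa, sb, sc):
    # Space vector of a two-level inverter switch state (complex alpha-beta).
    return (2.0 / 3.0) * VDC * (sa + A * sb + A**2 * sc)

def best_switch_state(i_meas, i_ref, e_grid):
    """Pick the switch state whose one-step current prediction is closest to i_ref."""
    states = [(sa, sb, sc) for sa in (0, 1) for sb in (0, 1) for sc in (0, 1)]
    def cost(s):
        v = leg_voltage(*s)
        # Forward-Euler discretization of the output filter: L di/dt = v - e - R i
        i_pred = (1 - R * TS / L) * i_meas + (TS / L) * (v - e_grid)
        return abs(i_ref - i_pred)
    return min(states, key=cost)

print(best_switch_state(i_meas=2 + 1j, i_ref=5 + 0j, e_grid=300 + 0j))
```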
Procedia PDF Downloads 596
17603 An Experiment Research on the Effect of Brain-Break in the Classroom on Elementary School Students' Selective Attention
Authors: Hui Liu, Xiaozan Wang, Jiarong Zhong, Ziming Shao
Abstract:
Introduction: Related research shows that students do not concentrate on the teacher's speaking in the classroom. The d2 attention test is a time-limited test of selective attention and can be used to evaluate individual selective attention. Purpose: To use the d2 attention test tool to measure the difference in attention levels between the experimental class and the control class before and after Brain-Breaks, and to explore the effect of Brain-Breaks in the classroom on students' selective attention. Methods: Following the principle of no difference in pre-test data, two fourth-grade classes at Shenzhen Longhua Central Primary School were selected. After 20 minutes of the third class in the morning and the third class in the afternoon, an approximately 3-minute Brain-Break intervention was performed in the experimental class for 10 weeks. The control class held normal classes without intervention. Before and after the experiment, the d2 attention test tool was used to test the attention level of the students in both classes. Paired-sample and independent-sample t-tests in SPSS 23.0 were used to test the change in the attention levels of the two classes over the 10 weeks. This article only presents results with significant differences. Results: The independent-sample t-test results showed that after ten weeks of Brain-Breaks, the missed errors (E1, t = -2.165, p = 0.042), concentration performance (CP, t = 1.866, p = 0.05), and the degree of omissions (Epercent, t = -2.375, p = 0.029) of the experimental class showed significant differences compared with the control class. The students' error level decreased, and their concentration increased. Conclusions: Adding Brain-Break interventions in the classroom can effectively improve the attention level of fourth-grade primary school students to a certain extent; in particular, it can improve concentration and decrease the error rate in tasks. The new sports learning model is worth promoting.
Keywords: cultural class, micromotor, attention, D2 test
Procedia PDF Downloads 135
17602 Markov-Chain-Based Optimal Filtering and Smoothing
Authors: Garry A. Einicke, Langford B. White
Abstract:
This paper describes an optimum filter and smoother for recovering a Markov process message from noisy measurements. The developments follow from an equivalence between a state space model and a hidden Markov chain. The ensuing filter and smoother employ transition probability matrices and approximate probability distribution vectors. The properties of the optimum solutions are retained; namely, the estimates are unbiased and minimize the variance of the output estimation error, provided that the assumed parameter set is correct. Methods for estimating unknown parameters from noisy measurements are discussed. Signal recovery examples are described in which performance benefits are demonstrated at an increased calculation cost.
Keywords: optimal filtering, smoothing, Markov chains
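In the discrete-state case, the filter described above reduces to the normalized forward recursion over the transition probability matrix; a minimal sketch with an assumed two-state chain and a binary sensor follows.

```python
import numpy as np

A = np.array([[0.95, 0.05],     # assumed state-transition probability matrix
              [0.10, 0.90]])
B = np.array([[0.8, 0.2],       # P(observation | state) for a binary sensor
              [0.3, 0.7]])
prior = np.array([0.5, 0.5])

def forward_filter(observations):
    """Normalized forward recursion: returns filtered state probabilities."""
    belief, history = prior.copy(), []
    for y in observations:
        belief = B[:, y] * (A.T @ belief)   # predict, then correct with likelihood
        belief /= belief.sum()              # normalization keeps it a distribution
        history.append(belief.copy())
    return np.array(history)

print(forward_filter([0, 0, 1, 1, 1])[-1])  # posterior after five measurements
```

A fixed-interval smoother would add a matching backward pass over the same transition matrix and combine the two sets of distribution vectors.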
Procedia PDF Downloads 317